WO2019015522A1 - Emoticon image generation method and device, electronic device, and storage medium


Publication number
WO2019015522A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
image
message
emoticon
page
Prior art date
Application number
PCT/CN2018/095360
Other languages
French (fr)
Chinese (zh)
Inventor
梁睿思
张雨涵
徐杨
李强
邢起源
沃迪
赵婷
刘晓峰
陆思音
李靓
金媛媛
邹放
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2019015522A1 publication Critical patent/WO2019015522A1/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition

Definitions

  • the present application relates to the field of computer technologies, and in particular, to an expression picture generating method, apparatus, electronic device, and storage medium.
  • The expression language is another kind of information that can be conveyed: the expression depicted in a picture carries corresponding information.
  • The expression language is usually provided to the user in the form of various emoticon pictures; the user selects an emoticon picture that expresses the mood of the moment and thereby conveys certain information to the other party.
  • Ready-made emoticon pictures are not only limited in number but also fixed in content, which makes it difficult to meet users' diverse needs, so users often need to customize emoticon pictures.
  • In the existing emoticon picture generation method, the user actively performs a series of processing such as cutting out, editing, and synthesis on a favorite picture, and then generates and saves an emoticon picture related to that picture for later use.
  • The present application provides an emoticon picture generation method, apparatus, electronic device, and storage medium, which can solve the problem of overly cumbersome user operations in the emoticon picture generation process.
  • In one aspect, an emoticon picture generation method is provided for use in an electronic device, the method comprising:
  • when a picture in a page includes a face region, displaying an emoticon picture generation entry corresponding to the picture in the page; receiving a generate-emoticon-picture operation triggered at the emoticon picture generation entry; and synthesizing, according to the generate-emoticon-picture operation, the face region in the picture onto a background picture to be synthesized to generate an emoticon picture related to the picture.
  • In another aspect, an emoticon picture generating apparatus is provided, comprising:
  • a display module configured to, when a picture in a page includes a face region, display an emoticon picture generation entry corresponding to the picture in the page;
  • a receiving module configured to receive a generate-emoticon-picture operation triggered at the emoticon picture generation entry; and
  • a synthesizing module configured to synthesize, according to the generate-emoticon-picture operation, the face region in the picture onto a background picture to be synthesized, and generate an emoticon picture related to the picture.
  • In another aspect, an electronic device is provided, comprising a processor and a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, implementing the emoticon picture generation method described above.
  • In another aspect, a computer readable storage medium is provided, having a computer program stored thereon, the computer program, when executed by a processor, implementing the emoticon picture generation method described above.
  • In the above technical solution, when a picture in a page includes a face region, an associated emoticon picture generation entry is automatically triggered and formed for the picture, the generate-emoticon-picture operation triggered at that entry is listened for, and then, according to that operation,
  • the face region in the picture is synthesized onto the background picture to be synthesized, finally generating an emoticon picture related to the picture.
  • Thus, when browsing a picture of interest, the user does not need to leave the page displaying the picture and can complete the above series of emoticon picture generation operations through the entry automatically generated in the page, which is simple and fast and effectively improves the efficiency of emoticon picture generation.
  • FIG. 1 is a block diagram showing the hardware structure of an electronic device according to an exemplary embodiment.
  • FIG. 2 is a flowchart of an emoticon image generation method according to an exemplary embodiment.
  • FIG. 3 is a schematic diagram of a picture and its associated emoticon picture display in a picture message according to an exemplary embodiment.
  • FIG. 4 is a schematic diagram of an emoticon picture forming a new picture message posted on a message page according to an exemplary embodiment.
  • Figure 5 is a flow diagram of an embodiment of step 250 in the corresponding embodiment of Figure 2.
  • FIG. 6 is a flowchart of another method for generating an emoticon according to an exemplary embodiment.
  • FIG. 7 is a flowchart of another method for generating an emoticon according to an exemplary embodiment.
  • FIG. 8 is a flowchart of another method for generating an emoticon according to an exemplary embodiment.
  • FIG. 9 is a schematic diagram of a specific implementation of an expression image generation method in an application scenario.
  • FIG. 10 is a schematic diagram of a specific implementation of an expression image generating method in another application scenario.
  • FIG. 11 is a schematic diagram of a specific implementation of an expression image generating method in another application scenario.
  • FIG. 12 is a schematic diagram of a specific implementation of an expression image generating method in another application scenario.
  • FIG. 13 is a schematic flowchart of a method for generating an emoticon picture in the application scenarios of FIG. 9 to FIG. 12.
  • FIG. 14 is a schematic diagram of the specific process of face recognition and emoticon picture synthesis in the emoticon picture generation method in the application scenarios of FIG. 9 to FIG. 12.
  • FIG. 15 is a block diagram of an emoticon image generating apparatus according to an exemplary embodiment.
  • FIG. 16 is a block diagram of an emoticon image generating apparatus according to another exemplary embodiment.
  • FIG. 1 is a block diagram of an electronic device according to an exemplary embodiment.
  • the electronic device can be a desktop computer, a laptop portable computer, a tablet computer, a smart phone, an e-book reader, a computer device for image processing, or the like.
  • The electronic device 100 is only an example suited to the present application and should not be considered to impose any limitation on the scope of use of the present application.
  • Nor should the electronic device 100 be construed as needing to rely on, or needing to include, one or more components of the exemplary electronic device 100 illustrated in FIG. 1.
  • The electronic device 100 includes a memory 101, a memory controller 103, one or more processors 105 (only one is shown in FIG. 1), a peripheral interface 107, a radio frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a button module 119.
  • The memory 101 can be used to store software programs and modules, such as the program instructions and modules corresponding to the emoticon picture generation method and apparatus in the exemplary embodiments of the present application, and the processor 105 performs various functions and data processing, that is, the emoticon picture generation method, by executing the program instructions stored in the memory 101.
  • The memory 101 serves as a carrier for resource storage and may be random access memory such as high-speed random access memory, or nonvolatile memory such as one or more magnetic storage devices, flash memory, or other solid-state memory.
  • the storage method can be short-term storage or permanent storage.
  • The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input/output interface, at least one USB interface, and the like, for coupling various external input/output devices to the memory 101 and the processor 105,
  • so as to communicate with the various external input/output devices.
  • the radio frequency module 109 is configured to transmit and receive electromagnetic waves, and realize mutual conversion between electromagnetic waves and electric signals, thereby communicating with other devices through a communication network.
  • the communication network includes a cellular telephone network, a wireless local area network, or a metropolitan area network, and the above communication networks can use various communication standards, protocols, and technologies.
  • the positioning module 111 is configured to acquire a geographic location where the electronic device 100 is currently located.
  • Examples of positioning module 111 include, but are not limited to, Global Positioning System (GPS), wireless local area network or mobile communication network based positioning technology.
  • The camera module 113 corresponds to a camera and is used for taking pictures or videos.
  • The captured pictures or videos can be stored in the memory 101 and can also be sent to a host computer through the radio frequency module 109.
  • The audio module 115 provides an audio interface to the user, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more earphone interfaces, for exchanging audio data with other devices through the audio interface.
  • the audio data can be stored in the memory 101 and can also be transmitted through the radio frequency module 109.
  • the touch screen 117 provides an input and output interface between the electronic device 100 and the user. Specifically, the user can perform an input operation, such as a click, a touch, a slide, or the like, through the touch screen 117 to cause the electronic device to respond to the input operation.
  • the electronic device 100 displays the output content formed by any one or combination of text, picture or video to the user through the touch screen 117.
  • the button module 119 includes at least one button for providing an interface for the user to input to the electronic device 100, and the user can cause the electronic device 100 to perform different functions by pressing different buttons.
  • the sound adjustment button can be used by the user to adjust the volume of the sound played by the electronic device 100.
  • FIG. 1 is merely illustrative, and the electronic device 100 may further include more or less components than those shown in FIG. 1, or have components different from those shown in FIG.
  • the components shown in Figure 1 can be implemented in hardware, software, or a combination thereof.
  • the method for generating an emoticon is applied to the electronic device 100 shown in FIG. 1 as an example.
  • the method includes the following steps:
  • Step 210 When a picture in the page includes a face area, an emoticon image generation entry corresponding to the picture is displayed in the page.
  • The above page refers to content displayed to the user with the touch screen as the carrier.
  • In this embodiment, the content displayed to the user through the touch screen includes a picture, and the page referred to here is the page displaying that picture.
  • the page for displaying the image includes, but is not limited to, any one of a picture browser page, a message page, and a picture selection page. Accordingly, the picture displayed in the page may be a single picture (such as a picture browser page) or multiple pictures (such as a message page).
  • the message page is used for continuously publishing messages, and corresponds to an application that can continuously publish messages, such as a social application and an instant messaging application.
  • The messages include text messages, picture messages, graphic messages, and the like. It should be noted that a text message contains only text, a picture message contains only pictures, and a graphic message refers to a message formed by mixing pictures and text.
  • The emoticon picture generation entry is used to generate an emoticon picture corresponding to the picture. The emoticon picture is related to the picture displayed in the page, and may be a picture that uses the entire content of the picture as its background, or a picture that contains only the expression in the face region of the picture.
  • the face area in the picture may be a face area of a person, a face area of an animal, or a face area of a character such as a cartoon or anime, which is not limited herein.
  • The face region can be detected and acquired by biometric recognition technology, including but not limited to face recognition, animal facial recognition, and the like.
  • the electronic device can perform a process of generating a related expression picture for the picture.
  • The emoticon picture generation entry is an entry provided by the electronic device to the user for carrying out the emoticon picture generation process; that is, by performing related operations triggered at the emoticon picture generation entry, the user can cause the electronic device to generate a related emoticon picture for the picture.
  • In this embodiment, the emoticon picture generation entry is an entry provided by the application corresponding to the page for generating an emoticon picture corresponding to the picture.
  • The emoticon picture generation entry is automatically triggered and formed in the page according to the face region included in the picture; that is, when a picture in the page includes a face region, the emoticon picture generation entry is automatically triggered and formed in the page.
  • For example, when a picture message is included in a message page, face recognition is performed on the picture in the picture message to detect a face region in the picture, and an associated emoticon picture generation entry is then automatically displayed for the picture, whereas no emoticon picture generation entry is automatically displayed for a text message.
  • Hence, the user only needs to trigger a related operation at the displayed emoticon picture generation entry, and the electronic device can complete the subsequent series of emoticon picture generation operations, which greatly simplifies the overly cumbersome operations required of the user in the prior art.
  • It should be understood that the emoticon picture generation entry is associated with the picture displayed in the page, which means that each picture displayed in the page can form its own emoticon picture generation entry, such as the virtual icons 1 and 4 shown in FIG.
  • Accordingly, the generated emoticon picture is related to the picture associated with that emoticon picture generation entry.
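  • As a minimal illustration of the check in step 210 (not the actual implementation of this application), the sketch below gates the display of the emoticon picture generation entry on whether a face region is detected in the picture; it assumes OpenCV's bundled Haar cascade face detector, and the page object and its show_entry_for method are hypothetical UI hooks.

```python
# Sketch: only show the emoticon picture generation entry when the
# displayed picture contains a face region (step 210).
import cv2

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_face(image_path: str) -> bool:
    """Return True if at least one face region is detected in the picture."""
    image = cv2.imread(image_path)
    if image is None:
        return False
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def maybe_show_generation_entry(image_path: str, page) -> None:
    """Display the entry (e.g. virtual icon 1) next to the picture only
    when a face region is present, as described for step 210."""
    if contains_face(image_path):
        page.show_entry_for(image_path)  # hypothetical page/UI API
```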
  • Step 230: Receive a generate-emoticon-picture operation triggered at the emoticon picture generation entry.
  • The generate-emoticon-picture operation is used to instruct generation of an emoticon picture from the picture.
  • After the emoticon picture generation entry is displayed, the user triggers a generate-emoticon-picture operation at the entry, so that the electronic device learns that the user wants to generate a related emoticon picture for the picture displayed in the page, and then performs the emoticon picture generation process for that picture.
  • For example, the emoticon picture generation entry may be a virtual icon 1 in the message page, and the virtual icon 1 is associated with a picture message in the message page.
  • The user generates a related emoticon picture for the picture in the picture message by clicking the virtual icon 1; this click operation is the generate-emoticon-picture operation triggered at the emoticon picture generation entry.
  • Accordingly, the electronic device can listen for the generate-emoticon-picture operation and perform the subsequent picture synthesis process.
  • Step 250: Synthesize the face region in the picture onto a background picture to be synthesized according to the generate-emoticon-picture operation, and generate an emoticon picture related to the picture.
  • The background picture to be synthesized may be randomly generated from background picture materials in a preset background picture material library, randomly generated from pictures in local storage specified by the user, or be a default fixed picture, that is, the same fixed picture is set as the background picture to be synthesized in every emoticon picture generation process.
  • the acquisition of the background image to be synthesized may be completed before the execution of step 250, or may be completed during the execution of step 250, which is not limited herein.
  • After the face region in the picture and the background picture to be synthesized are obtained, the picture synthesis process can be performed.
  • Gaussian blur processing, also known as Gaussian smoothing, is an image processing technique used to reduce image noise and the level of image detail.
  • It should be noted that if the background picture to be synthesized also includes a face region, that face region may interfere with the face region in the picture and thus affect the compositing result of the face region in the picture with the background picture to be synthesized.
  • For this reason, before compositing, the face region in the background picture to be synthesized is filled with a primary tone.
  • The primary tone used for the filling refers to the dominant color of the remaining face area, excluding the facial features, in the face region of the background picture to be synthesized.
  • The primary tone is related to the saturation, the average color, and the grayscale distribution of that remaining face area.
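  • The text above does not give a formula for the primary tone; the sketch below is a simplified illustration that derives it from the average color of the remaining face area alone (ignoring the saturation and grayscale-distribution factors), assuming Pillow and NumPy, with the face and feature masks supplied by an earlier detection step.

```python
# Sketch: estimate a primary tone for the face region of the background
# picture from the remaining face area (face region minus facial features),
# then fill the face region with that tone before compositing.
import numpy as np
from PIL import Image

def primary_tone(background: Image.Image, face_mask: np.ndarray,
                 feature_mask: np.ndarray) -> tuple:
    """face_mask / feature_mask: boolean arrays marking the face region and
    the facial-feature areas inside it."""
    pixels = np.asarray(background.convert("RGB"), dtype=np.float32)
    remaining = face_mask & ~feature_mask        # skin area outside the features
    if not remaining.any():
        return (128, 128, 128)                   # fallback neutral tone
    avg = pixels[remaining].mean(axis=0)         # average color of remaining area
    return tuple(int(round(c)) for c in avg)

def fill_face_region(background: Image.Image, face_mask: np.ndarray,
                     feature_mask: np.ndarray) -> Image.Image:
    tone = primary_tone(background, face_mask, feature_mask)
    pixels = np.asarray(background.convert("RGB")).copy()
    pixels[face_mask] = tone                     # flatten the face region with the tone
    return Image.fromarray(pixels)
```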
  • After the emoticon picture is generated, it can be stored locally for later use, displayed for the user to preview, or formed into a new picture message posted in the message page.
  • the generated emoticon image can be displayed on the page simultaneously with the image displayed on the page, and can also be displayed in another new page.
  • For example, the message page serves as the page displaying the picture.
  • The emoticon picture and the picture in the picture message are displayed side by side in the message page, and the user can confirm whether the emoticon picture meets the requirement by previewing it.
  • If it does, the user can click the emoticon picture so that it forms a new picture message posted in the message page, as shown in FIG. 4.
  • At the same time, the electronic device cancels the display of the emoticon picture associated with the picture in the old picture message in the message page.
  • During the emoticon picture preview, a picture editing open entry and a background picture replacement entry are also provided to the user, so that the user can start a picture editing process on the emoticon picture through a related operation triggered at the picture editing open entry,
  • or replace the background of the emoticon picture through a related operation triggered at the background picture replacement entry, until the user is satisfied.
  • the picture editing open entry 2 and the background picture replacement entry 3 are displayed on the side of the expression picture in the form of a virtual icon.
  • the emoticon image can also be displayed in another new page.
  • the new page is a photo editing page.
  • In addition to previewing the emoticon picture in the picture editing page, the user can also directly perform image editing processing on the emoticon picture in that page.
  • Through the process described above, the automatically triggered and formed emoticon picture generation entry guides the user to generate a related emoticon picture for the picture displayed in the page, instead of relying only on the user to actively initiate emoticon picture generation; in this way, automatic generation of emoticon pictures is achieved.
  • the user can complete the process of generating the expression image simply and conveniently without leaving the page displaying the image, thereby effectively reducing the operation steps of the user, thereby effectively improving the efficiency of generating the expression image.
  • the page includes a message page that can continuously publish a message, and correspondingly, before step 210, the method as described above may further include the following steps:
  • Picture message detection refers to detecting whether a picture message is included in a message page according to a preset rule.
  • the preset rule can be flexibly adjusted according to a specific application scenario.
  • For example, the preset rule includes a preset time period, that is, picture message detection is performed in the message page according to the preset time period; when the message page includes a picture message within the preset time period, face recognition is performed on the picture in that picture message to obtain the face region. Once a picture message falls outside the preset time period, its timeliness is considered too low and face recognition is not performed for it.
  • For another example, the preset rule includes a number of picture messages, that is, it is detected whether the message page includes at least two picture messages; only when the message page is detected to include at least two picture messages is the user considered to have the desire to engage in a picture battle (an exchange of emoticon pictures) with the other party, and the subsequent related emoticon picture generation process is then performed for all the picture messages included in the message page.
  • In addition, the preset rule may further include a preset message publishing quantity, that is, picture message detection is performed on the continuously published messages according to the preset message publishing quantity.
  • When the continuously published messages within the preset message publishing quantity include a picture message, face recognition is performed on the picture in that picture message to detect the face region in the picture; or, when the continuously published messages within the preset message publishing quantity include at least two picture messages,
  • face recognition is performed on the pictures in those picture messages to detect the face regions in the pictures.
  • the detection of the face area in the picture may be to detect whether the face area is included in the picture, or directly obtain the face area in the picture.
  • the preset rule may be combined according to a specific application scenario.
  • For example, the preset rule includes both a preset time period and a number of picture messages, that is, picture message detection is performed according to the number of picture messages included in the message page within the preset time period.
  • the preset time period can be flexibly adjusted according to a specific application scenario.
  • the preset time period is 2 minutes.
  • When it is detected that the message page includes at least two picture messages within the preset time period, an associated emoticon picture generation entry is automatically triggered and formed for the picture in each picture message.
  • Optionally, the emoticon picture generation entry is displayed in the form of a virtual icon 1 at one side of the picture in the picture message, as shown in FIG.
  • It should be noted that if this preset rule is not satisfied, picture message detection may continue to be performed in the message page according to the preset time period, or picture message detection may be performed in the message page according to other preset rules.
  • Before step 210, the method as described above may further include the following steps:
  • the picture message detection is performed on the continuously published message according to the preset message release quantity.
  • It should be noted that the preset message publishing quantity is not directed at the message page but at the continuously published messages, that is, picture message detection is performed on the most recent continuously published messages up to the preset quantity.
  • The preset message publishing quantity can be flexibly adjusted according to the specific application scenario. For example, if the preset message publishing quantity is 5, picture message detection is performed on five consecutively published messages.
  • When it is detected that the continuously published messages within the preset message publishing quantity include a picture message, an associated emoticon picture generation entry is automatically triggered and formed for the picture in the picture message.
  • Otherwise, the process returns to the step of performing picture message detection in the message page according to the preset time period.
  • Through the process described above, automatic triggering and formation of the emoticon picture generation entry depends on detecting that at least two picture messages are included in the message page within the preset time period and, if that preset rule is not satisfied, further depends on detecting, according to the preset message publishing quantity, whether at least two picture messages are included in the continuously published messages; only then does the electronic device consider the user to have the desire to convey information through emoticon pictures, which fully ensures the probability that the emoticon picture generation entry is triggered, for example, in a picture-battle application scenario.
  • The preset rules, such as the preset time period, the number of picture messages, and the preset message publishing quantity, provide a sufficient basis for automatic triggering of the emoticon picture generation entry, preventing the electronic device from performing unnecessary processing tasks and helping improve its processing efficiency.
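  • A minimal sketch of such a rule check is shown below, assuming a two-minute preset time period, a preset message publishing quantity of 5, and a threshold of two picture messages; the Message structure and its field names are illustrative rather than part of the original solution.

```python
# Sketch: trigger the emoticon picture generation entry when at least two
# picture messages appear within the preset time period, otherwise fall
# back to checking the last `preset_publish_count` published messages.
import time
from dataclasses import dataclass

@dataclass
class Message:
    kind: str          # "text", "picture", or "graphic"
    timestamp: float   # seconds since the epoch

def should_trigger_entry(messages: list,
                         preset_period_s: float = 120.0,   # e.g. 2 minutes
                         preset_publish_count: int = 5,
                         min_picture_messages: int = 2) -> bool:
    now = time.time()
    recent = [m for m in messages
              if m.kind == "picture" and now - m.timestamp <= preset_period_s]
    if len(recent) >= min_picture_messages:
        return True
    # Fallback rule: count picture messages among the most recent
    # `preset_publish_count` continuously published messages.
    latest = messages[-preset_publish_count:]
    return sum(1 for m in latest if m.kind == "picture") >= min_picture_messages
```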
  • Further, the picture displayed in the page is subjected to face recognition to detect the face region in the picture, and the emoticon picture generation entry is then automatically triggered and formed according to the detected face region, ensuring that the generated emoticon picture is related to the face region in the picture.
  • the page includes a message page, a picture browser page, a picture selection page, or other page that can display the picture.
  • the face area in the picture is used as an expression of the expression picture, that is, the face area is extracted from the picture and synthesized into a background picture to be synthesized.
  • As mentioned above, the background picture to be synthesized may be randomly generated from background picture materials in a preset background picture material library, generated from a background picture material specified by the user in that library, generated from a picture in the user's local storage, or set as a default fixed picture.
  • Further, before the synthesis, image enhancement processing is performed on the face region in the picture, where the image enhancement processing includes, but is not limited to, black-and-white processing, tone scale adjustment processing, edge feathering, and the like, thereby improving the compositing effect of the face region in the picture with the background picture to be synthesized and improving the user's emoticon picture generation experience.
  • the face recognition and image enhancement processing is implemented by an image processing plug-in embedded in the application, so that the user can generate relevant expression images for the pictures displayed on the page, thereby facilitating the improvement of the generation efficiency of the expression images.
  • the user can also perform the expression picture generation process for the picture through the preset expression picture generation entry.
  • For example, the page displaying the picture is a picture browser page that is preset with an emoticon picture generation entry.
  • Suppose no face region is detected in the picture displayed in the picture browser page; when the user triggers a related operation at the emoticon picture generation entry, the displayed picture is used as the background of the emoticon picture, that is, the entire content of the picture is used as the background picture to be synthesized, and accordingly a blank picture is synthesized onto it as the face region.
  • the generated emoticon image will be displayed in another new page, for example, the new page is a photo editing page, thereby enabling the user to preview the emoticon image and perform image editing processing in the photo editing page.
  • In other words, regardless of whether the picture contains a face region, the electronic device can automatically generate an emoticon picture for the user through the emoticon picture generation entry, which greatly improves the applicability and compatibility of the emoticon picture generation process.
  • the manner of forming the emoticon image generation portal may be flexibly adjusted according to the application scenario.
  • For example, an emoticon picture generation entry may be automatically triggered and formed in the message page, another emoticon picture generation entry may be preset in the pull-up dialog box popped up in the message page, and only a preset emoticon picture generation entry may be provided in the picture browser page.
  • Further, a face region in a picture is detected by face recognition, and when the face region is synthesized onto the background picture to be synthesized to generate the emoticon picture, the face region in the picture is first determined by facial feature extraction according to the generate-emoticon-picture operation, the picture is cropped according to the face region to obtain a face region picture, image enhancement processing is performed on the face region picture to obtain a face picture, and finally the face picture is synthesized onto the background picture to be synthesized to generate the emoticon picture related to the picture.
  • Specifically, step 250 of synthesizing the face region in the picture onto the background picture to be synthesized according to the generate-emoticon-picture operation may include the following steps:
  • Step 251 determining a face region in the picture by facial feature extraction.
  • the facial region is determined in the picture by facial feature extraction according to the generated emoticon image operation.
  • the facial features include a facial contour, a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, and the like.
  • Accordingly, the face region determined by facial feature extraction includes a facial contour region, a left eye region, a right eye region, a left eyebrow region, a right eyebrow region, a nose region, a mouth region, and the like.
  • Each region in the picture is identified by coordinate values; that is, the determined face region in the picture is uniquely represented in the form of coordinate values, so that the face region can subsequently be cropped according to its corresponding coordinate values.
  • Step 253: The picture is cropped according to the face region to obtain a face region picture.
  • When the picture is cropped according to the face region, it may be cropped using the coordinate values determined in step 251 to obtain the face region picture.
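  • The sketch below illustrates steps 251 and 253 under simplifying assumptions: OpenCV's Haar cascade detector stands in for the facial feature extraction described above, the face region is represented by (x, y, w, h) coordinate values, and the margin parameter is an illustrative addition so the facial contour is kept after cropping.

```python
# Sketch: determine the face region as coordinate values (step 251) and
# crop the picture accordingly to obtain a face region picture (step 253).
import cv2

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face_region(image_path: str, out_path: str, margin: float = 0.2) -> bool:
    image = cv2.imread(image_path)
    if image is None:
        return False
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return False
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face region
    # Expand the coordinate values slightly so the facial contour is kept.
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1 = min(x + w + dx, image.shape[1])
    y1 = min(y + h + dy, image.shape[0])
    cv2.imwrite(out_path, image[y0:y1, x0:x1])          # crop by coordinate values
    return True
```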
  • Step 255: The face region picture is synthesized onto the background picture to be synthesized, and an emoticon picture related to the picture is generated.
  • Before the synthesis, image enhancement processing may be performed on the face region picture to obtain a face picture, and the face picture is then synthesized onto the background picture to be synthesized to generate the emoticon picture.
  • the image enhancement processing process may include at least one of the following steps:
  • First, the face region picture is converted to black and white by calculating the gray value of the face region picture.
  • The gray value of the face region picture is related to the gray value of each pixel in the face region picture and to the number of non-transparent pixels. Therefore, before calculating the gray value of the face region picture, the gray value of each pixel in the face region picture must first be obtained.
  • The gray value of each pixel is calculated as m = R × 0.3 + G × 0.59 + B × 0.11 (the same weighting used for the transparency adjustment below), where:
  • m represents the gray value of the pixel;
  • R represents the red value of the pixel;
  • G represents the green value of the pixel;
  • B represents the blue value of the pixel.
  • the gray value of the image of the face region can be calculated, and the calculation formula is as follows:
  • x represents the gray value of the face region picture
  • m i represents the gray value of the i-th pixel point in the face region picture
  • N represents the total number of pixel points in the face region picture
  • N′ represents the number of non-transparent pixels in the face region picture.
  • Like the gray value of a pixel, the gray value of the face region picture ranges from 0 to 255, where 0 represents black, 255 represents white, and values between 0 and 255 represent the corresponding shades of gray.
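  • The following sketch illustrates the black-and-white step with Pillow and NumPy: the per-pixel gray value uses the stated coefficients, and the overall gray value of the face region picture is taken here as the mean over its non-transparent pixels, which is an assumption since the averaging formula itself is not reproduced above.

```python
# Sketch: convert the face region picture to black and white
# (m = R*0.3 + G*0.59 + B*0.11) and estimate its overall gray value.
import numpy as np
from PIL import Image

def to_black_and_white(face_region: Image.Image):
    rgba = np.asarray(face_region.convert("RGBA"), dtype=np.float32)
    r, g, b, a = rgba[..., 0], rgba[..., 1], rgba[..., 2], rgba[..., 3]
    m = r * 0.3 + g * 0.59 + b * 0.11       # gray value of each pixel, 0..255
    non_transparent = a > 0
    # Overall gray value of the picture: mean over non-transparent pixels
    # (assumed averaging, not quoted from the original text).
    x = float(m[non_transparent].mean()) if non_transparent.any() else 0.0
    bw = np.dstack([m, m, m, a]).astype(np.uint8)
    return Image.fromarray(bw, mode="RGBA"), x
```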
  • black and white processing may be implemented by calling a grayscale processing plug-in preset by the system, or may be implemented by an image processing plug-in embedded in the application.
  • Next, the tone scale adjustment parameters corresponding to the gray value of the face region picture are obtained according to the correspondence between gray values and tone scale adjustment parameters, and tone scale adjustment is performed on the face region picture according to those parameters to obtain an intermediate result picture.
  • That is, after the gray value of the face region picture is obtained, the corresponding tone scale adjustment parameters are looked up in the correspondence between gray values and tone scale adjustment parameters, and the tone scale of the face region picture is adjusted according to those parameters.
  • The correspondence between gray values and tone scale adjustment parameters is formed statistically, according to visual requirements, from the relationship between the gray values of a large number of face region pictures and different tone scale adjustment parameters; it can be flexibly adjusted in practical applications, so that the image effect of subsequently generated emoticon pictures can change dynamically, enhancing the user's customization experience.
  • the visual requirement refers to an emoticon style designed according to the user's aesthetic standard for the emoticon image.
  • the tone scale adjustment parameters include a black value, a white value, and a gamma value.
  • Thus, after the gray value of the face region picture is calculated, the corresponding tone scale adjustment parameters can be determined from Table 1 and input into the image processing plug-in embedded in the application, thereby realizing tone scale adjustment of the face region picture.
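  • As an illustration only, the sketch below applies a tone scale (levels) adjustment with a black value, white value, and gamma value; the default parameter values are placeholders, since Table 1 is not reproduced here.

```python
# Sketch: levels adjustment of the face region picture with black value,
# white value and gamma (parameter values are placeholders, not Table 1).
import numpy as np
from PIL import Image

def adjust_levels(picture: Image.Image, black: int = 20, white: int = 235,
                  gamma: float = 1.1) -> Image.Image:
    rgba = np.asarray(picture.convert("RGBA"), dtype=np.float32)
    rgb, alpha = rgba[..., :3], rgba[..., 3:]
    # Map [black, white] to [0, 1], then apply the gamma correction.
    scaled = np.clip((rgb - black) / max(white - black, 1), 0.0, 1.0)
    corrected = np.power(scaled, 1.0 / gamma) * 255.0
    out = np.concatenate([corrected, alpha], axis=-1).astype(np.uint8)
    return Image.fromarray(out, mode="RGBA")
```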
  • the intermediate result picture is subjected to edge feathering to obtain a face picture.
  • the intermediate result picture is further subjected to edge feathering processing, thereby further ensuring the synthesis effect of the subsequent face picture and the background picture to be synthesized.
  • each pixel point in the intermediate result picture is traversed, and the transparency of each pixel point is adjusted.
  • the transparency adjustment process includes: maintaining the transparency of the pixel having zero transparency, and adjusting the transparency of the pixel whose transparency is not zero according to the following calculation formula.
  • n = R × 0.3 + G × 0.59 + B × 0.11, where:
  • n is the transparency of the pixel
  • R is the red value corresponding to the pixel
  • G is the green value corresponding to the pixel
  • B is the blue value corresponding to the pixel.
  • After that, a corresponding mask is created, Gaussian blur processing is performed on the mask, and the gray value of each pixel in the Gaussian-blurred mask is calculated.
  • the calculation formula of the gray value of each pixel in the mask is consistent with the calculation formula of the gray value of each pixel in the image of the face region, and will not be described in detail herein.
  • s i represents the transparency of the i-th pixel in the face image
  • n i represents the transparency of the i-th pixel in the intermediate result picture
  • t i represents the gray value of the i-th pixel in the mask
  • N represents the total number of pixels in the face picture.
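  • The sketch below illustrates one way the edge feathering step could be realized with Pillow and NumPy; the combination s = n × t / 255 is an assumption made for illustration, since the text above defines the symbols s_i, n_i, and t_i but does not reproduce the final combining formula.

```python
# Sketch: edge feathering - recompute the transparency of non-transparent
# pixels as n = R*0.3 + G*0.59 + B*0.11, build a mask, Gaussian-blur it,
# and combine the two (the combination below is an assumption).
import numpy as np
from PIL import Image, ImageFilter

def edge_feather(intermediate: Image.Image, blur_radius: float = 6.0) -> Image.Image:
    rgba = np.asarray(intermediate.convert("RGBA"), dtype=np.float32)
    r, g, b, a = rgba[..., 0], rgba[..., 1], rgba[..., 2], rgba[..., 3]
    # Keep zero-transparency pixels unchanged, adjust the rest.
    n = np.where(a > 0, r * 0.3 + g * 0.59 + b * 0.11, 0.0)
    # Mask: opaque inside the picture, Gaussian-blurred to soften the edge.
    mask = Image.fromarray(np.where(a > 0, 255, 0).astype(np.uint8), mode="L")
    t = np.asarray(mask.filter(ImageFilter.GaussianBlur(blur_radius)),
                   dtype=np.float32)
    s = n * t / 255.0                          # assumed combination of n_i and t_i
    out = np.dstack([r, g, b, s]).astype(np.uint8)
    return Image.fromarray(out, mode="RGBA")
```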
  • Thereby, the face picture is obtained, and the image enhancement processing provides a sufficient guarantee for seamless fusion of the face picture with the background picture to be synthesized.
  • the method as described above may further include the following steps:
  • Step 310 Receive a picture editing open operation triggered for an emoticon picture.
  • the image editing open operation is used to jump to the image editing page to edit the expression image.
  • Specifically, when the emoticon picture is previewed by the user, if the user is dissatisfied with the generated emoticon picture, a picture editing open entry is provided to the user, so that the picture editing process for the emoticon picture is started by a related operation triggered by the user at the picture editing open entry.
  • For example, a virtual icon 2 is set as the picture editing open entry at the side of the generated emoticon picture; when the user needs to perform image editing processing on the emoticon picture, the user can click the virtual icon 2 through an input device or the touch screen configured on the electronic device.
  • Step 330 Jump to the picture editing page according to the picture editing open operation.
  • the electronic device jumps to the image editing page by the message page in response to the user-triggered picture editing opening operation, so that the subsequent user performs image editing processing on the expression image in the image editing page.
  • Step 350 Receive a picture editing operation triggered in the picture editing page.
  • Step 370 Perform image editing processing on the emoticon image according to the picture editing operation.
  • the user can perform image editing processing on the emoticon image. Since the emoticon image includes an emoticon and a background, the photo editing process includes, but is not limited to, changing the emoticon, changing the background, adding text, and the like.
  • When the user performs expression replacement, the user first selects a picture from local storage or takes a photo using the camera module configured on the electronic device to obtain a corresponding picture, then obtains the face contained in that picture through face recognition, and then replaces the face in the emoticon picture with the obtained face.
  • When the user performs background replacement, the background picture materials in the preset background picture material library may be displayed to the user and the background in the emoticon picture replaced according to the background picture material selected by the user; alternatively, the user may directly select a picture in local storage or take a photo using the camera module configured on the electronic device to obtain a corresponding background picture to be synthesized, thereby replacing the background in the emoticon picture.
  • The user can then continue with steps 350 and 370: a picture editing operation triggered for the emoticon picture is received, and image editing processing is performed on the emoticon picture accordingly.
  • Step 390: The emoticon picture after image editing processing is formed into a new picture message and posted in the message page in which messages can be continuously published.
  • In the above process, the functions of changing the expression, changing the background, and adding text are also provided to the user during emoticon picture generation, which not only satisfies the user's customization requirements but also helps improve the user's emoticon picture generation experience.
  • Further, before step 330, the method as described above may further include the following steps:
  • Step 410: Generate a face selection message when at least two face region pictures are synthesized onto the background picture to generate the emoticon picture.
  • each face region picture corresponds to at least one face region in the emoticon picture.
  • the face selection message is used to prompt selection in at least two face regions in the emoticon picture.
  • the face selection message includes related information of at least two face regions.
  • the related information may be a location of a face region in an emoticon picture, a feature of a face region, or a face region identification identifier.
  • For example, when face recognition is performed on the picture and a plurality of face regions are obtained, each face region is digitally numbered with a face region identification identifier that uniquely identifies it; the face region identification identifiers and the corresponding face regions are then packaged to obtain the face selection message, and the face selection message is displayed in a pop-up dialog box, thereby prompting the user to perform face selection.
  • When the face selection message is displayed, the face region identification identifiers corresponding to the plurality of face regions are displayed; for example, each face region identification identifier is displayed on the corresponding face in a highlighted form.
  • The user can then select the corresponding face region by selecting its face region identification identifier.
  • Step 430 Generate a face selection instruction according to the face selection operation triggered in the face selection message, and display the face area indicated by the face selection instruction on the picture editing page.
  • the picture editing page is used to perform image editing processing on the face area.
  • After the user is prompted by the face selection message, a face selection instruction can be generated according to the selection operation (that is, the face selection operation) made by the user; the face selection instruction includes the face region identification identifier corresponding to the selected face region.
  • the corresponding face area can be displayed according to the face area identification mark included in the face selection instruction, so as to perform subsequent image editing processing on the face area.
  • the generated face selection instruction reflects the user's requirement, and is favorable for subsequently generating an expression picture that meets the user's needs.
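  • As a small illustration of steps 410 and 430 (the message and dialog plumbing is hypothetical), the sketch below numbers the detected face regions, packages them into a face selection message, and resolves the user's selection back to a face region.

```python
# Sketch: build a face selection message from several detected face regions
# and resolve the face selection instruction back to the chosen region.
def build_face_selection_message(face_regions: list) -> dict:
    """face_regions: list of (x, y, w, h) coordinate tuples."""
    return {"prompt": "Select a face for the emoticon picture",
            "faces": {i + 1: region for i, region in enumerate(face_regions)}}

def resolve_face_selection(message: dict, selected_id: int):
    """Return the face region indicated by the face selection instruction."""
    return message["faces"].get(selected_id)
```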
  • Further, after step 255 as described above, the method may also include the following steps:
  • Step 510: Acquire a replace-background-picture operation triggered for the emoticon picture.
  • Step 530 Acquire a replacement background image to be synthesized according to the replacement background image operation.
  • For example, a virtual icon 3 is set as the background picture replacement entry at the side of the generated emoticon picture.
  • The user can click the virtual icon 3 through an input device or the touch screen configured on the electronic device,
  • and the click operation is regarded as the replace-background-picture operation triggered by the user at the background picture replacement entry for the emoticon picture.
  • The electronic device obtains the replaced background picture to be synthesized in response to the replace-background-picture operation triggered by the user, so that the background in the emoticon picture can subsequently be replaced.
  • The replaced background picture to be synthesized may be randomly generated from the background picture materials in the preset background picture material library, specified by the user in the preset background picture material library, specified by the user from pictures in local storage, or randomly obtained from the locally stored pictures, which is not limited in this embodiment.
  • Step 550: Re-synthesize the emoticon picture according to the replaced background picture to be synthesized and the face region in the picture to obtain a re-synthesized emoticon picture.
  • Step 570: Display the re-synthesized emoticon picture as an emoticon picture preview.
  • the re-composed emoticon image can be displayed on the page where the image is displayed, or can be displayed on the new page.
  • the re-synthesized expression picture is displayed in a picture editing page, which is different from the picture browser page.
  • the re-formed emoticon image is still displayed side by side in the message page with the picture in the picture message, so that the user can preview again and determine whether the re-formed emoticon image meets the requirement.
  • the user can post the re-formed emoticon image into a new photo message by clicking the re-formed emoticon image.
  • If the user is still not satisfied with the re-synthesized emoticon picture, the user can continue to change the expression, change the background, add text, and perform other processing through the picture editing open entry and/or the picture editing page.
  • Thus, in the process of generating the emoticon picture, the user is also provided with the function of changing the background, which not only satisfies the user's customization requirements but also improves the user's emoticon picture generation experience.
  • FIG. 9 to FIG. 14 are related diagrams of a method for generating an emoticon in a specific application scenario.
  • an application that can continuously publish a message such as a social application, an instant messaging application, or the like, is run in the electronic device, and the currently displayed page in the touch screen is a message that can continuously publish a message. page.
  • In this application scenario, the emoticon picture generation entry 602 associated with the picture message 601 is automatically triggered and generated, the related emoticon picture generation process is triggered and executed for the picture in the picture message 601 through the emoticon picture generation entry 602,
  • and the picture in the picture message 601 and the related emoticon picture 603 are displayed side by side in the message page 600 for the user to preview.
  • If the user is satisfied with the emoticon picture 603, the user clicks the emoticon picture 603 so that it forms a new picture message 604 posted in the message page 600.
  • At the same time, the emoticon picture generation entry 602 disappears from the message page 600.
  • If the user is not satisfied with the emoticon picture 603, the picture editing open entry 605 can be triggered to jump into the picture editing page 606, where processing such as changing the expression and changing the background 608 can be performed, and the edited emoticon picture 609
  • is displayed in the picture editing page 606 for the user to preview; alternatively, the user can trigger the background picture replacement entry 607 to re-synthesize the emoticon picture for previewing in the message page 600, until the generated emoticon picture meets the user's needs.
  • Before that, the emoticon picture generation entry, the picture editing open entry, and the background picture replacement entry associated with the emoticon picture always remain present in the message page 600.
  • In another application scenario, the user may also actively initiate the emoticon picture generation process through a preset emoticon picture generation entry 701.
  • The preset emoticon picture generation entry is placed in the pull-up dialog box popped up in the message page.
  • At this time, a corresponding jump is made to the picture editing page 706, so that an emoticon picture that meets the user's needs is generated through image editing processing such as changing the expression and changing the background 708 in the picture editing page 706, thereby forming a new picture message 704 posted
  • in the message page 700.
  • It should be noted that when the picture editing page 706 is entered, the expression and background of the emoticon picture may be displayed according to a specified default picture, or may be displayed according to the emoticon picture used in the user's previous image editing process, which is not limited herein.
  • an application capable of displaying a picture is run in an electronic device, for example, a picture browser application, and the currently displayed page in the touch screen is a picture browser page that can display a picture.
  • the pictures 813 and 815 displayed in the picture browser page 811 can initiate the expression picture generation process through the preset expression picture generation entry 817 regardless of whether the face is included or not.
  • At this time, a corresponding jump is made to the picture editing page 812, the pictures 813 and 815 in the picture browser page are displayed in the picture editing page 812, and image editing processing such as changing the expression 813 and changing the background 814 is performed in the picture editing page 812 to generate an emoticon picture that meets the user's needs.
  • the related application is not run in the electronic device, and the currently displayed page in the touch screen is a picture selection page.
  • the picture selection page is a page where the user views the photo.
  • For the picture 919 selected by the user in the picture selection page, the emoticon picture generation process can be started through the preset emoticon picture generation entry 920; at this time, a corresponding jump is made to
  • the picture editing page 912, where the user-selected picture 919 is displayed, and image editing processing such as changing the expression and changing the background 914 is then performed on the picture 919 to generate an emoticon picture that meets the user's needs.
  • The generated emoticon picture may be stored locally for later use, or may trigger an application capable of continuously publishing messages so that the emoticon picture forms a new
  • picture message posted on that application's message page.
  • FIG. 13 is a schematic flowchart of a method for generating an emoticon in the specific application scenario.
  • FIG. 14 is a schematic diagram of a specific process of face recognition and expression image synthesis in an expression image generation method in the above specific application scenario.
  • the emoticon generating method includes:
  • step 1301 the process begins.
  • step 1302 the chat window is opened.
  • the electronic device performs image message detection for the current chat window.
  • Step 1303: Determine whether an emoticon picture for a picture battle needs to be quickly generated for the current chat information.
  • the rule is the foregoing rule for performing image message detection in a message page, and details are not described herein again.
  • Step 1304: Perform face recognition on the picture when such an emoticon picture needs to be quickly generated.
  • step 1305 it is determined whether there are multiple face images in the image.
  • Step 1306 when a plurality of face images are included in the image, the selection is made in the plurality of face images.
  • Step 1307 When a plurality of face images are not included in the image, the face region is extracted.
  • step 1308 the face area and the background picture are edited.
  • Step 1309 synthesizing the face area and the background picture to obtain an expression image.
  • the user may further re-edit the expression image, such as: adjusting the face image area in the expression image, modifying the background image, and the like.
  • The specific process of face recognition and emoticon picture synthesis in the emoticon picture generation method includes:
  • step 1401 the face task starts.
  • step 1402 the user selects a face.
  • the user selects the target face region.
  • In step 1403, the range of the face region is determined.
  • the result of identifying the range of the face region in the image may be displayed in the user interface, and the user may determine the accuracy of the recognition result, and when the recognition result is inaccurate , the recognition result is adjusted.
  • the range of the face area is displayed in the user interface in the form of point coordinates, such as: using point coordinates to surround the face area.
  • In step 1404, the face is cut out and cropped.
  • the face area is cropped according to the point coordinates of the face area to obtain a face area image.
  • In step 1405, the face region image is converted to black and white.
  • Specifically, the gray values and contrast are calculated.
  • step 1407 the face region image is enhanced.
  • step 1408 the background image is transparently processed.
  • Step 1409 performing edge feathering on the background image.
  • step 1405 to step 1409 has been described in detail in step 255 above, and details are not described herein again.
  • step 1410 a result picture is determined.
  • The result picture is the background picture after transparency processing and edge feathering.
  • step 1411 a synthesis logic of the face region image and the resulting image is determined.
  • Step 1412 Perform image size matching on the face area image and the result picture.
  • step 1413 the face area image and the result picture are combined.
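  • The sketch below illustrates steps 1411 to 1413 with Pillow, under the assumption that compositing simply pastes the size-matched face picture onto the result picture using the face picture's alpha channel; the placement and scale factor are illustrative.

```python
# Sketch: size-match the face picture to the result (background) picture
# and composite the two using the face picture's alpha channel.
from PIL import Image

def composite_emoticon(face_path: str, background_path: str, out_path: str,
                       scale: float = 0.6) -> None:
    face = Image.open(face_path).convert("RGBA")
    background = Image.open(background_path).convert("RGBA")
    # Step 1412: size matching - scale the face relative to the background.
    target_w = int(background.width * scale)
    target_h = int(face.height * target_w / face.width)
    face = face.resize((target_w, target_h))
    # Step 1413: synthesis - paste the face, using its alpha as the mask.
    x = (background.width - target_w) // 2
    y = (background.height - target_h) // 2
    background.paste(face, (x, y), face)
    background.convert("RGB").save(out_path)
```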
  • In the above application scenarios, the automatic triggering and formation of the emoticon picture generation entry not only simplifies the user's cumbersome operations and makes emoticon picture generation faster, but also subtly combines face recognition, picture synthesis, and picture editing, so that through this interesting connection of technologies users can experience a more active and enjoyable way of communicating.
  • In addition, the solution in the embodiments of the present application is compatible with various existing picture display pages, so that existing picture display pages can also guide users to quickly generate emoticon pictures and publish them rapidly; it has very high versatility, which not only achieves ingenious integration with the prior art but also facilitates rapid interaction of emoticon pictures between different applications, further enhancing the user's emoticon picture generation experience.
  • The following are apparatus embodiments of the present application, which can be used to execute the emoticon picture generation method of the present application.
  • For details not disclosed in the apparatus embodiments of the present application, please refer to the embodiments of the emoticon picture generation method of the present application.
  • an emoticon generating device includes, but is not limited to, a display module 1510 , a receiving module 1520 , and a synthesizing module 1530 .
  • the display module 1510 is configured to: when the picture in the page includes a face area, display an expression picture generation entry corresponding to the picture in the page;
  • the receiving module 1520 is configured to receive a generate-emoticon-picture operation triggered at the emoticon picture generation entry;
  • the synthesizing module 1530 is configured to synthesize a face region in the image to a background image to be synthesized according to the generating an emoticon image operation, and generate an emoticon image related to the image.
  • the page includes a message page that can continuously publish a message
  • the device further includes:
  • the detecting module 1540 is configured to perform image message detection in the message page according to a preset time period
  • the face recognition module 1550 is configured to perform face recognition on the picture in the picture message included in the message page to detect the face area in the picture; or, when the message page includes at least two picture messages, to perform face recognition on the pictures in the picture messages included in the message page to detect the face areas in the pictures.
  • the detecting module 1540 is configured to perform a picture message detection on the continuously published message according to the preset number of message releases;
  • the face recognition module 1550 is configured to, when the continuously published messages within the preset number of published messages include a picture message, perform face recognition on the picture in the picture message included in the continuously published messages to detect the face area in the picture; or, when the continuously published messages within the preset number of published messages include at least two picture messages, perform face recognition on the pictures in the picture messages included in the continuously published messages to detect the face areas in the pictures.
  • the synthesizing module 1530 includes:
  • a face region determining unit 1531 configured to determine a face region in the picture by facial feature extraction according to the generating an emoticon image operation
  • a picture cropping unit 1532 configured to crop the picture according to the face area to obtain a picture of a face area
  • the synthesizing module 1530 is further configured to synthesize the facial region picture to the background image to be synthesized, and generate the emoticon image related to the picture.
  • the receiving module 1520 is further configured to receive a picture editing open operation triggered by the emoticon picture;
  • the device further includes:
  • the jump module 1560 is configured to jump to a picture editing page according to the picture editing open operation, where the picture editing page is used to perform editing processing on the expression picture;
  • the receiving module 1520 is further configured to receive a picture editing operation triggered in the picture editing page;
  • the editing module 1570 is configured to perform image editing processing on the emoticon image according to the picture editing operation;
  • the publishing module 1580 is configured to form a new picture message from the edited emoticon image and publish it in the message page in which messages can be continuously published.
  • the apparatus further includes:
  • a generating module 1590, configured to generate a face selection message when at least two face region pictures are synthesized to the background image to generate the emoticon image, where the face selection message is used to prompt selection among the at least two face regions in the emoticon image, and each face region picture corresponds to at least one face region in the emoticon image;
  • the generating module 1590 is further configured to generate a face selection instruction according to the face selection operation triggered in the face selection message, and to display the face area indicated by the face selection instruction on the picture editing page.
  • the picture editing page is used to perform image editing processing on the face area.
  • the receiving module 1520 is further configured to acquire a replacement background picture operation triggered by the emoticon picture;
  • the receiving module 1520 is further configured to obtain the replaced background image to be synthesized according to the replacing the background image operation;
  • the synthesizing module 1530 is further configured to perform re-synthesis of the emoticon image according to the replaced background image to be synthesized and the facial region in the image, to obtain a recombined emoticon image;
  • the display module 1510 is further configured to display the re-formed emoticon image as an emoticon image preview
  • when the emoticon image generating device provided in the above embodiments performs the emoticon image generation process, the division into the functional modules described above is used only as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the emoticon image generating device may be divided into different functional modules to complete all or part of the functions described above.
  • an electronic device includes, but is not limited to, a processor and a memory.
  • computer readable instructions are stored on the memory, and when the computer readable instructions are executed by the processor, the emoticon image generation method in each of the embodiments described above is implemented.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements an emoticon generating method in various embodiments as described above.
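The flow of steps 1404 to 1413 above can be illustrated with a short script. The sketch below is a minimal, hypothetical Python/Pillow rendering of that flow; the face coordinates, the enhancement and blur parameters, and the function name are assumptions made for illustration and are not taken from the application itself.

```python
from PIL import Image, ImageOps, ImageEnhance, ImageFilter

def make_emoticon(picture_path, background_path, face_box, out_path="emoticon.png"):
    """Sketch of steps 1404-1413: crop the face, convert it to black and
    white, enhance it, soften the background, match sizes and combine."""
    # Step 1404: crop the face area according to its point coordinates.
    face = Image.open(picture_path).convert("RGBA").crop(face_box)

    # Step 1405: black-and-white processing of the face area image.
    gray = ImageOps.grayscale(face)

    # Steps 1406-1407: a simple contrast boost stands in for the enhancement.
    enhanced = ImageEnhance.Contrast(gray).enhance(1.3).convert("RGBA")

    # Steps 1408-1409: give the background an alpha channel and blur it
    # slightly, a crude stand-in for transparency processing and feathering.
    background = Image.open(background_path).convert("RGBA")
    background = background.filter(ImageFilter.GaussianBlur(radius=2))

    # Steps 1410-1412: the processed background is the "result picture";
    # scale the face so that it occupies a fixed fraction of it.
    target_w = background.width // 3
    enhanced = enhanced.resize(
        (target_w, int(enhanced.height * target_w / enhanced.width)))

    # Step 1413: combine the face area image and the result picture.
    x = (background.width - enhanced.width) // 2
    y = (background.height - enhanced.height) // 2
    background.alpha_composite(enhanced, dest=(x, y))
    background.save(out_path)
    return out_path
```

For example, make_emoticon("photo.jpg", "template.png", (40, 30, 200, 210)) would paste the cropped, black-and-white face onto the softened template; a fuller implementation would replace the fixed parameters with the detection results and enhancement steps described in the embodiments.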

Abstract

The application discloses an emoticon image generation method and device, an electronic device, and a storage medium, relating to the technical field of computers. The method comprises: when an image on a page comprises a facial region, displaying an emoticon image generation entry, corresponding to the image, on the page; receiving an emoticon image generation operation triggered by the emoticon image generation entry; and synthesizing the facial region of the image, according to the emoticon image generation operation, into a background image to be synthesized, and generating an emoticon image associated with the image. The method automatically generates an emoticon image without requiring the user to leave the image display page, completing the emoticon generation process simply and conveniently, effectively reducing the number of user operation steps and thereby improving the efficiency of emoticon image generation.

Description

表情图片生成方法、装置、电子设备及存储介质Emoticon image generation method, device, electronic device and storage medium
本申请要求于2017年07月18日提交中华人民共和国国家知识产权局、申请号为201710586647.0、发明名称为“表情图片生成方法、装置及电子设备”的中国专利申请的优先权，其全部内容通过引用结合在本申请中。This application claims priority to Chinese Patent Application No. 201710586647.0, entitled "Expression Picture Generation Method, Apparatus and Electronic Device", filed with the State Intellectual Property Office of the People's Republic of China on July 18, 2017, the entire contents of which are incorporated herein by reference.
技术领域Technical field
本申请涉及计算机技术领域,尤其涉及一种表情图片生成方法、装置、电子设备及存储介质。The present application relates to the field of computer technologies, and in particular, to an expression picture generating method, apparatus, electronic device, and storage medium.
背景技术Background technique
随着计算机技术的不断发展,用户在进行信息传输时已不再局限于传统意义上的文字,例如,表情语言也属于可传输的一种信息,该种表情语言以图片中刻画的表情来表示相应的信息。With the continuous development of computer technology, users are no longer limited to text in the traditional sense when transmitting information. For example, the expression language is also a kind of information that can be transmitted. The expression language is represented by the expression portrayed in the picture. Corresponding information.
目前,表情语言通常通过各种表情图片来向用户提供,用户可以选择一个表情图片来表达此时此刻的心情,进而通过该表情图片向对方传达某种信息。但是,现成的表情图片不仅数量有限,而且内容固定,难以满足用户的多元化需求,用户往往需要自定义生成表情图片。At present, the expression language is usually provided to the user through various expression pictures, and the user can select an expression picture to express the mood at the moment, and then convey some information to the other party through the expression picture. However, the ready-made emoticons are not only limited in number, but also have fixed content, which is difficult to meet the diversified needs of users. Users often need to customize the emoticons.
现有的表情图片生成方法是由用户主动对心仪的图片进行抠图、编辑、合成等一系列处理,进而生成并保存与该图片相关的表情图片,以供后续使用。The existing expression image generation method is that the user actively performs a series of processing such as mapping, editing, and synthesis on the favorite image, and then generates and saves an expression image related to the image for subsequent use.
由此可知,上述现有技术虽然能够为用户生成表情图片,但是操作过于繁琐,仍然存在表情图片生成效率较低的问题。It can be seen that although the above prior art can generate an emoticon for the user, the operation is too cumbersome, and there is still a problem that the emoticon image generation efficiency is low.
发明内容Summary of the invention
本申请提供了一种表情图片生成方法、装置、电子设备及存储介质,可以解决上述用户生成表情图片的过程中操作过于繁琐的问题。The present application provides a method, an apparatus, an electronic device, and a storage medium for generating an emoticon, which can solve the problem that the user is too cumbersome in the process of generating an emoticon.
其中,本申请所采用的技术方案为:Among them, the technical solution adopted in this application is:
一方面,提供了一种表情图片生成方法,应用于电子设备中,该方法包括:In one aspect, an emoticon image generating method is provided for use in an electronic device, the method comprising:
当页面中的图片包含脸部区域时,在所述页面中显示与所述图片对应的表情图片生成入口;When the picture in the page includes a face area, an emoticon image generation entry corresponding to the picture is displayed in the page;
接收对所述表情图片生成入口触发的生成表情图片操作;Receiving an generated emoticon picture operation triggered by the emoticon image generation entry;
根据所述生成表情图片操作将所述图片中的脸部区域合成至待合成背景图片,生成与所述图片相关的表情图片。Generating a face region in the picture to a background image to be synthesized according to the generating an emoticon picture operation, and generating an emoticon picture related to the picture.
另一方面,提供了一种表情图片生成装置,该装置包括:In another aspect, an emoticon image generating apparatus is provided, the apparatus comprising:
显示模块,用于当页面中的图片包含脸部区域时,在所述页面中显示与所述图片对应的表情图片生成入口;a display module, configured to: when the picture in the page includes a face area, display an expression picture generation entry corresponding to the picture in the page;
接收模块,用于接收对所述表情图片生成入口触发的生成表情图片操作;a receiving module, configured to receive an generated emoticon image triggering operation on the emoticon image generation entry;
合成模块,用于根据所述生成表情图片操作将所述图片中的脸部区域合成至待合成背景图片,生成与所述图片相关的表情图片。And a synthesizing module, configured to synthesize a face region in the image to a background image to be synthesized according to the generating an emoticon image operation, and generate an emoticon image related to the image.
另一方面,提供了一种电子设备,包括:处理器及存储器,所述存储器上存储有计算机可读指令,所述计算机可读指令被所述处理器执行时实现如上所述的表情图片生成方法。In another aspect, an electronic device is provided, comprising: a processor and a memory, the memory having computer readable instructions stored thereon, the computer readable instructions being executed by the processor to implement an emoticon generation as described above method.
另一方面,提供了一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如上所述的表情图片生成方法。In another aspect, a computer readable storage medium is provided having stored thereon a computer program, wherein the computer program is executed by a processor to implement an emoticon generation method as described above.
本申请实施例提供的技术方案带来的有益效果至少包括:The beneficial effects brought by the technical solutions provided by the embodiments of the present application include at least:
根据页面中图片包含的脸部区域为该图片自动触发形成相关联的表情图片生成入口,以此侦听得到该表情图片生成入口触发的生成表情图片操作,进而根据该生成表情图片操作将图片中的脸部区域合成至待合成背景图片,最终生成与图片相关的表情图片。According to the face area included in the picture in the page, the picture is automatically triggered to form an associated expression picture generation entry, thereby intercepting the generated expression picture operation triggered by the expression picture generation entry, and then performing the operation according to the generated expression picture operation. The face area is synthesized to the background image to be synthesized, and finally an expression picture related to the picture is generated.
由此,用户在浏览到心仪的图片时,不需要离开显示有该图片的页面,即可通过该页面中自动触发形成的表情图片生成入口完成上述表情图片生成的一系列操作,简单快捷,有效地提高了表情图片生成效率。Therefore, when browsing the favorite picture, the user does not need to leave the page displaying the picture, and can complete the series of operations of generating the above-mentioned expression picture by automatically generating the generated expression picture in the page, which is simple, fast and effective. Improved the efficiency of emoticon image generation.
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本申请。The above general description and the following detailed description are intended to be illustrative and not restrictive.
附图说明DRAWINGS
此处的附图被并入说明书中并构成本说明书的一部分，示出了符合本申请的实施例，并于说明书一起用于解释本申请的原理。The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the present application.
图1是根据一示例性实施例示出的一种电子设备的硬件结构框图。FIG. 1 is a block diagram showing the hardware structure of an electronic device according to an exemplary embodiment.
图2是根据一示例性实施例示出的一种表情图片生成方法的流程图。FIG. 2 is a flowchart of an emoticon image generation method according to an exemplary embodiment.
图3是根据一示例性实施例所涉及的图片消息中图片及其相关的表情图片显示的示意图。FIG. 3 is a schematic diagram of a picture and its associated emoticon picture display in a picture message according to an exemplary embodiment.
图4是根据一示例性实施例所涉及的表情图片形成新的图片消息发布在消息页面的示意图。FIG. 4 is a schematic diagram of an emoticon picture forming a new picture message posted on a message page according to an exemplary embodiment.
图5是图2对应实施例中步骤250在一个实施例的流程图。Figure 5 is a flow diagram of an embodiment of step 250 in the corresponding embodiment of Figure 2.
图6是根据一示例性实施例示出的另一种表情图片生成方法的流程图。FIG. 6 is a flowchart of another method for generating an emoticon according to an exemplary embodiment.
图7是根据一示例性实施例示出的另一种表情图片生成方法的流程图。FIG. 7 is a flowchart of another method for generating an emoticon according to an exemplary embodiment.
图8是根据一示例性实施例示出的另一种表情图片生成方法的流程图。FIG. 8 is a flowchart of another method for generating an emoticon according to an exemplary embodiment.
图9是一应用场景中一种表情图片生成方法的具体实现示意图。FIG. 9 is a schematic diagram of a specific implementation of an expression image generation method in an application scenario.
图10是另一应用场景中一种表情图片生成方法的具体实现示意图。FIG. 10 is a schematic diagram of a specific implementation of an expression image generating method in another application scenario.
图11是另一应用场景中一种表情图片生成方法的具体实现示意图。FIG. 11 is a schematic diagram of a specific implementation of an expression image generating method in another application scenario.
图12是另一应用场景中一种表情图片生成方法的具体实现示意图。FIG. 12 is a schematic diagram of a specific implementation of an expression image generating method in another application scenario.
图13为图9至图12应用场景中一种表情图片生成方法的具体流程示意图。FIG. 13 is a schematic flowchart of an emoticon image generation method in the application scenarios of FIG. 9 to FIG. 12.
图14为图9至图12应用场景中一种表情图片生成方法中人脸识别以及表情图片合成的具体流程示意图。FIG. 14 is a schematic flowchart of face recognition and emoticon image synthesis in an emoticon image generation method in the application scenarios of FIG. 9 to FIG. 12.
图15是根据一示例性实施例示出的一种表情图片生成装置的框图。FIG. 15 is a block diagram of an emoticon image generating apparatus according to an exemplary embodiment.
图16是根据另一示例性实施例示出的一种表情图片生成装置的框图。FIG. 16 is a block diagram of an emoticon image generating apparatus according to another exemplary embodiment.
通过上述附图，已示出本申请明确的实施例，后文中将有更详细的描述，这些附图和文字描述并不是为了通过任何方式限制本申请构思的范围，而是通过参考特定实施例为本领域技术人员说明本申请的概念。The above drawings show specific embodiments of the present application, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the concept of the present application in any way, but to explain the concept of the present application to those skilled in the art by reference to particular embodiments.
具体实施方式Detailed description
这里将详细地对示例性实施例执行说明，其示例表示在附图中。下面的描述涉及附图时，除非另有表示，不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反，它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。Exemplary embodiments are described in detail here, and examples thereof are shown in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
请参阅图1,图1是根据一示例性实施例示出的一种电子设备的框图。该电子设备可以为台式计算机、膝上型便携计算机、平板电脑、智能手机、电子书阅读器、图像处理专用电脑设备等。Please refer to FIG. 1. FIG. 1 is a block diagram of an electronic device according to an exemplary embodiment. The electronic device can be a desktop computer, a laptop portable computer, a tablet computer, a smart phone, an e-book reader, a computer device for image processing, or the like.
需要说明的是,该电子设备100只是一个适配于本申请的示例,不能认为是提供了对本申请的使用范围的任何限制。该电子设备100也不能解释为需要依赖于或者必须具有图1中示出的示例性的电子设备100中的一个或者多个组件。It should be noted that the electronic device 100 is only an example that is suitable for the present application, and is not considered to provide any limitation on the scope of use of the present application. The electronic device 100 is also not to be construed as having to rely on or must have one or more of the exemplary electronic devices 100 illustrated in FIG.
如图1所示,电子设备100包括存储器101、存储控制器103、一个或多个(图1中仅示出一个)处理器105、外设接口107、射频模块109、定位模块111、摄像模块113、音频模块115、触控屏幕117以及按键模块119。这些组件通过一条或多条通讯总线/信号线121相互通讯。As shown in FIG. 1, the electronic device 100 includes a memory 101, a memory controller 103, one or more (only one shown in FIG. 1) processor 105, a peripheral interface 107, a radio frequency module 109, a positioning module 111, and a camera module. 113. The audio module 115, the touch screen 117, and the button module 119. These components communicate with one another via one or more communication bus/signal lines 121.
其中,存储器101可用于存储软件程序以及模块,如本申请示例性实施例中的表情图片生成方法及装置对应的程序指令及模块,处理器105通过运行存储在存储器101内的程序指令,从而执行各种功能以及数据处理,即实现表情图片生成方法。The memory 101 can be used to store software programs and modules, such as the emoticon image generating method and device corresponding to the program instructions and modules in the exemplary embodiment of the present application, and the processor 105 executes by executing the program instructions stored in the memory 101. Various functions and data processing, that is, the method of generating an expression picture.
存储器101作为资源存储的载体,可以是随机存储介质、例如高速随机存储器、非易失性存储器,如一个或多个磁性存储装置、闪存、或者其它固态存储器。存储方式可以是短暂存储或者永久存储。The memory 101 serves as a carrier for resource storage, and may be a random storage medium such as a high speed random access memory, a nonvolatile memory such as one or more magnetic storage devices, flash memory, or other solid state memory. The storage method can be short-term storage or permanent storage.
外设接口107可以包括至少一有线或无线网络接口、至少一串并联转换接口、至少一输入输出接口以及至少一USB接口等等,用于将外部各种输入/输出装置耦合至存储器101以及处理器105,以实现与外部各种输入/输出装置的通信。The peripheral interface 107 can include at least one wired or wireless network interface, at least one serial-to-parallel conversion interface, at least one input-output interface, and at least one USB interface, etc., for coupling external external input/output devices to the memory 101 and processing The device 105 is configured to communicate with various external input/output devices.
射频模块109用于收发电磁波,实现电磁波与电信号的相互转换,从而通过通讯网络与其他设备进行通讯。通信网络包括蜂窝式电话网、无线局域网或者城域网,上述通信网络可以使用各种通信标准、协议及技术。The radio frequency module 109 is configured to transmit and receive electromagnetic waves, and realize mutual conversion between electromagnetic waves and electric signals, thereby communicating with other devices through a communication network. The communication network includes a cellular telephone network, a wireless local area network, or a metropolitan area network, and the above communication networks can use various communication standards, protocols, and technologies.
定位模块111用于获取电子设备100的当前所在的地理位置。定位模块111的实例包括但不限于全球卫星定位系统(GPS)、基于无线局域网或者移动通信网的定位技术。The positioning module 111 is configured to acquire a geographic location where the electronic device 100 is currently located. Examples of positioning module 111 include, but are not limited to, Global Positioning System (GPS), wireless local area network or mobile communication network based positioning technology.
摄像模块113隶属于摄像头,用于拍摄图片或者视频。拍摄的图片或者视 频可以存储至存储器101内,还可以通过射频模块109发送至上位机。The camera module 113 is attached to the camera for taking pictures or videos. The captured picture or video can be stored in the memory 101, and can also be sent to the host computer through the radio frequency module 109.
音频模块115向用户提供音频接口,其可包括一个或多个麦克风接口、一个或多个扬声器接口以及一个或多个耳机接口。通过音频接口与其它设备进行音频数据的交互。音频数据可以存储至存储器101内,还可以通过射频模块109发送。The audio module 115 provides an audio interface to the user, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more earphone interfaces. Audio data interaction with other devices through the audio interface. The audio data can be stored in the memory 101 and can also be transmitted through the radio frequency module 109.
触控屏幕117在电子设备100与用户之间提供一个输入输出界面。具体地,用户可通过触控屏幕117进行输入操作,例如点击、触摸、滑动等手势操作,以使电子设备对该输入操作进行响应。电子设备100则将文字、图片或者视频任意一种形式或者组合所形成的输出内容通过触控屏幕117向用户显示输出。The touch screen 117 provides an input and output interface between the electronic device 100 and the user. Specifically, the user can perform an input operation, such as a click, a touch, a slide, or the like, through the touch screen 117 to cause the electronic device to respond to the input operation. The electronic device 100 displays the output content formed by any one or combination of text, picture or video to the user through the touch screen 117.
按键模块119包括至少一个按键,用以提供用户向电子设备100进行输入的接口,用户可以通过按下不同的按键使电子设备100执行不同的功能。例如,声音调节按键可供用户实现对电子设备100播放的声音音量的调节。The button module 119 includes at least one button for providing an interface for the user to input to the electronic device 100, and the user can cause the electronic device 100 to perform different functions by pressing different buttons. For example, the sound adjustment button can be used by the user to adjust the volume of the sound played by the electronic device 100.
可以理解,图1所示的结构仅为示意,电子设备100还可包括比图1中所示更多或更少的组件,或者具有与图1所示不同的组件。图1中所示的各组件可以采用硬件、软件或者其组合来实现。It will be understood that the structure shown in FIG. 1 is merely illustrative, and the electronic device 100 may further include more or less components than those shown in FIG. 1, or have components different from those shown in FIG. The components shown in Figure 1 can be implemented in hardware, software, or a combination thereof.
请参阅图2,在一示例性实施例中,以该表情图片生成方法应用于图1所示的电子设备100中为例进行说明,该方法包括以下步骤:Referring to FIG. 2, in an exemplary embodiment, the method for generating an emoticon is applied to the electronic device 100 shown in FIG. 1 as an example. The method includes the following steps:
步骤210,当页面中的图片包含脸部区域,在页面中显示与所述图片对应的表情图片生成入口。Step 210: When a picture in the page includes a face area, an emoticon image generation entry corresponding to the picture is displayed in the page.
上述页面,相对于电子设备所配置的触控屏幕而言,即是指以触控屏幕为载体向用户展示内容。例如,触控屏幕向用户展示的内容包含图片,此时,页面在进行图片显示。The above page, relative to the touch screen configured by the electronic device, refers to displaying the content to the user by using the touch screen as a carrier. For example, the content displayed by the touch screen to the user includes a picture, and at this time, the page is displayed.
进行图片显示的页面包括但不限于图片浏览器页面、消息页面、图片选择页面中的任意一种。相应地,页面中显示的图片可以是单张图片(例如图片浏览器页面),也可以是多张图片(例如消息页面)。The page for displaying the image includes, but is not limited to, any one of a picture browser page, a message page, and a picture selection page. Accordingly, the picture displayed in the page may be a single picture (such as a picture browser page) or multiple pictures (such as a message page).
其中,消息页面用于连续发布消息,对应于可连续发布消息的应用,例如社交应用、即时通讯应用。该消息包括文字消息、图片消息、图文消息等。应当说明的是,文字消息中仅包含文字,图片消息中仅包含图片,而图文消息是指图片和文字混排形成的消息。The message page is used for continuously publishing messages, and corresponds to an application that can continuously publish messages, such as a social application and an instant messaging application. The message includes a text message, a picture message, a graphic message, and the like. It should be noted that the text message only contains text, the picture message only contains the picture, and the picture message refers to the message formed by the picture and the text mixed.
可选地,上述表情图片生成入口用于生成与上述图片对应的表情图片,其 中,该表情图片,与页面中显示的图片相关,是以图片中的全部内容作为背景的图片,或者仅以图片中包含的脸部区域作为的表情的图片。其中,图片中的脸部区域可以是人的脸部区域,还可以是动物的脸部区域,也可以是卡通动漫等人物形象的脸部区域,在此不进行限定。Optionally, the emoticon image generation entry is used to generate an emoticon image corresponding to the image, where the emoticon image is related to the image displayed in the webpage, and is a photo with the entire content in the image as the background, or only the image A picture of the expression contained in the face area. The face area in the picture may be a face area of a person, a face area of an animal, or a face area of a character such as a cartoon or anime, which is not limited herein.
所述脸部区域可以通过生物识别技术进行检测来获取,生物识别技术包括但不限于人脸识别、动物面部识别等等。The facial region can be acquired by detecting by biometric technology, including but not limited to face recognition, animal facial recognition, and the like.
由此,当页面中显示了图片且该图片被检测到脸部区域,电子设备便可执行为该图片生成相关表情图片的过程。Thus, when a picture is displayed in the page and the picture is detected in the face area, the electronic device can perform a process of generating a related expression picture for the picture.
可选地,该表情图片生成入口,是电子设备为执行表情图片生成过程而向用户提供的入口,也就是说,用户可通过在该表情图片生成入口触发的相关操作促使电子设备为图片生成相关的表情图片。Optionally, the emoticon generation portal is an entry provided by the electronic device to the user for performing the emoticon generation process, that is, the user may cause the electronic device to generate a correlation for the image by performing related operations triggered by the emoticon generation portal. Emoticon picture.
可选地,该表情图片生成入口,是所述页面对应的应用程序所提供的用于生成与上述图片对应的表情图片的入口。Optionally, the emoticon image generation portal is an entry provided by an application corresponding to the page for generating an emoticon image corresponding to the image.
具体地,表情图片生成入口根据图片中包含的脸部区域自动触发形成于页面中,即当页面的图片包含脸部区域,则表情图片生成入口自动触发形成在该页面中。Specifically, the emoticon image generation portal is automatically triggered to be formed in the page according to the facial region included in the image, that is, when the image of the page includes the facial region, the emoticon image generation portal is automatically triggered to be formed in the webpage.
例如，当消息页面中包含图片消息时，则对图片消息中的图片进行人脸识别，以此检测图片中的脸部区域，进而为图片自动显示出相关联的表情图片生成入口，而文字消息和图文消息则未能自动显示出表情图片生成入口。For example, when the message page contains a picture message, face recognition is performed on the picture in the picture message to detect the face area in the picture, and an associated emoticon image generation entry is then automatically displayed for the picture, whereas text messages and graphic messages do not automatically display an emoticon image generation entry.
由此,用户仅需要在显示出的表情图片生成入口触发相关操作,便可由电子设备完成后续表情图片生成的一系列操作,大大地简化了现有技术中用户过于繁琐的操作。Therefore, the user only needs to generate an entry trigger related operation on the displayed emoticon image, and the electronic device can complete a series of operations of the subsequent emoticon image generation, which greatly simplifies the excessively cumbersome operation of the user in the prior art.
需要说明的是,表情图片生成入口与页面中显示的图片相关联,是指页面中显示的每一个图片均能够形成一个表情图片生成入口,如图3所示出的虚拟图标①和④。相应地,在通过该表情图片生成入口执行的表情图片生成过程中,表情图片与该表情图片生成入口相关联的图片有关。It should be noted that the emoticon image generation entry is associated with the image displayed in the page, which means that each image displayed in the page can form an emoticon image generation portal, such as the virtual icons 1 and 4 shown in FIG. Correspondingly, in the emoticon image generation process performed by the emoticon image generation portal, the emoticon image is related to the image associated with the emoticon image generation portal.
步骤230,接收对表情图片生成入口触发的生成表情图片操作。Step 230: Receive an generated emoticon picture operation triggered by an emoticon image generation entry.
可选地,该生成表情图片操作用于根据图片生成表情图片。在表情图片生成入口显示之后,通过用户在该表情图片生成入口触发的生成表情图片操作便能够使得电子设备获知用户欲为页面中显示的图片生成相关的表情图片,进而 为页面中显示的图片执行生成表情图片的过程。Optionally, the generating an emoticon picture operation is used to generate an emoticon picture according to the picture. After the emoticon image generation entry is displayed, the user generates an emoticon image triggered by the emoticon image generation portal to enable the electronic device to know that the user wants to generate a related emoticon image for the image displayed in the page, thereby performing the image displayed on the page. The process of generating an emoticon.
举例来说,在进行图片显示的消息页面中,如图3所示,该表情图片生成入口可以是消息页面中的虚拟图标①,并且该虚拟图标①与消息页面中图片消息相关联。用户通过点击该虚拟图标①为图片消息中的图片生成相关的表情图片,该点击操作即为表情图片生成入口触发的生成表情图片操作For example, in the message page for displaying the picture, as shown in FIG. 3, the expression picture generation entry may be a virtual icon 1 in the message page, and the virtual icon 1 is associated with the picture message in the message page. The user generates a related emoticon picture for the picture in the picture message by clicking the virtual icon 1 , and the click operation is an generated emoticon picture operation triggered by the emoticon image generation entry.
在表情图片生成入口被触发后,电子设备便能够侦听得到生成表情图片操作,并以此执行后续的图片合成过程。After the emoticon image generation entry is triggered, the electronic device can listen to the generated emoticon image operation and perform subsequent image synthesizing processes.
步骤250,根据生成表情图片操作将图片中的脸部区域合成至待合成背景图片,生成与图片相关的表情图片。Step 250: Synthesize a face region in the image to a background image to be synthesized according to the generated emoticon image operation, and generate an emoticon image related to the image.
其中,待合成背景图片可以由预设背景图片素材库中的背景图片素材随机生成,还可以由用户指定的本地存储中的图片随机生成,也可以是默认的一张固定图片,即每次表情图片生成过程中均将该固定图片设置为待合成背景图片。相应地,待合成背景图片的获取中,可以是在执行步骤250之前就预先完成的,也可以是在执行步骤250过程中完成的,在此并未加以限定。The background image to be synthesized may be randomly generated by the background image material in the preset background image material library, or may be randomly generated by the image in the local storage specified by the user, or may be a default fixed image, that is, each expression The fixed picture is set to the background image to be synthesized during the image generation process. Correspondingly, the acquisition of the background image to be synthesized may be completed before the execution of step 250, or may be completed during the execution of step 250, which is not limited herein.
在获得图片中的脸部区域与待合成背景图片之后,便可进行图片合成过程。After obtaining the face area in the picture and the background picture to be synthesized, the picture synthesis process can be performed.
具体地，创建一张空白图片，依次置入进行图片大小匹配处理的待合成背景图片和图片中的脸部区域，同时，为了进一步地保证二者的合成效果，将图片中脸部区域对应在待合成背景图片上的区域进行高斯模糊处理（英文：Gaussian Blur），其中，高斯模糊处理又可称为高斯平滑处理，是用于减少图像噪声以及降低图像细节层次的图像处理技术，由此生成表情图片。Specifically, a blank picture is created, and the size-matched background picture to be synthesized and the face area of the picture are placed into it in turn. At the same time, to further guarantee the quality of the composition, Gaussian blur is applied to the region of the background picture to be synthesized that corresponds to the face area of the picture. Gaussian blur, also known as Gaussian smoothing, is an image processing technique used to reduce image noise and reduce the level of image detail. The emoticon image is thereby generated.
可以理解,如果待合成背景图片上的该区域恰好也包含一脸部区域,则待合成背景图片中的脸部区域可能会对图片中的脸部区域产生影响,进而影响图片中脸部区域与待合成背景图片的合成效果。It can be understood that if the area on the background image to be synthesized also includes a face area, the face area in the background image to be synthesized may affect the face area in the picture, thereby affecting the face area in the picture and The composite effect of the background image to be synthesized.
由此，为了使图片中脸部区域与待合成背景图片融合的更为自然，在对待合成背景图片上的该区域进行高斯模糊处理之前，还将针对待合成背景图片中的脸部区域进行主色调填充。Therefore, in order to blend the face area of the picture more naturally with the background picture to be synthesized, before the Gaussian blur is applied to that region of the background picture, the face area in the background picture to be synthesized is additionally filled with its dominant tone.
其中,用于填充的主色调是指待合成背景图片中脸部区域去除五官之外的剩余脸部区域的主色调。该主色调与剩余脸部区域的饱和度、剩余脸部区域的平均颜色、以及剩余脸部区域的灰度分布有关。The main color used for the filling refers to the main color of the remaining facial area except the facial features in the facial region to be synthesized in the background image. The primary color tone is related to the saturation of the remaining face regions, the average color of the remaining face regions, and the grayscale distribution of the remaining face regions.
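As a concrete illustration of the compositing described above, the sketch below blurs the region of the background picture that the face area will cover and, when that region itself contains a face, first fills it with an approximate dominant tone. This is a hypothetical Pillow sketch: the function name, the blur radius and the use of a simple mean colour as the dominant tone are assumptions rather than details taken from the application.

```python
from PIL import Image, ImageFilter, ImageStat

def prepare_background_region(background, region_box, fill_dominant_tone=False):
    """Blur (and optionally tone-fill) the region of the background picture
    onto which the face area of the source picture will be composited."""
    region = background.crop(region_box)

    if fill_dominant_tone:
        # Approximate the dominant tone as the mean colour of the region.
        # (The embodiment relates the tone to saturation, average colour and
        # grayscale distribution; a plain mean is used here as a stand-in.)
        r, g, b = ImageStat.Stat(region.convert("RGB")).mean
        region = Image.new("RGB", region.size, (int(r), int(g), int(b)))

    # Gaussian blur reduces noise and detail in the covered region so that
    # the pasted face area blends in more naturally.
    region = region.convert("RGB").filter(ImageFilter.GaussianBlur(radius=4))
    background.paste(region.convert(background.mode), region_box[:2])
    return background
```

In a full implementation the caller would pass, as region_box, the face bounding box projected onto the background picture and would then composite the processed face area on top of the prepared region.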
更进一步地,待表情图片生成,表情图片可进行本地存储,以供后续用户 使用,还可以进行显示以供用户预览,又或者,以表情图片形成新的图片消息发布在消息页面中。Further, after the emoticon image is generated, the emoticon image can be stored locally for use by subsequent users, and can also be displayed for preview by the user, or a new photo message formed by the emoticon image is posted in the message page.
为用户执行表情图片预览中,生成的表情图片可以与页面中显示的图片同时显示在该页面中,还可以显示在另外的新页面中。In the emoticon image preview for the user, the generated emoticon image can be displayed on the page simultaneously with the image displayed on the page, and can also be displayed in another new page.
例如,以消息页面作为显示图片的页面,如图3所示,表情图片与图片消息中的图片并排显示在消息页面中,用户可通过预览显示的表情图片确认该表情图片是否符合需求。For example, the message page is used as the page for displaying the picture. As shown in FIG. 3, the picture in the expression picture and the picture message are displayed side by side in the message page, and the user can confirm whether the expression picture meets the requirement by previewing the displayed picture.
如果表情图片符合需求,即用户满意该表情图片,用户便可通过点击该表情图片使得该表情图片形成新的图片消息发布在消息页面中,如图4所示。此时,电子设备还将旧的图片消息中图片相关联的表情图片生成入口取消显示在消息页面中。If the emoticon image meets the requirement, that is, the user is satisfied with the emoticon image, the user can click the emoticon image to cause the emoticon image to form a new photo message to be posted in the message page, as shown in FIG. At this time, the electronic device also cancels the display of the emoticon image associated with the picture in the old picture message in the message page.
如果表情图片不符合需求,即用户不满意该表情图片,则进一步地,表情图片预览过程中还将为用户提供图片编辑开启入口和背景图片更换入口,以通过用户在该图片编辑开启入口触发的相关操作启动对表情图片执行图片编辑处理过程,或者,通过用户在该背景图片更换入口触发的相关操作为表情图片更换其中的背景,直至用户满意。If the emoticon image does not meet the requirement, that is, the user is dissatisfied with the emoticon image, further, the emoticon image previewing process further provides the user with a photo editing open entry and a background image replacement entry to be triggered by the user opening the portal at the photo editing. The related operation starts a picture editing process on the emoticon image, or replaces the background of the emoticon image by the related operation triggered by the user at the background image replacement entry until the user is satisfied.
如图3所示,在消息页面中,该图片编辑开启入口②和背景图片更换入口③以虚拟图标的形式显示在表情图片的一侧。As shown in FIG. 3, in the message page, the picture editing open entry 2 and the background picture replacement entry 3 are displayed on the side of the expression picture in the form of a virtual icon.
当然,在其他实施例中,表情图片还可以显示在另外的新页面中,例如,新页面为图片编辑页面,此时,除了在该图片编辑页面中预览表情图片,用户还可以直接在该图片编辑页面上进行图片编辑处理过程。Of course, in other embodiments, the emoticon image can also be displayed in another new page. For example, the new page is a photo editing page. In this case, in addition to previewing the emoticon image in the photo editing page, the user can directly view the image. The image editing process is performed on the edit page.
通过如上所述的过程,在图片显示的场景中,通过表情图片生成入口的自动触发形成引导用户为页面中显示的图片生成相关的表情图片,而并非局限于依赖用户主动进行的表情图片生成,以此实现表情图片的自动生成。Through the process as described above, in the scene displayed by the image, the automatic triggering of the expression image generation portal forms a guide image for the user to generate a related image for the image displayed in the page, and is not limited to relying on the user to actively generate the expression image. In this way, automatic generation of emoticons is achieved.
此外,表情图片自动生成过程中,用户不需要离开显示图片的页面,即可简单便捷地完成表情图片生成过程,有效地减少了用户的操作步骤,进而有效地提高了表情图片生成效率。In addition, during the automatic generation process of the expression image, the user can complete the process of generating the expression image simply and conveniently without leaving the page displaying the image, thereby effectively reducing the operation steps of the user, thereby effectively improving the efficiency of generating the expression image.
在一示例性实施例中,页面包括可连续发布消息的消息页面,相应地,步骤210之前,如上所述的方法还可以包括以下步骤:In an exemplary embodiment, the page includes a message page that can continuously publish a message, and correspondingly, before step 210, the method as described above may further include the following steps:
在消息页面中进行图片消息检测。Perform picture message detection in the message page.
图片消息检测即是指按照预设规则检测消息页面中是否包含图片消息。该预设规则可以根据具体应用场景灵活地调整。Picture message detection refers to detecting whether a picture message is included in a message page according to a preset rule. The preset rule can be flexibly adjusted according to a specific application scenario.
可选地，对实时性要求较高的应用场景中，预设规则包括预设时间周期，即按照预设时间周期在消息页面中进行图片消息检测，当预设时间周期内消息页面中包括图片消息，则对消息页面所包括的图片消息中的图片进行人脸识别，得到脸部区域，一旦图片消息超过了预设时间周期的范围，则视为图片消息的实时性过低，进而不执行后续相关的表情图片生成过程。又或者，在斗图应用场景中，预设时间包括图片消息数量，即检测消息页面是否包括至少两条图片消息，也就是说，只有当检测到消息页面至少包括两条图片消息，才视为用户有与对方进行斗图的欲望，进而针对消息页面中包括的所有图片消息执行后续相关的表情图片生成过程。Optionally, in application scenarios with high real-time requirements, the preset rule includes a preset time period, that is, picture message detection is performed in the message page according to the preset time period; when the message page includes a picture message within the preset time period, face recognition is performed on the picture in the picture message included in the message page to obtain the face area. Once a picture message falls outside the preset time period, its timeliness is considered too low and the subsequent emoticon image generation process is not executed. Alternatively, in an emoticon battle ("斗图") scenario, the preset rule includes a number of picture messages, that is, it is detected whether the message page includes at least two picture messages; in other words, only when the message page is detected to include at least two picture messages is the user considered to have the desire to engage in an emoticon battle with the other party, and the subsequent emoticon image generation process is then performed for all picture messages included in the message page.
可选地,该预设规则还可以包括预设消息发布数量,即按照预设消息发布数量对连续发布的消息进行图片消息检测。可选地,当预设消息发布数量内连续发布的消息包括图片消息时,则对连续发布消息所包括的图片消息中的图片进行人脸识别,以检测所述图片中的脸部区域;或,当预设消息发布数量内连续发布的消息中包至少两条图片消息时,对连续发布消息所包括的图片消息中的图片进行人脸识别,以检测所述图片中的脸部区域。Optionally, the preset rule may further include a preset message publishing quantity, that is, performing a picture message detection on the continuously published message according to the preset message publishing quantity. Optionally, when the continuously published message in the preset message release quantity includes a picture message, performing face recognition on the picture in the picture message included in the continuous release message to detect a face area in the picture; or When at least two picture messages are included in the continuously published message in the preset number of published messages, face recognition is performed on the pictures in the picture message included in the continuous release message to detect the face area in the picture.
值得注意的是,上述检测图片中的脸部区域,可以是检测图片中是否包括脸部区域,也可以是直接得到图片中的脸部区域。It should be noted that the detection of the face area in the picture may be to detect whether the face area is included in the picture, or directly obtain the face area in the picture.
此外,预设规则还可以根据具体应用场景进行组合,例如,预设规则同时包括预设时间周期和图片数量,即按照预设时间周期内消息页面中包含的图片消息数量进行图片消息检测。In addition, the preset rule may be combined according to a specific application scenario. For example, the preset rule includes a preset time period and a number of pictures, that is, the picture message is detected according to the number of picture messages included in the message page in the preset time period.
举例来说,在预设时间周期内检测消息页面中是否包含图片消息。其中,预设时间周期可以根据具体应用场景灵活地调整,例如,预设时间周期为2分钟。For example, it is detected whether a picture message is included in a message page within a preset time period. The preset time period can be flexibly adjusted according to a specific application scenario. For example, the preset time period is 2 minutes.
如果检测到预设时间周期内消息页面中包含图片消息,则为图片消息中的图片自动触发形成相关联的表情图片生成入口。具体而言,将表情图片生成入口以虚拟图标①的形式显示在图片消息中图片的一侧,如图3所示。If a picture message is included in the message page within the preset time period, the picture in the picture message is automatically triggered to form an associated expression picture generation entry. Specifically, the emoticon image generation portal is displayed in the form of the virtual icon 1 on one side of the picture in the picture message, as shown in FIG.
反之，如果检测到预设时间周期内消息页面中未包含图片消息，则可以继续按照预设时间周期在消息页面中进行图片消息检测，还可以进一步地按照其他预设规则在消息页面中进行图片消息检测。On the other hand, if it is detected that the message page contains no picture message within the preset time period, picture message detection may continue to be performed in the message page according to the preset time period, or picture message detection may further be performed in the message page according to other preset rules.
进一步地,在一示例性实施例中,步骤210之后,如上所述的方法还可以包括以下步骤:Further, in an exemplary embodiment, after step 210, the method as described above may further include the following steps:
当检测到预设时间周期内消息页面中未包含图片消息,或者,当消息页面中未包含至少两条图片消息,则按照预设消息发布数量对连续发布的消息进行图片消息检测。When it is detected that the message page does not include the picture message in the preset time period, or when the message page does not include at least two picture messages, the picture message detection is performed on the continuously published message according to the preset message release quantity.
此处，预设消息发布数量并非针对消息页面，而是针对连续发布的消息，也就是说，图片消息检测是针对连续发布的预设消息发布数量的消息进行的。其中，预设消息发布数量可以根据具体应用场景灵活地调整，例如，预设消息发布数量为5，相应地，图片消息检测是针对连续发布的5条消息进行。Here, the preset number of published messages does not refer to the message page but to the continuously published messages; that is, picture message detection is performed on the preset number of consecutively published messages. The preset number of published messages can be flexibly adjusted according to the specific application scenario; for example, if the preset number is 5, picture message detection is performed on the 5 consecutively published messages.
如果检测到在预设发布数量内连续发布的消息中包含图片消息,则为图片消息中的图片自动触发形成相关联的表情图片生成入口。If it is detected that the message is continuously published in the preset number of publications, the picture in the picture message is automatically triggered to form an associated expression picture generation entry.
反之,如果检测到在预设发布数量内连续发布的消息中未包含图片消息,则返回按照预设时间周期在消息页面中进行图片消息检测的步骤。On the other hand, if it is detected that the message continuously published in the preset number of publications does not include the picture message, the step of performing picture message detection in the message page according to the preset time period is returned.
优选地，在一具体实施例中，表情图片生成入口的自动触发形成依赖于在预设时间周期内消息页面中被检测到包含至少两条图片消息，并进一步依赖于如果上述预设规则不满足，则按照预设消息发布数量检测连续发布的消息中是否包含至少两条图片消息，由此，电子设备将认为用户有通过表情图片进行某种消息传达的欲望，以此充分地保证了表情图片生成入口被触发的概率，例如，适用于斗图应用场景中。Preferably, in a specific embodiment, the automatic triggering of the emoticon image generation entry depends on at least two picture messages being detected in the message page within the preset time period and, further, if that rule is not satisfied, on whether at least two picture messages are contained in the consecutively published messages within the preset number of published messages. The electronic device then considers that the user wishes to convey some message through emoticon images, which sufficiently guarantees the probability that the emoticon image generation entry is triggered, for example in an emoticon battle ("斗图") scenario.
在上述实施例的配合下,通过预设时间周期、图片消息数量、和预设消息发布数量等预设规则的设置,为表情图片生成入口的自动触发提供了充分的依据,避免电子设备执行不必要的处理任务,有利于提高电子设备的处理效率。With the cooperation of the foregoing embodiment, the setting of the preset rule, such as the preset time period, the number of picture messages, and the number of preset message releases, provides a sufficient basis for automatic triggering of the expression image generation entry, thereby avoiding the execution of the electronic device. The necessary processing tasks are conducive to improving the processing efficiency of electronic devices.
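The preset rules described in the preceding paragraphs can be expressed as a small decision helper. The sketch below is one hypothetical combination of those rules in Python (time window first, then a fallback on the number of consecutively published messages); the thresholds, field names and data layout are illustrative assumptions.

```python
import time

# Illustrative thresholds; the application leaves the concrete values open.
PRESET_PERIOD_SECONDS = 120      # "preset time period", e.g. 2 minutes
PRESET_PUBLISH_COUNT = 5         # "preset number of published messages"
MIN_PICTURE_MESSAGES = 2         # e.g. the emoticon battle scenario

def should_show_generation_entry(messages, now=None):
    """messages: list of dicts like {"type": "picture", "timestamp": ...},
    oldest first. Returns True if the emoticon image generation entry
    should be triggered for the picture messages found."""
    now = now or time.time()

    # Rule 1: enough picture messages inside the preset time period.
    recent = [m for m in messages if now - m["timestamp"] <= PRESET_PERIOD_SECONDS]
    if sum(m["type"] == "picture" for m in recent) >= MIN_PICTURE_MESSAGES:
        return True

    # Rule 2 (fallback): picture messages among the last N published messages.
    last_n = messages[-PRESET_PUBLISH_COUNT:]
    return sum(m["type"] == "picture" for m in last_n) >= MIN_PICTURE_MESSAGES
```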
进一步地,为了提高表情图片生成入口被触发的概率,还将针对页面中显示的图片进行人脸识别,以检测该图片中的脸部区域,进而由图片中被检测到的脸部区域自动触发形成表情图片生成入口,以此保证了表情图片是与图片中的脸部区域相关的。此处,页面包括消息页面、图片浏览器页面、图片选择页面或者其它可进行图片显示的页面。Further, in order to improve the probability that the emoticon image generation entry is triggered, the face image displayed in the page is also subjected to face recognition to detect the face region in the image, and then automatically triggered by the detected face region in the image. An emoticon image generation portal is formed to ensure that the emoticon image is related to the facial region in the image. Here, the page includes a message page, a picture browser page, a picture selection page, or other page that can display the picture.
一方面,如果该图片中检测到脸部区域,则以该图片中的脸部区域作为表情图片的表情,即由该图片中抠出该脸部区域并合成至待合成背景图片,此时,待合成背景图片可以由预设待合成背景图片素材库中的待合成背景图片素材随机生成,可以由用户指定背景图片素材库中的待合成背景图片素材生成,还可以由用户指定的本地存储中的图片生成,或者,以默认的一张固定图片进行设置。On the one hand, if a face area is detected in the picture, the face area in the picture is used as an expression of the expression picture, that is, the face area is extracted from the picture and synthesized into a background picture to be synthesized. The background image to be synthesized may be randomly generated by the background image material to be synthesized in the background image material library to be synthesized, and may be generated by the user to specify the background image material to be synthesized in the background image material library, or may be locally stored by the user. The image is generated, or, set with a default fixed image.
进一步地,为了保证图片中脸部区域与待合成背景图片的合成效果,将对图片中的脸部区域进行图像增强处理,该图像增强处理包括但不限于:黑白化处理、色阶调整处理、边缘羽化处理等等,以此提高图片中脸部区域与待合成背景图片的合成效果,进而提升用户的表情图片生成体验。Further, in order to ensure the composite effect of the face region in the picture and the background image to be combined, the image enhancement processing is performed on the face region in the image, and the image enhancement processing includes, but is not limited to, black and white processing, color gradation adjustment processing, Edge feathering and the like, thereby improving the composite effect of the face region and the background image to be synthesized in the picture, thereby improving the user's expression image generation experience.
更进一步地,人脸识别和图像增强处理由应用中内嵌的图像处理插件实施,以方便用户为页面中显示的图片生成相关的表情图片,进而有利于提高表情图片生成效率。Further, the face recognition and image enhancement processing is implemented by an image processing plug-in embedded in the application, so that the user can generate relevant expression images for the pictures displayed on the page, thereby facilitating the improvement of the generation efficiency of the expression images.
另一方面,如果该图片中未检测到脸部区域,将无法自动触发形成表情图片生成入口,此时,用户还可以通过预先设置的表情图片生成入口为该图片执行表情图片生成过程。On the other hand, if the face area is not detected in the picture, the formation of the expression picture generation entry cannot be automatically triggered. At this time, the user can also perform the expression picture generation process for the picture through the preset expression picture generation entry.
例如,进行图片显示的页面是图片浏览器页面,该图片浏览器页面中预先设置有一表情图片生成入口,假设该图片浏览器页面中显示的图片未检测到脸部区域,当用户在该表情图片生成入口触发相关操作,则以该显示的图片作为表情图片的背景,即以该图片中的全部内容作为待合成背景图片,相应地,将一张空白图片作为脸部区域合成至待合成背景图片,以此为用户自动生成表情图片。For example, the page for displaying the image is a picture browser page, and the image browser page is preset with an expression image generation entry, and it is assumed that the picture displayed in the picture browser page does not detect the face area, and when the user is in the expression picture When the portal trigger related operation is generated, the displayed image is used as the background of the emoticon image, that is, the entire content in the image is used as the background image to be synthesized, and accordingly, a blank image is synthesized as the facial region to the background image to be synthesized. In order to automatically generate an emoticon image for the user.
进一步地,生成的表情图片将显示在另外的新页面中,例如,新页面为图片编辑页面,进而使得用户能够在该图片编辑页面中预览表情图片并进行图片编辑处理。Further, the generated emoticon image will be displayed in another new page, for example, the new page is a photo editing page, thereby enabling the user to preview the emoticon image and perform image editing processing in the photo editing page.
通过如此设置,无论页面中显示的图片是否能够检测到脸部区域,电子设备均可以根据表情图片生成入口被触发而为用户自动生成表情图片,大大提高了表情图片生成过程的适用性和兼容性。By setting in this way, regardless of whether the picture displayed on the page can detect the face area, the electronic device can automatically generate an expression picture for the user according to the expression picture generation entry, which greatly improves the applicability and compatibility of the expression picture generation process. .
需要说明的是,如果进行图片显示的页面中预先设置了表情图片生成入口,则可以不必显示另一表情图片生成入口,由此,表情图片生成入口的形成方式 可以根据应用场景进行灵活地调整,例如,消息页面中可以自动触发形成表情图片生成入口,还可以在消息页面中弹出的上拉会话框中再设置一表情图片生成入口,图片浏览器页面中则仅预先设置一表情图片生成入口,以此满足不同用户的不同需求,有利于提升用户的表情图片生成体验。It should be noted that, if the emoticon image generation entry is preset in the page for displaying the image, the other emoticon image generation entry may not be displayed. Therefore, the manner of forming the emoticon image generation portal may be flexibly adjusted according to the application scenario. For example, the message page may be automatically triggered to form an expression image generation entry, and an expression picture generation entry may be further set in the pull-up session box popped up in the message page, and only one expression picture generation entry is preset in the picture browser page. In order to meet the different needs of different users, it is beneficial to enhance the user's expression picture generation experience.
请参阅图5,在一示例性实施例中,图片中的脸部区域是通过人脸识别检测到的,则在将脸部区域合成至待合成背景图片并生成表情图片时,首先根据生成表情图片操作,通过脸部特征提取在图片中确定脸部区域,根据该脸部区域对图片进行裁剪,得到脸部区域图片,并对该脸部区域图片进行图像增强处理,得到人脸图片,最后该人脸图片合成至待合成背景图片,生成与图片相关的表情图片。相应地,步骤250根据生成表情图片操作将图片中的脸部区域合成至待合成背景图片可以包括以下步骤:Referring to FIG. 5, in an exemplary embodiment, a face region in a picture is detected by face recognition, and when a face region is synthesized to a background image to be synthesized and an expression image is generated, the generated expression is first generated according to The picture operation determines the face area in the picture by facial feature extraction, crops the picture according to the face area, obtains a face area picture, and performs image enhancement processing on the face area picture to obtain a face picture, and finally The face image is synthesized to the background image to be synthesized, and an image corresponding to the image is generated. Correspondingly, the step 250 synthesizing the face region in the picture to the background image to be synthesized according to the generating the emoticon picture operation may include the following steps:
步骤251,通过脸部特征提取在图片中确定脸部区域。 Step 251, determining a face region in the picture by facial feature extraction.
可选地,根据生成表情图片操作,通过脸部特征提取在图片中确定脸部区域。该脸部特征包括脸部轮廓、左眼、右眼、左眉、右眉、鼻子、嘴巴等,相应地,由脸部特征提取所确定的脸部区域即是包括脸部轮廓区域、左眼区域、右眼区域、左眉区域、右眉区域、鼻子区域、嘴巴区域等。Optionally, the facial region is determined in the picture by facial feature extraction according to the generated emoticon image operation. The facial features include a facial contour, a left eye, a right eye, a left eyebrow, a right eyebrow, a nose, a mouth, and the like. Accordingly, the face region determined by the facial feature extraction includes a facial contour region and a left eye. Area, right eye area, left eye area, right eye area, nose area, mouth area, etc.
可选地,图片中的各区域通过坐标值进行标识,即,图片中确定出的脸部区域将以坐标值的形式唯一地表示,以供后续按照脸部区域对应的坐标值裁剪图片得到脸部区域图片。Optionally, each area in the picture is identified by a coordinate value, that is, the determined face area in the picture is uniquely represented in the form of coordinate values, so that the face is subsequently cropped according to the coordinate value corresponding to the face area. Part area picture.
步骤253,根据脸部区域对图片进行裁剪,得到脸部区域图片。In step 253, the picture is cropped according to the face area to obtain a picture of the face area.
可选地,在根据脸部区域对图片进行裁剪时,可以通过步骤251中的坐标值,对图片进行裁剪得到脸部区域。Optionally, when the picture is cropped according to the face area, the picture may be cropped by the coordinate value in step 251 to obtain a face area.
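Steps 251 and 253 can be prototyped with an off-the-shelf detector. The sketch below uses OpenCV's Haar cascade face detector purely as an example; the application does not prescribe a particular facial feature extraction algorithm, and the detector choice, the margin and the parameter values here are assumptions.

```python
import cv2

def detect_and_crop_face(picture_path, margin=0.15):
    """Return the first detected face area as a cropped image, or None.
    The bounding box stands in for the point coordinates of step 251."""
    image = cv2.imread(picture_path)
    if image is None:
        raise FileNotFoundError(picture_path)

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Step 253: crop the picture according to the face area coordinates,
    # with a small margin so the whole face contour is kept.
    x, y, w, h = faces[0]
    dx, dy = int(w * margin), int(h * margin)
    h_img, w_img = image.shape[:2]
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(w_img, x + w + dx), min(h_img, y + h + dy)
    return image[y0:y1, x0:x1]
```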
步骤255,将脸部区域图片合成至待合成背景图片,生成与图片相关的表情图片。In step 255, the facial region image is synthesized to the background image to be synthesized, and an emoticon image related to the image is generated.
可选地,在将脸部区域图片合成至待合成图片的过程中,首先,可以对脸部区域图片进行图像增强处理,得到人脸图片,再将该人脸图片合成至待合成背景图片,生成表情图片。Optionally, in the process of synthesizing the face region image into the image to be combined, first, the image of the face region may be subjected to image enhancement processing to obtain a face image, and then the face image is synthesized to the background image to be synthesized. Generate an emoticon image.
其中,图像增强处理过程可以包括以下步骤中的至少一个步骤:Wherein, the image enhancement processing process may include at least one of the following steps:
第一步骤,通过计算脸部区域图片的灰度值对脸部区域图片进行黑白化处理。In the first step, the face region picture is black-whitened by calculating the gray value of the face region picture.
脸部区域图片的灰度值与脸部区域图片中各像素点的灰度值以及非透明像素点的个数有关。由此,在计算脸部区域图片的灰度值之前,首先需要得到脸部区域图片中各像素点的灰度值。The gray value of the face area picture is related to the gray value of each pixel point in the face area picture and the number of non-transparent pixel points. Therefore, before calculating the gradation value of the face region picture, it is first necessary to obtain the gradation value of each pixel in the face region picture.
具体地,遍历脸部区域图片中的各像素点,计算相应的灰度值,计算公式如下:Specifically, traversing each pixel in the face region picture, and calculating a corresponding gray value, the calculation formula is as follows:
m=R×0.3+G×0.59+B×0.11。m = R x 0.3 + G x 0.59 + B x 0.11.
其中,m表示像素点的灰度值,R表示该像素点对应的红色值,G表示该像素点对应的绿色值,B表示该像素点对应的蓝色值。Where m represents the gray value of the pixel, R represents the red value corresponding to the pixel, G represents the green value corresponding to the pixel, and B represents the blue value corresponding to the pixel.
在得到脸部区域图片中各像素点的灰度值,便可计算出脸部区域图片的灰度值,计算公式如下:After obtaining the gray value of each pixel in the image of the face region, the gray value of the image of the face region can be calculated, and the calculation formula is as follows:
x = (m_1 + m_2 + … + m_N) / N′
其中，x表示脸部区域图片的灰度值，m_i表示脸部区域图片中第i个像素点的灰度值，N表示脸部区域图片中像素点的总数，N′表示脸部区域图片中非透明像素点的个数。Where x represents the gray value of the face region picture, m_i represents the gray value of the i-th pixel in the face region picture, N represents the total number of pixels in the face region picture, and N′ represents the number of non-transparent pixels in the face region picture.
由于RGB色值范围为0~255,相应地,像素点的灰度值以及脸部区域图片的灰度值范围仍为0~255,其中,0表示黑色,255表示白色,0至255之间表示相应的灰色。Since the RGB color value ranges from 0 to 255, correspondingly, the gray value of the pixel and the gray value range of the face region picture are still 0 to 255, wherein 0 represents black, 255 represents white, and between 0 and 255. Indicates the corresponding gray.
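The two gray-value formulas above translate directly into code. The sketch below, assuming an RGBA face region picture, computes the per-pixel gray value m = R×0.3 + G×0.59 + B×0.11 and then divides the summed gray values by the number of non-transparent pixels N′ to obtain the gray value x of the face region picture; the function name and return shape are illustrative choices.

```python
from PIL import Image

def face_region_gray_value(face_region_path):
    """Return (grayscale_image, x), where x is the gray value of the
    face region picture as defined by the formulas above."""
    img = Image.open(face_region_path).convert("RGBA")

    gray = []
    total, opaque = 0.0, 0
    for r, g, b, a in img.getdata():
        m = r * 0.3 + g * 0.59 + b * 0.11   # gray value of one pixel
        gray.append(int(m))
        total += m                          # summed over all N pixels
        if a > 0:
            opaque += 1                     # N': non-transparent pixels

    x = total / opaque if opaque else 0.0   # x = (m_1 + ... + m_N) / N'
    gray_img = Image.new("L", img.size)
    gray_img.putdata(gray)
    return gray_img, x
```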
进一步地,黑白化处理可以调用系统预置的灰度处理插件实施,也可以通过应用中内嵌的图像处理插件实施。Further, the black and white processing may be implemented by calling a grayscale processing plug-in preset by the system, or may be implemented by an image processing plug-in embedded in the application.
第二步骤,按照灰度值与色阶调整参数之间的对应关系得到脸部区域图片的灰度值对应的色阶调整参数,并按照色阶调整参数对脸部区域图片进行色阶调整,得到中间结果图片。In the second step, the gradation adjustment parameter corresponding to the gradation value of the face region picture is obtained according to the correspondence between the gradation value and the gradation adjustment parameter, and the gradation adjustment is performed on the face region picture according to the gradation adjustment parameter. Get an intermediate result picture.
在完成脸部区域图片的黑白化处理之后,为了确保经黑白化处理的脸部区域图片符合视觉要求,需要对其进行色阶调整。After the black-and-white processing of the face region picture is completed, in order to ensure that the black-and-white-processed face region picture meets the visual requirements, it is necessary to adjust the tone scale.
具体地,根据脸部区域图片的灰度值由灰度值与色阶调整参数之间的对应关系中得到对应的色阶调整参数,按照对应的色阶调整参数对脸部区域图片进行色阶调整。Specifically, according to the gray value of the face region picture, the corresponding color gradation adjustment parameter is obtained from the correspondence between the gray value and the gradation adjustment parameter, and the gradation of the face region image is performed according to the corresponding gradation adjustment parameter. Adjustment.
其中，灰度值与色阶调整参数之间的对应关系是按照视觉要求对海量脸部区域图片的灰度值与不同的色阶调整参数之间关系进行统计形成的，并可以在实际应用中灵活地调整，进而使得后续生成的表情图片所具有的图片效果可以动态变化，以此提升用户的自定义体验。The correspondence between gray values and level adjustment parameters is formed statistically, according to visual requirements, from the relationship between the gray values of a large number of face region pictures and different level adjustment parameters, and it can be flexibly adjusted in practical applications, so that the picture effect of subsequently generated emoticon images can change dynamically, thereby improving the user's customization experience.
需要说明的是,视觉要求是指按照用户对表情图片的审美标准设计的表情图片风格。It should be noted that the visual requirement refers to an emoticon style designed according to the user's aesthetic standard for the emoticon image.
For example, the level adjustment parameters include a black value, a white value and a gamma value.
The correspondence between gray values and level adjustment parameters is shown in Table 1.
Table 1 Correspondence between gray values and level adjustment parameters
Thus, after the gray value of the face region picture has been obtained, the corresponding level adjustment parameters can be determined from Table 1 and then input to the image processing plug-in embedded in the application, thereby implementing the level adjustment of the face region picture.
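As an illustration of how the gray value might be mapped to level adjustment parameters and then applied, consider the following Python sketch. The lookup values stand in for the contents of Table 1, which is not reproduced here, and the levels formula stands in for the embedded image processing plug-in; both are assumptions made for this example rather than values taken from the disclosure.

    import numpy as np

    # Hypothetical stand-in for Table 1: gray-value ranges mapped to
    # (black value, white value, gamma); the real values are design choices.
    LEVEL_TABLE = [
        ((0, 85), (10, 245, 1.10)),
        ((85, 170), (20, 235, 1.00)),
        ((170, 256), (30, 225, 0.90)),
    ]

    def lookup_levels(gray_value):
        # Return (black, white, gamma) for the picture's overall gray value x
        for (low, high), params in LEVEL_TABLE:
            if low <= gray_value < high:
                return params
        return LEVEL_TABLE[-1][1]

    def apply_levels(gray_img, black, white, gamma):
        # Standard levels adjustment on a grayscale image with values in 0-255
        g = np.clip((gray_img.astype(np.float64) - black) / (white - black), 0.0, 1.0)
        return (np.power(g, 1.0 / gamma) * 255).astype(np.uint8)

Looking the parameters up by gray-value range keeps the mapping easy to re-tune, which matches the statement above that the correspondence can be adjusted flexibly in practice.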
In the third step, edge feathering is performed on the intermediate result picture to obtain a face picture.
After the level adjustment of the face region picture yields the intermediate result picture, edge feathering is further performed on the intermediate result picture, thereby further ensuring the synthesis effect of the subsequent face picture and the background picture to be synthesized.
Specifically, each pixel in the intermediate result picture is first traversed and its transparency is adjusted. The transparency adjustment keeps the transparency of pixels whose transparency is zero unchanged, and adjusts the transparency of pixels whose transparency is not zero according to the following formula:
n = R × 0.3 + G × 0.59 + B × 0.11
where n denotes the transparency of a pixel, and R, G and B denote the red, green and blue values of that pixel, respectively.
Then, a mask image is created according to the face region in the face region picture; Gaussian blur is applied to the mask image, and the gray value of each pixel in the blurred mask image is calculated. The formula for the gray value of each pixel in the mask image is the same as that used for the face region picture, and is not described in detail again here.
Finally, the intermediate result picture and the mask image are traversed simultaneously and their pixels are merged according to the following formula:
s i = n i × t i³, 0 < i ≤ N
where s i denotes the transparency of the i-th pixel in the face picture, n i denotes the transparency of the i-th pixel in the intermediate result picture, t i denotes the gray value of the i-th pixel in the mask image, and N denotes the total number of pixels in the face picture.
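The edge feathering described above can be sketched in Python as follows. The Gaussian-blur helper, the blur radius and the normalization of the mask gray values to the 0–1 range before cubing are assumptions introduced for this example; the disclosure itself does not name a particular library or scaling.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def feather_edges(intermediate_rgba, face_mask):
        # intermediate_rgba: H x W x 4 array (0-255); face_mask: H x W array,
        # 1 inside the face region and 0 outside (illustrative input format)
        img = intermediate_rgba.astype(np.float64)
        r, g, b, a = img[..., 0], img[..., 1], img[..., 2], img[..., 3]
        # Keep fully transparent pixels unchanged; set the rest to
        # n = R*0.3 + G*0.59 + B*0.11
        n = np.where(a == 0, 0.0, r * 0.3 + g * 0.59 + b * 0.11)
        # Gaussian-blur the mask and treat its gray values as weights in 0-1
        t = gaussian_filter(face_mask.astype(np.float64), sigma=5.0)
        t = t / max(t.max(), 1e-6)
        # Per-pixel merge: s_i = n_i * t_i^3
        img[..., 3] = np.clip(n * np.power(t, 3), 0, 255)
        return img.astype(np.uint8)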
Through the above process, the face picture is obtained, and the image enhancement processing provides a sufficient guarantee for a seamless fusion of the face picture with the background picture to be synthesized.
Referring to FIG. 6, in an exemplary embodiment, the method described above may further include the following steps:
Step 310: Receive a picture editing opening operation triggered for the emoticon image.
Optionally, the picture editing opening operation is used to jump to a picture editing page to edit the emoticon image.
As described above, when an emoticon image preview is presented to the user, if the user is not satisfied with the generated emoticon image, a picture editing opening entry is provided to the user, so that the picture editing process for the emoticon image is started through the related operation triggered by the user at the picture editing opening entry.
For example, as shown in FIG. 3, in the message page where the picture is displayed, a virtual icon ② is arranged beside the generated emoticon image as the picture editing opening entry. When the user needs to edit the emoticon image, the user may click the virtual icon ② through an input module configured on the electronic device or through the touch screen, and this click operation is regarded as the picture editing opening operation triggered by the user for the emoticon image at the picture editing opening entry.
Step 330: Jump to the picture editing page according to the picture editing opening operation.
Optionally, in response to the picture editing opening operation triggered by the user, the electronic device jumps from the message page to the picture editing page, so that the user can subsequently edit the emoticon image in the picture editing page.
Step 350: Receive a picture editing operation triggered in the picture editing page.
Step 370: Edit the emoticon image according to the picture editing operation.
In the picture editing page, the user can edit the emoticon image. Since the emoticon image includes an expression and a background, the editing process includes, but is not limited to, replacing the expression, replacing the background, adding text, and the like.
When replacing the expression, the user first selects a picture from local storage or takes a photo with the camera module configured on the electronic device to obtain a corresponding picture; a face contained in that picture is then obtained through face recognition, and the expression in the emoticon image is replaced with this face.
When replacing the background, background picture materials from a preset background picture material library may be displayed to the user, and the background in the emoticon image is replaced according to the background picture material selected by the user; alternatively, the user may directly select a picture from local storage or take a photo with the camera module configured on the electronic device to obtain a corresponding background picture to be synthesized, which then replaces the background in the emoticon image.
If the emoticon image after editing still does not meet the requirement, the user continues to perform step 350 and step 370: a picture editing operation triggered for the emoticon image is received, and the emoticon image is edited accordingly.
Conversely, if the emoticon image after editing meets the requirement, the process proceeds to step 390, in which the emoticon image forms a new picture message that is published in the message page.
Step 390: Form a new picture message from the edited emoticon image and publish it in the message page in which messages can be continuously published.
Under the above embodiment, the functions of replacing the expression, replacing the background and adding text are also provided to the user during the generation of the emoticon image, which not only meets the customization needs of original users but also helps to improve the user's emoticon image generation experience.
Referring to FIG. 7, in an exemplary embodiment, the following steps may further be included before step 330 described above:
Step 410: Generate a face selection message when at least two face regions are synthesized into the background picture to generate the emoticon image.
Optionally, each face region picture corresponds to at least one face region in the emoticon image.
Optionally, the face selection message is used to prompt a selection among the at least two face regions in the emoticon image. Optionally, the face selection message includes related information of the at least two face regions. For example, the related information may be the position of a face region in the emoticon image, a feature of the face region, or a face region identification identifier.
Taking the face region identification identifier as an example, when face recognition performed on the picture yields multiple face regions, each face region is uniquely identified by a face region identification identifier, for example, the face regions are labeled with numbers; the face region identification identifiers and the corresponding face regions are then packed into a face selection message, and the face selection message is displayed in a pop-up dialog box, thereby prompting the user to make a face selection.
In the pop-up dialog box, not only are the multiple face regions displayed, but also the face region identification identifiers corresponding to them, for example, each identifier is displayed in highlighted form on its corresponding face region; accordingly, the user can select a face region by selecting its identifier.
Step 430: Generate a face selection instruction according to the face selection operation triggered in the face selection message, and display the face region indicated by the face selection instruction on the picture editing page.
The picture editing page is used to edit the face region.
After the user completes the selection according to the face selection message, the face selection instruction is generated according to the selection operation made by the user (i.e., the face selection operation); that is, the face selection instruction contains the face region identification identifier corresponding to the selected face region.
Accordingly, in the picture editing page, the corresponding face region can be displayed according to the face region identification identifier contained in the face selection instruction, so that the face region can subsequently be edited.
With the cooperation of the above embodiment, the generated face selection instruction reflects the user's needs, which facilitates the subsequent generation of an emoticon image that meets those needs.
Referring to FIG. 8, in an exemplary embodiment, the following steps may further be included after step 255 described above:
Step 510: Acquire a background picture replacement operation triggered for the emoticon image.
Step 530: Acquire a replaced background picture to be synthesized according to the background picture replacement operation.
As described above, when an emoticon image preview is presented to the user, if the user is not satisfied with the generated emoticon image, a background picture replacement entry is provided to the user, so that the background picture of the emoticon image can be replaced through the related operation triggered by the user at the background picture replacement entry.
As shown in FIG. 3, a virtual icon ③ is arranged beside the generated emoticon image as the background picture replacement entry. When the user needs to replace the background in the emoticon image, the user may click the virtual icon ③ through an input device configured on the electronic device or through the touch screen, and this click operation is regarded as the background picture replacement operation triggered by the user for the emoticon image at the background picture replacement entry.
Accordingly, in response to the background picture replacement operation triggered by the user, the electronic device acquires the replaced background picture to be synthesized, so that the background in the emoticon image can subsequently be replaced.
Optionally, the replaced background picture to be synthesized may be generated randomly from the background picture materials in a preset background picture material library, may be specified by the user in the preset background picture material library, may be a picture in local storage specified by the user, or may be obtained randomly from locally stored pictures, which is not limited in this embodiment.
Step 550: Re-synthesize the emoticon image according to the replaced background picture to be synthesized and the face region in the picture, to obtain a re-synthesized emoticon image.
Step 570: Display the re-synthesized emoticon image as an emoticon image preview.
After the emoticon image is re-synthesized, the re-synthesized emoticon image may be displayed in the page where the picture is displayed, or may be displayed in a new page.
For example, for the picture browser page, the re-synthesized emoticon image is displayed in the picture editing page, which is distinct from the picture browser page.
Alternatively, in the message page, the re-synthesized emoticon image is still displayed side by side with the picture in the picture message, so that the user can preview it again and determine whether the re-synthesized emoticon image meets the requirement.
If the re-synthesized emoticon image meets the requirement, the user can click it so that it forms a new picture message published in the message page.
Conversely, if the re-synthesized emoticon image does not meet the requirement, the user can continue, through the picture editing opening entry and/or the background picture replacement entry and/or the picture editing page, to perform processing such as replacing the expression, replacing the background and adding text on the re-synthesized emoticon image.
Under the above embodiment, the function of replacing the background is also provided to the user during the generation of the emoticon image, which not only meets the customization needs of original users but also helps to improve the user's emoticon image generation experience.
FIG. 9 to FIG. 14 are schematic diagrams related to an emoticon image generation method in specific application scenarios.
In the specific application scenarios shown in FIG. 9 and FIG. 10, an application that can continuously publish messages, such as a social application or an instant messaging application, runs on the electronic device, and the page currently displayed on the touch screen is a message page in which messages can be continuously published.
As shown in FIG. 9, when the message page 600 contains a picture message 601, an emoticon image generation entry 602 associated with the picture message 601 is automatically triggered and formed; the generation of an emoticon image related to the picture in the picture message 601 is then triggered and executed according to the emoticon image generation entry 602, and the picture in the picture message 601 and the related emoticon image 603 are displayed side by side in the message page 600 for the user to preview.
Through the preview, if the user is satisfied with the emoticon image 603, the user clicks the emoticon image 603 so that it forms a new picture message 604 published in the message page 600. At this time, the emoticon image generation entry 602 disappears from the message page 600.
If the user is not satisfied with the emoticon image 603, the picture editing opening entry 605 can be triggered to jump to the picture editing page 606, where editing such as replacing the expression and replacing the background 608 is performed on the emoticon image 603 and the edited emoticon image 609 is displayed in the picture editing page 606 for the user to preview; alternatively, the user can trigger the background picture replacement entry 607 to re-synthesize the emoticon image for preview in the message page 600, until the generated emoticon image meets the user's needs.
In addition, if the emoticon image never forms a new picture message published in the message page 600, the emoticon image generation entry, the picture editing opening entry and the background picture replacement entry related to that emoticon image will remain in the message page 600.
As shown in FIG. 10, when the message page 700 does not contain a picture message, the user can also start the emoticon image generation process through a preset emoticon image generation entry 701; here, the emoticon image generation entry is preset in a pull-up dialog box popped up in the message page.
At this time, the process correspondingly jumps to the picture editing page 706, and an emoticon image that meets the user's needs is generated through editing such as replacing the expression and replacing the background 708 in the picture editing page 706, thereby forming a new picture message 704 published in the message page 700.
It should be noted that, in the emoticon image displayed in the picture editing page 706 shown in FIG. 10, the expression and the background of the emoticon image may be displayed according to a specified default picture, or according to the emoticon image used the last time the user performed picture editing, which is not limited here.
In the specific application scenario shown in FIG. 11, an application capable of displaying pictures, such as a picture browser application, runs on the electronic device, and the page currently displayed on the touch screen is a picture browser page capable of displaying pictures.
As shown in FIG. 11, regardless of whether the pictures 813 and 815 displayed in the picture browser page 811 contain a face, the user can start the emoticon image generation process through a preset emoticon image generation entry 817. At this time, the process jumps to the picture editing page 812, where the pictures 813 and 815 from the picture browser page are displayed; editing such as replacing the expression 816 and replacing the background 814 is then performed on the pictures 813 and 815 in the picture editing page 812 to generate an emoticon image that meets the user's needs.
In the specific application scenario shown in FIG. 12, no related application runs on the electronic device, and the page currently displayed on the touch screen is a picture selection page, for example, the page on which the user views photos.
As shown in FIG. 12, in the picture selection page 918, after the user selects a picture 919, the emoticon image generation process can be started through a preset emoticon image generation entry 920. At this time, the process jumps to the picture editing page 912, where the picture 919 selected by the user is displayed; editing such as replacing the expression and replacing the background 914 is then performed on the picture 919 in the picture editing page 912 to generate an emoticon image that meets the user's needs.
Further, in the specific application scenarios shown in FIG. 11 and FIG. 12, the generated emoticon image can be stored locally for subsequent use, and can also trigger the running of an application that can continuously publish messages, so that the emoticon image forms a new picture message published in the message page of that application.
The specific flow of the steps performed in the above emoticon image generation process can be seen in FIG. 13 and FIG. 14. FIG. 13 is a schematic flowchart of an emoticon image generation method in the above specific application scenarios, and FIG. 14 is a schematic flowchart of face recognition and emoticon image synthesis in the emoticon image generation method in the above specific application scenarios.
As shown in FIG. 13, the emoticon image generation method includes:
Step 1301: The flow starts.
Step 1302: A chat window is opened.
Optionally, after the user opens the chat window, the electronic device performs picture message detection for the current chat window.
Step 1303: Determine, according to the rule, whether a meme-style emoticon image needs to be generated quickly for the current chat information.
Optionally, the rule is the rule for picture message detection in the message page described above, and is not described again here.
Step 1304: When a meme-style emoticon image needs to be generated quickly, perform face recognition on the image.
Step 1305: Determine whether there are multiple face images in the image.
Step 1306: When the image contains multiple face images, make a selection among the multiple face images.
Step 1307: When the image does not contain multiple face images, extract the face region.
Step 1308: Edit the face region and the background picture.
Step 1309: Synthesize the face region and the background picture to obtain the emoticon image.
Optionally, after the emoticon image is obtained, the user can further re-edit it, for example, adjust the face image region in the emoticon image or modify the background picture.
Step 1310: The flow ends.
As shown in FIG. 14, the face recognition and emoticon image synthesis in the emoticon image generation method include the following steps:
Step 1401: The face cut-out task starts.
Step 1402: The user selects a face.
Optionally, when the picture contains multiple face regions, the user selects a target face region.
Step 1403: Determine the face cut-out range.
Optionally, before the electronic device performs the face cut-out operation, the result of recognizing the extent of the face region in the image may be displayed in the user interface; the user can judge the accuracy of the recognition result and adjust it when it is inaccurate. Illustratively, the extent of the face region is shown in the user interface in the form of point coordinates, for example, the face region is surrounded by point coordinates.
Step 1404: Cut out and crop the face.
The face region is cropped according to its point coordinates to obtain a face region image.
Step 1405: Convert the face region image to black and white.
Step 1406: Calculate the gray value and contrast.
Step 1407: Perform enhancement processing on the face region image.
Step 1408: Perform transparency processing on the background picture.
Step 1409: Apply an edge feathering effect to the background picture.
Optionally, the processes of step 1405 to step 1409 have been described in detail in step 255 above and are not repeated here.
Step 1410: Determine the result picture.
Optionally, the result picture is the background picture that has undergone the transparency processing and the edge feathering effect processing.
Step 1411: Determine the synthesis logic for the face region image and the result picture.
Step 1412: Match the picture sizes of the face region image and the result picture.
Step 1413: Synthesize the face region image and the result picture; a brief sketch of steps 1412 and 1413 is given after this list.
Step 1414: Completed.
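Steps 1412 and 1413, matching the picture sizes and compositing the cut-out face onto the processed background, could look roughly like the following Python sketch. The use of Pillow, the scale factor and the centered placement are assumptions introduced for illustration; the actual synthesis logic determined in step 1411 is not detailed in the flowchart.

    from PIL import Image

    def compose_emoticon(face_path, background_path, scale=0.6):
        # Load the cut-out face (with alpha) and the processed background
        face = Image.open(face_path).convert("RGBA")
        background = Image.open(background_path).convert("RGBA")
        # Step 1412: size matching - scale the face to a fraction of the
        # background width while preserving its aspect ratio
        target_w = int(background.width * scale)
        target_h = int(face.height * target_w / face.width)
        face = face.resize((target_w, target_h), Image.LANCZOS)
        # Step 1413: synthesis - paste the face at the center of the
        # background, using its alpha channel as the mask
        offset = ((background.width - face.width) // 2,
                  (background.height - face.height) // 2)
        background.alpha_composite(face, dest=offset)
        return background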
In the embodiments of the present application, the automatic triggering and formation of the emoticon image generation entry not only simplifies the user's otherwise cumbersome operations so that emoticon images are generated more quickly, but also skillfully combines face recognition, picture synthesis and picture editing, allowing the user, through this playful combination of technologies, to experience a more active and more enjoyable way of communicating.
In addition, by presetting an emoticon image generation entry in existing pages that display pictures, the solutions in the embodiments of the present application are compatible with the various existing picture display pages, so that existing picture display pages can also guide the user to quickly generate emoticon images and publish them quickly. This provides very high versatility, which not only achieves a smooth integration with the prior art, but also facilitates quick exchange of emoticon images between different applications, further improving the user's emoticon image generation experience.
The following are apparatus embodiments of the present application, which can be used to perform the emoticon image generation method of the present application. For details not disclosed in the apparatus embodiments of the present application, please refer to the embodiments of the emoticon image generation method of the present application.
Referring to FIG. 15, in an exemplary embodiment, an emoticon image generation apparatus includes, but is not limited to, a display module 1510, a receiving module 1520 and a synthesis module 1530.
The display module 1510 is configured to display, when a picture in a page contains a face region, an emoticon image generation entry corresponding to the picture in the page.
The receiving module 1520 is configured to receive an emoticon image generation operation triggered on the emoticon image generation entry.
The synthesis module 1530 is configured to synthesize the face region in the picture into a background picture to be synthesized according to the emoticon image generation operation, to generate an emoticon image related to the picture.
In an optional embodiment, as shown in FIG. 16, the page includes a message page in which messages can be continuously published, and the apparatus further includes:
a detection module 1540, configured to perform picture message detection in the message page according to a preset time period; and
a face recognition module 1550, configured to: when the message page includes a picture message within the preset time period, perform face recognition on the picture in the picture message included in the message page to detect the face region in the picture; or, when the message page includes at least two picture messages within the preset time period, perform face recognition on the pictures in the picture messages included in the message page to detect the face regions in the pictures.
In an optional embodiment, the detection module 1540 is configured to perform picture message detection on continuously published messages according to a preset number of published messages; and
the face recognition module 1550 is configured to: when the continuously published messages within the preset number of published messages include a picture message, perform face recognition on the picture in the picture message included in the continuously published messages to detect the face region in the picture; or, when the continuously published messages within the preset number of published messages include at least two picture messages, perform face recognition on the pictures in the picture messages included in the continuously published messages to detect the face regions in the pictures.
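As a rough Python sketch of the detection logic performed by the detection module 1540 and the face recognition module 1550, consider the following. The message structure, the face detector passed in as a callable and the threshold arguments are placeholders introduced for this example; the embodiments above only require that picture messages found within a preset time period, or within a preset number of recently published messages, be checked for face regions.

    import time

    def picture_messages_by_period(messages, period_seconds):
        # Picture messages published within the preset time period; each message is
        # assumed to be a dict like {"type": "picture", "timestamp": ..., "picture": ...}
        now = time.time()
        return [m for m in messages
                if m["type"] == "picture" and now - m["timestamp"] <= period_seconds]

    def picture_messages_by_count(messages, max_count):
        # Picture messages among the last max_count continuously published messages
        return [m for m in messages[-max_count:] if m["type"] == "picture"]

    def pictures_with_faces(picture_messages, detect_faces):
        # Keep only the picture messages whose picture contains at least one face
        # region; detect_faces is any face detector returning a list of face regions
        results = []
        for message in picture_messages:
            regions = detect_faces(message["picture"])
            if regions:
                results.append((message, regions))
        return results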
In an optional embodiment, the synthesis module 1530 includes:
a face region determination unit 1531, configured to determine the face region in the picture through facial feature extraction according to the emoticon image generation operation; and
a picture cropping unit 1532, configured to crop the picture according to the face region to obtain a face region picture;
the synthesis module 1530 is further configured to synthesize the face region picture into the background picture to be synthesized, to generate the emoticon image related to the picture.
In an optional embodiment, the receiving module 1520 is further configured to receive a picture editing opening operation triggered for the emoticon image;
the apparatus further includes:
a jump module 1560, configured to jump to a picture editing page according to the picture editing opening operation, where the picture editing page is used to edit the emoticon image;
the receiving module 1520 is further configured to receive a picture editing operation triggered in the picture editing page;
an editing module 1570, configured to edit the emoticon image according to the picture editing operation; and
a publishing module 1580, configured to form a new picture message from the edited emoticon image and publish it in the message page in which messages can be continuously published.
In an optional embodiment, the apparatus further includes:
a generation module 1590, configured to generate a face selection message when at least two face region pictures are synthesized into the background picture to generate the emoticon image, where the face selection message is used to prompt a selection among at least two face regions in the emoticon image, and each face region picture corresponds to at least one face region in the emoticon image;
the generation module 1590 is further configured to generate a face selection instruction according to the face selection operation triggered in the face selection message, and display the face region indicated by the face selection instruction on the picture editing page, where the picture editing page is used to edit the face region.
In an optional embodiment, the receiving module 1520 is further configured to acquire a background picture replacement operation triggered for the emoticon image;
the receiving module 1520 is further configured to acquire a replaced background picture to be synthesized according to the background picture replacement operation;
the synthesis module 1530 is further configured to re-synthesize the emoticon image according to the replaced background picture to be synthesized and the face region in the picture, to obtain a re-synthesized emoticon image;
the display module 1510 is further configured to display the re-synthesized emoticon image as an emoticon image preview.
It should be noted that, when the emoticon image generation apparatus provided in the above embodiments performs emoticon image generation, the division into the above functional modules is used only as an example for description. In practical applications, the above functions may be allocated to different functional modules as required, that is, the internal structure of the emoticon image generation apparatus may be divided into different functional modules to complete all or part of the functions described above.
In addition, the emoticon image generation apparatus provided in the above embodiments and the embodiments of the emoticon image generation method belong to the same concept; the specific manner in which each module performs its operations has been described in detail in the method embodiments and is not repeated here.
In an exemplary embodiment, an electronic device includes, but is not limited to, a processor and a memory.
The memory stores computer-readable instructions which, when executed by the processor, implement the emoticon image generation method in the embodiments described above.
In an exemplary embodiment, a computer-readable storage medium has a computer program stored thereon which, when executed by a processor, implements the emoticon image generation method in the embodiments described above.
The above content is only preferred exemplary embodiments of the present application and is not intended to limit the embodiments of the present application. A person of ordinary skill in the art can conveniently make corresponding variations or modifications in accordance with the main concept and spirit of the present application, and therefore the protection scope of the present application shall be subject to the scope of protection defined by the claims.

Claims (16)

  1. An emoticon image generation method, applied to an electronic device, the method comprising:
    when a picture in a page contains a face region, displaying, in the page, an emoticon image generation entry corresponding to the picture;
    receiving an emoticon image generation operation triggered on the emoticon image generation entry; and
    synthesizing the face region in the picture into a background picture to be synthesized according to the emoticon image generation operation, to generate an emoticon image related to the picture.
  2. The method according to claim 1, wherein the page comprises a message page in which messages can be continuously published, and before, when the picture in the page contains a face region, an associated emoticon image generation entry is automatically triggered and formed for the picture in the page, the method further comprises:
    performing picture message detection in the message page according to a preset time period; and
    when the message page comprises a picture message within the preset time period, performing face recognition on the picture in the picture message comprised in the message page to detect the face region in the picture; or, when the message page comprises at least two picture messages within the preset time period, performing face recognition on the pictures in the picture messages comprised in the message page to detect the face regions in the pictures.
  3. The method according to claim 1, wherein before, when the picture in the page contains a face region, an associated emoticon image generation entry is automatically triggered and formed for the picture in the page, the method further comprises:
    performing picture message detection on continuously published messages according to a preset number of published messages; and
    when the continuously published messages within the preset number of published messages comprise a picture message, performing face recognition on the picture in the picture message comprised in the continuously published messages to detect the face region in the picture; or, when the continuously published messages within the preset number of published messages comprise at least two picture messages, performing face recognition on the pictures in the picture messages comprised in the continuously published messages to detect the face regions in the pictures.
  4. The method according to claim 1, wherein the synthesizing the face region in the picture into a background picture to be synthesized according to the emoticon image generation operation, to generate an emoticon image related to the picture, comprises:
    determining the face region in the picture through facial feature extraction according to the emoticon image generation operation;
    cropping the picture according to the face region to obtain a face region picture; and
    synthesizing the face region picture into the background picture to be synthesized, to generate the emoticon image related to the picture.
  5. The method according to any one of claims 2 to 4, wherein after the synthesizing the face region in the picture into a background picture to be synthesized according to the emoticon image generation operation, to generate an emoticon image related to the picture, the method further comprises:
    receiving a picture editing opening operation triggered for the emoticon image;
    jumping to a picture editing page according to the picture editing opening operation, wherein the picture editing page is used to edit the emoticon image;
    receiving a picture editing operation triggered in the picture editing page;
    editing the emoticon image according to the picture editing operation; and
    forming a new picture message from the edited emoticon image and publishing it in the message page in which messages can be continuously published.
  6. The method according to claim 5, wherein before the editing of the emoticon image according to the picture editing operation triggered in the picture editing page, the method further comprises:
    generating a face selection message when at least two face region pictures are synthesized into the background picture to generate the emoticon image, wherein the face selection message is used to prompt a selection among at least two face regions in the emoticon image, and each face region picture corresponds to at least one face region in the emoticon image; and
    generating a face selection instruction according to the face selection operation triggered in the face selection message, and displaying the face region indicated by the face selection instruction on the picture editing page, wherein the picture editing page is used to edit the face region.
  7. The method according to any one of claims 1 to 4, wherein after the synthesizing the face region in the picture into a background picture to be synthesized according to the emoticon image generation operation, to generate an emoticon image related to the picture, the method further comprises:
    acquiring a background picture replacement operation triggered for the emoticon image;
    acquiring a replaced background picture to be synthesized according to the background picture replacement operation;
    re-synthesizing the emoticon image according to the replaced background picture to be synthesized and the face region in the picture, to obtain a re-synthesized emoticon image; and
    displaying the re-synthesized emoticon image as an emoticon image preview.
  8. An emoticon image generation apparatus, the apparatus comprising:
    a display module, configured to display, when a picture in a page contains a face region, an emoticon image generation entry corresponding to the picture in the page;
    a receiving module, configured to receive an emoticon image generation operation triggered on the emoticon image generation entry; and
    a synthesis module, configured to synthesize the face region in the picture into a background picture to be synthesized according to the emoticon image generation operation, to generate an emoticon image related to the picture.
  9. The apparatus according to claim 8, wherein the page comprises a message page in which messages can be continuously published, and the apparatus further comprises:
    a detection module, configured to perform picture message detection in the message page according to a preset time period; and
    a face recognition module, configured to: when the message page comprises a picture message within the preset time period, perform face recognition on the picture in the picture message comprised in the message page to detect the face region in the picture; or, when the message page comprises at least two picture messages within the preset time period, perform face recognition on the pictures in the picture messages comprised in the message page to detect the face regions in the pictures.
  10. The apparatus according to claim 8, wherein the apparatus further comprises:
    a detection module, configured to perform picture message detection on continuously published messages according to a preset number of published messages; and
    a face recognition module, configured to: when the continuously published messages within the preset number of published messages comprise a picture message, perform face recognition on the picture in the picture message comprised in the continuously published messages to detect the face region in the picture; or, when the continuously published messages within the preset number of published messages comprise at least two picture messages, perform face recognition on the pictures in the picture messages comprised in the continuously published messages to detect the face regions in the pictures.
  11. The apparatus according to claim 8, wherein the synthesis module comprises:
    a face region determination unit, configured to determine the face region in the picture through facial feature extraction according to the emoticon image generation operation; and
    a picture cropping unit, configured to crop the picture according to the face region to obtain a face region picture;
    the synthesis module is further configured to synthesize the face region picture into the background picture to be synthesized, to generate the emoticon image related to the picture.
  12. The apparatus according to any one of claims 9 to 11, wherein the receiving module is further configured to receive a picture editing opening operation triggered for the emoticon image;
    the apparatus further comprises:
    a jump module, configured to jump to a picture editing page according to the picture editing opening operation, wherein the picture editing page is used to edit the emoticon image;
    the receiving module is further configured to receive a picture editing operation triggered in the picture editing page;
    an editing module, configured to edit the emoticon image according to the picture editing operation; and
    a publishing module, configured to form a new picture message from the edited emoticon image and publish it in the message page in which messages can be continuously published.
  13. The apparatus according to claim 12, wherein the apparatus further comprises:
    a generation module, configured to generate a face selection message when at least two face region pictures are synthesized into the background picture to generate the emoticon image, wherein the face selection message is used to prompt a selection among at least two face regions in the emoticon image, and each face region picture corresponds to at least one face region in the emoticon image;
    the generation module is further configured to generate a face selection instruction according to the face selection operation triggered in the face selection message, and display the face region indicated by the face selection instruction on the picture editing page, wherein the picture editing page is used to edit the face region.
  14. The apparatus according to any one of claims 8 to 11, wherein the receiving module is further configured to acquire a background picture replacement operation triggered for the emoticon image;
    the receiving module is further configured to acquire a replaced background picture to be synthesized according to the background picture replacement operation;
    the synthesis module is further configured to re-synthesize the emoticon image according to the replaced background picture to be synthesized and the face region in the picture, to obtain a re-synthesized emoticon image; and
    the display module is further configured to display the re-synthesized emoticon image as an emoticon image preview.
  15. An electronic device, comprising:
    a processor; and
    a memory, wherein the memory stores computer-readable instructions which, when executed by the processor, implement the emoticon image generation method according to any one of claims 1 to 7.
  16. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the emoticon image generation method according to any one of claims 1 to 7.
PCT/CN2018/095360 2017-07-18 2018-07-12 Emoticon image generation method and device, electronic device, and storage medium WO2019015522A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710586647.0A CN109948093B (en) 2017-07-18 2017-07-18 Expression picture generation method and device and electronic equipment
CN201710586647.0 2017-07-18

Publications (1)

Publication Number Publication Date
WO2019015522A1 true WO2019015522A1 (en) 2019-01-24

Family

ID=65016072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/095360 WO2019015522A1 (en) 2017-07-18 2018-07-12 Emoticon image generation method and device, electronic device, and storage medium

Country Status (3)

Country Link
CN (1) CN109948093B (en)
TW (1) TW201908949A (en)
WO (1) WO2019015522A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111541950A (en) * 2020-05-07 2020-08-14 腾讯科技(深圳)有限公司 Expression generation method and device, electronic equipment and storage medium
CN111860387A (en) * 2020-07-27 2020-10-30 平安科技(深圳)有限公司 Method and device for expanding data and computer equipment
CN113867876A (en) * 2021-10-08 2021-12-31 北京字跳网络技术有限公司 Expression display method, device, equipment and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443972A (en) * 2020-03-25 2020-07-24 北京金山安全软件有限公司 Picture application method and device, electronic equipment and storage medium
CN114816599B (en) * 2021-01-22 2024-02-27 北京字跳网络技术有限公司 Image display method, device, equipment and medium
CN116612218A (en) * 2022-02-08 2023-08-18 北京字跳网络技术有限公司 Expression animation generation method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120077485A (en) * 2010-12-30 2012-07-10 에스케이플래닛 주식회사 System and service for providing audio source based on facial expression recognition
CN104616330A (en) * 2015-02-10 2015-05-13 广州视源电子科技股份有限公司 Image generation method and device
CN105787976A (en) * 2016-02-24 2016-07-20 深圳市金立通信设备有限公司 Method and apparatus for processing pictures
CN106529450A (en) * 2016-11-03 2017-03-22 珠海格力电器股份有限公司 Emoticon picture generating method and device
CN106791091A (en) * 2016-12-20 2017-05-31 北京奇虎科技有限公司 image generating method, device and mobile terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393599B (en) * 2007-09-19 2012-02-08 中国科学院自动化研究所 Game role control method based on human face expression
CN103544272A (en) * 2013-10-18 2014-01-29 北京奇虎科技有限公司 Method and device for displaying pictures in browser
CN105204744B (en) * 2015-09-28 2018-10-19 北京金山安全软件有限公司 Method and device for starting application program and electronic equipment
CN106599926A (en) * 2016-12-20 2017-04-26 上海寒武纪信息科技有限公司 Expression picture pushing method and system
CN106875460A (en) * 2016-12-27 2017-06-20 深圳市金立通信设备有限公司 A kind of picture countenance synthesis method and terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120077485A (en) * 2010-12-30 2012-07-10 에스케이플래닛 주식회사 System and service for providing audio source based on facial expression recognition
CN104616330A (en) * 2015-02-10 2015-05-13 广州视源电子科技股份有限公司 Image generation method and device
CN105787976A (en) * 2016-02-24 2016-07-20 深圳市金立通信设备有限公司 Method and apparatus for processing pictures
CN106529450A (en) * 2016-11-03 2017-03-22 珠海格力电器股份有限公司 Emoticon picture generating method and device
CN106791091A (en) * 2016-12-20 2017-05-31 北京奇虎科技有限公司 Image generating method, device and mobile terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111541950A (en) * 2020-05-07 2020-08-14 腾讯科技(深圳)有限公司 Expression generation method and device, electronic equipment and storage medium
CN111541950B (en) * 2020-05-07 2023-11-03 腾讯科技(深圳)有限公司 Expression generating method and device, electronic equipment and storage medium
CN111860387A (en) * 2020-07-27 2020-10-30 平安科技(深圳)有限公司 Method and device for expanding data and computer equipment
CN111860387B (en) * 2020-07-27 2023-08-25 平安科技(深圳)有限公司 Method, device and computer equipment for expanding data
CN113867876A (en) * 2021-10-08 2021-12-31 北京字跳网络技术有限公司 Expression display method, device, equipment and storage medium
CN113867876B (en) * 2021-10-08 2024-02-23 北京字跳网络技术有限公司 Expression display method, device, equipment and storage medium

Also Published As

Publication number Publication date
TW201908949A (en) 2019-03-01
CN109948093B (en) 2023-05-23
CN109948093A (en) 2019-06-28

Similar Documents

Publication Title
WO2019015522A1 (en) Emoticon image generation method and device, electronic device, and storage medium
EP3105921B1 (en) Photo composition and position guidance in an imaging device
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
US9558591B2 (en) Method of providing augmented reality and terminal supporting the same
CN110100251B (en) Apparatus, method, and computer-readable storage medium for processing document
JP2021517696A (en) Video stamp generation method and its computer program and computer equipment
US20190222806A1 (en) Communication system and method
WO2020134558A1 (en) Image processing method and apparatus, electronic device and storage medium
CN112262563A (en) Image processing method and electronic device
US10025482B2 (en) Image effect extraction
WO2019237747A1 (en) Image cropping method and apparatus, and electronic device and computer-readable storage medium
CN112532882B (en) Image display method and device
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
US20240070976A1 (en) Object relighting using neural networks
US20230328390A1 (en) Adaptive front flash view
US20240046538A1 (en) Method for generating face shape adjustment image, model training method, apparatus and device
CN111127469A (en) Thumbnail display method, device, storage medium and terminal
JP2011192008A (en) Image processing system and image processing method
CN111107264A (en) Image processing method, image processing device, storage medium and terminal
WO2023045961A1 (en) Virtual object generation method and apparatus, and electronic device and storage medium
CN110597589A (en) Page coloring method and device, electronic equipment and storage medium
CN113056905A (en) System and method for taking tele-like images
US20230410479A1 (en) Domain changes in generative adversarial networks
US20230069614A1 (en) High-definition real-time view synthesis
CN113489901B (en) Shooting method and device thereof

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18836042

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18836042

Country of ref document: EP

Kind code of ref document: A1