CN113936078A - Image processing method, image processing device, computer-readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN113936078A
CN113936078A
Authority
CN
China
Prior art keywords
image
candidate
target
candidate image
expression
Prior art date
Legal status
Pending
Application number
CN202111357330.2A
Other languages
Chinese (zh)
Inventor
辛一
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111357330.2A
Publication of CN113936078A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G06F16/532: Query formulation, e.g. graphical querying
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9537: Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842: Selection of displayed objects or displayed text elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an image processing method, an image processing apparatus, a computer-readable storage medium and an electronic device. The method comprises the following steps: in response to a control instruction triggered for a current application program, determining at least one candidate image from the communication content of the current application program, and parsing the at least one candidate image to obtain its image features; displaying at least one target image on a graphical user interface of the current application program, wherein the at least one target image is screened out of the at least one candidate image according to the image features; and, in response to a selection operation on the at least one target image, generating in batch the expression package images corresponding to the target images selected by the selection operation. The invention solves the technical problem in the prior art that expression package images are generated inefficiently because multiple expression package images cannot be generated at the same time.

Description

Image processing method, image processing device, computer-readable storage medium and electronic equipment
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device.
Background
With the rapid development of information technology, people can communicate with each other through various kinds of social software without leaving home. When a user communicates through social software, the client can turn pictures in the communication content into expression package images and save the generated images; for example, the user long-presses an expression package image sent by another user, selects it, and adds it to the expression package storage unit of the social software.
However, in the prior art, a user can generate and save only one expression package image at a time, which reduces the generation efficiency of expression package images. Moreover, when duplicate expression package images are stored in the client, they cannot be deleted automatically. When using an expression package image, the user has to search through the images saved in the expression package storage unit, and when the number of stored images is large, the required image cannot be found quickly, which degrades the user experience.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an image processing method and apparatus, a computer-readable storage medium and an electronic device, which are used to at least solve the technical problem in the prior art that expression package images are generated inefficiently because multiple expression package images cannot be generated at the same time.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including: in response to a control instruction triggered for a current application program, determining at least one candidate image from the communication content of the current application program, and parsing the at least one candidate image to obtain its image features; displaying at least one target image on a graphical user interface of the current application program, wherein the at least one target image is screened out of the at least one candidate image according to the image features; and, in response to a selection operation on the at least one target image, generating in batch the expression package images corresponding to the target images selected by the selection operation.
Further, the graphical user interface of the current application program includes an expression package generation control, and the image processing method further includes: responding to an expression package generation instruction triggered through the expression package generation control; acquiring the communication content within a preset duration according to the expression package generation instruction; and extracting at least one candidate image from the communication content.
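The steps above (responding to the generation instruction, acquiring the communication content within the preset duration, and extracting the images) can be sketched in Python. The `Message` structure, the file-extension check and the two-hour default window are illustrative assumptions, not details fixed by the patent:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif"}  # static and dynamic formats

@dataclass
class Message:
    sender: str
    sent_at: datetime
    attachments: List[str] = field(default_factory=list)  # attached file names

def extract_candidate_images(messages, window=timedelta(hours=2), now=None):
    """Collect image attachments from messages inside the preset duration."""
    now = now or datetime.now()
    cutoff = now - window
    candidates = []
    for msg in messages:
        if msg.sent_at < cutoff:
            continue  # outside the preset duration
        for name in msg.attachments:
            if any(name.lower().endswith(ext) for ext in IMAGE_EXTS):
                candidates.append(name)
    return candidates
```

In this sketch the window defaults to two hours, matching the example given later in the description, but could equally be one day or one week.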
Further, the image processing method further includes: when there are multiple communication objects corresponding to the communication content, determining a target object from the multiple communication objects in response to an object selection instruction, wherein a communication object is an object communicating with the current object; and screening out, from the communication content within the preset duration, the target communication content associated with the target object according to the expression package generation instruction.
Further, the image processing method further includes: when it is determined according to the image features that the at least one candidate image contains no text, screening, from the at least one candidate image according to the image features, at least one first candidate image containing a subject image, wherein the subject image is an object with a facial expression; screening, from the at least one first candidate image, at least one second candidate image in which the facial expression and/or body movement of the subject image meets a first preset condition; and determining at least one target image from the at least one second candidate image, and displaying the at least one target image.
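As a rough sketch of this two-stage screen, the feature dictionaries below stand in for the parsed image features; the key names and the 0.5 threshold for the "first preset condition" are hypothetical:

```python
EXPRESSION_THRESHOLD = 0.5  # hypothetical stand-in for the "first preset condition"

def screen_textless(features):
    """Two-stage screen over candidate images that contain no text.

    `features` is a list of dicts standing in for the parsed image
    features: `has_text`, `has_subject` (an object with a facial
    expression is present), and `expression_score` summarising the
    facial expression / body movement.
    """
    # Stage 1: keep candidates whose subject image has a facial expression
    first = [f for f in features if not f["has_text"] and f["has_subject"]]
    # Stage 2: keep candidates whose expression/movement meets the condition
    return [f for f in first if f["expression_score"] >= EXPRESSION_THRESHOLD]
```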
Further, the image processing method further includes: removing the images already stored in a preset storage area from the at least one second candidate image, and/or removing duplicate images from the at least one second candidate image, to obtain the at least one target image.
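A minimal sketch of this de-duplication step, assuming the images are available as raw bytes: exact duplicates, whether within the batch or against the preset storage area, are detected with a SHA-256 content hash. A production client would more likely use a perceptual hash so that near-duplicate images are also caught:

```python
import hashlib

def deduplicate(images, stored_hashes=()):
    """Drop images already in the preset storage area and duplicates
    within the batch itself. `images` is an iterable of raw bytes;
    `stored_hashes` holds the hex digests of images already stored."""
    seen = set(stored_hashes)
    kept = []
    for data in images:
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:
            continue  # already stored, or repeated earlier in this batch
        seen.add(digest)
        kept.append(data)
    return kept
```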
Further, the image processing method further includes: when it is determined according to the image features that the at least one candidate image contains text, acquiring the text length of the text; and screening at least one target image from the at least one candidate image according to the text length and the image features, and displaying the at least one target image.
Further, the image processing method further includes: when the text length is greater than a preset length, rejecting the candidate image; when the text length is less than or equal to the preset length, screening, from the at least one candidate image according to the image features, at least one first candidate image containing a subject image, wherein the subject image is an object with a facial expression; screening, from the at least one first candidate image, at least one second candidate image in which the facial expression and/or body movement of the subject image meets a first preset condition; and determining at least one target image from the at least one second candidate image.
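The branch above can be sketched as follows; the feature-dictionary shape, the preset length of 10 characters and the 0.5 expression threshold are hypothetical values chosen for illustration:

```python
MAX_TEXT_LEN = 10           # hypothetical "preset length"
EXPRESSION_THRESHOLD = 0.5  # hypothetical "first preset condition"

def screen_with_text(features):
    """Screen candidates that do contain text: reject long captions,
    then apply the same subject/expression checks as the textless path."""
    kept = []
    for f in features:
        if len(f["text"]) > MAX_TEXT_LEN:
            continue  # caption too long for an expression package image
        if f["has_subject"] and f["expression_score"] >= EXPRESSION_THRESHOLD:
            kept.append(f)
    return kept
```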
Further, the image processing method further includes: acquiring the image content of the at least one second candidate image; removing at least one third candidate image from the at least one second candidate image according to the text content of the text and the image content, to obtain at least one fourth candidate image, wherein the text content of each third candidate image does not match its image content; and removing images already stored in a preset storage area from the at least one fourth candidate image, and/or removing duplicate images from the at least one fourth candidate image, to obtain the at least one target image.
Further, the image processing method further includes: after responding to the selection operation on the at least one target image and generating in batch the expression package images corresponding to the selected target images, matching the generated expression package images with preset expression package images to obtain a matching result; and performing de-duplication processing on the expression package images according to the matching result to obtain the de-duplicated expression package images.
Further, the image processing method further includes: after performing the de-duplication processing on the expression package images according to the matching result to obtain the de-duplicated expression package images, performing a deletion operation and/or a classification operation on the de-duplicated expression package images in response to a control instruction.
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus including: a determining module for responding to a control instruction triggered for the current application program, determining at least one candidate image from the communication content of the current application program, and parsing the at least one candidate image to obtain its image features; a display module for displaying at least one target image on a graphical user interface of the current application program, wherein the at least one target image is screened out of the at least one candidate image according to the image features; and a generation module for responding to a selection operation on the at least one target image and generating in batch the expression package images corresponding to the target images selected by the selection operation.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-mentioned image processing method when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including one or more processors and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the image processing method described above.
In the embodiment of the invention, a manner of generating in batch the expression package images corresponding to a plurality of images is adopted: after responding to a control instruction triggered for the current application program, at least one candidate image is determined from the communication content of the current application program and parsed to obtain its image features; a target image is then screened out of the at least one candidate image according to the image features and displayed in the graphical user interface of the current application program; finally, in response to a selection operation on the at least one target image, the expression package images corresponding to the selected target images are generated in batch.
According to the method and the device, corresponding expression package images can be generated in batch from at least one candidate image in the communication content, avoiding the low generation efficiency caused by operating on each candidate image separately. In addition, in the process of generating expression package images from the at least one candidate image, the method and the device screen the at least one candidate image according to the image features, so that the waste of system resources caused by processing images which do not meet the requirements of an expression package image is avoided. Moreover, because the screening operation is performed on the at least one candidate image, expression package images are produced only from images which meet those requirements, avoiding interference from unsuitable images and improving generation efficiency. Finally, in the application, the user can further perform a selection operation on the screened target images, and only the selected target images are turned into expression package images, so that the user can flexibly choose, according to their own requirements, which images to convert, improving the user experience.
Therefore, the scheme provided by the application achieves the purpose of simultaneously generating a plurality of expression package images, thereby realizing the technical effect of improving the generation efficiency of the expression package images and further solving the technical problem of low generation efficiency of the expression package images caused by the fact that a plurality of expression package images cannot be simultaneously generated in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an image processing method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of an alternative graphical user interface according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative graphical user interface according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative graphical user interface in accordance with an embodiment of the present invention;
fig. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, an embodiment of an image processing method is provided. It should be noted that the steps shown in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order.
In addition, it should be further noted that the client may serve as an execution subject of the method provided in this embodiment.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, responding to a control instruction triggered by the current application program, determining at least one candidate image from the communication content of the current application program, and analyzing the at least one candidate image to obtain the image characteristics of the at least one candidate image.
In step S102, the current application may be, but is not limited to, a social application, and the communication content may be content communicated between users through the social application (i.e., the current application described above), where the communication content includes, but is not limited to, text, voice, image, video, file, and the like, and the image in the communication content may be a static image (e.g., an image in PNG format, an image in JPG format, an image in a form unique to an emoticon image), or may be a dynamic image (e.g., an image in GIF format).
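Whether a candidate image is static (PNG, JPG) or dynamic (GIF) can be determined from its file signature alone; the following is a small sketch of such a check, treating GIF as potentially animated:

```python
def image_kind(data: bytes) -> str:
    """Classify a candidate image as static or dynamic from its
    leading magic bytes."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "static"   # PNG signature
    if data.startswith(b"\xff\xd8\xff"):
        return "static"   # JPG/JPEG signature
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "dynamic"  # GIF (may be animated)
    return "unknown"
```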
In addition, in step S102, the image features of the at least one candidate image at least include image information and text information of the candidate image, wherein the image information includes the type of the subject object and, when the subject object is an object with a facial expression, its facial expression and/or body movements, and the text information at least includes the text content and the text length. The subject object is the object depicted in the candidate image; for example, in an image containing a human figure, the subject object is the human figure, and in an image containing an animal figure, the subject object is the animal figure.
The candidate images are parsed to obtain their image features so that the client can screen them accordingly, which avoids the low generation efficiency that would result from processing images unsuitable for expression package images.
In addition, it should be noted that, in this embodiment, the communication content is the communication content of the current application program operated by the user. For example, when the user communicates with another user through WeChat (that is, the current application program), at least one candidate image may be determined from the chat content in WeChat, and expression package images may then be generated from it in batch. This process generates expression package images from the images in the communication content without calling a third-party application, which avoids the low generation efficiency caused by having to load a third-party application, and thus improves the generation efficiency of expression package images.
In an optional embodiment, when user A communicates with other users in the current application program, user A may operate a specific control in the client. After detecting that user A has operated the control, the client obtains the communication content within a preset duration from the communication content of the current application program, screens the images out of that content to obtain candidate images, and then parses the candidate images to obtain their image features. The preset duration may be fixed, for example the 2 hours preceding the current time; it may also be set by the user, for example to 2 hours, one day or one week when making expression package images.
It should be noted that, in the above embodiment, the client may obtain at least one candidate image within a preset time from the communication content, so that the client may generate the expression package image from the at least one candidate image at one time, and a user does not need to operate each candidate image separately, thereby simplifying the manufacturing steps of multiple expression package images and improving the efficiency of batch manufacturing of expression package images.
And step S104, displaying at least one target image on a graphical user interface of the current application program, wherein the at least one target image is an image screened from at least one candidate image according to image characteristics.
Optionally, the client presents to the user, through the graphical user interface, the communication content, the at least one target image obtained by screening the at least one candidate image, and the expression package images produced by the client. The client may locally store target image features, i.e., the image features of images suitable for generating expression package images. The client can screen the at least one candidate image by comparing its image features with the target image features; the target images obtained by this screening are images from which expression package images can be generated.
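The comparison against the locally stored target image features might look like the following sketch; the feature keys are hypothetical, and a real implementation would likely score similarity rather than require exact equality:

```python
TARGET_FEATURES = {"has_subject": True, "expression_ok": True}  # hypothetical

def matches_target(candidate_features, target=TARGET_FEATURES):
    """A candidate becomes a target image when every stored target
    feature is matched by the candidate's parsed features."""
    return all(candidate_features.get(k) == v for k, v in target.items())
```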
It should be noted that, in step S104, candidate images that do not conform to the features of an expression package are excluded, and only the target images that do conform are used to generate expression package images, which improves the generation efficiency of expression package images.
And step S106, responding to the selection operation aiming at least one target image, and generating the expression package images corresponding to the target images corresponding to the selection operation in batch.
After acquiring the at least one target image conforming to the features of an expression package, the user may further filter the at least one target image. For example, in the graphical user interface shown in fig. 2, 15 target images are displayed. After the user clicks the image filter control, the target image selection interface shown in fig. 3 pops up in the graphical user interface of the client, displaying a plurality of screening rules for the target images, for example "filter by time", "filter by type" and "custom filter" in fig. 3. After the user selects the "filter by time" rule, the client pops up a filter-by-time interface in which the user can set a time range, so that the client screens out, from the plurality of target images, the images whose release time falls within the set range. After the user selects the "filter by type" rule, the client pops up a filter-by-type interface in which the user can set an image type, such as the character type, the animal type or the cartoon type, so that the client screens out the target images of the set type. After the user selects the "custom filter" rule, the client pops up a custom filter interface in which the user can either set a screening rule for the images or directly select the required images from the plurality of target images.
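The three screening rules can be sketched as a single dispatch function; the target dictionary keys (`sent_at`, `kind`) are hypothetical names chosen for illustration:

```python
def filter_targets(targets, rule, **opts):
    """Apply one screening rule from the target image selection interface.
    Each target is a dict with illustrative keys `sent_at` and `kind`."""
    if rule == "by_time":
        return [t for t in targets
                if opts["start"] <= t["sent_at"] <= opts["end"]]
    if rule == "by_type":
        return [t for t in targets if t["kind"] == opts["kind"]]
    if rule == "custom":
        return [t for t in targets if opts["predicate"](t)]
    raise ValueError(f"unknown screening rule: {rule}")
```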
Further, after the user has screened the at least one target image, the client acquires the screened images and generates expression package images from them in batch. For example, in the graphical user interface shown in fig. 4, the screened images are displayed in the expression package generation interface; as shown in fig. 4, the user selects target images 1, 5 and 11 from the plurality of target images, and after the user clicks the "confirm" control, the client generates the expression package images corresponding to target images 1, 5 and 11 in batch.
It should be noted that the client may generate the expression package image corresponding to each target image, that is, the user may generate the expression package images corresponding to the plurality of images only by performing one operation, and does not need to perform an operation on each image, thereby improving the generation efficiency of the expression package images.
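A sketch of this batch step: one confirm action maps every selected target image through a per-image conversion; `make_expression_package` is a stand-in for the client's conversion, which the patent does not specify:

```python
def generate_in_batch(selected_images, make_expression_package):
    """One confirm action produces an expression package image for
    every selected target image."""
    return [make_expression_package(img) for img in selected_images]
```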
Based on the schemes defined in steps S102 to S106, it can be seen that the embodiment of the present invention adopts a manner of generating in batch the expression package images corresponding to a plurality of images: after responding to a control instruction triggered for the current application program, at least one candidate image is determined from the communication content of the current application program and parsed to obtain its image features; a target image is then screened out of the at least one candidate image according to the image features and displayed in the graphical user interface of the current application program; finally, in response to a selection operation on the at least one target image, the expression package images corresponding to the selected target images are generated in batch.
It is easy to notice that the method and the device can generate corresponding expression package images in batch from at least one candidate image in the communication content, avoiding the low generation efficiency caused by operating on each candidate image separately. In addition, in the process of generating expression package images from the at least one candidate image, the at least one candidate image is screened according to the image features, so that the waste of system resources caused by processing images which do not meet the requirements of an expression package image is avoided. Moreover, because the screening operation is performed on the at least one candidate image, expression package images are produced only from images which meet those requirements, avoiding interference from unsuitable images and improving generation efficiency. Finally, in the application, the user can further perform a selection operation on the screened target images, and only the selected target images are turned into expression package images, so that the user can flexibly choose, according to their own requirements, which images to convert, improving the user experience.
Therefore, the scheme provided by the present application achieves the purpose of generating a plurality of expression package images simultaneously, thereby realizing the technical effect of improving the generation efficiency of expression package images, and solving the technical problem in the prior art that generation efficiency is low because a plurality of expression package images cannot be generated at the same time.
In an optional embodiment, the graphical user interface of the current application includes an expression package generation control, for example the one shown in the graphical user interface of fig. 2. The transparency of the control may change according to whether the communication content includes an image: when the communication content does not include an image, the control is in a transparent state; when it does, the control is in a non-transparent state. Optionally, the transparency may also change according to the type of image included in the communication content: when the communication content includes no image, or the images it includes cannot be used to generate an expression package image, the control is transparent; when the communication content includes an image from which an expression package image can be generated, the control is non-transparent.
In an alternative embodiment, before parsing, screening, and producing expression package images, the client first needs to obtain the at least one candidate image. In response to an expression package generation instruction triggered by the expression package generation control, the client acquires communication content of a preset duration according to the instruction, and then extracts the at least one candidate image from that communication content.
Optionally, when the user wants to make expression package images, the user may operate the expression package generation control on the display interface of the client, or operate a physical control on the terminal device used for generating expression package images. After detecting this operation, the client generates an expression package generation instruction, acquires the communication content within a preset duration before the current time according to the instruction, and extracts at least one candidate image from that content.
It should be noted that, after detecting the operation for generating expression package images, the client may generate a prompt message asking the user to determine the extraction duration of the communication content. For example, the client may present a plurality of durations for the user to choose from, or display an input box in which the user can enter the extraction duration, so that the client can determine the preset duration accordingly.
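The extraction step above amounts to a time-window filter over the message history. The sketch below illustrates it; the message structure and its field names are assumptions, since the patent does not specify a data model:

```python
import time

def extract_candidate_images(messages, preset_duration_s, now=None):
    """Return the images sent within `preset_duration_s` seconds before `now`.

    `messages` is a list of dicts with hypothetical keys:
    'sender', 'timestamp' (seconds since epoch), and optional 'image'.
    """
    now = time.time() if now is None else now
    cutoff = now - preset_duration_s
    # keep only messages that carry an image and fall inside the window
    return [m["image"] for m in messages
            if m.get("image") is not None and m["timestamp"] >= cutoff]
```

The preset duration would come from the user's choice in the prompt described above.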
In an optional embodiment, the client may further determine the communication content according to the number of communication objects associated with it. Specifically, when there are multiple communication objects, the client determines a target object from among them in response to an object selection instruction, and selects, according to the expression package generation instruction, the target communication content associated with the target object from the communication content of the preset duration. A communication object is an object that communicates with the current object; for example, in a WeChat group, the communication objects are the users in the group other than the current user.
Optionally, the number of communication objects corresponding to the communication content is the number of objects participating in the social interaction; for example, if a WeChat group has 20 members, the number of communication objects is 20. When there are multiple communication objects, the user can select a target object through the client. In this scenario, the client screens out the communication content related to the target object as the target communication content, for example, the voice, text, images, videos, and files sent by the target object within the preset duration.
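Selecting the target communication content for a chosen target object reduces to filtering messages by sender before extracting images. A minimal sketch, with the dict keys again hypothetical:

```python
def filter_by_target_object(messages, target_sender):
    """Keep only the messages sent by `target_sender` (the selected target object)."""
    return [m for m in messages if m["sender"] == target_sender]
```

The result would then be fed into the same image-extraction step used for single-object conversations.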
Optionally, when there is only one communication object, the client automatically acquires the communication content within the preset duration after detecting that the user has clicked the expression package generation control.
It should be noted that, after receiving an expression package generation instruction, the client first detects whether any image exists in the communication content. If no image is found, the client generates a prompt message informing the user that the communication content contains no image and asking the user to reset the preset duration or reselect the communication content.
In an alternative embodiment, after obtaining the at least one candidate image from the communication content, the client may screen out at least one target image from it according to the image features. The client detects whether the at least one candidate image contains text according to the image features. When it determines that no text is contained, the client screens out, according to the image features, at least one first candidate image containing a subject image, then screens out from these at least one second candidate image whose subject's facial expression and/or body movement meets a first preset condition, and finally determines and displays at least one target image according to the at least one second candidate image. The subject image is an object having a facial expression.
Optionally, when it determines that the at least one candidate image contains no text, the client detects the subject object contained in each candidate image and screens out the images that contain a subject image, thereby avoiding interference from landscape or real-scene photographs with the expression package images. The subject object is a subject image having a facial expression, such as an animal image, a human image, or a cartoon image.
Further, after obtaining the first candidate images containing a subject image, the client detects the facial expression and/or body movement of the subject and rejects the images where these do not meet the requirement. For example, images with relatively neutral facial expressions are rejected, while images with exaggerated expressions and/or movements are retained, preventing ID photos or everyday portraits from interfering with expression package generation.
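The two-stage screening for text-free candidates (keep images with a subject, then keep those whose expression or movement is exaggerated enough) can be sketched as below. The detector callbacks `contains_text`, `has_subject`, and `expression_score`, and the score threshold standing in for the "first preset condition", are all assumptions; in practice they would be backed by OCR, subject detection, and expression recognition models:

```python
def screen_candidates(candidates, contains_text, has_subject, expression_score,
                      threshold=0.7):
    """Screen text-free candidate images down to second candidate images.

    contains_text / has_subject: predicates on an image (hypothetical).
    expression_score: returns a 0..1 exaggeration score for the subject's
    facial expression and/or body movement (hypothetical).
    """
    no_text = [c for c in candidates if not contains_text(c)]
    # first candidate images: those containing a subject image
    first = [c for c in no_text if has_subject(c)]
    # second candidate images: expression/movement meets the preset condition
    return [c for c in first if expression_score(c) >= threshold]
```

Candidates that do contain text take the separate text-length path described later.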
It should be noted that the at least one target image obtained through the above process may overlap with images the client has already stored for making expression package images, or may itself contain duplicates; making expression package images from duplicated images would needlessly increase the consumption of system resources.
To reduce this consumption, the client also performs deduplication on the at least one second candidate image while determining the at least one target image. Specifically, the client removes from the at least one second candidate image any image already stored in a preset storage area, and/or removes the duplicated images within the at least one second candidate image, to obtain the at least one target image.
Optionally, the client compares the image identifier of each candidate image (for example, the image name, generation time, and the like) with the target image identifiers of the images pre-stored in the preset storage area, and removes any candidate image whose identifier matches a target image identifier, thereby deduplicating the candidates against stored images.
Optionally, the client may also obtain the image identifiers of the candidate images, detect whether several of them share the same identifier, and if so, remove the redundant copies and keep only one image per identifier, thereby deduplicating within the candidate set itself.
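Both deduplication passes (against the preset storage area, and within the candidate set itself) can be combined into one pass over image identifiers; the `(identifier, image)` pairing below is an assumed representation:

```python
def deduplicate(candidates, stored_ids):
    """Drop candidates whose identifier is already stored, and duplicates within the batch.

    candidates: list of (image_id, image) pairs.
    stored_ids: identifiers of images already in the preset storage area.
    """
    seen = set(stored_ids)
    unique = []
    for image_id, image in candidates:
        if image_id not in seen:
            seen.add(image_id)      # later copies of this id are redundant
            unique.append((image_id, image))
    return unique
```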
In an optional embodiment, when the client determines according to the image features that the at least one candidate image contains text, the client obtains the text length of that text, screens out at least one target image from the at least one candidate image according to the text length and the image features, and displays the at least one target image.
Specifically, when the text length is greater than a preset length, the client rejects the candidate image; when the text length is less than or equal to the preset length, the client screens out, according to the image features, at least one first candidate image containing a subject image, screens out from these at least one second candidate image whose subject's facial expression and/or body movement meets the first preset condition, and then determines the at least one target image according to the at least one second candidate image.
It should be noted that when the text is long (for example, longer than the preset length), the client may determine that it is used to annotate the candidate image, that is, it is annotation text rather than the caption text of an expression package image. Rejecting candidate images with long text therefore avoids interference from annotation text in generating expression package images.
Conversely, when the text is short (for example, no longer than the preset length), the client may apply the same screening processing used for candidate images that contain no text.
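The text-length rule can be expressed as a small gate in front of the subject/expression screening; the preset length of 10 characters below is a hypothetical value chosen only for illustration:

```python
PRESET_LENGTH = 10  # hypothetical preset length, in characters

def passes_text_length_rule(text):
    """Gate for candidate images that contain text.

    Long text is treated as annotation text and the image is rejected;
    short text is treated as a potential caption and the image proceeds
    to the same screening used for text-free candidates.
    """
    return len(text) <= PRESET_LENGTH
```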
Similarly, after obtaining the at least one second candidate image, the client may screen it further to obtain the at least one target image. In this screening, the client needs to consider both the image content and the text content.
Specifically, the client first obtains the image content of the at least one second candidate image, rejects at least one third candidate image according to the text content of the text and the image content to obtain at least one fourth candidate image, and then removes from the at least one fourth candidate image any image stored in the preset storage area and/or any duplicated image, to obtain the at least one target image. The text content of each third candidate image does not match its image content.
Optionally, when screening second candidate images that contain text, the client may first remove the images whose text content does not match the image content. For example, if the text content of candidate image A is "haha" but its image content is a cartoon character crying, the client removes candidate image A from the second candidate images.
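The mismatch rejection can be sketched by comparing a sentiment label derived from the caption with one derived from the image; the label extractors below are hypothetical stand-ins for real text and expression classifiers:

```python
def reject_mismatched(candidates, text_label, image_label):
    """Keep the candidate images whose text content matches their image content.

    text_label / image_label: hypothetical classifiers mapping an image's
    caption / visual content to a sentiment label such as 'happy' or 'sad'.
    """
    return [c for c in candidates if text_label(c) == image_label(c)]
```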
Further, after removing the images whose text content does not match the image content to obtain the at least one fourth candidate image, the client performs deduplication on the at least one fourth candidate image.
It should be noted that the deduplication operation of the client on the fourth candidate image is the same as the deduplication operation on the second candidate image that does not include text, and details are not repeated here.
In addition, it should be noted that after the at least one target image is obtained in the above manner, the client automatically generates the expression package image corresponding to each target image, without requiring the user to operate on each target image repeatedly, thereby improving the generation efficiency of expression package images.
In an optional embodiment, after responding to the selection operation on the at least one target image and generating in batch the expression package images corresponding to the selected target images, the client further matches the generated expression package images against preset expression package images to obtain a matching result, and deduplicates the expression package images according to that result.
That is, after obtaining the expression package images, the client can also deduplicate them. The client may compare the generated expression package images with the expression package images it already stores (i.e., the preset expression package images), for example by comparing image identifiers, generation times, and the like, and remove the already-stored ones, thereby avoiding the poor user experience caused in the prior art by storing multiple duplicate expression package images.
In another optional embodiment, during deduplication of the candidate images, the client may instead compare the similarity between each candidate image and the preset expression package images (i.e., those already stored) and delete any candidate whose similarity exceeds a preset similarity, so that no further deduplication is needed after the expression package images are generated, simplifying the deduplication step.
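The similarity-based variant can be sketched with integer perceptual hashes and a Hamming-distance threshold; the use of perceptual hashing and the distance threshold of 5 are both assumptions, since the patent only says "similarity greater than the preset similarity":

```python
def hamming_distance(a, b):
    """Number of differing bits between two integer image hashes."""
    return bin(a ^ b).count("1")

def drop_similar(candidate_hashes, stored_hashes, max_distance=5):
    """Keep candidates whose hash is far from every stored expression package hash."""
    return [h for h in candidate_hashes
            if all(hamming_distance(h, s) > max_distance for s in stored_hashes)]
```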
Optionally, after deduplicating the expression package images according to the matching result, the client performs a deletion operation and/or a classification operation on the deduplicated expression package images according to the control instruction it responds to. For example, the client may generate a pop-up window and present all the expression package images in it. The user can delete some of the displayed images so that only the remainder is stored locally at the client. The user can also classify the displayed images and give custom names to the classes; for example, the user can gather selected images into several sets and name each set, so that custom expression package images can be found quickly in later use.
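The classification-and-naming step reduces to grouping image identifiers under user-chosen set names; a minimal sketch with hypothetical data:

```python
def classify_emoticons(image_ids, assignments):
    """Group expression package image ids into named sets.

    assignments: {image_id: set_name} chosen by the user; unassigned images
    fall into a default 'uncategorized' set (an assumed behavior).
    """
    sets = {}
    for image_id in image_ids:
        name = assignments.get(image_id, "uncategorized")
        sets.setdefault(name, []).append(image_id)
    return sets
```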
With the above scheme, all images in the communication content within a fixed duration can be extracted with one tap, screened, and then turned into expression package images with another tap, improving generation efficiency. Moreover, after the expression package images are generated, the user can further screen and classify them, which avoids adding duplicate expression package images, lets the user quickly find custom expression package images, and improves the experience of using them.
Example 2
There is also provided, in accordance with an embodiment of the present invention, an embodiment of an image processing apparatus, wherein fig. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including: a determination module 501, a display module 503, and a generation module 505.
The determining module 501 is configured to determine at least one candidate image from communication content of a current application program in response to a control instruction triggered by the current application program, and analyze the at least one candidate image to obtain an image feature of the at least one candidate image; a display module 503, configured to display at least one target image on a graphical user interface of a current application program, where the at least one target image is an image screened from at least one candidate image according to image characteristics; and a generating module 505, configured to respond to a selection operation for at least one target image, and generate, in batch, an expression package image corresponding to the target image corresponding to the selection operation.
It should be noted that the determining module 501, the display module 503, and the generating module 505 correspond to steps S102 to S106 in the above embodiment; the three modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of embodiment 1.
Optionally, the graphical user interface of the current application program includes an emoticon generation control, and the determining module includes: the device comprises a first generation module, a first acquisition module and an extraction module. The first generation module is used for responding to an expression package generation instruction triggered by the expression package generation control; the first acquisition module is used for acquiring communication contents with preset duration according to the expression package generation instruction; and the extraction module is used for extracting at least one candidate image from the communication content.
Optionally, the first obtaining module includes: the device comprises a first determining module and a first screening module. The first determining module is used for responding to an object selection instruction and determining a target object from a plurality of communication objects when the number of the communication objects corresponding to the communication content is multiple, wherein the communication objects are objects communicating with the current object; and the first screening module is used for screening target communication contents associated with the target object from the communication contents with preset duration according to the expression package generation instruction.
Optionally, the display module includes: the device comprises a second screening module, a third screening module and a second determining module. The second screening module is used for screening at least one first candidate image containing a main body image from at least one candidate image according to the image characteristics when the at least one candidate image is determined to not contain texts according to the image characteristics, wherein the main body image is an object with facial expression; the third screening module is used for screening at least one second candidate image, of which the facial expression and/or the limb movement of the main body image meet the first preset condition, from the at least one first candidate image; and the second determining module is used for determining at least one target image according to the at least one second candidate image and displaying the at least one target image.
Optionally, the second determining module includes: and the first removing module is used for removing the image stored in the preset storage area from at least one second candidate image and/or removing the superposed image in at least one second candidate image to obtain at least one target image.
Optionally, the display module includes: a second acquisition module and a fourth screening module. The second acquisition module is used for obtaining the text length of the text when it is determined according to the image features that the at least one candidate image contains text; and the fourth screening module is used for screening out at least one target image from the at least one candidate image according to the text length and the image features, and displaying the at least one target image.
Optionally, the fourth screening module includes: the device comprises a second rejection module, a fifth screening module, a sixth screening module and a third determination module. The second eliminating module is used for eliminating at least one candidate image when the text length is larger than the preset length; the fifth screening module is used for screening at least one first candidate image containing a main image from the at least one candidate image according to the image characteristics when the text length is less than or equal to the preset length, wherein the main image is an object with facial expression; the sixth screening module is used for screening at least one second candidate image, of which the facial expression and/or the limb movement of the main body image meet the first preset condition, from the at least one first candidate image; and a third determining module for determining at least one target image according to the at least one second candidate image.
Optionally, the third determining module includes: a third acquisition module, a third rejection module, and a fourth rejection module. The third acquisition module is used for acquiring the image content of the at least one second candidate image; the third rejection module is used for rejecting at least one third candidate image of the at least one second candidate image according to the text content of the text and the image content, to obtain at least one fourth candidate image, wherein the text content of the at least one third candidate image does not match its image content; and the fourth rejection module is used for removing the image stored in the preset storage area from the at least one fourth candidate image, and/or removing the duplicated image in the at least one fourth candidate image, to obtain the at least one target image.
Optionally, the image processing apparatus further includes: the device comprises a matching module and a processing module. The matching module is used for matching the expression package images with preset expression package images after responding to selection operation of at least one target image and generating expression package images corresponding to the target image corresponding to the selection operation in batch, so as to obtain a matching result; and the processing module is used for carrying out duplication elimination processing on the expression package image according to the matching result to obtain the duplicate-eliminated expression package image.
Optionally, the image processing apparatus further includes: a processing submodule, configured to perform a deletion operation and/or a classification operation on the deduplicated expression package images, according to the control instruction responded to, after the expression package images are deduplicated according to the matching result.
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the image processing method in embodiment 1 described above when running.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including one or more processors and a storage device for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the image processing method of embodiment 1 described above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (13)

1. An image processing method, comprising:
responding to a control instruction triggered by a current application program, determining at least one candidate image from communication contents of the current application program, and analyzing the at least one candidate image to obtain image characteristics of the at least one candidate image;
displaying at least one target image on a graphical user interface of the current application program, wherein the at least one target image is an image screened from the at least one candidate image according to the image characteristics;
responding to the selection operation aiming at the at least one target image, and generating the expression package images corresponding to the target images corresponding to the selection operation in batch.
2. The method of claim 1, wherein the graphical user interface of the current application includes an expression package generation control, and the determining at least one candidate image from the communication content of the current application in response to a control instruction triggered by the current application comprises:
responding to an expression package generation instruction triggered by the expression package generation control;
acquiring communication contents with preset duration according to the expression package generation instruction;
and extracting the at least one candidate image from the communication content.
3. The method of claim 2, wherein obtaining the communication content with a preset duration according to the expression package generation instruction comprises:
when the number of the communication objects corresponding to the communication content is multiple, responding to an object selection instruction, and determining a target object from the multiple communication objects, wherein the communication objects are objects communicating with the current object;
and screening out target communication contents associated with the target object from the communication contents with the preset duration according to the expression package generation instruction.
4. The method of claim 1, wherein displaying at least one target image on a graphical user interface of the current application comprises:
when the text is determined not to be contained in the at least one candidate image according to the image characteristics, screening at least one first candidate image containing a main image from the at least one candidate image according to the image characteristics, wherein the main image is an object with a facial expression;
screening out at least one second candidate image, of which the facial expression and/or limb movement of the main body image meet a first preset condition, from the at least one first candidate image;
and determining the at least one target image according to the at least one second candidate image, and displaying the at least one target image.
5. The method of claim 4, wherein determining the at least one target image from the at least one second candidate image comprises:
and removing the image stored in a preset storage area from the at least one second candidate image, and/or removing a superposed image from the at least one second candidate image to obtain the at least one target image.
6. The method of claim 1, wherein displaying at least one target image on a graphical user interface of the current application comprises:
when it is determined from the image characteristics that the at least one candidate image contains text, acquiring the text length of the text;
and screening the at least one target image from the at least one candidate image according to the text length and the image characteristics, and displaying the at least one target image.
7. The method of claim 6, wherein screening the at least one target image from the at least one candidate image according to the text length and the image feature comprises:
when the text length is greater than a preset length, rejecting the at least one candidate image;
when the text length is less than or equal to the preset length, screening out at least one first candidate image containing a subject image from the at least one candidate image according to the image characteristics, wherein the subject image is an object having a facial expression;
screening out, from the at least one first candidate image, at least one second candidate image in which the facial expression and/or body movement of the subject image meets a first preset condition;
determining the at least one target image according to the at least one second candidate image.
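Claims 6-7 add a text-length gate in front of the same subject/expression screening. A minimal sketch under the same assumed dict model (the `text` field and function name are illustrative):

```python
def screen_with_text(candidates, preset_length, has_subject, expression_ok):
    """Sketch of claims 6-7: candidates whose overlaid text exceeds the
    preset length are rejected outright; the rest go through the
    subject/expression screening of claim 4."""
    targets = []
    for c in candidates:
        if len(c["text"]) > preset_length:
            continue  # text too long: reject this candidate
        if has_subject(c) and expression_ok(c):
            targets.append(c)
    return targets
```

The rationale is that a long caption usually indicates a screenshot of text rather than a usable expression image.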
8. The method of claim 7, wherein determining the at least one target image from the at least one second candidate image comprises:
acquiring the image content of the at least one second candidate image;
removing at least one third candidate image, whose text content does not match its image content, from the at least one second candidate image according to the text content of the text and the image content, to obtain at least one fourth candidate image;
and removing images already stored in a preset storage area from the at least one fourth candidate image, and/or removing duplicate images from the at least one fourth candidate image, to obtain the at least one target image.
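Claim 8 refines the second candidates by checking that the overlaid text agrees with the image content. A hedged Python sketch: `matches` stands in for a text-image matching model the patent does not specify, and the dict fields are an assumed data model:

```python
def refine_second_candidates(second, matches, stored_ids):
    """Sketch of claim 8: candidates whose text does not match their image
    content (the 'third' candidates) are removed, leaving the 'fourth'
    candidates; stored and duplicate images are then dropped to yield the
    target images."""
    fourth = [c for c in second if matches(c["text"], c["content"])]
    seen, targets = set(), []
    for c in fourth:
        if c["id"] in stored_ids or c["id"] in seen:
            continue  # already in the preset storage area, or a duplicate
        seen.add(c["id"])
        targets.append(c)
    return targets
```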
9. The method of claim 1, wherein after generating, in batch, the expression package images corresponding to the target images selected by the selection operation in response to the selection operation for the at least one target image, the method further comprises:
matching the expression package images with preset expression package images to obtain a matching result;
and performing deduplication processing on the expression package images according to the matching result to obtain deduplicated expression package images.
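Claim 9's matching-and-deduplication step can be illustrated with content hashing. Exact SHA-256 hashing is a stand-in chosen for this sketch; a real system would more likely use perceptual hashing so near-duplicates also match:

```python
import hashlib

def dedupe_expression_packages(new_images, library_hashes):
    """Sketch of claim 9: match each batch-generated expression package
    image against a preset library (represented as a set of hashes) and
    discard duplicates, both against the library and within the batch."""
    kept = []
    for img in new_images:
        digest = hashlib.sha256(img).hexdigest()
        if digest in library_hashes:
            continue  # already in the preset library, or seen in this batch
        library_hashes.add(digest)
        kept.append(img)
    return kept
```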
10. The method of claim 9, wherein after performing deduplication processing on the expression package images according to the matching result to obtain the deduplicated expression package images, the method further comprises:
and performing a deletion operation and/or a classification operation on the deduplicated expression package images in response to a control instruction.
11. An image processing apparatus characterized by comprising:
a determining module, configured to determine at least one candidate image from the communication content of a current application program in response to a control instruction triggered by the current application program, and to analyze the at least one candidate image to obtain image characteristics of the at least one candidate image;
a display module, configured to display at least one target image on a graphical user interface of the current application program, wherein the at least one target image is screened from the at least one candidate image according to the image characteristics;
and a generation module, configured to generate, in batch, expression package images corresponding to the target images selected by a selection operation, in response to the selection operation for the at least one target image.
12. A computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the image processing method of any one of claims 1 to 10 when executed.
13. An electronic device, comprising: one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform the image processing method of any one of claims 1 to 10.
CN202111357330.2A 2021-11-16 2021-11-16 Image processing method, image processing device, computer-readable storage medium and electronic equipment Pending CN113936078A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111357330.2A CN113936078A (en) 2021-11-16 2021-11-16 Image processing method, image processing device, computer-readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111357330.2A CN113936078A (en) 2021-11-16 2021-11-16 Image processing method, image processing device, computer-readable storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN113936078A (en) 2022-01-14

Family

ID=79286809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111357330.2A Pending CN113936078A (en) 2021-11-16 2021-11-16 Image processing method, image processing device, computer-readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113936078A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663963A (en) * 2022-05-24 2022-06-24 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN114663963B (en) * 2022-05-24 2022-09-27 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, image processing device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN106844659A (en) A kind of multimedia data processing method and device
US20130132393A1 (en) Method and system for displaying activities of friends and computer storage medium therefor
JP2017517830A (en) Display method, apparatus, server, program and recording medium for social network information stream
CN107071143B (en) Image management method and device
US11894021B2 (en) Data processing method and system, storage medium, and computing device
CN110889379A (en) Expression package generation method and device and terminal equipment
CN108133058B (en) Video retrieval method
CN111488477A (en) Album processing method, apparatus, electronic device and storage medium
CN112328823A (en) Training method and device for multi-label classification model, electronic equipment and storage medium
KR20220039578A (en) Method for providing clothing recommendation information based on user-selected clothing, and server and program using the same
CN106681523A (en) Library configuration method, library configuration device and call handling method of input method
CN113936078A (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN111488813A (en) Video emotion marking method and device, electronic equipment and storage medium
CN112035685A (en) Album video generating method, electronic device and storage medium
CN108804652B (en) Method and device for generating cover picture, storage medium and electronic device
CN103093213A (en) Video file classification method and terminal
CN111063037A (en) Three-dimensional scene editing method and device
CN111144141A (en) Translation method based on photographing function
CN110661693A (en) Methods, computing device-readable storage media, and computing devices facilitating media-based content sharing performed in a computing device
CN112714299B (en) Image display method and device
CN111414554B (en) Commodity recommendation method, commodity recommendation system, server and storage medium
KR102444172B1 (en) Method and System for Intelligent Mining of Digital Image Big-Data
US11144750B2 (en) Association training related to human faces
CN105787496A (en) Data collection method and electronic device
CN108875670A (en) Information processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination