CN111984173B - Expression package generation method and device - Google Patents


Info

Publication number
CN111984173B
CN111984173B (application number CN202010694477.XA)
Authority
CN
China
Prior art keywords
target
images
screen capture
expression package
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010694477.XA
Other languages
Chinese (zh)
Other versions
CN111984173A (en)
Inventor
王蕊 (Wang Rui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010694477.XA
Publication of CN111984173A
Application granted
Publication of CN111984173B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an expression package (emoticon) generation method and device, belonging to the field of communication technology, which can solve the problem that the process of generating a user-defined expression package is cumbersome. The method is applied to an electronic device and may include the following steps: receiving a first input on a first control, where the first control is used to trigger generation of an expression package; in response to the first input, acquiring M screen capture images of the current display interface of the electronic device; determining a first area from the M screen capture images, where the first area is a content difference area among the M screen capture images; and generating a target expression package corresponding to N target images, the N target images corresponding to the first area; M and N are both positive integers. The method and device are suitable for scenarios in which a user-defined expression package is generated.

Description

Expression package generation method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to an expression package generation method and device.
Background
With the development of communication technology, electronic devices have become increasingly capable; for example, an electronic device can receive and send emoticons through communication applications. These emoticons may be downloaded by the electronic device or customized by the user.
At present, if a user wants a custom emoticon, the user may first trigger the electronic device to download and install an emoticon generation tool, then trigger the electronic device to run that tool, and finally manually crop and process emoticon materials (such as pictures and text) that the user has found, thereby obtaining the custom emoticon.
However, with the above method, the user needs to trigger the electronic device to download and install the emoticon generation tool, and the custom emoticon is obtained only after the emoticon materials are manually edited and processed in that tool, which makes the process of generating a custom emoticon cumbersome.
Disclosure of Invention
The embodiments of the present application aim to provide an expression package generation method and device that can solve the problem of the cumbersome process of generating a user-defined expression package.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an expression package generating method, where the method includes: receiving a first input of a user to a first control; responding to the first input, and acquiring M screen capture images of a current display interface of the electronic equipment; determining a first area according to the M screen capture images; generating a target expression package corresponding to the N target images according to the N target images corresponding to the first area; the first control is used for triggering generation of the expression package, and the first area is a content difference area among the M screen capture images; m and N are both positive integers.
In a second aspect, an embodiment of the present application provides an emoticon generating apparatus, which may include a receiving module, an obtaining module, a determining module, and a generating module. The receiving module is used for receiving first input of a user to a first control, and the first control is used for triggering generation of an expression package; the acquisition module is used for responding to the first input received by the receiving module and acquiring M screen capture images of the current display interface of the electronic equipment; the determining module is used for determining a first area according to the M screen capture images acquired by the acquiring module, wherein the first area is a content difference area between the M screen capture images; the generating module is used for generating a target expression package corresponding to the N target images according to the N target images corresponding to the first area determined by the determining module; wherein M and N are both positive integers.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, the emoticon generating apparatus may receive a first input (for triggering generation of an emoticon) from a user on the first control; in response to the first input, acquire M screen capture images of the current display interface of the electronic device; determine a first area from the M screen capture images; and generate a target expression package corresponding to N target images, the N target images corresponding to the first area; the first area is a content difference area among the M screen capture images, where M and N are positive integers. With this scheme, the expression package generating device can determine the first area from the acquired M screen capture images of the current display interface of the electronic device and generate the target expression package (corresponding to the N target images) from the N target images corresponding to the first area, without installing an expression package generation tool, searching for expression package materials, or manually editing and processing those materials in such a tool, so the process of generating a custom expression package can be simplified.
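As a hypothetical illustration only (not part of the patent text), the four-step flow of the first aspect can be sketched in plain Python, representing each screen capture as a 2D list of grayscale pixel values; the names `find_difference_region`, `crop`, and `generate_emoticon` are invented for this sketch:

```python
from typing import List, Tuple

Image = List[List[int]]  # grayscale pixels, row-major
Rect = Tuple[int, int, int, int]  # (top, left, bottom, right), inclusive

def find_difference_region(frames: List[Image]) -> Rect:
    """Bounding box of all pixels that differ between consecutive frames
    (the 'first area' of step 103, simplified to a single rectangle)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    top, left, bottom, right = rows, cols, -1, -1
    for a, b in zip(frames, frames[1:]):
        for r in range(rows):
            for c in range(cols):
                if a[r][c] != b[r][c]:
                    top, left = min(top, r), min(left, c)
                    bottom, right = max(bottom, r), max(right, c)
    return (top, left, bottom, right)

def crop(img: Image, rect: Rect) -> Image:
    t, l, b, r = rect
    return [row[l:r + 1] for row in img[t:b + 1]]

def generate_emoticon(frames: List[Image]) -> List[Image]:
    """Steps 103-104: determine the first area, then crop each of the
    M captured frames to it to obtain the N target images."""
    rect = find_difference_region(frames)
    return [crop(f, rect) for f in frames]
```

This sketch takes N = M (every captured frame is cropped); the description also allows cropping only a subset of the frames.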
Drawings
Fig. 1 is a schematic diagram of an expression package generation method according to an embodiment of the present disclosure;
Fig. 2 is a schematic view of M screen capture images and a first region;
Fig. 3 is a schematic diagram of determining a content difference region between two adjacent screenshot images;
Fig. 4 is a schematic diagram of determining the number of occurrences of a content difference region;
Fig. 5 is a schematic diagram of an emoticon generation apparatus according to an embodiment of the present application;
Fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present application;
Fig. 7 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding objects are in an "or" relationship.
Some of the nouns or terms referred to in the claims and the specification of the present application will be explained first.
Expression package (emoticon): a Chinese internet term for a way of expressing emotion through pictures. Expression packages include dynamic expression packages and static expression packages; specifically, a dynamic expression package can be generated from multiple images with different content, and a static expression package can be generated from a static image (for example, a single image).
The method for generating an expression package according to the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The method for generating the expression package can be applied to a scene of generating the user-defined expression package.
For example, when a user wants to generate a custom expression package, the user may perform an input on a first control on the interface from which the expression package is to be made, to trigger the expression package generation device to acquire M screen capture images of the current display interface of the electronic device, determine a dynamic region (for example, the first region in the embodiment of the present application) from those images, and then generate a target expression package from the N target images corresponding to that region, thereby obtaining the custom expression package. In this way, the expression package generating device can determine the dynamic region based on the user's input and generate the custom expression package from the N target images corresponding to it, without installing an expression package generation tool, searching for expression package materials, or manually editing and processing those materials in such a tool; the process of generating a custom expression package is thus simplified, and human-computer interaction performance is improved.
As shown in fig. 1, an embodiment of the present application provides an emoticon generation method, which may include steps 101 to 104 described below.
Step 101, an expression package generation device receives a first input of a user to a first control.
The first control can be used for triggering the expression package generating device to generate the expression package.
Optionally, in this embodiment of the application, when the user needs a custom emoticon, the user may perform a first input on the first control in the interface from which the emoticon is to be made, to trigger the emoticon generation device to generate the emoticon based on the content of that interface. The interface may be any possible interface, such as a video playing interface, a chat interface, or a shooting preview interface, and may be determined according to actual use requirements; the embodiment of the application is not limited thereto.
Optionally, in this embodiment of the application, the first control may be displayed on the interface or hidden in it. When the first control is hidden, the user may, before performing the first input, trigger the emoticon generating apparatus through another input to display the first control on the interface.
Optionally, in this embodiment of the application, the first input may be a touch input on the first control, for example, a click input or a long-press input; or it may be a voice input. It may be determined according to actual use requirements, and the embodiment of the application is not limited thereto.
And step 102, responding to the first input by the expression package generating device, and acquiring M screen capture images of the current display interface of the electronic equipment.
Wherein M may be a positive integer.
In the embodiment of the application, the expression package generating device can acquire M screen capture images of the current display interface of the electronic equipment in a screen capture mode of the electronic equipment.
Optionally, in this embodiment of the application, the M screen shots may be obtained by taking a screen of a whole interface region of the current display interface of the electronic device for M times, or may be obtained by taking a screen of a part of interface region of the current display interface of the electronic device for M times.
Alternatively, in this embodiment of the application, the M screen capture images may be images obtained by performing screen capture M times on the same area (hereinafter referred to as a target area) of the screen of the electronic device.
Optionally, in this embodiment of the application, the target area may be the entire screen area of the electronic device, or the area where a certain interface on the screen is located; it may be determined according to actual use requirements, which is not limited in the embodiment of the present application.
Optionally, in this embodiment of the application, the emoticon generating apparatus may capture a screen of a current display interface of the electronic device at a preset period, so as to obtain the M screen-captured images. For example, the emoticon generating means may screen-capture the current display interface of the electronic device once every 10 seconds.
Optionally, in this embodiment of the application, the M screen capture images may be images obtained by capturing the current display interface of the electronic device within a first preset time period (which may be determined according to actual use requirements; the embodiment of the application is not limited). For example, the first preset time period may be 500 ms. After each screen capture, the expression package generation device first determines whether the elapsed screen capture time has reached the first preset time period; if it has, screen capture ends, and all screen capture images obtained are determined as the M screen capture images.
Optionally, in this embodiment of the application, the number of the M screen shots may also be preset, so that the emoticon generating apparatus may end the screen shot after the mth screen shot.
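The period-and-duration capture loop described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: `take_screenshot` is a placeholder for whatever platform screen-capture call the device uses, and the clock and sleep functions are injected so the loop can be exercised without real waiting:

```python
import time
from typing import Callable, List

def capture_until(
    take_screenshot: Callable[[], bytes],
    period_s: float = 0.01,        # preset capture period (assumed value)
    max_duration_s: float = 0.05,  # the "first preset time period" (assumed value)
    clock: Callable[[], float] = time.monotonic,
    sleep: Callable[[float], None] = time.sleep,
) -> List[bytes]:
    """Capture the screen every period_s seconds; after each capture, check
    whether max_duration_s has elapsed and, if so, stop and return all shots."""
    shots: List[bytes] = []
    start = clock()
    while True:
        shots.append(take_screenshot())
        if clock() - start >= max_duration_s:  # duration reached: end screen capture
            return shots
        sleep(period_s)
```

Alternatively, as noted below, the loop could simply stop after a preset count M of captures instead of a time limit.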
In the embodiment of the present application, the M screen capture images have the same size (area and shape), lie in the same coordinate system (hereinafter, the target coordinate system), and span the same coordinate range in that system.
And step 103, the expression package generating device determines a first area according to the M screen shots.
Wherein, the first area can be a content difference area between the M screen capture images.
In this embodiment of the application, the first area contains at least one piece of content that differs across the M screen capture images.
For example, fig. 2 is a schematic diagram of M screen capture images and the first region. Assume the M screen capture images are image M1, image M2, and image M3, arranged in acquisition order, and the first region is the region indicated by the dashed circle 20 in fig. 2. Then, as shown in fig. 2, the first region of image M1 does not include the content 21, the first region of image M2 includes the content 21, and the first region of image M3 does not include the content 21. Thus, the at least one piece of differing content included in the first region across the M screen capture images is the content 21.
Optionally, in this embodiment of the application, at least one content difference region may be included between M screen shots. The first area may specifically be one or more content difference areas in the at least one content difference area, and may specifically be determined according to actual usage requirements, which is not limited in this embodiment of the application.
The at least one content difference region is described in detail in the following embodiments; to avoid repetition, details are not repeated here.
And 104, generating a target expression package corresponding to the N target images by the expression package generating device according to the N target images corresponding to the first area.
Wherein N may be a positive integer.
Optionally, in this embodiment of the application, after the expression package generating device determines the first area, it may crop, according to the region corresponding to the first area, N screen capture images other than the M screen capture images to obtain the N target images (first manner). Alternatively, the expression package generating device may directly crop N images of the M screen capture images according to the first area to obtain the N target images, where N is less than or equal to M (second manner).
In this embodiment of the application, after the expression package generation device obtains the N target images, it may encode each of the N target images according to the coding mode corresponding to the file format of a static expression package (or a dynamic expression package), and synthesize the encoded images into the target expression package.
Optionally, in this embodiment of the application, if the expression package generation device encodes each image in the N target images according to the coding mode corresponding to the file format of the static expression package, the target expression package is the static expression package, and in this case, N may be greater than or equal to 1. If the expression package generating device encodes each image in the N target images according to the encoding mode corresponding to the file format of the dynamic expression package, the target expression package is a dynamic expression package, and in this case, N may be greater than 1.
Optionally, in this embodiment of the application, the file format of the static emoticon may include any possible format, such as the JPEG (jpg), png, or tif format, determined according to actual use requirements; the embodiment of the present application is not limited. The file format of the dynamic emoticon may include the GIF, webp, apng, or other optional formats.
For example, taking the target expression package as a dynamic expression package, the expression package generating device may encode each target image according to a GIF encoding mode corresponding to the GIF format, and synthesize N encoded target images to obtain the target expression package.
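A minimal sketch of the static-versus-dynamic choice described above, under stated assumptions: the format tables are taken from the lists in the description ("apng" is read as the animated-PNG format), the N constraints are N >= 1 for static and N > 1 for dynamic, and `choose_encoding` is an invented name (actual GIF byte encoding is left to an image library):

```python
# Assumed format tables from the description.
STATIC_FORMATS = {"jpg", "png", "tif"}
DYNAMIC_FORMATS = {"gif", "webp", "apng"}

def choose_encoding(fmt: str, n_images: int) -> str:
    """Return 'static' or 'dynamic' for the target emoticon, enforcing
    the N constraints stated in the description."""
    fmt = fmt.lower()
    if fmt in STATIC_FORMATS:
        if n_images < 1:
            raise ValueError("a static emoticon needs at least one image")
        return "static"
    if fmt in DYNAMIC_FORMATS:
        if n_images <= 1:
            raise ValueError("a dynamic emoticon needs more than one image")
        return "dynamic"
    raise ValueError(f"unknown emoticon file format: {fmt}")
```

In a real implementation the two branches would dispatch to the coding mode of the chosen file format, e.g. GIF frame encoding for a dynamic emoticon.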
Optionally, in this embodiment of the application, after the expression package generation device synthesizes the target expression package, the target expression package may be stored on the electronic device or saved to favorites, and it may also be sent to other electronic devices. This may be determined according to actual use requirements, and the embodiment of the application is not limited thereto.
In the expression package generation method provided by the embodiment of the application, the expression package generating device can determine the first region from the acquired M screen capture images of the current display interface of the electronic device and generate the target expression package (corresponding to the N target images) from the N target images corresponding to the first region, without installing an expression package generation tool, searching for expression package materials, or manually editing and processing those materials in such a tool; the process of generating a custom expression package can thus be simplified.
The following exemplarily describes a method of the emoticon generation apparatus determining the first area.
Optionally, in this embodiment of the application, the expression package generation device may specifically determine the first area by a pixel comparison method according to the M screen shots. The step 103 can be specifically realized by the steps 103a and 103b described below.
Step 103a, the expression package generating device performs pixel comparison on the ith screen capture image in the M screen capture images and the (i + 1) th screen capture image in the M screen capture images, determines a content difference area between the ith screen capture image and the (i + 1) th screen capture image, and obtains at least one content difference area.
The ith screen capture image and the (i + 1) th screen capture image can be screen capture images in the M screen capture images, and i can be an integer from 1 to M-1.
In the embodiment of the application, the ith screen capture image is the image obtained by the ith screen capture, and the (i + 1)th screen capture image is the image obtained by the (i + 1)th screen capture. That is, the ith and (i + 1)th screen capture images are obtained by two adjacent screen captures.
Exemplarily, the expression package generating device may extract the pixel points of the ith screenshot image and of the (i + 1)th screenshot image and compare them one by one, row by row. For each row of pixel points, when a first differing pixel point (the difference-start pixel) is found, its coordinate information (in the target coordinate system) and the current row number are recorded; when a later pixel point in that row is again the same in both images, the coordinate information of the pixel point immediately before it (the difference-end pixel) is recorded, and the span between the difference-start and difference-end pixels is a pixel difference region of that row. It can be understood that after the last pixel point of the last row of the ith screenshot image has been compared with that of the (i + 1)th screenshot image, the content difference region between the two images can be determined from the recorded per-row pixel difference regions. Thus, in the embodiment of the present application, the content difference region between the ith and (i + 1)th screenshot images is the region formed by the pixel points that differ between them.
It can be understood that a row of pixel points may include a plurality of pixel difference regions, or may not have a pixel difference region, and may be determined specifically according to actual use requirements.
Optionally, in this embodiment of the application, comparing two pixel points may specifically mean comparing their gray values or RGB values. Two pixel points are considered different when the matching degree between their gray values or RGB values is less than or equal to a preset threshold.
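The per-row scan described above can be sketched as follows. This is an illustrative simplification: pixels are single grayscale integers, and the "matching degree <= preset threshold" criterion is reduced to an absolute grayscale distance exceeding a threshold; `row_difference_spans` is an invented name:

```python
from typing import List, Tuple

Row = List[int]  # grayscale values for one row of pixels

def row_difference_spans(row_a: Row, row_b: Row,
                         threshold: int = 0) -> List[Tuple[int, int]]:
    """Return (start_col, end_col) spans where the two rows differ:
    the per-row pixel difference regions of step 103a."""
    spans: List[Tuple[int, int]] = []
    start = None
    for col, (a, b) in enumerate(zip(row_a, row_b)):
        differs = abs(a - b) > threshold
        if differs and start is None:
            start = col                     # difference-start pixel
        elif not differs and start is not None:
            spans.append((start, col - 1))  # difference-end pixel
            start = None
    if start is not None:                   # difference runs to end of row
        spans.append((start, len(row_a) - 1))
    return spans
```

Running this for every row of the ith and (i + 1)th screenshot images and merging the recorded spans yields the content difference region between the two images.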
In this embodiment of the application, the at least one content difference area may be a sum of content difference areas between two adjacent screen capture images in the M screen capture images. In other words, at least one content difference region is the whole content difference region obtained after the images in the M screen capture images are compared pairwise.
Exemplarily, assume that the M screen capture images are, in acquisition order: screenshot image 1, screenshot image 2, screenshot image 3, and screenshot image 4. Then the at least one content difference region may include: the content difference region between screenshot images 1 and 2, the content difference region between screenshot images 2 and 3, and the content difference region between screenshot images 3 and 4.
Optionally, in this embodiment of the application, for each content difference area in the at least one content difference area, the expression package generation apparatus may record coordinate information of one content difference area in the target coordinate system, and record an area of the content difference area.
Optionally, in this embodiment of the application, the area of each of the at least one content difference region is greater than or equal to 5 dp × 5 dp. Specifically, the expression package generation device may discard content difference regions with an area smaller than 5 dp × 5 dp, so as to reduce the subsequent amount of calculation and eliminate errors caused by cursor blinking or changes in the displayed time.
Alternatively, in the embodiment of the present application, to simplify calculating the areas of the content difference regions, the emoticon generation apparatus may take, as each content difference region, a regular shape (for example, a circle, square, rectangle, or triangle) circumscribing the outer contour of the differing content. That is, in an actual implementation, for each of the at least one content difference region, the area of the region may be greater than or equal to the actual area of the differing content, and the shape of the region may differ from the shape of the differing content.
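The circumscribed rectangle and the 5 dp × 5 dp floor from the two paragraphs above can be combined into one small sketch. Assumptions: dp is treated as a pixel, a rectangle is used as the regular shape, and `circumscribed_rect` is an invented name:

```python
from typing import List, Optional, Tuple

Rect = Tuple[int, int, int, int]  # (top, left, bottom, right), inclusive

MIN_SIDE = 5  # the 5dp x 5dp floor from the description (dp treated as pixels)

def circumscribed_rect(diff_pixels: List[Tuple[int, int]]) -> Optional[Rect]:
    """Smallest axis-aligned rectangle covering all differing (row, col)
    pixels; regions smaller than MIN_SIDE x MIN_SIDE are discarded (None)
    to suppress cursor-blink noise and reduce later computation."""
    if not diff_pixels:
        return None
    rows = [r for r, _ in diff_pixels]
    cols = [c for _, c in diff_pixels]
    rect = (min(rows), min(cols), max(rows), max(cols))
    height = rect[2] - rect[0] + 1
    width = rect[3] - rect[1] + 1
    if height * width < MIN_SIDE * MIN_SIDE:
        return None
    return rect
```

Note that, as the description says, the rectangle's area can exceed the actual area of the differing content, since it merely circumscribes it.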
For example, fig. 3 is a schematic diagram of determining the content difference region between two adjacent screen capture images. As shown in fig. 3 (a), the 2nd (i.e., i = 2) screenshot image includes the person image 31 and the tree image 32; as shown in fig. 3 (b), the 3rd (i.e., i + 1 = 3) screenshot image includes the tree image 32, at the same position as in the 2nd screenshot image. Then, as shown in fig. 3 (c), the emoticon generating means may determine the rectangular region 33 circumscribing the person image 31 as the content difference region between the 2nd and 3rd screenshot images.
Step 103b, the expression package generating device determines the content difference area meeting the preset condition in the at least one content difference area as the first area.
Optionally, in this embodiment of the application, the preset condition may be at least one of the following: (1) the content difference region with the largest occurrence number; (2) the content difference region with the largest area; (3) a content difference region selected by the user. That is, the preset condition may be any one of (1), (2), and (3), or any two of (1), (2), and (3), or (1), (2), and (3).
Optionally, in this embodiment of the application, when the preset condition includes (1) above, the expression package generation device may determine the number of times each content difference area of the at least one content difference area appears, and determine the content difference area that appears the largest number of times as the first area.
In this embodiment, the number of occurrences of one content difference region may be the number of times at least a part of the content difference region appears; that is, two content difference regions that at least partially coincide each count as appearing in both.
Example 1, as shown in fig. 4, assume that the at least one content difference region consists of 5 content difference regions: region A1, region A2, region A3, region A4, and region A5. Since there is an intersection between region A1 and region A2 (the filled region shown in fig. 4), region A1 and region A2 each occur 2 times; since region A3 and region A4 completely coincide, region A3 and region A4 also each occur 2 times; region A5 does not coincide with any other region, so region A5 occurs 1 time. That is, the content difference regions with the largest number of occurrences are: region A1, region A2, region A3, and region A4.
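The occurrence counting in Example 1 can be sketched as an overlap count: each region's occurrence number is the number of regions (itself included) that it at least partially coincides with. The rectangle representation `(left, top, right, bottom)` and the helper names are assumptions of this sketch, not the patent's notation.

```python
def rects_intersect(r1, r2):
    """True if two (left, top, right, bottom) rectangles share any area."""
    l1, t1, r1_, b1 = r1
    l2, t2, r2_, b2 = r2
    return l1 < r2_ and l2 < r1_ and t1 < b2 and t2 < b1

def occurrence_counts(regions):
    # a region "occurs" once for every region it overlaps, including itself,
    # so an isolated region occurs 1 time, as in Example 1
    return [sum(rects_intersect(r, other) for other in regions) for r in regions]

a1, a2 = (0, 0, 4, 4), (3, 3, 6, 6)          # partial intersection
a3, a4 = (10, 10, 12, 12), (10, 10, 12, 12)  # completely coincide
a5 = (20, 20, 22, 22)                        # no overlap with any other region
print(occurrence_counts([a1, a2, a3, a4, a5]))  # [2, 2, 2, 2, 1]
```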
Optionally, in this embodiment of the application, when the preset condition includes (2) above, the expression package generation device determines, as the first area, a content difference area with a largest area in the at least one content difference area.
Optionally, in this embodiment of the application, when the preset condition includes the foregoing (3), the electronic device may display P identifiers, each identifier may indicate the position of one content difference region in the target coordinate system, and the user may determine the first region by selecting one of the P identifiers.
Optionally, in this embodiment of the application, the preset conditions are different, and the first areas determined according to the at least one content difference area may also be different.
Example 2, assume that the areas of region A1, region A2, region A3, region A4, and region A5 in example 1 above are s1, s2, s3, s4, and s5, respectively, where s1 > s2 > s3 > s4 and s5 > s1. Then, if the preset conditions are that the area is the largest and the number of occurrences is the largest, the expression package generation means may determine region A1 as the first area; if the preset condition is only that the area is the largest, the expression package generation means may determine region A5 as the first area.
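A hedged sketch of applying the preset conditions from Example 2. The helper name and the interplay of the two conditions (filter by occurrence count first, then by area) are illustrative assumptions of this sketch; the patent only states that the conditions may be combined, not in which order.

```python
def select_first_region(regions, counts, areas, max_count=False, max_area=False):
    """Pick a first region under the chosen preset conditions."""
    candidates = list(range(len(regions)))
    if max_count:   # preset condition (1): most occurrences
        best = max(counts[i] for i in candidates)
        candidates = [i for i in candidates if counts[i] == best]
    if max_area:    # preset condition (2): largest area
        best = max(areas[i] for i in candidates)
        candidates = [i for i in candidates if areas[i] == best]
    return regions[candidates[0]]

regions = ["A1", "A2", "A3", "A4", "A5"]
counts = [2, 2, 2, 2, 1]      # occurrence counts from Example 1
areas = [5, 4, 3, 2, 6]       # consistent with s1 > s2 > s3 > s4 and s5 > s1

print(select_first_region(regions, counts, areas, max_count=True, max_area=True))  # A1
print(select_first_region(regions, counts, areas, max_area=True))                  # A5
```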
It is understood that, in the embodiment of the present application, the first area is different, and the target emoticon may also be different.
It should be noted that, in the embodiment of the present application, the expression package generation device in the above embodiment first acquires the M screen capture images and then determines the content difference areas between adjacent screen capture images. In an actual implementation, after acquiring the i-th screen capture image and the (i + 1)-th screen capture image, the expression package generation device may determine the content difference area between the two screen capture images; it may then acquire the (i + 2)-th screen capture image and determine the content difference area between the (i + 1)-th and (i + 2)-th screen capture images; and so on, until the M-th screen capture image is acquired and the content difference area between the (M - 1)-th and M-th screen capture images is determined, thereby obtaining the at least one content difference area.
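The incremental loop described above can be sketched as follows: each screenshot is compared with the next one, and difference regions below an area threshold are discarded (the 5 dp × 5 dp filter mentioned earlier). The image representation, function names, and threshold unit (pixels rather than dp) are assumptions of this sketch.

```python
def diff_rect(a, b):
    """Bounding rectangle (left, top, right, bottom) of differing pixels, or None."""
    pts = [(x, y) for y, (ra, rb) in enumerate(zip(a, b))
           for x, (pa, pb) in enumerate(zip(ra, rb)) if pa != pb]
    if not pts:
        return None
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

def collect_difference_regions(shots, min_area=1):
    regions = []
    for i in range(len(shots) - 1):                # i-th vs (i + 1)-th screenshot
        r = diff_rect(shots[i], shots[i + 1])
        if r and (r[2] - r[0]) * (r[3] - r[1]) >= min_area:
            regions.append(r)                      # keep regions above the area threshold
    return regions

shots = [
    [list("A."), list("..")],
    [list("B."), list("..")],   # pixel (0, 0) changed
    [list("B."), list(".C")],   # pixel (1, 1) changed
]
print(collect_difference_regions(shots))  # [(0, 0, 1, 1), (1, 1, 2, 2)]
```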
In the embodiment of the application, different first areas can be determined according to different preset conditions, so that the flexibility and diversity of determining the first areas can be improved, and the flexibility and convenience of generating the custom expression packages can be improved.
In the embodiment of the application, at least one content difference area is obtained by the expression package generating device in a pixel comparison mode, so that the first area can be accurately determined.
The first mode and the second mode are described in detail below.
First mode
Optionally, in this embodiment of the application, in the first mode, the expression package generating device may first acquire N screen capture images (i.e., the N second images), then crop the image of the second region corresponding to the first region in the N screen capture images, and determine the N cropped images as the N target images. The N screen capture images are acquired at a different time from the M screen capture images. Specifically, the N screen capture images may be acquired after the M screen capture images are acquired, or after the first region is determined, which may be determined according to actual use requirements; the embodiment of the present application is not limited thereto.
For example, in the embodiment of the present application, before the step 104, the method for generating an emoticon provided in the embodiment of the present application may further include the following steps 105 to 107.
Step 105, the expression package generating device, in response to the first input, acquires N screen capture images of the current display interface of the electronic device.
Optionally, in this embodiment of the application, the content of the N screen capture images acquired by the expression package generation apparatus may correspond to the same content as, or to different content from, the content of the M screen capture images of the current display interface of the electronic device. This may be determined according to actual use requirements, and the embodiment of the present application is not limited thereto.
For example, assuming that the content a and the content b are displayed on the screen of the electronic device in a circulating manner, the content a and the content b may be included in the N screenshot images acquired by the expression package generation device, and the content a and the content b may also be included in the M screenshot images, so that the content of the N screenshot images acquired by the expression package generation device corresponds to the same content as the content of the M screenshot images.
For another example, if content a, content b, content c, and content d are sequentially displayed on the screen of the electronic device, and the four contents are different from one another, then if the M screen capture images acquired by the expression package generation apparatus include content a and content b, and the N screen capture images include content c and content d, the content of the N screen capture images corresponds to different content from the content of the M screen capture images.
Step 106, the expression package generating device determines a second area corresponding to the first area in the N screen capture images.
In the embodiment of the application, the N screen shots and the M screen shots are both in the target coordinate system, and the coordinate information of the second area in the target coordinate system is the same as the coordinate information of the first area in the target coordinate system.
Step 107, the expression package generating device crops the image of the second area in the N screen capture images to obtain the N target images.
In this embodiment of the application, for each screen capture image in the N screen capture images, the expression package generation device may crop the image of the second area in one screen capture image to obtain one target image. In this way, after the expression package generation device crops the image of the second area in the N screen capture images, the N target images can be obtained.
It is understood that the expression package generation device may crop the image of the second area from one screen capture image by retaining the image of the second area in the screen capture image and discarding the image outside the second area.
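Step 107 can be sketched as a simple rectangular crop applied to each of the N screenshots, since the second area has the same coordinates as the first area in the target coordinate system. The image representation and names below are illustrative assumptions, not the patent's implementation.

```python
def crop_region(image, rect):
    """Retain only the pixels inside rect = (left, top, right, bottom)."""
    left, top, right, bottom = rect
    return [row[left:right] for row in image[top:bottom]]

second_area = (1, 0, 3, 2)  # same coordinates as the first area in the target coordinate system
screenshots = [
    [list("abcd"), list("efgh"), list("ijkl")],
    [list("ABCD"), list("EFGH"), list("IJKL")],
]
# crop the second area from every screenshot to get the target images
targets = [crop_region(s, second_area) for s in screenshots]
print(targets[0])  # [['b', 'c'], ['f', 'g']]
```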
Optionally, in this embodiment of the application, the number N of screen capture images may be preset or determined by an input of the user (for example, the second input described below); when N is determined by the input of the user, the expression package generating device may acquire the N screen capture images after acquiring the M screen capture images.
In the embodiment of the application, the user does not need to manually search for and crop the expression package materials, so the process of generating the custom expression package can be further simplified and the man-machine interaction performance improved.
Second mode
Optionally, in this embodiment of the application, in a second implementation manner, after the step 103 and before the step 104, the method for generating an expression package provided by this embodiment of the application may further include the following step 108.
Step 108, the expression package generating device, in response to the first input, crops N images of the M screen capture images to obtain the N target images.
Wherein N is less than or equal to M.
Optionally, in this embodiment of the application, the N images may be any N images in the M screen capture images, and may be determined specifically according to actual use requirements, and this embodiment of the application is not limited.
For the description of step 108, reference may be made to the related description in step 107, and details are not repeated here to avoid repetition.
Optionally, in this embodiment of the application, the M screen shots in step 108 may be original images of the M screen shots in step 102, or may be M backup screen shots obtained by the expression package generation device backing up the M screen shots in step 102. The method can be determined according to actual use requirements, and the embodiment of the application is not limited.
In the embodiment of the application, on one hand, the M screen capture images can be used both for determining the first area and as the expression package materials for generating the expression package, so the expression package generation device does not need to capture the current display interface of the electronic equipment again, that is, the expression package materials do not need to be specially acquired; this saves the time consumed in acquiring expression package materials and increases the speed of generating the custom expression package. On the other hand, the N target images are the images of the first areas of N images of the M screen capture images, so it can be ensured that the N target images include at least one different content, and that the target expression package has a better display effect when a dynamic expression package is generated from the N target images.
Optionally, in this embodiment of the application, in the first mode, when the number N of screen capture images is determined by the input of the user, after receiving the first input of the user, the expression package generation apparatus may further display a second control (for example, an "end control"); the second control may be configured to trigger the expression package generation apparatus to end acquiring expression package materials and process the acquired materials to obtain a custom expression package (for example, the target expression package).
Optionally, in this embodiment of the application, after acquiring the M screen capture images, the expression package generation apparatus may continuously capture the screen of the electronic device again to obtain expression package materials (for example, the N screen capture images described below) for generating the expression package.
For example, in the embodiment of the present application, after the step 102 and before the step 105, the method for generating an expression package provided in the embodiment of the present application may further include the following step 109 and step 110. The step 105 can be specifically realized by the step 105a described below.
Step 109, the expression package generating device, in response to the first input, continuously captures the current display interface of the electronic equipment.
Step 110, the expression package generation device receives a second input of the user to the second control.
Step 105a, the expression package generation device, in response to the second input, acquires the N screen capture images.
The N screen capture images may be images captured from the moment the expression package generating apparatus starts continuously capturing the current display interface of the electronic device (i.e., from step 109) to the moment the second input is received. In other words, the N screen capture images are images captured between the start of the screen capture and the end of the screen capture.
Optionally, in this embodiment of the application, the second input may be a touch input of the user to the second control, and in an actual implementation, the second input may also be a voice input (for example, "stop recording" voice) or an input of a preset gesture.
It can be understood that, in the embodiment of the present application, since the second control is displayed after the first input is received by the expression package generation device, if the user inputs to the second control before the expression package generation device determines the first area, the expression package generation device may display a prompt message to prompt the user to cancel generating the expression package. Of course, the expression package generating apparatus may also set the state of the second control, for example, before step 109 is executed, the state of the second control is set to be in an inoperable state, and after step 109 is executed, the state of the second control is set to be in an operable state.
It is understood that, in the embodiment of the present application, the above-mentioned embodiment is exemplified by the step 109 being executed after the step 102, and in an actual implementation, the step 108 may also be executed after the step 103. The method can be determined according to actual use requirements, and the embodiment of the application is not limited.
In the embodiment of the application, in the process of generating the expression package, the user can trigger the expression package generation device through the second input, according to actual requirements, to stop acquiring expression package materials; that is, the device can be triggered to end acquiring the materials at any moment. This improves the flexibility of acquiring expression package materials and the man-machine interaction performance.
It can be understood that, in the embodiment of the present application, after ending the acquisition of expression package materials, the expression package generation device may continue to execute step 105a, or end the flow of generating the expression package.
It should be noted that the execution subject of the expression package generation method provided in the embodiment of the present invention may be the expression package generation apparatus, or may be a functional module and/or a functional entity in the expression package generation apparatus capable of implementing the expression package generation method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited thereto. The following takes the expression package generation method being executed by the expression package generation apparatus as an example to describe the expression package generation apparatus provided in the embodiment of the present invention.
As shown in fig. 5, an expression package generating apparatus 50 according to an embodiment of the present application is provided; the expression package generating apparatus 50 may include a receiving module 51, an obtaining module 52, a determining module 53, and a generating module 54. The receiving module 51 may be configured to receive a first input of a user to a first control, where the first control may be used to trigger generation of an expression package; the obtaining module 52 may be configured to obtain M screen capture images of a current display interface of the electronic device in response to the first input received by the receiving module 51; the determining module 53 may be configured to determine a first region according to the M screen capture images acquired by the obtaining module 52, where the first region may be a content difference region between the M screen capture images; the generating module 54 may be configured to generate a target expression package corresponding to the N target images according to the N target images corresponding to the first area determined by the determining module 53; M and N may both be positive integers.
In the expression package generation device provided by the embodiment of the application, the expression package generation device can determine the first region according to the acquired M screen capture images of the current display interface of the electronic device, and generate the target expression package (corresponding to the N target images) according to the N target images corresponding to the first region, without installing an expression package generation tool, searching for expression package materials, and manually clipping and processing the expression package materials in the expression package generation tool, so that the process of generating the custom expression package can be simplified.
Optionally, in this embodiment of the application, the expression package generating device may further include a first cutting module. The obtaining module 52 may be further configured to obtain N screen shots of a current display interface of the electronic device before the generating module 54 generates the target expression package corresponding to the N target images; the determining module 53 may be further configured to determine a second region corresponding to the first region in the N screenshot images; the first cropping module can be used for cropping the image of the second area in the N screen capture images to obtain N target images.
In the embodiment of the application, the user does not need to manually search for and crop the expression package materials, so the process of generating the custom expression package can be further simplified and the man-machine interaction performance improved.
Optionally, in this embodiment of the application, the expression package generating device may further include a second cutting module. The second cropping module may be configured to crop N images of the M screen shots to obtain N target images before the generating module 54 generates the target expression package corresponding to the N target images, where N is equal to or less than M.
In the embodiment of the application, on one hand, the M screen capture images can be used both for determining the first area and as the expression package materials for generating the expression package, so the expression package generation device does not need to capture the screen of the electronic equipment again; this saves the time consumed in obtaining expression package materials and increases the speed of generating the custom expression package. On the other hand, the N target images are the images of the first areas of N images of the M screen capture images, so it can be ensured that the N target images include at least one different content, and that the target expression package has a better display effect when a dynamic expression package is generated from the N target images.
Optionally, in this embodiment of the application, the determining module 53 may be specifically configured to perform pixel comparison on an i-th screenshot image of the M screenshot images and an i + 1-th screenshot image of the M screenshot images, and determine a content difference area between the i-th screenshot image and the i + 1-th screenshot image to obtain at least one content difference area; determining a content difference area satisfying a preset condition in at least one content difference area as a first area; the ith screen capture image and the (i + 1) th screen capture image can be screen capture images in M screen capture images, and i is an integer from 1 to M-1.
In the expression package generation device provided by the embodiment of the application, at least one content difference area is obtained by the expression package generation device in a pixel comparison mode, so that the first area can be accurately determined.
Optionally, in this embodiment of the application, the preset condition may include at least one of the following: the most number of occurrences, the largest area, and user selected.
In the expression package generation device provided by the embodiment of the application, different first regions can be determined according to different preset conditions, so that the flexibility and diversity of determining the first regions can be improved, and the flexibility and convenience for generating the user-defined expression package can be improved.
Optionally, in this embodiment of the application, the expression package generating device may further include a screen capture module; the screen capture module may be configured to continuously capture the current display interface of the electronic device after the obtaining module 52 obtains the M screen capture images and before it obtains the N screen capture images; the receiving module 51 may be further configured to receive a second input of the user to the second control; the obtaining module 52 may be specifically configured to, in response to the second input received by the receiving module 51, obtain the N screen capture images, where the N screen capture images may be images captured between the start of the continuous capture of the current display interface of the electronic device and the reception of the second input.
In the expression package generating device provided by the embodiment of the application, in the process of generating the expression package, the user can trigger the expression package generating device through the second input, according to actual requirements, to stop acquiring expression package materials; that is, the device can be triggered to stop acquiring the materials at any time. This improves the flexibility of acquiring expression package materials and the man-machine interaction performance.
The expression package generating device in the embodiment of the present application may be an electronic device, or may be a component, an integrated circuit, or a chip in the electronic device. The electronic device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The expression package generation device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The expression package generation apparatus 50 provided in this embodiment of the application can implement each process implemented by the expression package generation method in the method embodiments of fig. 1 to fig. 4; to avoid repetition, details are not described here again.
Optionally, as shown in fig. 6, an electronic device 200 is further provided in this embodiment of the present application, and includes a processor 201, a memory 202, and a program or an instruction stored in the memory 202 and executable on the processor 201, where the program or the instruction is executed by the processor 201 to implement each process of the embodiment of the expression package generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The user input unit 1007 may be configured to receive a first input of a user to a first control, where the first control may be used to trigger generation of an expression package; the processor 1010 may be configured to acquire M screen capture images of a current display interface of the electronic device in response to the first input received by the user input unit 1007, determine a first area according to the acquired M screen capture images, and generate a target expression package corresponding to the N target images according to the N target images corresponding to the first area. The first area may be a content difference area between the M screen capture images, and M and N may be positive integers.
In the electronic device provided by the embodiment of the application, the first region can be determined by the electronic device according to the acquired M screen capture images of the current display interface of the electronic device, and the target expression package (corresponding to the N target images) can be generated according to the N target images corresponding to the first region, without installing an expression package generation tool, searching for expression package materials and manually editing and processing the expression package materials in the expression package generation tool, so that the process of generating the user-defined expression package can be simplified.
Optionally, in this embodiment of the application, the processor 1010 may be further configured to obtain N screen shots of a current display interface of the electronic device before generating the target expression package corresponding to the N target images; determining a second area corresponding to the first area in the N screen capture images; and cutting the image of the second area in the N screen capture images to obtain N target images.
In the embodiment of the application, the user does not need to manually search for and crop the expression package materials, so the process of generating the custom expression package can be further simplified and the man-machine interaction performance improved.
Optionally, in this embodiment of the application, the processor 1010 is further configured to, before generating a target expression package corresponding to the N target images according to the N target images corresponding to the first area, crop the N images of the M screen shots to obtain the N target images, where N is equal to or less than M.
In the embodiment of the application, on one hand, the M screen capture images can be used both for determining the first area and as the expression package materials for generating the expression package, so the electronic equipment does not need to capture its screen again (that is, specially obtain expression package materials); this saves the time consumed in obtaining expression package materials and increases the speed of generating the custom expression package. On the other hand, the N target images are the images of the first areas of N images of the M screen capture images, so it can be ensured that the N target images include at least one different content, and that the target expression package has a better display effect when a dynamic expression package is generated from the N target images.
Optionally, in this embodiment of the application, the processor 1010 may be specifically configured to perform pixel comparison on an i-th screenshot image of the M screenshot images and an i + 1-th screenshot image of the M screenshot images, determine a content difference area between the i-th screenshot image and the i + 1-th screenshot image, and obtain at least one content difference area; determining a content difference area satisfying a preset condition in at least one content difference area as a first area; the ith screen capture image and the (i + 1) th screen capture image can be screen capture images in M screen capture images, and i is an integer from 1 to M-1.
In the electronic device provided by the embodiment of the application, the at least one content difference area is obtained by the electronic device in a pixel comparison mode, so that the first area can be accurately determined.
Optionally, in this embodiment of the application, the preset condition may include at least one of the following: the most number of occurrences, the largest area, and user selected.
In the electronic equipment provided by the embodiment of the application, different first regions can be determined according to different preset conditions, so that the flexibility and diversity of determining the first regions can be improved, and the flexibility and convenience of generating the user-defined expression packages can be improved.
Optionally, in this embodiment of the application, the processor 1010 may be further configured to perform continuous screen capturing on a current display interface of the electronic device after the M screen capturing images are acquired and before the N screen capturing images are acquired; the user input unit 1007 may be further configured to receive a second input to the second control; the processor 1010 may be specifically configured to, in response to a second input received by the user input unit 1007, acquire N screen capture images, where the N screen capture images may be images captured from the beginning of continuous screen capture of the current display interface of the electronic device to the receipt of the second input.
In the electronic device provided by this embodiment of the application, while the electronic device is generating an expression package, the user can trigger it at any time, through the second input and according to the user's actual needs, to stop acquiring expression package material. This improves the flexibility of acquiring expression package material and the human-computer interaction performance.
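The "capture continuously until the second input arrives" behaviour can be sketched as a simple loop guarded by a stop flag. This is an illustrative sketch only, not the patent's implementation: the capture callback and the `threading.Event` used to model the second input are assumptions.

```python
import threading
from typing import Callable, List

def capture_until_stopped(capture_frame: Callable[[], object],
                          stop_event: threading.Event,
                          max_frames: int = 1000) -> List[object]:
    """Keep capturing frames of the current display interface until the
    stop event (modelling the user's second input) is set, or a frame
    budget is exhausted as a safety cap."""
    frames = []
    while not stop_event.is_set() and len(frames) < max_frames:
        frames.append(capture_frame())
    return frames
```

In practice the stop event would be set from the UI thread when the second control is tapped, while the capture loop runs on a worker thread.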
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above expression package generation method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is a processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface coupled to the processor, and the processor is configured to execute a program or instructions to implement each process of the above expression package generation method embodiment and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order, that is, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), which includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are illustrative rather than restrictive, and that various changes may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An expression package generation method, characterized in that the method comprises:
receiving a first input of a user to a first control, wherein the first control is used for triggering generation of an expression package;
responding to the first input, and acquiring M screen capture images of a current display interface of the electronic equipment;
determining a first region according to the M screen capture images, wherein the first region is a content difference region, among at least one content difference region between the M screen capture images, that satisfies a preset condition, and the preset condition is at least one of the following: occurring most frequently; having the largest area; being selected by a user;
generating, according to N target images corresponding to the first region, a target expression package corresponding to the N target images;
wherein the generating, according to the N target images corresponding to the first region, the target expression package corresponding to the N target images comprises:
encoding each of the N target images according to a target encoding mode, and generating the target expression package from the encoded images; wherein the target encoding mode is an encoding mode corresponding to a file format of a static expression package and the target expression package is a static expression package, or the target encoding mode is an encoding mode corresponding to a file format of a dynamic expression package and the target expression package is a dynamic expression package;
wherein M and N are both positive integers.
2. The method of claim 1, wherein before generating the target expression package corresponding to the N target images from the N target images corresponding to the first region, the method further comprises:
acquiring N screen capture images of a current display interface of the electronic equipment;
determining a second region corresponding to the first region in the N screen capture images; and
cropping images of the second region from the N screen capture images to obtain the N target images.
3. The method of claim 1, wherein before generating the target expression package corresponding to the N target images from the N target images corresponding to the first region, the method further comprises:
cropping N images from the M screen capture images to obtain the N target images, wherein N is less than or equal to M.
4. The method according to any one of claims 1 to 3, wherein determining a first region from the M screen shots comprises:
performing a pixel comparison between an i-th screen capture image of the M screen capture images and an (i+1)-th screen capture image of the M screen capture images, and determining a content difference region between the i-th screen capture image and the (i+1)-th screen capture image, to obtain at least one content difference region; and
determining, as the first region, a content difference region that satisfies a preset condition among the at least one content difference region;
wherein i is an integer taken from 1 to M-1.
5. The method according to claim 2, wherein after acquiring the M screen shots of the current display interface of the electronic device and before acquiring the N screen shots of the current display interface of the electronic device, the method further comprises:
continuously screen-capturing a current display interface of the electronic equipment;
receiving a second input to a second control;
the acquiring of the N screen shots of the current display interface of the electronic device includes:
in response to the second input, acquiring the N screen capture images, wherein the N screen capture images are images captured from the start of the continuous screen capture of the current display interface of the electronic device to the receipt of the second input.
6. An expression package generation device, characterized in that the device comprises a receiving module, an acquisition module, a determining module, and a generating module;
the receiving module is used for receiving a first input of a user to a first control, and the first control is used for triggering generation of an expression package;
the acquisition module is configured to acquire, in response to the first input received by the receiving module, M screen capture images of a current display interface of an electronic device;
the determining module is configured to determine a first region according to the M screen capture images acquired by the acquisition module, wherein the first region is a content difference region, among at least one content difference region between the M screen capture images, that satisfies a preset condition, and the preset condition is at least one of the following: occurring most frequently; having the largest area; being selected by a user;
the generating module is used for generating a target expression package corresponding to the N target images according to the N target images corresponding to the first area determined by the determining module;
the generating module is further configured to encode each of the N target images according to a target encoding mode and generate the target expression package from the encoded images; wherein the target encoding mode is an encoding mode corresponding to a file format of a static expression package and the target expression package is a static expression package, or the target encoding mode is an encoding mode corresponding to a file format of a dynamic expression package and the target expression package is a dynamic expression package;
wherein M and N are both positive integers.
7. The apparatus of claim 6, further comprising a first cropping module;
the acquisition module is further configured to acquire N screen capture images of the current display interface of the electronic device before the generating module generates the target expression package corresponding to the N target images;
the determining module is further configured to determine a second region corresponding to the first region in the N screen capture images; and
the first cropping module is configured to crop images of the second region from the N screen capture images to obtain the N target images.
8. The apparatus of claim 6, further comprising a second cropping module;
the second cropping module is configured to crop N images from the M screen capture images to obtain the N target images before the generating module generates the target expression package corresponding to the N target images according to the N target images corresponding to the first region, wherein N is less than or equal to M.
9. The apparatus according to any one of claims 6 to 8,
the determining module is specifically configured to perform a pixel comparison between an i-th screen capture image of the M screen capture images and an (i+1)-th screen capture image of the M screen capture images, determine a content difference region between the i-th screen capture image and the (i+1)-th screen capture image, to obtain at least one content difference region, and determine, as the first region, a content difference region that satisfies a preset condition among the at least one content difference region;
wherein i is an integer taken from 1 to M-1.
10. The apparatus of claim 7, further comprising a screen capture module;
the screen capture module is configured to continuously capture screenshots of the current display interface of the electronic device after the acquisition module acquires the M screen capture images and before the acquisition module acquires the N screen capture images;
the receiving module is further configured to receive a second input to a second control; and
the acquisition module is specifically configured to acquire, in response to the second input received by the receiving module, the N screen capture images, wherein the N screen capture images are images captured from the start of the continuous screen capture of the current display interface of the electronic device to the receipt of the second input.
CN202010694477.XA 2020-07-17 2020-07-17 Expression package generation method and device Active CN111984173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010694477.XA CN111984173B (en) 2020-07-17 2020-07-17 Expression package generation method and device


Publications (2)

Publication Number Publication Date
CN111984173A CN111984173A (en) 2020-11-24
CN111984173B true CN111984173B (en) 2022-03-25

Family

ID=73438657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010694477.XA Active CN111984173B (en) 2020-07-17 2020-07-17 Expression package generation method and device

Country Status (1)

Country Link
CN (1) CN111984173B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929917A (en) * 2011-09-20 2013-02-13 微软公司 Dynamic content feed source filtering
CN104572056A (en) * 2013-10-24 2015-04-29 阿里巴巴集团控股有限公司 Page comparison method and device
CN106658079A (en) * 2017-01-05 2017-05-10 腾讯科技(深圳)有限公司 Customized expression image generation method and device
CN106780685A (en) * 2017-03-23 2017-05-31 维沃移动通信有限公司 The generation method and terminal of a kind of dynamic picture
CN107657583A (en) * 2017-08-29 2018-02-02 努比亚技术有限公司 A kind of screenshot method, terminal and computer-readable recording medium
CN108200463A (en) * 2018-01-19 2018-06-22 上海哔哩哔哩科技有限公司 The generation system of the generation method of barrage expression packet, server and barrage expression packet
CN110163932A (en) * 2018-07-12 2019-08-23 腾讯数码(天津)有限公司 Image processing method, device, computer-readable medium and electronic equipment
CN110231905A (en) * 2019-05-07 2019-09-13 华为技术有限公司 A kind of screenshotss method and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968244B (en) * 2012-10-29 2016-03-30 小米科技有限责任公司 The acquisition methods of resource pre-review figure, device and equipment
CN107292944A (en) * 2017-06-26 2017-10-24 上海传英信息技术有限公司 The method for recording and recording system of a kind of screen picture
CN111010610B (en) * 2019-12-18 2022-01-28 维沃移动通信有限公司 Video screenshot method and electronic equipment



Similar Documents

Publication Publication Date Title
CN111612873B (en) GIF picture generation method and device and electronic equipment
CN112422817B (en) Image processing method and device
CN111901896A (en) Information sharing method, information sharing device, electronic equipment and storage medium
CN112954199B (en) Video recording method and device
CN112770059B (en) Photographing method and device and electronic equipment
CN110889379A (en) Expression package generation method and device and terminal equipment
CN112672061B (en) Video shooting method and device, electronic equipment and medium
CN112291475B (en) Photographing method and device and electronic equipment
CN113570609A (en) Image display method and device and electronic equipment
CN112230831A (en) Image processing method and device
CN113721876A (en) Screen projection processing method and related equipment
CN113593614A (en) Image processing method and device
CN113347356A (en) Shooting method, shooting device, electronic equipment and storage medium
CN111984173B (en) Expression package generation method and device
CN114827339B (en) Message output method and device and electronic equipment
CN114143455B (en) Shooting method and device and electronic equipment
CN113794943B (en) Video cover setting method and device, electronic equipment and storage medium
CN113852774B (en) Screen recording method and device
CN112367487B (en) Video recording method and electronic equipment
CN113163256B (en) Method and device for generating operation flow file based on video
CN112383666B (en) Content sending method and device and electronic equipment
CN113778300A (en) Screen capturing method and device
CN113779293A (en) Image downloading method, device, electronic equipment and medium
CN113691756A (en) Video playing method and device and electronic equipment
CN112684912A (en) Candidate information display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant