CN111612873B - GIF picture generation method and device and electronic equipment

GIF picture generation method and device and electronic equipment

Info

Publication number
CN111612873B
Authority
CN
China
Prior art keywords
target
video
image
frame
target video
Prior art date
Legal status
Active
Application number
CN202010478312.9A
Other languages
Chinese (zh)
Other versions
CN111612873A (en)
Inventor
付玉迪
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010478312.9A
Publication of CN111612873A
Priority to PCT/CN2021/095887 (WO2021238943A1)
Application granted
Publication of CN111612873B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the application provide a GIF picture generation method and apparatus, and an electronic device, which belong to the field of communication technology and can solve the problem that sharing a GIF picture takes a long time and may even fail. The method includes: receiving a first input from a user, where the first input is used to select a target object in a target video frame of a target video; in response to the first input, extracting the image content of the target object in each of M first video frames of the target video to obtain M target video images; and generating a target GIF picture based on the M target video images. The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in an object selection frame; M is a positive integer; and the M first video frames include the target video frame. The method can be applied to the process of generating a GIF picture from a video.

Description

GIF picture generation method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a GIF picture generation method and device and electronic equipment.
Background
Graphics Interchange Format (GIF) pictures (for example, GIF expression packs) are entertaining, so GIF pictures are widely used in chat tools such as instant messaging applications. However, the data size of a GIF picture is generally large, so sharing a GIF picture takes a long time and may even fail.
Disclosure of Invention
Embodiments of the application aim to provide a GIF picture generation method and apparatus, and an electronic device, which can solve the problem that sharing a GIF picture takes a long time and may even fail.
In order to solve the technical problems, the application is realized as follows:
In a first aspect, an embodiment of the present application provides a GIF picture generation method. The method includes: receiving a first input from a user, where the first input is used to select a target object in a target video frame of a target video; in response to the first input, extracting the image content of the target object in each of M first video frames of the target video to obtain M target video images; and generating a target GIF picture based on the M target video images. The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in an object selection frame; M is a positive integer; and the M first video frames include the target video frame.
In a second aspect, an embodiment of the present application provides a GIF picture generation apparatus, including: a receiving module, configured to receive a first input from a user, where the first input is used to select a target object in a target video frame of a target video; an editing module, configured to, in response to the first input received by the receiving module, extract the image content of the target object in each of M first video frames of the target video to obtain M target video images; and a generation module, configured to generate a target GIF picture based on the M target video images obtained by the editing module. The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in an object selection frame; M is a positive integer; and the M first video frames include the target video frame.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and the program or instructions when executed by the processor implement the steps of the GIF picture generation method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the GIF picture generation method as in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the GIF picture generation method according to the first aspect.
In the embodiments of the application, the image content of the target object in each of the M first video frames of the target video can be extracted in response to the first input by which the user selects the target object in the target video frame of the target video, so that M target video images are obtained, and a target GIF picture is generated based on the M target video images. The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in an object selection frame; M is a positive integer; and the M first video frames include the target video frame. Since each target video image is part of the content extracted from a first video frame according to the target video object or the object selection frame, rather than the entire content of the first video frame (for example, only the part of the first video frame that the user needs), the data size of the target GIF picture obtained by converting the target video is smaller, which facilitates successful sharing of the GIF picture.
Drawings
Fig. 1 is a schematic flowchart of a GIF picture generation method according to an embodiment of the present application;
fig. 2 is a first operation schematic diagram of a GIF picture generation method according to an embodiment of the present application;
fig. 3 is a second operation schematic diagram of a GIF picture generation method according to an embodiment of the present application;
fig. 4 is a third operation schematic diagram of a GIF picture generation method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a possible GIF picture generation apparatus according to an embodiment of the present application;
fig. 6 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 7 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second", and the like in the specification and the claims are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of objects is not limited; for example, there may be one or more first objects. In addition, the character "/" in the specification and the claims generally indicates an "or" relationship between the associated objects.
The GIF picture generation method provided by the embodiment of the application can be applied to a scene for generating a GIF picture, for example, a scene for generating the GIF picture through video.
With the GIF picture generation method and apparatus and the electronic device provided in the embodiments of the application, the image content of the target object in each of the M first video frames of the target video can be extracted in response to the first input by which the user selects the target object in the target video frame of the target video, so that M target video images are obtained, and a target GIF picture is generated based on the M target video images. The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in an object selection frame; M is a positive integer; and the M first video frames include the target video frame. Since each target video image is part of the content extracted from a first video frame according to the target video object or the object selection frame, rather than the entire content of the first video frame (for example, only the part of the first video frame that the user needs), the data size of the target GIF picture obtained by converting the target video is smaller, which facilitates successful sharing of the GIF picture.
The GIF picture generation method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenarios thereof with reference to the accompanying drawings.
The GIF picture generation method provided in the embodiments of the present application is described in detail below with reference to the flowchart of the GIF picture generation method shown in fig. 1. Although a logical order of the GIF picture generation method is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one here. For example, the GIF picture generation method shown in fig. 1 may include steps 101 to 103:
step 101, the GIF picture generating device receives a first input of a user, where the first input is used to select a target object in a target video frame of a target video.
By way of example, the terminal provided in the embodiment of the present application may include a display screen, where the display screen may support touch input or fingerprint input of a user. Specifically, the first input may be a touch screen input or a fingerprint input. The touch screen input may be a touch input such as a press input, a long press input, a slide input, a click input, a hover input (an input of a user near the touch screen) of a touch screen of the terminal by a user; the fingerprint input can be the fingerprint input of a user to a fingerprint identifier of a terminal, such as a sliding fingerprint, a long-press fingerprint, a single-click fingerprint, a double-click fingerprint and the like.
The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in the object selection frame. That is, the selection of the target object from each first video frame is accomplished through a target video object or an object selection frame.
Step 102, the GIF picture generation device, in response to the first input, extracts the image content of the target object in each of M first video frames of the target video to obtain M target video images.
Wherein the M first video frames include target video frames, M being a positive integer.
It should be noted that, in the following embodiments, the GIF picture generation method in the embodiments of the present application is described through application scenario 1 to application scenario 3, respectively.
Specifically, in the application scenario 1, the target objects in different first video frames satisfy: the target objects in the different first video frames each contain a target video object (denoted as condition 1).
Alternatively, the target video object may be a person, animal, or still object in a video frame. For example, the target video object may be a foreground object (i.e., foreground) of a person or animal in a video frame, or a background object (i.e., background) of a tree or building in a video frame.
In application scenario 2, the target objects in different first video frames satisfy: the target objects in the different first video frames are all objects in the object selection frame (denoted as condition 2).
In application scenario 3, the target objects in different first video frames satisfy: the target objects in the different first video frames each contain a target video object, and the target objects in the different first video frames are all objects in the object selection frame (denoted as condition 3).
Optionally, the M first video frames are part or all of the video frames in the target video.
Alternatively, the M first video frames may be video frames in the target video selected manually by the user, or video frames in the target video selected automatically by the electronic device.
Specifically, the process of capturing, by the GIF picture generation device, the image content of the target object in the first video frame specifically includes: and identifying the target object from the first video frame, and performing a matting operation on the image content of the target object so as to obtain all pixel points of the image content of the target object.
For example, the image content of the target object in one of the M first video frames may be irregularly shaped image content, or one or more video objects in the first video frame (such as the foreground of a person or an animal). This makes the generated target GIF picture more interesting.
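To make the matting step concrete, the following is a minimal Python sketch of how the image content of the target object could be extracted from one first video frame, assuming a binary segmentation mask for the object is already available (for example, from any off-the-shelf segmentation model; the embodiment does not prescribe a particular one). The helper below is illustrative only.

```python
# Sketch of the matting in step 102 for a single first video frame: locate all
# pixel points of the target object and crop its (possibly irregularly shaped)
# image content. The mask is assumed to come from whatever segmentation model
# the device uses.
import numpy as np
from PIL import Image

def extract_target_content(frame: Image.Image, mask: np.ndarray):
    """frame: one first video frame (RGB); mask: HxW bool array, True on the object."""
    ys, xs = np.nonzero(mask)                      # all pixel points of the object
    top, bottom = int(ys.min()), int(ys.max()) + 1
    left, right = int(xs.min()), int(xs.max()) + 1
    crop = frame.crop((left, top, right, bottom))  # tight box around the object
    return crop, mask[top:bottom, left:right]      # target video image + its mask
```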
Step 103, the GIF picture generation device generates a target GIF picture based on the M target video images.
The frame rate of the target GIF picture (denoted as target frame rate) is the frame rate of the target video or a preset frame rate.
Optionally, the preset frame rate may be manually selected by the user or default to the electronic device, which is not specifically limited in the embodiment of the present application, and may be set according to the actual requirement of the user.
It can be understood that the electronic device may arrange the corresponding target video images in sequence according to the frame order of the M first video frames, so as to synthesize the M target video images obtained by the matting into the target GIF picture at the target frame rate.
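As an illustration of this synthesis, the following Python sketch (using the Pillow library) arranges already-ordered target video images into a GIF at a given target frame rate; the 10 fps default is only an example value, not something fixed by the embodiment.

```python
# Minimal sketch of step 103 with Pillow: arrange the M target video images in
# frame order and write them as a GIF at the target frame rate.
from PIL import Image

def build_gif(target_images, out_path="target.gif", frame_rate=10):
    """target_images: list of PIL images already ordered by frame sequence."""
    duration_ms = int(1000 / frame_rate)   # display time of each frame
    first, *rest = target_images
    first.save(
        out_path,
        save_all=True,
        append_images=rest,
        duration=duration_ms,
        loop=0,                            # loop forever
        disposal=2,                        # clear each frame before the next one
    )
```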
Alternatively, the display sizes of different target video images in the M target video images may be the same or different, which may be determined according to the actual needs of the user, and this will not be described in detail in the embodiments of the present application.
It is understood that the target GIF picture may be a dynamic expression pack or decal.
With the GIF picture generation method provided in the embodiments of the application, the image content of the target object in each of the M first video frames of the target video can be extracted in response to the first input by which the user selects the target object in the target video frame of the target video, so that M target video images are obtained, and a target GIF picture is generated based on the M target video images. The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in an object selection frame; M is a positive integer; and the M first video frames include the target video frame. Since each target video image is part of the content extracted from a first video frame according to the target video object or the object selection frame, rather than the entire content of the first video frame (for example, only the part of the first video frame that the user needs), the data size of the target GIF picture obtained by converting the target video is smaller, which facilitates successful sharing of the GIF picture and makes the GIF picture more interesting.
Optionally, the electronic device may provide a human-machine interaction interface for converting the video into a GIF picture (e.g., converting the target video into a target GIF picture), so as to enable the user to control the electronic device to generate the target video into the target GIF picture through the human-machine interaction interface.
Optionally, in the embodiment of the present application, the electronic device may provide a control for triggering the conversion of a video into a GIF picture (denoted as a "GIF sticker" control), which is used to trigger the electronic device to perform the process of generating a GIF picture from a video. After the user inputs the GIF sticker control, the electronic device may obtain the image content of the target object in each first video frame of the target video according to any one of application scenario 1 to application scenario 3. For example, after the user inputs the GIF sticker control, the electronic device may display one or more function controls, each of which is configured to obtain the image content of the target object in each first video frame of the target video using one of application scenario 1 to application scenario 3.
For example, as shown in fig. 2 (a), in the case where the electronic device displays the content of the target video 31 on the video playing interface, the video playing interface may further include a "make GIF sticker" control 32, where the "make GIF sticker" control 32 is used to trigger the electronic device to generate the target video 31 into a target GIF picture. After the user inputs the "make GIF sticker" control 32 shown in fig. 2 (a), the electronic device may display a "main GIF sticker" function control 33 and an "area GIF sticker" function control 34 as shown in fig. 2 (b). The "main GIF sticker" function control 33 and the "area GIF sticker" function control 34 are respectively used to trigger the electronic device to obtain the image content of the target object in each first video frame of the target video 31 according to application scenario 1 and application scenario 2, so as to obtain the target GIF picture. Specifically, in application scenario 1, for each first video frame, the electronic device may first identify the target object and determine the image content of the target object, and then perform a matting operation on the image content of the target object, so as to obtain the target video image in each first video frame.
Similarly, for the control used to trigger the electronic device to acquire the image content of the target object in each first video frame according to application scenario 3, reference may be made to the "main GIF sticker" function control 33 and the "area GIF sticker" function control 34 shown in (b) of fig. 2, which are not described in detail in the embodiments of the present application.
In one possible implementation, in application scenario 1 and application scenario 3 of the embodiments of the present application, the target object in each of the M first video frames contains a target video object.
Optionally, in application scenario 1, steps 104 and 105 may further be included before step 101 in this embodiment of the present application. Accordingly, step 101 may be implemented as step 101a. In addition, step 106 may further be included before step 102:
step 104, the GIF picture generation device receives a second input of the user.
The description of the input manner of the second input may refer to the related description of the input manner of the first input in the above embodiment, which is not described herein again.
Step 105, the GIF picture generation apparatus displays at least one video object identification in response to the second input, each video object identification indicating one of the video objects in the target video frame.
It will be appreciated that the target video may include a plurality of video objects, such as a plurality of people, animals, or still objects. In that case, each first video frame may also include one or more video objects. Some video objects are always in the picture of the target video, i.e., all video frames of the target video include the image content of those video objects, while other video objects appear only in part of the picture of the target video, i.e., only some video frames of the target video include the image content of those video objects.
For example, in application scenario 1, the target object may comprise one or more video objects in the target video, and the image content of the target object is in some or all of the video frames in the target video.
For example, the electronic device may provide an identification control by which, in the case where the electronic device displays a first video frame, the electronic device may be triggered to identify and tag one or more video objects in the first video frame, such as generating a tag for each of the one or more video objects, and further selecting, by the user, the target object from the one or more video objects. For example, the second input may be a click input to the recognition control described above.
Step 101a, the GIF picture generation device receives a first input from a user of a target video object identification of the at least one video object identification.
At this time, the first input may be a click input.
Step 106, the GIF picture generating device determines the video object indicated by the target video object identifier as a target video object, wherein the target object comprises the target video object.
With reference to fig. 2, after the user inputs the "main GIF sticker" function control 33 shown in (b) of fig. 2, the electronic device may display, on the screen, a first video frame 41 of the target video and a recognition control 42, as shown in (a) of fig. 3. Subsequently, after the user inputs the recognition control 42 (e.g., the second input), the electronic device may display a "cat" tag 43 and a "ball" tag 44 under the first video frame 41, as shown in fig. 3 (b). The "cat" tag 43 is used to mark the video object "cat" in the first video frame 41, and the "ball" tag 44 is used to mark the video object "ball" in the first video frame. In this case, the M first video frames of the target video 31 include the two video objects "cat" and "ball". Subsequently, after the user inputs (e.g., the first input) the "cat" tag 43 shown in fig. 3 (b), the electronic device may determine that the video object "cat" in the target video 31 is the target video object. Further, the electronic device can determine all the first video frames in the target video whose target object includes the image content of the cat.
In addition, as shown in (a) of fig. 3, the electronic device may also display video editing controls such as "clip", "text", and "music" for editing the target video.
Therefore, the user can intuitively and conveniently select the target video object meeting the requirement through at least one video object identifier, and each target video image meets the requirement of the user, and the target GIF picture meets the requirement of the user. Therefore, the interest of the target GIF picture is improved.
In one possible implementation, in application scenario 2, the target object in each first video frame is an object in the object selection frame, and the video frame (e.g., the target video frame) displayed on the screen (i.e., the display screen) also has an object selection frame displayed on it.
Optionally, in application scenario 2, step 107 is further included before step 102, and accordingly step 102 may be implemented by step 102b:
step 107, the GIF picture generation device determines the object framed by the object selection frame as a target object.
It should be noted that, in the embodiment of the present application, the object selection frame is used to frame the image content from the video frames, for example, to frame the image content of the target object in the first video frame. Specifically, the object framed by the object selection frame may be an object in the object selection frame in a video frame.
Alternatively, the shape of the object selection frame may be any shape, such as a rectangular shape, a circular shape, a triangular shape, or an irregular shape, and may be determined according to the actual needs of the user, which is not specifically limited herein.
Alternatively, the display size of the object selection frame may be a preset size or a size manually selected by the user, and may be determined according to the actual requirement of the user, which is not specifically limited here.
Optionally, the display positions and display sizes of the object selection frames in different first video frames are the same or different.
It will be appreciated that the electronic device typically uses the region of the first video frame where the foreground (i.e., the subject, such as the video object) is located as the display location of the object selection box in the first video frame. Wherein a default object selection frame may be placed in the center of a video frame, since the foreground in that video frame is typically placed in the center of the video frame.
It should be noted that a picture (such as an image of a video frame) includes a foreground and a background, where the foreground is an image to be highlighted in the picture, and the background is an image content of the picture other than the foreground.
For example, the foreground of a person picture is the person image in the person picture, or the face image in the person picture. For another example, the foreground of a scene picture is the scene image in the scene picture.
Optionally, the embodiment of the application may determine the foreground in a video frame according to the pixel values of each pixel point of the image of the video frame. For example, the pixel values of the foreground in a video frame satisfy a range of values, such as greater than a preset threshold.
Optionally, the embodiment of the application may determine the foreground in a video frame according to the contrast of the image of the video frame. For example, the difference between the contrast of the foreground in a video frame and the contrast of other image content in the video frame satisfies a range of values.
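The following Python sketch illustrates one way the default placement described above could be realized: estimate the foreground with a simple intensity threshold (the concrete metric and the threshold value are assumptions; the embodiment only requires that pixel values fall within some numerical range) and center the object selection frame on it, falling back to the center of the frame when nothing qualifies.

```python
# Sketch of the default object selection frame placement. Assumes the box fits
# inside the frame; the threshold-based foreground rule is only an example.
import numpy as np
from PIL import Image

def default_selection_box(frame: Image.Image, box_w=300, box_h=300, threshold=128):
    gray = np.asarray(frame.convert("L"), dtype=np.uint8)
    fg = gray > threshold                                   # assumed foreground rule
    if fg.any():
        ys, xs = np.nonzero(fg)
        cy, cx = int(ys.mean()), int(xs.mean())             # foreground centroid
    else:
        cy, cx = gray.shape[0] // 2, gray.shape[1] // 2     # frame center fallback
    left = min(max(cx - box_w // 2, 0), gray.shape[1] - box_w)
    top = min(max(cy - box_h // 2, 0), gray.shape[0] - box_h)
    return (left, top, left + box_w, top + box_h)           # (left, top, right, bottom)
```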
Step 102b, the GIF picture generating device extracts image contents selected by the object selection frame in each of the M first video frames, and obtains M target video images.
Specifically, in step 102b, the GIF picture generating device may determine, for each first video frame, an object selection frame in the first video frame, and then perform a matting operation on an image content in the object selection frame (i.e., an image content of a target object), so as to obtain a target video image in each first video frame.
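A minimal Python sketch of step 102b follows, assuming the object selection frame is expressed as a rectangle in frame coordinates; the helper name is illustrative.

```python
# Sketch of step 102b: with the object selection frame fixed at one display
# position and size, crop that region out of each first video frame.
from typing import Iterable, List, Tuple
from PIL import Image

def crop_by_selection_box(frames: Iterable[Image.Image],
                          box: Tuple[int, int, int, int]) -> List[Image.Image]:
    """box is (left, top, right, bottom) in frame coordinates."""
    return [frame.crop(box) for frame in frames]
```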
Illustratively, referring to fig. 2, after the user inputs the "area GIF sticker" function control 34 shown in fig. 2 (b), the electronic device may display a rectangular selection frame 51 on the first video frame 41, as shown in fig. 4. The user may trigger the electronic device through the selection frame 51 to determine the display size and the display position of the selection frame 51, so as to extract the target video image in each first video frame.
Therefore, the user can intuitively and conveniently frame and select the target object meeting the requirements through the object selection frame, each target video image is enabled to meet the requirements of the user, and the target GIF picture is enabled to meet the requirements of the user. Therefore, the interest of the target GIF picture is improved.
Optionally, step 108 and step 109 are further included before step 102b above:
step 108, the GIF picture generation device receives a third input of the user to the object selection frame.
The input manner of the third input may refer to the related description of the input manner of the first input in the above embodiments, which is not repeated here. For example, the third input may be an input dragging the object selection frame.
Step 109, the GIF picture generation apparatus adjusts the display position and the display size of the object selection frame in response to the third input.
Illustratively, referring to fig. 4, the user may trigger the electronic device through the selection frame 51 to adjust the display size and the display position of the selection frame 51.
Optionally, the center of the default object selection frame of the electronic device is the center of the picture of the first video frame. If the user does not manually select the display position and display size of the object selection frame in the first video frame, the electronic device uses the default object selection frame as the object selection frame (including its display size and display area) of the current first video frame. If the user manually selects an object selection frame in the first video frame, the electronic device determines the object selection frame manually selected by the user as the object selection frame of the first video frame.
Therefore, the user can adjust the display position and the display size of the object selection frame through the third input, so that the target object selected through the object selection frame meets the user requirement, each target video image further meets the user requirement, and the target GIF picture meets the user requirement. Therefore, the interest of the target GIF picture is improved.
Optionally, in application scenario 3 of the embodiment of the present application, in combination with application scenario 2, step 102b may further include step 110:
step 110, the GIF picture generating device controls the object selection frame to move along with the target video object in the K second video frames based on the display position and the display size of the object selection frame on the target video frame, so that the objects selected by the object selection frame in each second video frame are the same.
The K second video frames are video frames except the target video frame in the M first video frames, and K is a positive integer smaller than M.
Similarly, for the specific manner in which the electronic device acquires the image content of the target object in each first video frame in application scenario 3, reference may be made to the above descriptions of application scenario 1 and application scenario 2, which are not repeated here. For example, in application scenario 3, the user may trigger the electronic device to identify and select the target object through the recognition control, and the electronic device may provide an object selection frame that selects the target object in each first video frame in which the image content of the target object is located. Further, the electronic device takes the image content of the target object within the object selection frame as the target video image of the first video frame.
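The following sketch illustrates step 110 under the assumption that a per-frame object mask (the same kind used for matting) is available: the selection frame keeps the display size it had on the target video frame and is re-centered on the target video object in each second video frame.

```python
# Sketch of step 110: the object selection frame moves with the target video
# object across the K second video frames while keeping a fixed size.
from typing import List, Tuple
import numpy as np

def follow_object(initial_box: Tuple[int, int, int, int],
                  masks: List[np.ndarray]) -> List[Tuple[int, int, int, int]]:
    left, top, right, bottom = initial_box
    w, h = right - left, bottom - top               # size fixed on the target frame
    boxes = []
    for mask in masks:                              # one mask per second video frame
        ys, xs = np.nonzero(mask)
        if ys.size == 0:                            # object not visible: reuse last box
            boxes.append(boxes[-1] if boxes else initial_box)
            continue
        cy, cx = int(ys.mean()), int(xs.mean())     # object center in this frame
        boxes.append((cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2))
    return boxes
```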
It should be noted that, because the target video image acquired by the electronic device from a first video frame may be not only the image of the target object but also the image within the selected region, the image content obtained from each first video frame better matches what the user requires, which helps improve the interest of the generated target GIF picture.
Optionally, in this embodiment of the present application, before step 102, the method further includes step 111:
step 111, the GIF picture generation device acquires N third video frames with image quality parameters within a preset numerical range in the target video frames.
The M first video frames are video frames in N third video frames, and N is a positive integer greater than or equal to M.
Optionally, the M first video frames are: and video frames with image quality parameters in a preset numerical range in the target video.
It can be understood that the video frame with the image quality parameter in the preset numerical range in the target video is a video frame with better quality in the target video; otherwise, the video frames with the image quality parameters outside the preset numerical range in the target video are video frames with poor quality in the target video. For example, the image quality parameter is the sharpness of an image of a video frame. The video frames with the image quality parameters within the preset numerical range are video frames with better definition; otherwise, the video frames with the image quality parameters outside the preset numerical range are video frames with lower definition, namely more blurred video frames.
For example, the "obtaining N third video frames whose image quality parameters are within the preset numerical range in the target video frame" may exclude blurred pictures (video frames) with unclear subjects (i.e., video objects) according to a preferred algorithm of the pictures, and select video frames with high scores from composition and aesthetic classification to be displayed in the video, while thumbnail images (described in detail in the embodiments below) display all preferred video frames.
Optionally, in combination with the application scenario 1 to the application scenario 3, the electronic device may first obtain, by using an intelligent algorithm, video frames with image quality parameters in a preset numerical range in the target video, so that N third video frames are video frames with image quality parameters in the preset numerical range in the target video.
It can be understood that in the application scenario 1, the electronic device may acquire, by using an intelligent algorithm, video frames with image quality parameters within a preset numerical range in the target video, and then identify video frames including image contents of the target object in the video frames as M first video frames. In the application scenario 2, the electronic device may acquire, through an intelligent algorithm, video frames with image quality parameters in a preset numerical range in the target video, and use the video frames as M first video frames.
It should be noted that, the video frames with poor quality in the target video may be removed, so that M first video frames in the target video are video frames with good quality, thereby being beneficial to improving the quality of the target GIF picture obtained by the M first video frames.
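As an illustration of step 111, the sketch below keeps only frames whose sharpness exceeds a threshold, using the variance of the Laplacian as the image quality parameter; both the metric and the threshold value are assumptions, since the embodiment only requires that the parameter lie within a preset numerical range.

```python
# Sketch of step 111: filter out low-quality (blurred) frames.
import cv2

def filter_sharp_frames(frames, min_sharpness=100.0):
    """frames: list of BGR numpy arrays; returns the N 'third video frames'."""
    kept = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness >= min_sharpness:              # quality parameter within range
            kept.append(frame)
    return kept
```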
Optionally, in the embodiment of the present application, step 103 includes step 103a and step 103b:
step 103a, the GIF picture generating device sets the transparency of the target area in each target video image to a preset transparency, so as to obtain M first images.
The value of the preset transparency is 100%, that is, the preset transparency is all transparent. At this time, setting the transparency of the target area to the preset transparency may be setting the pixel points in the target area to transparent pixels.
Alternatively, the frame of each target video image may be transparent or opaque, and may be set according to the actual requirements of the user.
Step 103b, the GIF picture generation device synthesizes the M first images to generate the target GIF picture.
The target area in each target video image is an image area except for the area where the target object is located.
The transparency of the target region in each target video image except the region where the target object is located is set to be the preset transparency, so that the interestingness of each target video image can be improved, and the interestingness of the target GIF picture can be further improved.
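A minimal sketch of step 103a follows, assuming the per-frame object mask from the matting step is available.

```python
# Sketch of step 103a: make the target area (everything outside the target
# object) fully transparent.
import numpy as np
from PIL import Image

def clear_target_area(image: Image.Image, object_mask: np.ndarray) -> Image.Image:
    """object_mask: HxW bool array, True where the target object is."""
    rgba = image.convert("RGBA")
    alpha = np.where(object_mask, 255, 0).astype(np.uint8)   # 0 = 100% transparent
    rgba.putalpha(Image.fromarray(alpha, mode="L"))
    return rgba
```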
Alternatively, in the embodiment of the present application, step 103 may be implemented through steps 103c to 103e:
step 103c, the GIF picture generation apparatus determines a first image size based on the image size of each target video image.
Wherein the image size of each target video image is less than or equal to the first image size.
Step 103d, the GIF picture generation device adds preset pixels around each target video image to obtain M second images, where the image size of each second image is the first image size.
Alternatively, the preset pixels may be pixels with preset parameters such as preset color, preset transparency, and the like. For example, the preset pixel may be a pixel with a white color and a transparency value of 100%, which is not limited in the embodiment of the present application, and may be determined according to the actual requirement of the user.
Step 103e, the GIF picture generation device generates the target GIF picture from the M second images.
Alternatively, for target video images (e.g., second images) having different image sizes, the target GIF picture may be synthesized with the center of each image as a reference, i.e., such that the center points of each image coincide.
Alternatively, the target GIF pictures may be synthesized at a specific point (default or user-selected, such as the point at the upper left corner) of each target video image, i.e., such that the points at the upper left corner of each target video image coincide.
It should be noted that, since the target video images in the plurality of video frames (i.e., the M first video frames) of the target video can be synthesized into the target GIF picture according to the same first image size, the display effect of the generated target GIF picture is better.
Alternatively, in the embodiment of the present application, the step 103c may be implemented by the step 103 c-1:
step 103c-1, the GIF picture generation device determines the first width or the second width as the image width of the first image size, and sets the first height or the second height as the image height of the first image size.
Wherein, the first width is: the image width of the target object with the largest image width in all M target video images; the first height is: the image height of the target object with the largest image height in the M target video images; the second width is the first width plus a first preset value, and the second height is the first height plus a second preset value.
Optionally, the first preset value is the same as or different from the second preset value, for example, the first preset value is equal to the second preset value (for example, the value is 5).
In example 1, the image width of the first image size is the second width, and the image height of the first image size is the second height. In this case, the GIF picture generation apparatus adds the first preset value of pixels in the width direction and the second preset value of pixels in the height direction, which is equivalent to adding a complete border around the target GIF picture.
In the other two scenarios, where the image width of the first image size is the first width and the image height is the second height, or the image width is the second width and the image height is the first height, the GIF picture generation apparatus adds a partial border to the target GIF picture.
For example, when the image content of the target object has different positions and sizes in different first video frames, the maximum height and the maximum width of the image content of the target object may each be increased by 5 pixels for display. The 5 pixels are added to prevent the target object from being displayed at the edge of the picture, which would degrade the display effect.
Therefore, the target GIF picture is obtained according to the first image size, the main body (such as the target object or the target video object contained in the target object) in the target GIF picture can be prevented from being displayed at the edge of the target GIF picture, and the display effect of the target GIF picture is improved.
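The following sketch combines steps 103c to 103e with step 103c-1: it computes the first image size from the largest width and height among the target video images plus a small margin (whether the 5 pixels are added per side or in total is not specified above; per side is assumed here), then pads every image onto a canvas of that size with fully transparent preset pixels so that the center points coincide.

```python
# Sketch of steps 103c-103e: pad all target video images to a common first
# image size with transparent preset pixels, keeping the content centered.
from typing import List
from PIL import Image

def unify_size(images: List[Image.Image], margin: int = 5) -> List[Image.Image]:
    width = max(img.width for img in images) + 2 * margin    # second width
    height = max(img.height for img in images) + 2 * margin  # second height
    padded = []
    for img in images:
        canvas = Image.new("RGBA", (width, height), (255, 255, 255, 0))  # preset pixels
        rgba = img.convert("RGBA")
        offset = ((width - rgba.width) // 2, (height - rgba.height) // 2)
        canvas.paste(rgba, offset, rgba)                      # keep existing transparency
        padded.append(canvas)
    return padded
```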
Optionally, in this embodiment of the present application, in a case where one first video frame of the target video is displayed on the screen, the electronic device may further display thumbnails of video frames of the target video, so that the user can conveniently view each first video frame of the target video through the thumbnails. Illustratively, steps 112 to 114 may further be included before step 101:
Step 112, the GIF picture generation device displays the video frame indicated by the first thumbnail on the first area of the screen, and displays P thumbnails at the first frame rate on the second area of the screen.
Wherein the first frame rate is less than the frame rate of the target video. Each thumbnail is used to indicate one video frame in the target video, and the first thumbnail is one thumbnail of the P thumbnails. P is a positive integer less than or equal to M.
It can be appreciated that the P thumbnails are thumbnails of partial video frames in the target video. For example, the thumbnails shown in fig. 3 or 4 are the above-described P thumbnails.
For example, referring to fig. 3, the electronic device in fig. 3 may display a plurality of thumbnails, such as thumbnail 35, while displaying the first video frame 41. At this time, the thumbnail 35 is the first thumbnail, and the video frame indicated by the thumbnail 35 is the first video frame 41 shown in fig. 3.
Referring to fig. 4, the electronic device in fig. 4 may display a plurality of thumbnails, such as thumbnail 52, at the same time as the first video frame 41. In this case, the thumbnail 52 is the first thumbnail, and the video frame indicated by the thumbnail 52 is the first video frame 41 shown in fig. 4.
Optionally, the GIF picture generation apparatus may provide a control for selecting the first frame rate for supporting the user to manually set the first frame rate.
For example, the GIF picture generation apparatus may provide an interface for adjusting the FPS (i.e., the frame rate), where the interface includes controls such as "5", "10", "15", "20", and "24", each of which triggers the electronic device to set the first frame rate to the value represented by the corresponding control. In addition, the interface may further include a prompt message: "FPS refers to the number of pictures extracted in 1 s. The higher the FPS value, the smoother the picture, but the larger the generated GIF, which may make the generated GIF impossible to share." The interface may also mark the "10" control with a "suggested value" indication, prompting the user to set the first frame rate to 10 frames per second so that the generated target GIF picture can be shared normally.
Step 113, the GIF picture generation device receives a third input of the second thumbnail by the user.
Alternatively, the third input may be used to trigger the electronic device to update the thumbnails of the P thumbnails.
Step 114, in response to the third input, the GIF picture generation apparatus updates the content displayed in the first area to the target video frame indicated by the second thumbnail, and updates the content displayed in the second area to Q thumbnails displayed at the first frame rate.
The P thumbnails are different from the Q thumbnails, the second thumbnail is one of the Q thumbnails, and Q is a positive integer less than or equal to M.
For example, if the third input is a sliding input on the P thumbnails, the third input is used to trigger the electronic device to update the P thumbnails, e.g., to update the P thumbnails according to the sliding direction of the third input.
For example, if the third input is a long-press input on a thumbnail among the P thumbnails, the third input is used to trigger the electronic device to expand and display all video frames of the target video.
Further, in the embodiment of the present application, the thumbnail also supports the user to select whether to delete some video frames in the target video. For example, a user's swipe input of a thumbnail of a video frame, or a long-press and swipe input, is used to trigger the electronic device to delete the video frame. Thus, the user can quickly trigger to delete some video frames in the target video through the thumbnails, such as video frames of which the user does not need to generate the target GIF picture or video frames with lower definition.
It should be understood that deleting a video frame of the target video in the embodiments of the present application means excluding that video frame from the video frames used to generate the target GIF picture; it does not mean deleting the original video frame from the target video file. That is, the M first video frames are the video frames of the target video on which the user has not performed a delete operation.
In other words, the M first video frames of the target video do not include video frames manually deleted by the user.
It should be noted that, by displaying the thumbnail of the video frame in the target video, the user can conveniently view each video frame in the target video, so as to conveniently trigger the user to select the target object from each video frame.
Optionally, in this embodiment of the present application, steps 115 to 117 are further included after step 103:
step 115, the GIF picture generating device stores the target GIF picture and the identification corresponding to the target GIF picture in the target storage area.
The identification corresponding to the target GIF picture is used to indicate the target video object contained in the target GIF picture. For example, the identification corresponding to the target GIF picture is the identification information of the target video object contained in it; for instance, when the target video object is a "dog", the identification is the word "dog".
For example, the electronic device can provide a save control, such as displaying the save control on a screen, to trigger the electronic device to save the target GIF picture.
Optionally, the target storage area may be an area for storing media files such as pictures in the local area of the electronic device, such as a storage area corresponding to a local album application. Alternatively, the target storage area may be an area for storing a media file such as a picture in a server that interacts with the GIF picture generation apparatus.
Step 116, the GIF picture generation apparatus acquires target information input by the user in the content input area in the target application.
Step 117, the GIF picture generation device displays the target GIF picture when the target information is the same as the information indicated by the identification of the target GIF picture.
In implementation 1, in scenarios such as video editing and picture editing, a GIF picture (or GIF sticker, such as the target GIF picture) can be added as a resource to a video or a picture, and the content of the edited picture or video is recognized to intelligently recommend material. For example, if there is a pet element in the edited video picture content or in the music theme, the electronic device recommends that the user use, during editing, a GIF picture containing a video object of that pet. In this case, the target application is a video editing or picture editing application, the content input area is a search box of the target application, and the target information is the identification information (such as the name) of the pet.
For example, in implementation 2, when the user sends the expression package in the instant messaging chat tool (i.e., the target application program), the created target GIF picture may be sent as the expression package. Meanwhile, the target GIF picture can be sent according to the content of the main body of the matting (namely the target video object in the target GIF picture). For example, when a target GIF picture including a target video object is made, the selected subject (i.e., the target video object) is a "dog", and the user inputs the word "dog" in the chat interface of the instant messaging chat tool, so that the GIF expression (i.e., the GIF picture) of the subject of the dog can be displayed and sent out.
For example, in implementation 3, when the user inputs "dog" in the input method, the GIF expression (i.e., GIF picture) of the main body of the dog may also be displayed on the editing content (e.g., the edited text). At this time, the target application is an input method application.
In the exemplary implementation 4, the user inputs the text content of the main body through voice during the video call, and sends the corresponding gif expression package of the main body to the counterpart. For example, if the user speaks "dog", the gif expression package with the dog as the main body can be sent to the opposite party, and the chat interface of the opposite party pops up and displays the gif expression package, so that the interestingness of the video chat is increased. At this time, the chat interface of the local user may also display the gif expression package.
In this way, after the target GIF picture and its corresponding identification are stored in the target storage area, the target GIF picture can be quickly and conveniently displayed by acquiring the target information input by the user in the content input area of the target application and displaying the target GIF picture when the target information is the same as the information indicated by its identification. This improves the interest and convenience of the target GIF picture.
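The following sketch illustrates steps 115 to 117 in a simplified form: the target GIF picture is stored together with the identification of its target video object, and is surfaced whenever the target information typed (or spoken) by the user contains that identification. The in-memory dictionary stands in for the actual target storage area.

```python
# Sketch of steps 115-117: tag-based storage and recommendation of GIF pictures.
from typing import Dict, List

gif_store: Dict[str, List[str]] = {}              # identification -> GIF file paths

def save_gif(path: str, identification: str) -> None:
    gif_store.setdefault(identification, []).append(path)

def recommend_gifs(target_information: str) -> List[str]:
    """Return stored GIF pictures whose identification appears in the input text."""
    return [p for ident, paths in gif_store.items()
            if ident in target_information
            for p in paths]

save_gif("dog_run.gif", "dog")
print(recommend_gifs("look at this dog"))         # -> ['dog_run.gif']
```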
It should be noted that, in the GIF picture generation method provided in the embodiment of the present application, the execution subject may be a GIF picture generation device, or a control module in the GIF picture generation device for executing the GIF picture generation method. In the embodiment of the present application, a GIF picture generation device is described by taking a GIF picture generation method performed by the GIF picture generation device as an example.
Fig. 5 is a schematic structural diagram of a GIF picture generation apparatus according to an embodiment of the present application. As shown in fig. 5, the GIF picture generation apparatus 50 includes: a receiving module 501, configured to receive a first input from a user, where the first input is used to select a target object in a target video frame of a target video; an editing module 502, configured to, in response to the first input received by the receiving module 501, extract the image content of the target object in each of M first video frames of the target video to obtain M target video images; and a generating module 503, configured to generate a target GIF picture based on the M target video images obtained by the editing module 502. The target objects in different first video frames satisfy at least one of the following: the target objects in different first video frames all contain a target video object, and the target objects in different first video frames are all objects in an object selection frame; M is a positive integer; and the M first video frames include the target video frame.
According to the GIF picture generation device provided by the embodiment of the application, through the first input by which the user selects the target object in the target video frame of the target video, the image content of the target object in each of M first video frames in the target video can be extracted to obtain M target video images; a target GIF picture is then generated based on the M target video images; wherein the target object in the different first video frames satisfies at least one of: the target objects in the different first video frames all contain the target video object, and the target objects in the different first video frames are objects in an object selection frame; M is a positive integer; the M first video frames include the target video frame. Specifically, since each target video image is only a part of the content extracted from a first video frame according to the target video object or the object selection frame (for example, the part of the content required by the user), rather than the whole content of the first video frame, the data size of the target GIF picture obtained by converting the target video is smaller, which facilitates successful sharing of the GIF picture.
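As a rough, non-authoritative illustration of this flow, the following Python sketch shows how the cut-out contents of selected frames could be assembled into a GIF. The helper segment_target is a hypothetical placeholder for whatever matting/segmentation routine the device actually uses, and the frame indices, seed point, and GIF timing parameters are assumptions rather than values taken from the patent.

```python
import cv2
from PIL import Image

def segment_target(frame_bgr, seed_xy):
    """Hypothetical placeholder: return a binary mask (uint8, 0 or 255) of the
    object at the user-selected point. A real device would run a segmentation
    or matting model here; this sketch does not define one."""
    raise NotImplementedError

def video_to_target_gif(video_path, frame_indices, seed_xy, out_path="target.gif"):
    """Cut the target object out of the selected frames (the M first video
    frames) and assemble the cut-outs (the M target video images) into a GIF."""
    cap = cv2.VideoCapture(video_path)
    target_images = []
    for idx in frame_indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        mask = segment_target(frame, seed_xy)
        x, y, w, h = cv2.boundingRect(mask)           # keep only the object's region
        rgb = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        target_images.append(Image.fromarray(rgb))
    cap.release()
    target_images[0].save(out_path, save_all=True,
                          append_images=target_images[1:], duration=100, loop=0)
    return out_path
```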
Optionally, the target object in each first video frame comprises the target video object; the receiving module 501 is further configured to receive a second input of the user before receiving the first input of the user; the apparatus 50 further comprises: a first display module, configured to display at least one video object identifier in response to the second input received by the receiving module 501, each video object identifier indicating one video object in the target video frame; the receiving module 501 is specifically configured to receive the first input of the user to a target video object identifier in the at least one video object identifier; the editing module 502 is further configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, determine the video object indicated by the target video object identifier as the target video object, where the target object includes the target video object.
Therefore, the user can intuitively and conveniently select a target video object meeting the requirement through the at least one video object identifier, so that each target video image, and in turn the target GIF picture, meets the requirement of the user. Thus, the interest of the target GIF picture is improved.
Optionally, the target object in each first video frame is an object in the object selection frame; the video frame displayed on the screen also comprises the object selection frame; the editing module 502 is further configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, determine the object framed by the object selection frame as the target object; the editing module 502 is specifically configured to extract the image content framed by the object selection frame in each of the M first video frames, so as to obtain the M target video images.
Therefore, the user can intuitively and conveniently frame and select a target object meeting the requirement through the object selection frame, so that each target video image, and in turn the target GIF picture, meets the requirement of the user. Thus, the interest of the target GIF picture is improved.
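For the selection-frame case, the matting step reduces to taking the same rectangle out of every selected frame. A minimal sketch, assuming PIL images and an illustrative box position:

```python
from PIL import Image

def crop_by_selection_box(first_video_frames, box):
    """first_video_frames: list of PIL.Image frames (the M first video frames).
    box: (left, top, right, bottom) of the object selection frame.
    Returns the M target video images framed by the selection box."""
    return [frame.crop(box) for frame in first_video_frames]

# Example: a 200x200 selection frame whose top-left corner is at (320, 180).
# target_images = crop_by_selection_box(first_video_frames, (320, 180, 520, 380))
```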
Optionally, the receiving module 501 is configured to receive a third input from the user to the object selection frame before the editing module 502 determines the object framed by the object selection frame as the target object; the apparatus 50 further comprises: an adjustment module, configured to adjust the display position and the display size of the object selection frame in response to the third input received by the receiving module 501.
Therefore, the user can adjust the display position and the display size of the object selection frame through the third input, so that the target object framed by the object selection frame meets the user requirement, and each target video image, and in turn the target GIF picture, meets the requirement of the user. Thus, the interest of the target GIF picture is improved.
Optionally, the GIF picture generation apparatus 50 further includes: a control module, configured to, before the editing module 502 extracts the image content framed by the object selection frame in each of the M first video frames to obtain the M target video images, control the object selection frame to move along with the target video object in the K second video frames based on the display position and the display size of the object selection frame on the target video frame, so that the objects framed by the object selection frame in each of the K second video frames are the same; the K second video frames are video frames except the target video frame in the M first video frames, and K is a positive integer smaller than M.
It should be noted that, since the target video image obtained by the electronic device from a first video frame may be not only the image of the target object but also the image in a target area, the image content captured from each first video frame better meets the user requirement, which helps to improve the interest of the generated target GIF picture.
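One way to realize "the object selection frame moves along with the target video object" is to keep the frame's size fixed and re-center it on the object in each second video frame. The sketch below assumes the object's per-frame bounding boxes come from some detector or tracker (not specified by the patent); it is an illustration of the idea, not the patented tracking method.

```python
def follow_object(box_size, object_boxes, frame_size):
    """box_size: (w, h) of the object selection frame chosen on the target video frame.
    object_boxes: per-frame (x, y, w, h) of the target video object in the K second
    video frames, e.g. from a hypothetical locate_object(frame) helper.
    frame_size: (width, height) of the video frames.
    Returns one selection-frame rectangle per frame, re-centered on the object."""
    bw, bh = box_size
    fw, fh = frame_size
    rects = []
    for x, y, w, h in object_boxes:
        cx, cy = x + w / 2.0, y + h / 2.0                 # object center in this frame
        left = int(min(max(cx - bw / 2.0, 0), fw - bw))   # clamp to frame bounds
        top = int(min(max(cy - bh / 2.0, 0), fh - bh))
        rects.append((left, top, left + bw, top + bh))
    return rects
```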
Optionally, the GIF picture generation apparatus 50 further includes: an acquisition module, configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, acquire N third video frames in the target video whose image quality parameters are within a preset numerical range; the M first video frames are video frames among the N third video frames, and N is a positive integer greater than or equal to M.
It should be noted that the video frames with poor quality in the target video may be removed, so that the M first video frames in the target video are video frames with good quality, which helps to improve the quality of the target GIF picture obtained from the M first video frames.
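The patent does not fix a concrete image quality parameter; a common stand-in is a sharpness score such as the variance of the Laplacian. The sketch below keeps only frames whose score falls within a preset range; the metric and the threshold values are assumptions.

```python
import cv2

def filter_frames_by_quality(frames_bgr, low=80.0, high=float("inf")):
    """Return the N third video frames whose image quality parameter lies in
    [low, high]. Here the parameter is approximated by Laplacian variance
    (higher = sharper), so blurry frames fall below the lower bound."""
    kept = []
    for frame in frames_bgr:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        if low <= score <= high:
            kept.append(frame)
    return kept
```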
Optionally, the generating module 503 is specifically configured to set the transparency of a target area in each target video image to a preset transparency, so as to obtain M first images, and synthesize the M first images to generate the target GIF picture; the target area in each target video image is the image area other than the area where the target object is located.
By setting the transparency of the target area (i.e., the image area other than the area where the target object is located) in each target video image to the preset transparency, the interest of each target video image can be improved, and the interest of the target GIF picture can be further improved.
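Making the target area around the cut-out subject transparent can be done with an alpha mask before the frames are composed into the GIF. A minimal sketch, assuming the matting step already produced a binary mask of the target object and that the preset transparency is full transparency:

```python
from PIL import Image

def apply_background_transparency(target_image, object_mask, preset_alpha=0):
    """target_image: one target video image as a PIL.Image (RGB).
    object_mask: PIL.Image in mode 'L', 255 inside the target object, 0 outside.
    Pixels outside the object (the target area) get alpha = preset_alpha."""
    first_image = target_image.convert("RGBA")
    alpha = object_mask.point(lambda v: 255 if v > 0 else preset_alpha)
    first_image.putalpha(alpha)      # opaque subject, (semi-)transparent surroundings
    return first_image
```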
Optionally, the generating module 503 is specifically configured to determine a first image size based on the image size of each target video image; add preset pixels around each target video image to obtain M second images, wherein the image size of each second image is the first image size; and generate the target GIF picture from the M second images; wherein the image size of each target video image is less than or equal to the first image size.
It should be noted that, since the target video images in the plurality of video frames (i.e., the M first video frames) in the target video can be synthesized into the target GIF picture according to the same first image size, the display effect of the generated target GIF picture is better.
Optionally, the generating module 503 is specifically configured to determine the first width or the second width as an image width of the first image size, and set the first height or the second height as an image height of the first image size; wherein, the first width is: the image width of the target object with the largest image width in all M target video images; the first height is: the image height of the target object with the largest image height in the M target video images; the second width is the first width plus a first preset value, and the second height is the first height plus a second preset value.
Therefore, since the target GIF picture is obtained according to the first image size, the subject in the target GIF picture (such as the target object, or the target video object contained in the target object) can be prevented from being displayed at the edge of the target GIF picture, which improves the display effect of the target GIF picture.
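Concretely, the uniform canvas can be derived from the largest cut-out plus a small margin, and each target video image pasted centered onto a canvas of that size before the frames are written to the GIF. The sketch below follows that idea; the margin values and GIF timing are illustrative assumptions.

```python
from PIL import Image

def unify_and_compose(target_images, margin_w=20, margin_h=20, out_path="target.gif"):
    """target_images: the M target video images as RGBA PIL.Images.
    The first image size is the largest width/height plus preset margins, so the
    subject never touches the edge of the target GIF picture."""
    first_w = max(img.width for img in target_images) + margin_w   # cf. "second width"
    first_h = max(img.height for img in target_images) + margin_h  # cf. "second height"
    second_images = []
    for img in target_images:
        canvas = Image.new("RGBA", (first_w, first_h), (0, 0, 0, 0))
        offset = ((first_w - img.width) // 2, (first_h - img.height) // 2)
        canvas.paste(img, offset, img)   # center the subject on the padded canvas
        second_images.append(canvas)
    # Note: Pillow quantizes RGBA frames to palette mode when writing a GIF.
    second_images[0].save(out_path, save_all=True,
                          append_images=second_images[1:], duration=100, loop=0)
    return out_path
```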
Optionally, the GIF picture generation apparatus 50 further includes: a second display module, configured to display the video frame indicated by a first thumbnail on a first area of the screen and display P thumbnails at a first frame rate on a second area of the screen before the receiving module 501 receives the first input of the user; the receiving module 501 is further configured to receive a third input from the user to a second thumbnail of the P thumbnails; the second display module is further configured to, in response to the third input received by the receiving module 501, update the content displayed on the first area to the target video frame indicated by the second thumbnail, and update the content displayed on the second area to Q thumbnails displayed at the first frame rate; the P thumbnails are different from the Q thumbnails, the first frame rate is smaller than the frame rate of the target video, each thumbnail is used for indicating one video frame in the target video, the first thumbnail is one of the P thumbnails, the second thumbnail is one of the Q thumbnails, and P and Q are both positive integers smaller than or equal to M.
It should be noted that, by displaying the thumbnail of the video frame in the target video, the user can conveniently view each video frame in the target video, so as to conveniently trigger the user to select the target object from each video frame.
Optionally, the GIF picture generation apparatus 50 further includes: a storage module, configured to store, in a target storage area, the target GIF picture and an identifier corresponding to the target GIF picture after the generating module 503 generates the target GIF picture based on the M target video images, where the identifier corresponding to the target GIF picture is used to indicate the target video object included in the target GIF picture; a user input module, configured to acquire target information input by the user in a content input area in a target application program; and a third display module, configured to display the target GIF picture when the target information obtained by the user input module is the same as the information indicated by the identifier of the target GIF picture stored by the storage module.
In this way, when the target GIF picture and the identifier corresponding to the target GIF picture are stored in the target storage area, the target GIF picture can be triggered and displayed quickly and conveniently: the target information input by the user in the content input area of the target application program is acquired, and the target GIF picture is displayed when the target information is the same as the information indicated by the identifier of the target GIF picture. Thus, the interest and convenience of the target GIF picture can be improved.
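The storage-and-recall behaviour amounts to a small keyword index over generated GIF pictures. The sketch below is one assumed realization (the file layout, index format, and exact-match rule are illustrative, not the device's actual storage scheme):

```python
import json
from pathlib import Path

STORE = Path("gif_store")          # assumed layout of the target storage area
INDEX = STORE / "index.json"

def save_gif_with_identifier(gif_path, identifier):
    """Record that gif_path contains the target video object named by identifier
    (e.g. 'dog'), so the picture can be recalled later from user input."""
    STORE.mkdir(exist_ok=True)
    index = json.loads(INDEX.read_text(encoding="utf-8")) if INDEX.exists() else {}
    index.setdefault(identifier, []).append(str(gif_path))
    INDEX.write_text(json.dumps(index, ensure_ascii=False, indent=2), encoding="utf-8")

def lookup_gifs(target_info):
    """Return the GIF pictures whose identifier equals the text the user typed in
    the content input area; the UI layer would then display them for sending."""
    if not INDEX.exists():
        return []
    return json.loads(INDEX.read_text(encoding="utf-8")).get(target_info, [])
```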
The GIF picture generation device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The GIF picture generation device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The GIF picture generating device provided in the embodiment of the present application can implement each process implemented by the method embodiment of fig. 1, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 6, the embodiment of the present application further provides an electronic device 600, including a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and capable of running on the processor 601, where the program or the instruction implements each process of the above-mentioned GIF picture generation method embodiment when executed by the processor 601, and the process can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 through a power management system, so as to implement functions such as charge management, discharge management, and power consumption management through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components, which are not described in detail herein.
The user input unit 1007 is configured to receive a first input from a user, where the first input is used to select a target object in a target video frame of a target video; the processor 1010 is configured to, in response to the first input received by the user input unit 1007, extract the image content of the target object in each of M first video frames in the target video to obtain M target video images, and generate a target GIF picture based on the M target video images; wherein the target object in the different first video frames satisfies at least one of: the target objects in the different first video frames all contain the target video object, and the target objects in the different first video frames are objects in an object selection frame; M is a positive integer; the M first video frames include the target video frame.
According to the electronic device provided by the embodiment of the application, through the first input by which the user selects the target object in the target video frame of the target video, the image content of the target object in each of M first video frames in the target video can be extracted to obtain M target video images; a target GIF picture is then generated based on the M target video images; wherein the target object in the different first video frames satisfies at least one of: the target objects in the different first video frames all contain the target video object, and the target objects in the different first video frames are objects in an object selection frame; M is a positive integer; the M first video frames include the target video frame. Specifically, since each target video image is only a part of the content extracted from a first video frame according to the target video object or the object selection frame (for example, the part of the content required by the user), rather than the whole content of the first video frame, the data size of the target GIF picture obtained by converting the target video is smaller, which facilitates successful sharing of the GIF picture.
Optionally, the target object in each first video frame comprises the target video object; the user input unit 1007 is further configured to receive a second input of the user before receiving the first input of the user; the display unit 1006 is configured to display at least one video object identifier in response to the second input received by the user input unit 1007, each video object identifier indicating one video object in the target video frame; the user input unit 1007 is specifically configured to receive the first input of the user to a target video object identifier in the at least one video object identifier; the processor 1010 is further configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, determine the video object indicated by the target video object identifier as the target video object, where the target object includes the target video object.
Therefore, the user can intuitively and conveniently select a target video object meeting the requirement through the at least one video object identifier, so that each target video image, and in turn the target GIF picture, meets the requirement of the user. Thus, the interest of the target GIF picture is improved.
Optionally, the target object in each first video frame is an object in the object selection frame; the video frame displayed on the screen also comprises the object selection frame; the processor 1010 is further configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, determine the object framed by the object selection frame as the target object; the processor 1010 is specifically configured to extract the image content framed by the object selection frame in each of the M first video frames, to obtain the M target video images.
Therefore, the user can intuitively and conveniently frame and select a target object meeting the requirement through the object selection frame, so that each target video image, and in turn the target GIF picture, meets the requirement of the user. Thus, the interest of the target GIF picture is improved.
Optionally, the user input unit 1007 is configured to receive a third input from the user to the object selection frame before the processor 1010 determines the object framed by the object selection frame as the target object; the processor 1010 is configured to adjust the display position and the display size of the object selection frame in response to the third input received by the user input unit 1007.
Therefore, the user can adjust the display position and the display size of the object selection frame through the third input, so that the target object framed by the object selection frame meets the user requirement, and each target video image, and in turn the target GIF picture, meets the requirement of the user. Thus, the interest of the target GIF picture is improved.
Optionally, the processor 1010 is configured to, before the image content framed by the object selection frame in each of the M first video frames is extracted to obtain the M target video images, control the object selection frame to move along with the target video object in the K second video frames based on the display position and the display size of the object selection frame on the target video frame, so that the objects framed by the object selection frame in each of the K second video frames are the same; the K second video frames are video frames except the target video frame in the M first video frames, and K is a positive integer smaller than M.
It should be noted that, since the target video image obtained by the electronic device from a first video frame may be not only the image of the target object but also the image in a target area, the image content captured from each first video frame better meets the user requirement, which helps to improve the interest of the generated target GIF picture.
Optionally, the processor 1010 is configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, acquire N third video frames in the target video whose image quality parameters are within a preset numerical range; the M first video frames are video frames among the N third video frames, and N is a positive integer greater than or equal to M.
It should be noted that the video frames with poor quality in the target video may be removed, so that the M first video frames in the target video are video frames with good quality, which helps to improve the quality of the target GIF picture obtained from the M first video frames.
Optionally, the processor 1010 is specifically configured to set the transparency of a target area in each target video image to a preset transparency, so as to obtain M first images, and synthesize the M first images to generate the target GIF picture; the target area in each target video image is the image area other than the area where the target object is located.
By setting the transparency of the target area (i.e., the image area other than the area where the target object is located) in each target video image to the preset transparency, the interest of each target video image can be improved, and the interest of the target GIF picture can be further improved.
Optionally, the processor 1010 is specifically configured to determine a first image size based on the image size of each target video image; add preset pixels around each target video image to obtain M second images, wherein the image size of each second image is the first image size; and generate the target GIF picture from the M second images; wherein the image size of each target video image is less than or equal to the first image size.
It should be noted that, since the target video images in the plurality of video frames (i.e., the M first video frames) in the target video can be synthesized into the target GIF picture according to the same first image size, the display effect of the generated target GIF picture is better.
Optionally, the processor 1010 is specifically configured to determine the first width or the second width as an image width of the first image size, and set the first height or the second height as an image height of the first image size; wherein, the first width is: the image width of the target object with the largest image width in all M target video images; the first height is: the image height of the target object with the largest image height in the M target video images; the second width is the first width plus a first preset value, and the second height is the first height plus a second preset value.
Therefore, since the target GIF picture is obtained according to the first image size, the subject in the target GIF picture (such as the target object, or the target video object contained in the target object) can be prevented from being displayed at the edge of the target GIF picture, which improves the display effect of the target GIF picture.
Optionally, the display unit 1006 is configured to display the video frame indicated by a first thumbnail on a first area of the screen and display P thumbnails at a first frame rate on a second area of the screen before the user input unit 1007 receives the first input of the user; the user input unit 1007 is further configured to receive a third input from the user to a second thumbnail of the P thumbnails; the display unit 1006 is further configured to, in response to the third input received by the user input unit 1007, update the content displayed on the first area to the target video frame indicated by the second thumbnail, and update the content displayed on the second area to Q thumbnails displayed at the first frame rate; the P thumbnails are different from the Q thumbnails, the first frame rate is smaller than the frame rate of the target video, each thumbnail is used for indicating one video frame in the target video, the first thumbnail is one of the P thumbnails, the second thumbnail is one of the Q thumbnails, and P and Q are both positive integers smaller than or equal to M.
It should be noted that, by displaying the thumbnail of the video frame in the target video, the user can conveniently view each video frame in the target video, so as to conveniently trigger the user to select the target object from each video frame.
Optionally, the memory 1009 is configured to store, after the processor 1010 generates the target GIF picture based on the M target video images, the target GIF picture and an identifier corresponding to the target GIF picture in the target storage area, where the identifier corresponding to the target GIF picture is used to indicate a target video object included in the target GIF picture; a user input unit 1007 for acquiring target information input by a user in a content input region in a target application; the display unit 1006 is further configured to display the target GIF picture if the target information obtained by the user input unit 1007 is the same as the information indicated by the identification of the target GIF picture stored in the memory 1009.
In this way, when the target GIF picture and the identifier corresponding to the target GIF picture are stored in the target storage area, the target GIF picture can be triggered and displayed quickly and conveniently: the target information input by the user in the content input area of the target application program is acquired, and the target GIF picture is displayed when the target information is the same as the information indicated by the identifier of the target GIF picture. Thus, the interest and convenience of the target GIF picture can be improved.
Optionally, each of the modules in the GIF picture generation apparatus 50 described above may be implemented by a corresponding unit in the electronic device 1000. For example, the editing module 502, the generating module 503, and the control module in the GIF picture generation apparatus 50 may be implemented by the processor 1010 described above. The first display module and the second display module in the GIF picture generation apparatus 50 are implemented by the display unit 1006 described above, and the receiving module 501 and the user input module in the GIF picture generation apparatus 50 are implemented by the user input unit 1007 described above. The storage module in the GIF picture generation apparatus 50 is implemented by the memory 1009 described above.
It should be appreciated that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two portions: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor that primarily handles the operating system, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the processes of the foregoing embodiment of the GIF picture generation method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction, implement each process of the above embodiment of the GIF picture generation method, and achieve the same technical effect, so that repetition is avoided, and no further description is given here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Under the teaching of the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (16)

1. A GIF picture generation method, the method comprising:
receiving a first input of a user, wherein the first input is used for selecting a target object in a target video frame of a target video;
responding to the first input, and picking up the image content of a target object in each of M first video frames in the target video to obtain M target video images;
generating a target GIF picture based on the M target video images;
wherein the target object in the different first video frame satisfies at least one of: the target objects in the different first video frames all contain target video objects, and the target objects in the different first video frames are objects in an object selection frame; m is a positive integer; the M first video frames include the target video frame;
the generating a target GIF picture based on the M target video images includes:
determining a first image size based on the image size of each target video image;
adding preset pixels around each target video image to obtain M second images, wherein the image size of each second image is the first image size;
Generating the M second images into the target GIF picture;
wherein the image size of each target video image is less than or equal to the first image size;
the determining a first image size based on the image size of each target video image includes:
determining a first width or a second width as an image width of the first image size, and setting a first height or a second height as an image height of the first image size;
wherein, the first width is: the image width of the target object with the largest image width in all M target video images; the first height is: the image height of the target object with the largest image height in the M target video images;
the second width is the first width plus a first preset value, and the second height is the first height plus a second preset value.
2. The method of claim 1, wherein the target object in each of the first video frames comprises the target video object;
before the receiving the first input of the user, the method further comprises:
receiving a second input from the user;
in response to the second input, displaying at least one video object identification, each video object identification indicating one video object in the target video frame;
The receiving a first input from a user includes:
receiving the first input of a user to a target video object identifier of the at least one video object identifier;
the method further comprises the steps of:
and determining the video object indicated by the target video object identification as the target video object, wherein the target object comprises the target video object.
3. The method of claim 1, wherein the target object in each of the first video frames is an object in the object selection frame; the video frame displayed on the screen also comprises the object selection frame;
the method further comprises the steps of:
determining the object framed by the object selection frame as the target object;
the step of matting the image content of the target object in each of the M first video frames in the target video to obtain M target video images includes:
extracting the image content framed by the object selection frame in each of the M first video frames, so as to obtain the M target video images.
4. The method of claim 3, wherein prior to determining the object framed by the object selection box as the target object, the method further comprises:
receiving a third input of a user to the object selection frame;
and adjusting the display position and the display size of the object selection frame in response to the third input.
5. The method of claim 3, wherein the step of,
the method further includes, before the step of matting the image content framed by the object selection frame in each of the M first video frames to obtain the M target video images:
controlling the object selection frame to move along with the target video object in K second video frames based on the display position and the display size of the object selection frame on the target video frame, so that the objects selected by the object selection frame in each second video frame are the same;
the K second video frames are video frames except the target video frame in the M first video frames, and K is a positive integer smaller than M.
6. The method of claim 1, wherein before the matting the image content of the target object in each of the M first video frames in the target video to obtain M target video images, the method further comprises:
acquiring N third video frames of which the image quality parameters are in a preset numerical range in the target video frame;
the M first video frames are video frames in the N third video frames, and N is a positive integer greater than or equal to M.
7. The method of claim 1, wherein the generating a target GIF picture based on the M target video images comprises:
setting the transparency of a target area in each target video image to be a preset transparency to obtain M first images;
synthesizing the M first images to generate the target GIF picture;
the target area in each target video image is an image area except for the area where the target object is located.
8. The method of claim 1, wherein prior to the receiving the first input from the user, the method further comprises:
displaying video frames indicated by first thumbnails on a first area of a screen, and displaying P thumbnails at a first frame rate on a second area of the screen;
Receiving a third input of a user to a second thumbnail of the P thumbnails;
in response to the third input, updating the content displayed on the first region to the target video frame indicated by the second thumbnail, and updating the content displayed on the second region to Q thumbnails displayed at the first frame rate;
the P thumbnails are different from the Q thumbnails, the first frame rate is smaller than the frame rate of the target video, each thumbnail is used for indicating one video frame in the target video, the first thumbnail is one thumbnail in the P thumbnails, the second thumbnail is one thumbnail in the Q thumbnails, and P and Q are all positive integers smaller than or equal to M.
9. The method according to any one of claims 1 to 8, wherein after generating a target GIF picture based on the M target video images, the method further comprises:
storing the target GIF picture and an identifier corresponding to the target GIF picture into a target storage area, wherein the identifier corresponding to the target GIF picture is used for indicating the target video object contained in the target GIF picture;
Acquiring target information input by a user in a content input area in a target application program;
and displaying the target GIF picture under the condition that the target information is the same as the information indicated by the identification of the target GIF picture.
10. A GIF picture generation apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a first input of a user, wherein the first input is used for selecting a target object in a target video frame of a target video;
the editing module is used for responding to the first input received by the receiving module, and extracting the image content of the target object in each first video frame in M first video frames in the target video to obtain M target video images;
the generation module is used for generating a target GIF picture based on the M target video images obtained by the editing module;
wherein the target object in the different first video frame satisfies at least one of: the target objects in the different first video frames all contain target video objects, and the target objects in the different first video frames are objects in an object selection frame; m is a positive integer; the M first video frames include the target video frame;
The generation module is specifically configured to determine a first image size based on an image size of each target video image;
adding preset pixels around each target video image to obtain M second images, wherein the image size of each second image is the first image size;
generating the M second images into the target GIF picture;
wherein the image size of each target video image is less than or equal to the first image size;
the generating module is specifically configured to determine a first width or a second width as an image width of the first image size, and set a first height or a second height as an image height of the first image size;
wherein, the first width is: the image width of the target object with the largest image width in all M target video images; the first height is: the image height of the target object with the largest image height in the M target video images;
the second width is the first width plus a first preset value, and the second height is the first height plus a second preset value.
11. The apparatus of claim 10, wherein the target object in each of the first video frames comprises the target video object;
The receiving module is further configured to receive a second input of the user before receiving the first input of the user;
the apparatus further comprises:
a first display module for displaying at least one video object identification, each video object identification indicating one video object in the target video frame, in response to the second input received by the receiving module;
the receiving module is specifically configured to receive the first input of a user to a target video object identifier in the at least one video object identifier;
the editing module is further configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, determine the video object indicated by the target video object identifier as the target video object, where the target object includes the target video object.
12. The apparatus of claim 10, wherein the target object in each of the first video frames is an object in the object selection frame; the video frame displayed on the screen also comprises the object selection frame;
the editing module is further configured to, before the image content of the target object in each of the M first video frames in the target video is extracted to obtain the M target video images, determine the object framed by the object selection frame as the target object;
The editing module is specifically configured to extract image contents selected by the object selection frame in each of the M first video frames, so as to obtain the M target video images.
13. The apparatus of claim 12, wherein the device comprises a plurality of sensors,
the apparatus further comprises:
the control module is used for picking up the image content selected by the object selection frame in each first video frame in the M first video frames, and controlling the object selection frame to move along with the target video object in the K second video frames based on the display position and the display size of the object selection frame on the target video frame before obtaining the M target video images so as to enable the objects selected by the object selection frame in each second video frame to be the same;
the K second video frames are video frames except the target video frame in the M first video frames, and K is a positive integer smaller than M.
14. The apparatus of claim 10, wherein the device comprises a plurality of sensors,
the generating module is specifically configured to set a transparency of a target area in each target video image to a preset transparency, so as to obtain M first images; synthesizing the M first images to generate the target GIF picture;
The target area in each target video image is an image area except for the area where the target object is located.
15. The device according to any one of claims 10 to 14, wherein,
the apparatus further comprises:
the storage module is used for storing the target GIF picture and the identification corresponding to the target GIF picture into a target storage area after the generation module generates the target GIF picture based on the M target video images, wherein the identification corresponding to the target GIF picture is used for indicating the target video object contained in the target GIF picture;
the user input module is used for acquiring target information input by a user in a content input area in a target application program;
and the third display module is further used for displaying the target GIF picture under the condition that the target information obtained by the user input module is the same as the information indicated by the identification of the target GIF picture stored by the storage module.
16. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the GIF picture generation method as claimed in any one of claims 1-9.
CN202010478312.9A 2020-05-29 2020-05-29 GIF picture generation method and device and electronic equipment Active CN111612873B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010478312.9A CN111612873B (en) 2020-05-29 2020-05-29 GIF picture generation method and device and electronic equipment
PCT/CN2021/095887 WO2021238943A1 (en) 2020-05-29 2021-05-25 Gif picture generation method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010478312.9A CN111612873B (en) 2020-05-29 2020-05-29 GIF picture generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111612873A CN111612873A (en) 2020-09-01
CN111612873B true CN111612873B (en) 2023-07-14

Family

ID=72202163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478312.9A Active CN111612873B (en) 2020-05-29 2020-05-29 GIF picture generation method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN111612873B (en)
WO (1) WO2021238943A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612873B (en) * 2020-05-29 2023-07-14 维沃移动通信有限公司 GIF picture generation method and device and electronic equipment
CN112995533A (en) * 2021-02-04 2021-06-18 上海哔哩哔哩科技有限公司 Video production method and device
CN113099288A (en) * 2021-03-31 2021-07-09 上海哔哩哔哩科技有限公司 Video production method and device
CN113099287A (en) * 2021-03-31 2021-07-09 上海哔哩哔哩科技有限公司 Video production method and device
CN113209629B (en) * 2021-05-14 2024-02-09 苏州仙峰网络科技股份有限公司 Method and device for converting sequence frames into GIF
CN113627534A (en) * 2021-08-11 2021-11-09 百度在线网络技术(北京)有限公司 Method and device for identifying type of dynamic image and electronic equipment
CN114302009A (en) * 2021-12-06 2022-04-08 维沃移动通信有限公司 Video processing method, video processing device, electronic equipment and medium
CN114253451B (en) * 2021-12-21 2024-09-13 咪咕音乐有限公司 Screenshot method and device, electronic equipment and storage medium
CN114173203A (en) * 2022-01-05 2022-03-11 统信软件技术有限公司 Method and device for capturing image in video playing and computing equipment
CN114928761B (en) * 2022-05-07 2024-04-12 维沃移动通信有限公司 Video sharing method and device and electronic equipment


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101893151B1 (en) * 2011-08-21 2018-08-30 엘지전자 주식회사 Video display device, terminal device and operating method thereof
CN104038705B (en) * 2014-05-30 2018-08-24 无锡天脉聚源传媒科技有限公司 Video creating method and device
CN110347869B (en) * 2019-06-05 2021-07-09 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN111612873B (en) * 2020-05-29 2023-07-14 维沃移动通信有限公司 GIF picture generation method and device and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109874051A (en) * 2019-02-21 2019-06-11 百度在线网络技术(北京)有限公司 Video content processing method, device and equipment
CN110324663A (en) * 2019-07-01 2019-10-11 北京奇艺世纪科技有限公司 A kind of generation method of dynamic image, device, electronic equipment and storage medium
CN111145308A (en) * 2019-12-06 2020-05-12 北京达佳互联信息技术有限公司 Paster obtaining method and device
CN111093026A (en) * 2019-12-30 2020-05-01 维沃移动通信(杭州)有限公司 Video processing method, electronic device and computer-readable storage medium

Also Published As

Publication number Publication date
WO2021238943A1 (en) 2021-12-02
CN111612873A (en) 2020-09-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant