CN109600558B - Method and apparatus for generating information - Google Patents

Method and apparatus for generating information

Info

Publication number
CN109600558B
Authority
CN
China
Prior art keywords
image
fusion
static
information
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810758789.5A
Other languages
Chinese (zh)
Other versions
CN109600558A (en)
Inventor
Lin Mu (林木)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201810758789.5A priority Critical patent/CN109600558B/en
Publication of CN109600558A publication Critical patent/CN109600558A/en
Application granted granted Critical
Publication of CN109600558B publication Critical patent/CN109600558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Abstract

The embodiment of the application discloses a method and a device for generating information. One embodiment of the method comprises: acquiring a static image sequence corresponding to a target dynamic picture; determining fusion images and fusion information for the static image sequence, wherein the fusion images are used for fusing with the static images in the static image sequence, and the fusion information is used for indicating the fusion operation of the fusion images and the static images in the static image sequence; for a static image in the static image sequence, fusing the static image and the determined image for fusion based on the determined fusion information to obtain a fused image; and generating a new dynamic picture based on the obtained fused image. The embodiment enriches the display forms of the dynamic pictures and improves the diversity of information generation.

Description

Method and apparatus for generating information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for generating information.
Background
A dynamic picture (animated image) is a group of still images that are switched at a predetermined frequency to produce a dynamic effect. At present, dynamic pictures are widely used in daily life. For example, on an electronic device running the Android system, mainstream picture loading libraries (e.g., Fresco, Glide) can readily support the loading of dynamic pictures.
Disclosure of Invention
The embodiment of the application provides a method and a device for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, where the method includes: acquiring a static image sequence corresponding to a target dynamic picture; determining fusion images and fusion information for the static image sequence, wherein the fusion images are used for fusing with the static images in the static image sequence, and the fusion information is used for indicating the fusion operation of the fusion images and the static images in the static image sequence; for a static image in the static image sequence, fusing the static image and the determined image for fusion based on the determined fusion information to obtain a fused image; and generating a new dynamic picture based on the obtained fused image.
In some embodiments, fusing the static image and the determined image for fusion to obtain a fused image includes: fusing the static image and the determined image for fusion to obtain a fused image with a shape different from that of the static image.
In some embodiments, generating a new dynamic picture based on the obtained fused image comprises: performing anti-aliasing processing on a fused image of the obtained fused images to obtain a processed image; and generating a new dynamic picture based on the obtained processed image.
In some embodiments, the shape of the still image has a correspondence with the shape of the image for fusion corresponding to the still image; and determining images for fusion and fusion information for the sequence of still images includes: determining a shape of a still image in the sequence of still images; and selecting, based on the determined shape, a preset image for fusion from a preset image set for fusion as the image for fusion corresponding to the static image sequence.
In some embodiments, determining images for fusion and fusion information for a sequence of still images includes: outputting a preset fusion information set for a user to select; and acquiring preset fusion information selected by a user as fusion information corresponding to the static image sequence.
In some embodiments, fusing the still image and the determined image for fusion based on the determined fusion information to obtain a fused image, includes: creating a layer for fusion, and adding the static image and the image for fusion to the layer for fusion; and fusing the static image and the image for fusion on the layer for fusion based on the determined fusion information to obtain a fused image on the layer for fusion.
In some embodiments, after obtaining the fused image on the layer for fusion, the method further comprises: adding the obtained fused image located on the layer for fusion to a preset layer for display; and generating a new dynamic picture based on the obtained fused image includes: generating a new dynamic picture based on the fused image located on the layer for display.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, where the apparatus includes: an acquisition unit configured to acquire a sequence of still images corresponding to a target moving picture; a determination unit configured to determine an image for fusion and fusion information for the still image sequence, wherein the image for fusion is used for fusing with the still image in the still image sequence, and the fusion information is used for indicating a fusion operation of the image for fusion and the still image in the still image sequence; a fusion unit configured to fuse, for a static image in a static image sequence, the static image and the determined image for fusion based on the determined fusion information, to obtain a fused image; a generating unit configured to generate a new moving picture based on the obtained fused image.
In some embodiments, the fusion unit is further configured to: fuse the static image and the determined image for fusion to obtain a fused image with a shape different from that of the static image.
In some embodiments, the generating unit comprises: a processing module configured to perform anti-aliasing processing on a fused image of the obtained fused images to obtain a processed image; a generating module configured to generate a new dynamic picture based on the obtained processed image.
In some embodiments, the shape of the still image has a correspondence with the shape of the fusion image to which the still image corresponds; and the determination unit includes: a determination module configured to determine a shape of a still image in a sequence of still images; and the selecting module is configured to select the preset images for fusion from the preset image set for fusion as the images for fusion corresponding to the static image sequence based on the determined shape.
In some embodiments, the determining unit comprises: an output module configured to output a preset fusion information set for selection by a user; and the acquisition module is configured to acquire the preset fusion information selected by the user as the fusion information corresponding to the static image sequence.
In some embodiments, the fusion unit comprises: a creation module configured to create a layer for fusion, and add the static image and the image for fusion to the layer for fusion; and the fusion module is configured to fuse the static image on the fusion image layer and the fusion image based on the determined fusion information to obtain a fused image on the fusion image layer.
In some embodiments, the apparatus further comprises: an adding unit configured to add the obtained fused image located on the layer for fusion to a preset layer for display; and the generating unit is further configured to: generate a new dynamic picture based on the fused image located on the layer for display.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method of any of the embodiments of the method for generating information described above.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method of any of the above-described methods for generating information.
The method and the apparatus for generating information provided by the embodiments of the present application acquire a static image sequence corresponding to a target dynamic picture and then determine an image for fusion and fusion information for the static image sequence, wherein the image for fusion is used for fusing with the static images in the static image sequence and the fusion information is used for indicating the fusion operation of the image for fusion and the static images. Then, for each static image in the static image sequence, the static image and the determined image for fusion are fused based on the determined fusion information to obtain a fused image. Finally, a new dynamic picture is generated based on the obtained fused images. In this way, the shape of the dynamic picture can be adjusted by fusing the image for fusion with the static images corresponding to the dynamic picture, which enriches the display forms of dynamic pictures and improves the diversity of information generation.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating information according to the present application;
FIG. 3 is a schematic illustration of fusion information of a method for generating information according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an application scenario of a method for generating information according to an embodiment of the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method for generating information according to the present application;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for generating information according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating information or the apparatus for generating information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as graphic software, image processing applications, web browser applications, search applications, instant messaging tools, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices with a display screen, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed herein.
The server 105 may be a server that provides various services, such as an image processing server that processes a target moving picture displayed on the terminal apparatuses 101, 102, 103. The image processing server may analyze and otherwise process data such as a still image sequence corresponding to the received target moving picture, and feed back a processing result (e.g., a new moving picture) to the terminal device.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In particular, in the case where the data used in the process of generating a new moving picture or the still image sequence corresponding to the target moving picture does not need to be acquired from a remote location, the system architecture may not include a network, but only include a terminal device or a server.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for generating information in accordance with the present application is shown. The method for generating information comprises the following steps:
step 201, a static image sequence corresponding to a target dynamic picture is obtained.
In the present embodiment, an execution subject (e.g., a server shown in fig. 1) of the method for generating information may acquire a still image sequence corresponding to a target moving picture by a wired connection manner or a wireless connection manner. The target dynamic picture may be a dynamic picture whose shape is to be adjusted. A moving picture is essentially composed of a sequence of still images arranged in chronological order, and a dynamic effect is produced when the still images in the sequence of still images are switched at a specified frequency.
It should be noted that the execution subject may first obtain the target dynamic picture and then parse it (for example, through a picture loading library) to obtain the corresponding static image sequence; alternatively, the execution subject may directly acquire the static image sequence corresponding to the target dynamic picture locally or from an electronic device (for example, a terminal device shown in fig. 1) communicatively connected thereto.
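By way of illustration only (the disclosure leaves the parsing mechanism open, naming a picture loading library merely as an example), the following minimal Java sketch shows one way step 201 could be realized with the standard javax.imageio API. The class name GifFrames and all implementation details are our own assumptions, not part of the patent.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class GifFrames {
    // Decode a dynamic picture (GIF file) into its sequence of still images.
    // Note: some GIFs store delta frames; reader.read(i) returns each frame
    // as stored, so full-frame compositing may additionally be needed
    // for such files (omitted here for brevity).
    static List<BufferedImage> decode(File gif) throws Exception {
        ImageReader reader = ImageIO.getImageReadersByFormatName("gif").next();
        List<BufferedImage> frames = new ArrayList<>();
        try (ImageInputStream in = ImageIO.createImageInputStream(gif)) {
            reader.setInput(in, false);
            int n = reader.getNumImages(true); // force a full scan to count frames
            for (int i = 0; i < n; i++) {
                frames.add(reader.read(i));
            }
        } finally {
            reader.dispose();
        }
        return frames;
    }
}
```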
At step 202, images for fusion and fusion information for a still image sequence are determined.
In the present embodiment, based on the still image sequence obtained in step 201, the execution subject can determine the image for fusion and the fusion information for the still image sequence.
Wherein the fusion image can be used for fusion with a still image in the still image sequence to change the shape of the still image. The fusion image may be an image preset by a technician or an image customized by a user.
The fusion information may include, but is not limited to, at least one of: text, numbers, symbols, pictures, voice, gestures. The fusion information may be used to indicate a fusion operation of the image for fusion with the still image in the still image sequence.
Specifically, on the one hand, the fusion information corresponding to the static image sequence may be information preset by a technician for indicating the fusion operation. Take the fusion information shown in fig. 3 as an example: here the fusion information is a picture; the black-and-white interlaced squares form the picture's background, and the sixteen sub-pictures in fig. 3 represent sixteen fusion results of performing a fusion operation on a circular image and a square image. As an example, the first picture at the upper left represents fusing the circular image and the square image such that both are finally removed; the second picture in the top row represents fusing the two such that the square image is retained; the third picture in the top row represents fusing the two such that the circular image is retained. On the other hand, the fusion information corresponding to the still image sequence may be information input by the user for instructing the fusion operation (e.g., a voice command).
The fusion operation may be an operation of fusing the image for fusion and the still image so as to change the corners of the still image. For example, the still image may be a rectangle whose corners are right angles, and the image for fusion may be a rounded-corner image. The fusion information may then be the text "rounded corner retained", and the indicated fusion operation may include fusing the rectangular still image and the rounded-corner image so as to adjust the right angles of the still image to the rounded corners of the rounded-corner image.
It is understood that adjusting the corners of the still image is equivalent to adjusting its shape. Since the purpose of the fusion operation is to adjust the shape of the target dynamic picture, the fusion operation usually retains the image features of the image for fusion; that is, the image for fusion usually has a correspondence with the fusion operation, and hence with the fusion information.
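The sixteen fusion results described for fig. 3 closely resemble the classical Porter-Duff compositing rules (Android exposes such rules as PorterDuff.Mode via Xfermode, cf. the non-patent citation at the end of this document). As a hedged sketch only — the patent does not name Porter-Duff — here is a runnable Java illustration of three of those results using java.awt.AlphaComposite; the class name FusionDemo and the geometry are our own:

```java
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class FusionDemo {
    static final int W = 100, H = 100;

    // Paint one shape on a transparent, full-size ARGB image.
    static BufferedImage shape(boolean circle, Color c) {
        BufferedImage img = new BufferedImage(W, H, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setColor(c);
        if (circle) g.fillOval(30, 30, 60, 60); else g.fillRect(10, 10, 60, 60);
        g.dispose();
        return img;
    }

    // Fuse the circular image (source) with the square image (destination)
    // under a given Porter-Duff rule, over the full image bounds.
    static BufferedImage fuse(AlphaComposite rule) {
        BufferedImage canvas = shape(false, Color.BLUE);   // the square image
        Graphics2D g = canvas.createGraphics();
        g.setComposite(rule);
        g.drawImage(shape(true, Color.RED), 0, 0, null);   // the circular image
        g.dispose();
        return canvas;
    }

    public static void main(String[] args) {
        fuse(AlphaComposite.Clear);   // fig. 3, first result: both images removed
        fuse(AlphaComposite.Dst);     // second result: the square image retained
        fuse(AlphaComposite.Src);     // third result: the circular image retained
    }
}
```

Under this reading, a piece of fusion information simply selects one of these compositing rules.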
In particular, since the still image may have a plurality of corners, if several corners of the still image are to be adjusted, the execution subject may perform operations such as rotating the image for fusion so as to adjust the different corners of the still image.
In this embodiment, the execution subject may determine the image for fusion corresponding to the still image sequence in various ways. For example, the execution subject may take an image for fusion drawn in advance by a technician as the image for fusion corresponding to the still image sequence; alternatively, the execution subject may output the still image sequence for display and, in response to receiving a fusion-image drawing request input by the user, take the image drawn by the user as the image for fusion corresponding to the still image sequence.
In some optional implementation manners of this embodiment, a technician may preset a correspondence relationship between a shape of the still image and a shape of the fusion image corresponding to the still image (for example, when the shape of the still image is a rectangle, the corresponding shape of the fusion image includes a rounded corner; when the shape of the still image is a circle, the corresponding shape of the fusion image includes a right angle); and the execution subject may determine the image for fusion for the still image sequence by: first, the execution subject described above may determine the shape of a still image in a still image sequence. Then, the execution subject may select, based on the determined shape, a preset fusion image (a preset fusion image corresponding to the determined shape of the still image) from a preset fusion image set as a fusion image corresponding to the still image sequence.
The preset fusion image set may be an image set preset by a technician. For each preset image for fusion, the technician may also set corresponding preset fusion information. For example, for a preset fusion image including a rounded corner, the corresponding preset fusion information may be "rounded corner retained"; for a preset fusion image including a right angle, the corresponding preset fusion information may be "right angle retained". Furthermore, in this implementation, when the execution subject selects a preset fusion image from the preset fusion image set as the image for fusion, it may directly use the preset fusion information corresponding to the selected image as the fusion information corresponding to the still image sequence.
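A minimal sketch of such a preset correspondence table follows, assuming (hypothetically) that fusion-use images are stored as mask files keyed by the still images' shape; the class name PresetFusion, the paths, and the information strings are illustrative only:

```java
import java.util.Map;

public class PresetFusion {
    // Hypothetical pairing of a preset fusion-use image with its preset
    // fusion information, keyed by the shape of the still images.
    record Preset(String fusionImagePath, String fusionInfo) {}

    static final Map<String, Preset> BY_SHAPE = Map.of(
        "rectangle", new Preset("masks/rounded.png", "rounded corner retained"),
        "circle",    new Preset("masks/square.png",  "right angle retained"));
}
```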
In this embodiment, the execution subject may also determine the fusion information corresponding to the still image sequence by other methods. Optionally, the execution subject may output a preset fusion information set for the user to select from, and then obtain the preset fusion information selected by the user as the fusion information corresponding to the still image sequence.
And step 203, fusing the static image in the static image sequence and the determined image for fusion based on the determined fusion information to obtain a fused image.
In this embodiment, for a still image in the still image sequence, based on the fusion image and the fusion information determined in step 202, the execution subject may fuse the still image and the determined fusion image to obtain a fused image.
Here, a technician may specify in advance a fusion area (for example, an image corner area) between the still image and the image for fusion, and the execution subject may move the image for fusion to the specified fusion area and fuse the still image and the image for fusion in accordance with the operation indicated by the fusion information. Alternatively, the execution subject may first output the still image and the image for fusion so that the user moves the image for fusion to the fusion area on the still image, then acquire the moved image for fusion and the still image input by the user, and fuse the still image and the moved image for fusion in accordance with the operation indicated by the fusion information.
In some optional implementations of the embodiment, for a still image in the still image sequence, based on the determined fusion information, the executing entity may fuse the still image and the determined image for fusion, and obtain a fused image with a shape different from that of the still image.
In practice, for a still image sequence, the execution subject may select still images from the sequence sequentially or randomly and execute step 203 for each; the manner of selecting still images is not limited herein.
In some optional implementations of this embodiment, for a still image in a still image sequence, the executing entity may obtain a fused image corresponding to the still image by:
first, the execution subject may create a layer for fusion, where the layer for fusion may be a layer on which the fusion operation is executed. Here, the execution subject may delete the layer for fusion after the fusion operation is completed.
Then, the execution subject may add the still image and the image for fusion to the layer for fusion. The still image and the fusion image may be added to the fusion layer in various ways. Specifically, the execution subject may add the still image and the image for fusion to the layer for fusion by moving, or may add the still image and the image for fusion to the layer for fusion by copying, as an example.
Finally, based on the determined fusion information, the execution subject may fuse the still image and the image for fusion on the layer for fusion to obtain a fused image located on that layer.
Here, as to the subsequent processing of the fused image on the layer for fusion, the execution subject may generate a new dynamic picture directly based on the fused image on the layer for fusion, or may first process (e.g., move) the fused image to obtain a processed fused image and then generate a new dynamic picture based on the processed fused image.
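As an illustrative sketch of this optional implementation — assuming the "layer for fusion" is realized as an offscreen ARGB buffer and the fusion operation is a rounded-corner mask applied with a DST_IN composite (our assumptions, not the patent's wording; the class name FusionLayer is ours):

```java
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class FusionLayer {
    // Fuse one still image with a fusion-use image (here: a rounded-corner
    // mask of the same size) on a temporary "layer", i.e. an offscreen buffer.
    static BufferedImage fuse(BufferedImage still, BufferedImage mask) {
        // Create the layer for fusion; it is discarded once fusion is done.
        BufferedImage layer = new BufferedImage(
                still.getWidth(), still.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = layer.createGraphics();
        try {
            g.drawImage(still, 0, 0, null);               // add the still image
            // DST_IN keeps the still image only where the mask is opaque,
            // which turns the rectangle's right angles into rounded corners.
            g.setComposite(AlphaComposite.DstIn);
            g.drawImage(mask, 0, 0, null);                // add the fusion image
        } finally {
            g.dispose();
        }
        return layer;
    }
}
```

Deleting the layer after the fusion operation, as the embodiment describes, corresponds here to disposing the Graphics2D context and letting the buffer go out of scope.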
And step 204, generating a new dynamic picture based on the obtained fused image.
In this embodiment, the execution subject may generate a new moving picture based on the fused image obtained in step 203. Here, the shape of the new moving picture is different from the shape of the target moving picture, and thus the display form of the moving picture can be enriched.
Specifically, the execution subject may first sort the fused images corresponding to the static images according to the arrangement order of the static images in the static image sequence obtained in step 201, so as to obtain a fused image sequence; then, the execution subject may set the switching frequency of the fused images in the fused image sequence according to the switching frequency of the static images of the target dynamic picture, thereby generating a new dynamic picture.
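For illustration, reassembling the fused frames into a new dynamic picture with a chosen switching frequency can be sketched in Java following the well-known GifSequenceWriter pattern for javax.imageio; the class name GifAssembler, the delay handling, and the omission of looping metadata are our own simplifications:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.List;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageTypeSpecifier;
import javax.imageio.ImageWriter;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.metadata.IIOMetadataNode;
import javax.imageio.stream.ImageOutputStream;

public class GifAssembler {

    // Find a named child of the metadata root, creating it if absent.
    static IIOMetadataNode child(IIOMetadataNode root, String name) {
        for (int i = 0; i < root.getLength(); i++) {
            if (root.item(i).getNodeName().equalsIgnoreCase(name)) {
                return (IIOMetadataNode) root.item(i);
            }
        }
        IIOMetadataNode node = new IIOMetadataNode(name);
        root.appendChild(node);
        return node;
    }

    // Write the fused frames as a new dynamic picture, reusing the source
    // picture's switching frequency (delayCs is in hundredths of a second).
    // Looping metadata (NETSCAPE2.0 application extension) is omitted here.
    static void write(List<BufferedImage> frames, int delayCs, File out) throws Exception {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("gif").next();
        try (ImageOutputStream ios = ImageIO.createImageOutputStream(out)) {
            writer.setOutput(ios);
            writer.prepareWriteSequence(null);
            for (BufferedImage frame : frames) {
                IIOMetadata meta = writer.getDefaultImageMetadata(
                        ImageTypeSpecifier.createFromRenderedImage(frame), null);
                String fmt = meta.getNativeMetadataFormatName();
                IIOMetadataNode root = (IIOMetadataNode) meta.getAsTree(fmt);
                IIOMetadataNode gce = child(root, "GraphicControlExtension");
                gce.setAttribute("disposalMethod", "none");
                gce.setAttribute("userInputFlag", "FALSE");
                gce.setAttribute("transparentColorFlag", "FALSE");
                gce.setAttribute("transparentColorIndex", "0");
                gce.setAttribute("delayTime", Integer.toString(delayCs));
                meta.setFromTree(fmt, root);
                writer.writeToSequence(new IIOImage(frame, null, meta), null);
            }
            writer.endWriteSequence();
        } finally {
            writer.dispose();
        }
    }
}
```

Under these assumptions, GifFrames.decode and GifAssembler.write together would round-trip a dynamic picture through the fusion step.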
In some optional implementations of this embodiment, following the optional implementation in step 203, after obtaining the fused image on the layer for fusion, the execution subject may add it to a preset layer for display, and may then generate a new dynamic picture based on the fused image on the layer for display. The layer for display may be a layer used for displaying images, so that when the new dynamic picture is generated on the layer for display it is displayed at the same time.
With continued reference to fig. 4, fig. 4 is a schematic diagram of an application scenario of the method for generating information according to the present embodiment. In the application scenario of fig. 4, the server 401 may first acquire a still image sequence 403 corresponding to the target moving picture 402. The server 401 may then determine an image 404 for fusion and fusion information (e.g., "rounded corners") 405 for the still image sequence 403, where the image 404 for fusion is used to fuse with the still images in the still image sequence 403, and the fusion information 405 is used to indicate a fusion operation of the image 404 for fusion with the still images in the still image sequence 403. Next, for the still images 4031, 4032, 4033 in the still image sequence 403, the server 401 may respectively fuse the still images 4031, 4032, 4033 and the image 404 for fusion based on the fusion information 405, and obtain fused images 4061, 4062, 4063. Finally, the server 401 may generate a new moving picture 407 based on the obtained fused images 4061, 4062, 4063.
The method provided by the above embodiment of the present application acquires the still image sequence corresponding to the target dynamic picture and then determines an image for fusion and fusion information for the still image sequence, wherein the image for fusion is used for fusing with the static images in the static image sequence and the fusion information is used for indicating the fusion operation of the image for fusion and the static images. Then, for each static image in the static image sequence, the static image and the determined image for fusion are fused based on the determined fusion information to obtain a fused image, and a new dynamic picture is generated based on the obtained fused images. Therefore, the shape of the dynamic picture can be adjusted by fusing the image for fusion with the static images corresponding to the dynamic picture, which enriches the display forms of dynamic pictures and improves the diversity of information generation.
With further reference to fig. 5, a flow 500 of yet another embodiment of a method for generating information is shown. The flow 500 of the method for generating information includes the steps of:
step 501, a static image sequence corresponding to a target dynamic picture is obtained.
In the present embodiment, an execution subject (e.g., a server shown in fig. 1) of the method for generating information may acquire a still image sequence corresponding to a target moving picture by a wired connection manner or a wireless connection manner. The target dynamic picture may be a dynamic picture whose shape is to be adjusted. A moving picture is essentially composed of a sequence of still images arranged in chronological order, and a dynamic effect is produced when the still images in the sequence of still images are switched at a specified frequency.
Step 502 determines images for fusion and fusion information for a sequence of still images.
In the present embodiment, based on the still image sequence obtained in step 501, the execution subject can determine the images for fusion and the fusion information for the still image sequence. Wherein the fusion image can be used for fusion with a still image in the still image sequence to change the shape of the still image. The fusion image may be an image preset by a technician or an image customized by a user. The fusion information may include, but is not limited to, at least one of: text, numbers, symbols, pictures, voice, gestures. The fusion information may be used to indicate a fusion operation of the image for fusion with the still image in the still image sequence.
Step 503, for the static images in the static image sequence, based on the determined fusion information, fusing the static images and the determined images for fusion to obtain fused images.
In this embodiment, based on the fusion image and the fusion information determined in step 502, for a still image in the still image sequence, the executing entity may fuse the still image and the determined fusion image based on the determined fusion information to obtain a fused image.
Step 501, step 502, and step 503 are respectively the same as step 201, step 202, and step 203 in the foregoing embodiment, and the above description for step 201, step 202, and step 203 also applies to step 501, step 502, and step 503, which is not described herein again.
And step 504, for the fused image in the obtained fused images, performing anti-aliasing processing on the fused image to obtain a processed image.
In this embodiment, for each fused image obtained in step 503, the execution subject may perform anti-aliasing processing on the fused image to obtain a processed image. It should be noted that anti-aliasing is a widely studied existing technique and is not described in detail herein.
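The patent leaves the concrete anti-aliasing method open. One common realization in this kind of pipeline — our own assumption, not the disclosure's — is to render the fusion-use image itself with antialiasing enabled, so that the fused frame's edges come out smooth; a hedged Java sketch (the class name AntiAliasedMask is ours):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class AntiAliasedMask {
    // Draw the rounded-corner fusion image with antialiasing switched on,
    // so that the fused frame's edges show no jagged "burrs".
    static BufferedImage roundedMask(int w, int h, int arc) {
        BufferedImage mask = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = mask.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);
        g.setColor(Color.WHITE);
        g.fillRoundRect(0, 0, w, h, arc, arc); // opaque where the image is kept
        g.dispose();
        return mask;
    }
}
```

The partially transparent edge pixels of such a mask blend the still image's border smoothly when composited, which is one way to obtain the "processed image" of step 504.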
Step 505, based on the obtained processed image, a new dynamic picture is generated.
In this embodiment, the execution subject may generate a new moving picture based on the processed image obtained in step 504. Here, the shape of the new moving picture is different from the shape of the target moving picture, and thus the display form of the moving picture can be enriched.
Specifically, the execution subject may first sort the processed images corresponding to the still images according to the arrangement order of the still images in the still image sequence obtained in step 501, so as to obtain a processed image sequence; then, the execution subject may set the switching frequency of the processed images in the processed image sequence according to the switching frequency of the static images of the target dynamic picture, thereby generating a new dynamic picture.
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 2, the flow 500 of the method for generating information in the present embodiment highlights the steps of performing anti-aliasing processing on the fused images to obtain processed images and generating a new dynamic picture based on the obtained processed images. Therefore, the scheme described in this embodiment can reduce the adverse effects of fusion on image edges (e.g., jagged edges) through anti-aliasing, thereby generating a more attractive dynamic picture with a better dynamic effect.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for generating information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the apparatus 600 for generating information of the present embodiment includes: an acquisition unit 601, a determination unit 602, a fusion unit 603, and a generation unit 604. Wherein the obtaining unit 601 is configured to obtain a still image sequence corresponding to a target moving picture; the determination unit 602 is configured to determine an image for fusion and fusion information for a still image sequence, wherein the image for fusion is used for fusing with a still image in the still image sequence, and the fusion information is used for indicating a fusion operation of the image for fusion and the still image in the still image sequence; the fusion unit 603 is configured to fuse, for a static image in the static image sequence, the static image and the determined image for fusion based on the determined fusion information, and obtain a fused image; the generating unit 604 is configured to generate a new moving picture based on the obtained fused image.
In this embodiment, the acquiring unit 601 of the apparatus 600 for generating information may acquire the still image sequence corresponding to the target moving picture by a wired connection manner or a wireless connection manner. The target dynamic picture may be a dynamic picture whose shape is to be adjusted. A moving picture is essentially composed of a sequence of still images arranged in chronological order, and a dynamic effect is produced when the still images in the sequence of still images are switched at a specified frequency.
It should be noted that, here, the obtaining unit 601 may first obtain the target moving picture, and then parse the target moving picture (for example, parse through a picture loading library) to obtain a static image sequence corresponding to the target moving picture; alternatively, the acquiring unit 601 may directly acquire the still image sequence corresponding to the target moving picture from an electronic device (for example, a terminal device shown in fig. 1) locally or communicatively connected thereto.
In this embodiment, based on the still image sequence obtained by the acquisition unit 601, the determination unit 602 may determine the images for fusion and the fusion information for the still image sequence. Wherein the fusion image can be used for fusion with a still image in the still image sequence to change the shape of the still image. The fusion image may be an image preset by a technician or an image customized by a user. The fusion information may include, but is not limited to, at least one of: text, numbers, symbols, pictures, voice, gestures. The fusion information may be used to indicate a fusion operation of the image for fusion with the still image in the still image sequence. The fusion information corresponding to the static image sequence may be information preset by a technician and used for indicating the fusion operation, or may be information input by a user and used for indicating the fusion operation.
In this embodiment, based on the image for fusion and the fusion information determined by the determination unit 602, for a still image in the still image sequence, the fusion unit 603 may fuse the still image and the determined image for fusion based on the determined fusion information to obtain a fused image.
In the present embodiment, the generation unit 604 may generate a new moving picture based on the fused image obtained by the fusion unit 603. Here, the shape of the new moving picture is different from the shape of the target moving picture, and thus the display form of the moving picture can be enriched.
In some optional implementations of this embodiment, the fusion unit 603 may be further configured to: and fusing the static image and the determined image for fusion to obtain a fused image with a shape different from that of the static image.
In some optional implementations of this embodiment, the generating unit 604 may include: a processing module (not shown in the figure) configured to perform anti-aliasing processing on the fused image in the obtained fused images to obtain a processed image; a generating module (not shown in the figure) configured to generate a new dynamic picture based on the obtained processed image.
In some optional implementations of the present embodiment, the shape of the still image may have a correspondence with the shape of the image for fusion corresponding to the still image; and the determining unit 602 may include: a determining module (not shown in the figures) configured to determine a shape of a still image in the sequence of still images; and a selecting module (not shown in the figures) configured to select the preset fusion image from the preset fusion image set as a fusion image corresponding to the static image sequence based on the determined shape.
In some optional implementations of this embodiment, the determining unit 602 may further include: an output module (not shown in the figure) configured to output a preset fusion information set for selection by a user; and an obtaining module (not shown in the figure) configured to obtain the preset fusion information selected by the user as the fusion information corresponding to the static image sequence.
In some optional implementations of this embodiment, the fusion unit 603 may include: a creation module (not shown in the figure) configured to create a layer for fusion, and add the static image and the image for fusion to the layer for fusion; and a fusion module (not shown in the figure) configured to fuse the static image and the image for fusion on the layer for fusion based on the determined fusion information to obtain a fused image located on the layer for fusion.
In some optional implementations of this embodiment, the apparatus 600 may further include: an adding unit (not shown in the figure) configured to add the obtained post-fusion image on the layer for fusion to a layer for display set in advance; and the generating unit 604 may be further configured to: and generating a new dynamic picture based on the fused image positioned on the image layer for display.
It will be understood that the elements of the apparatus 600 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 600 and the units included therein, and are not described herein again.
The apparatus 600 according to the above embodiment of the present application acquires, by the acquisition unit 601, a still image sequence corresponding to a target dynamic picture. The determination unit 602 then determines an image for fusion and fusion information for the still image sequence, where the image for fusion is used for fusing with the still images in the still image sequence and the fusion information is used for indicating the fusion operation of the image for fusion and the still images. Next, for each still image in the still image sequence, the fusion unit 603 fuses the still image and the determined image for fusion based on the determined fusion information to obtain a fused image. Finally, the generation unit 604 generates a new dynamic picture based on the obtained fused images. In this way, the shape of the dynamic picture can be adjusted by fusing the image for fusion with the still images corresponding to the dynamic picture, which enriches the display forms of dynamic pictures and improves the diversity of information generation.
Referring now to FIG. 7, a block diagram of a computer system 700 suitable for use in implementing an electronic device (e.g., the terminal device or server shown in FIG. 1) of an embodiment of the present application is shown. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by a Central Processing Unit (CPU)701, performs the above-described functions defined in the method of the present application. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a determination unit, a fusion unit, and a generation unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the acquisition unit may also be described as a "unit that acquires a still image sequence corresponding to a target moving picture".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a static image sequence corresponding to a target dynamic picture; determining fusion images and fusion information for the static image sequence, wherein the fusion images are used for fusing with the static images in the static image sequence, and the fusion information is used for indicating the fusion operation of the fusion images and the static images in the static image sequence; for a static image in the static image sequence, fusing the static image and the determined image for fusion based on the determined fusion information to obtain a fused image; and generating a new dynamic picture based on the obtained fused image.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (14)

1. A method for generating information, comprising:
acquiring a static image sequence corresponding to a target dynamic picture;
determining a fusion image and fusion information for the static image sequence, wherein the fusion image is used for fusing with a static image in the static image sequence, the fusion information is used for indicating the fusion operation of the fusion image and the static image in the static image sequence, the shape of the static image has a corresponding relation with the shape of the fusion image corresponding to the static image, and the fusion image has a corresponding relation with the fusion information;
for the static images in the static image sequence, fusing the static images and the determined images for fusion based on the determined fusion information to obtain fused images with different shapes from the static images;
and generating a new dynamic picture based on the obtained fused image.
2. The method of claim 1, wherein generating a new motion picture based on the obtained fused image comprises:
performing anti-aliasing processing on the fused image in the obtained fused image to obtain a processed image;
based on the obtained processed image, a new moving picture is generated.
3. The method of claim 1, wherein the determining images for fusion and fusion information for the sequence of still images comprises:
determining a shape of a still image in the sequence of still images;
and selecting a preset image for fusion from a preset image set for fusion based on the determined shape as the image for fusion corresponding to the static image sequence.
4. The method of claim 1, wherein the determining images for fusion and fusion information for the sequence of still images comprises:
outputting a preset fusion information set for a user to select;
and acquiring preset fusion information selected by a user as fusion information corresponding to the static image sequence.
5. The method according to one of claims 1 to 4, wherein the fusing the still image and the determined image for fusion based on the determined fusion information to obtain a fused image comprises:
creating a layer for fusion, and adding the static image and the image for fusion to the layer for fusion;
and fusing the static image and the image for fusion on the image layer for fusion based on the determined fusion information to obtain a fused image on the image layer for fusion.
6. The method of claim 5, wherein after the obtaining the fused image on the image layer for fusion, the method further comprises:
adding the obtained fused image positioned on the graph layer for fusion to a preset graph layer for display; and
generating a new dynamic picture based on the obtained fused image, comprising:
and generating a new dynamic picture based on the fused image on the image layer for display.
7. An apparatus for generating information, comprising:
an acquisition unit configured to acquire a sequence of still images corresponding to a target moving picture;
a determination unit configured to determine a fusion image for the still image sequence, the fusion image being used for fusion with a still image in the still image sequence, and fusion information indicating a fusion operation of the fusion image and the still image in the still image sequence, a shape of the still image having a correspondence with a shape of the fusion image to which the still image corresponds, the fusion image having a correspondence with the fusion information;
a fusion unit configured to fuse, for a still image in the still image sequence, the still image and the determined image for fusion based on the determined fusion information, to obtain a fused image having a shape different from that of the still image;
a generating unit configured to generate a new moving picture based on the obtained fused image.
8. The apparatus of claim 7, wherein the generating unit comprises:
a processing module configured to perform anti-aliasing processing on a fused image of the obtained fused images to obtain a processed image;
a generating module configured to generate a new dynamic picture based on the obtained processed image.
9. The apparatus of claim 7, wherein the determining unit comprises:
a determination module configured to determine a shape of a still image in the sequence of still images;
and the selecting module is configured to select a preset image for fusion from a preset image set for fusion as the image for fusion corresponding to the static image sequence based on the determined shape.
10. The apparatus of claim 7, wherein the determining unit comprises:
an output module configured to output a preset fusion information set for selection by a user;
and the acquisition module is configured to acquire preset fusion information selected by a user as fusion information corresponding to the static image sequence.
11. The apparatus according to one of claims 7-10, wherein the fusion unit comprises:
a creation module configured to create a layer for fusion, and add the static image and the image for fusion to the layer for fusion;
and the fusion module is configured to fuse the static image and the image for fusion on the image layer for fusion based on the determined fusion information to obtain a fused image on the image layer for fusion.
12. The apparatus of claim 11, wherein the apparatus further comprises:
an adding unit configured to add the obtained post-fusion image on the map layer for fusion to a preset map layer for display; and
the generation unit is further configured to:
and generating a new dynamic picture based on the fused image on the image layer for display.
13. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201810758789.5A 2018-07-11 2018-07-11 Method and apparatus for generating information Active CN109600558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810758789.5A CN109600558B (en) 2018-07-11 2018-07-11 Method and apparatus for generating information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810758789.5A CN109600558B (en) 2018-07-11 2018-07-11 Method and apparatus for generating information

Publications (2)

Publication Number Publication Date
CN109600558A CN109600558A (en) 2019-04-09
CN109600558B true CN109600558B (en) 2021-08-13

Family

ID=65956573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810758789.5A Active CN109600558B (en) 2018-07-11 2018-07-11 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN109600558B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084306B (en) * 2019-04-30 2022-03-29 北京字节跳动网络技术有限公司 Method and apparatus for generating dynamic image


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110029860A1 (en) * 2009-07-30 2011-02-03 Ptucha Raymond W Artistic digital template for image display
US8560933B2 (en) * 2011-10-20 2013-10-15 Microsoft Corporation Merging and fragmenting graphical objects

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002052565A1 (en) * 2000-12-22 2002-07-04 Muvee Technologies Pte Ltd System and method for media production
CN1705976A (en) * 2003-10-23 2005-12-07 微软公司 Markup language and object model for vector graphics
JP2008015619A (en) * 2006-07-03 2008-01-24 Fuji Xerox Co Ltd Image processor, image processing method, and image processing program
CN105516610A (en) * 2016-02-19 2016-04-20 深圳新博科技有限公司 Method and device for shooting local dynamic image
WO2017203477A1 (en) * 2016-05-26 2017-11-30 Typito Technologies Pvt Ltd Media content editing platform
CN107038735A (en) * 2017-03-31 2017-08-11 武汉斗鱼网络科技有限公司 It is a kind of to realize the method and system that entity opens animation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Android custom View using Xfermode to implement an animated text-loading effect; Web user; http://www.phperz.com/article/17/1012/346053.html; 2017-10-12; full text *

Also Published As

Publication number Publication date
CN109600558A (en) 2019-04-09

Similar Documents

Publication Publication Date Title
CN110046021B (en) Page display method, device, system, equipment and storage medium
US11132114B2 (en) Method and apparatus for generating customized visualization component
CN108833787B (en) Method and apparatus for generating short video
CN109472264B (en) Method and apparatus for generating an object detection model
CN109981787B (en) Method and device for displaying information
CN109446442B (en) Method and apparatus for processing information
CN108882025B (en) Video frame processing method and device
CN109377508B (en) Image processing method and device
CN110007936B (en) Data processing method and device
US20220050562A1 (en) Methods and apparatuses for generating a hosted application
CN111951356A (en) Animation rendering method based on JSON data format
CN109600558B (en) Method and apparatus for generating information
CN107330087B (en) Page file generation method and device
CN109522429B (en) Method and apparatus for generating information
CN109947528B (en) Information processing method and device
CN110288523B (en) Image generation method and device
CN110619615A (en) Method and apparatus for processing image
CN115878115A (en) Page rendering method, device, medium and electronic equipment
CN111199519B (en) Method and device for generating special effect package
CN111367592B (en) Information processing method and device
CN113284174A (en) Method and device for processing pictures
CN110620805B (en) Method and apparatus for generating information
CN111125501A (en) Method and apparatus for processing information
CN112308074A (en) Method and device for generating thumbnail
CN110716699A (en) Method and apparatus for writing data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.