WO2022068631A1 - Method, apparatus, device and storage medium for converting a picture to a video - Google Patents

Method, apparatus, device and storage medium for converting a picture to a video

Info

Publication number
WO2022068631A1
Authority
WO
WIPO (PCT)
Prior art keywords
black-and-white image
area
color
pixels
Prior art date
Application number
PCT/CN2021/119409
Other languages
English (en)
French (fr)
Inventor
张树鹏
甘小锐
文杰
王一同
潘景祥
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Priority to JP2023507403A, patent JP7471510B2
Priority to EP21874292.2A, patent EP4181517A4
Publication of WO2022068631A1
Priority to US18/148,372, patent US11893770B2

Classifications

    • G06V10/56 Extraction of image or video features relating to colour
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/90 Determination of colour characteristics
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • H04N1/40012 Conversion of colour to monochrome
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • G06T2207/10024 Color image
    • G06T2207/20024 Filtering details
    • G06T2207/20221 Image fusion; Image merging

Definitions

  • the present disclosure relates to the technical field of picture processing, for example, to a method, apparatus, device, and storage medium for converting pictures to videos.
  • The present disclosure provides a method, apparatus, device, and storage medium for converting pictures to videos, which can convert static pictures into dynamic videos and realize the automatic production of photo albums in which the foreground area is colored, without requiring videos to be created manually, thereby improving the convenience of album production.
  • The present disclosure provides a method for converting a picture to a video, including: fading an original picture to obtain a black-and-white image; determining a foreground area and a background area of the black-and-white image; iteratively restoring the color of pixels in the black-and-white image based on a processing order of the foreground area and the background area, and storing the image obtained after each restoration as a picture frame to obtain multiple frames of images; and splicing the multiple frames of images to obtain a target video.
  • The present disclosure also provides an apparatus for converting a picture to a video, including:
  • a black-and-white image acquisition module configured to fade an original picture to obtain a black-and-white image;
  • an image area determination module configured to determine a foreground area and a background area of the black-and-white image;
  • a color restoration module configured to iteratively restore the color of the pixels in the black-and-white image based on a processing order of the foreground area and the background area, and store the image obtained after each restoration as a picture frame to obtain multiple frames of images;
  • a target video acquisition module configured to splice the multiple frames of images to obtain a target video.
  • The present disclosure also provides an electronic device, comprising:
  • one or more processing apparatuses; and a storage apparatus configured to store one or more instructions,
  • wherein, when the one or more instructions are executed by the one or more processing apparatuses, the one or more processing apparatuses implement the above-mentioned method for converting a picture to a video.
  • the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processing apparatus, the above-mentioned method for converting a picture to a video is implemented.
  • FIG. 1 is a flowchart of a method for converting a picture to a video according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of an apparatus for converting a picture to a video according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.
  • The term "including" and variations thereof are open-ended inclusions, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a flowchart of a method for converting a picture to a video provided by an embodiment of the present disclosure. This embodiment is applicable to the case of converting a static picture into a dynamic video.
  • the method can be executed by a device for converting a picture to a video.
  • The apparatus can be implemented in hardware and/or software, and can generally be integrated into a device with a picture-to-video function, which can be an electronic device such as a mobile terminal, a server, or a server cluster.
  • the method includes the following steps:
  • Step 110: Fade the original picture to obtain a black-and-white image.
  • The black-and-white image can consist of a single black-and-white picture, or of a black-and-white picture superimposed on a color picture.
  • In one embodiment, the black-and-white image includes a black-and-white picture layer and an original picture layer: the original picture is faded to obtain the black-and-white picture layer, which is then superimposed on the original picture layer to obtain the black-and-white image.
  • The black-and-white picture layer is located above the original picture layer.
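For illustration only, the fading step above can be sketched as follows. The disclosure does not specify a fading formula, so the BT.601 luma weights and the NumPy-based implementation are assumptions:

```python
import numpy as np

def fade_to_black_and_white(original: np.ndarray) -> np.ndarray:
    """Fade an RGB picture (H x W x 3, uint8) to a black-and-white image by
    replicating a luminance value across all three channels.
    The BT.601 luma weights are an assumption; the patent names no formula."""
    luma = (0.299 * original[..., 0]
            + 0.587 * original[..., 1]
            + 0.114 * original[..., 2])
    gray = np.clip(np.round(luma), 0, 255).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)

# A 1 x 2 picture: one red pixel, one white pixel.
picture = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
bw = fade_to_black_and_white(picture)
```

The resulting array has identical values in all three channels, which is what allows a later "restoration" step to simply copy the original color values back in.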
  • Step 120: Determine the foreground area and the background area of the black-and-white image.
  • the foreground area may be a target area to be identified, such as a portrait area, an animal area, and a building area, and the background area is an area other than the foreground area.
  • the foreground area and the background area of the black-and-white image may be determined by using the pre-cut foreground segmentation area or the background segmentation area as a mask.
  • In one embodiment, the foreground area and the background area of the black-and-white image may be determined as follows: send the original picture to a server, so that the server performs foreground cropping on the original picture to obtain a foreground segmentation area; receive the foreground segmentation area returned by the server; and determine the foreground area and the background area of the black-and-white image according to the foreground segmentation area.
  • the server may perform foreground cropping on the original image by: first identifying the foreground image in the original image, and then cropping the area where the foreground image is located to obtain the foreground segmented area and the background segmented area.
  • a recognition model can be used to recognize the foreground image in the original picture. For example, if the foreground image is a human portrait, the portrait recognition model is used for recognition, and if the foreground image is an animal, the animal recognition model is used for recognition. This embodiment does not limit the identified target type.
  • the server may return the foreground segmented area to the mobile terminal, and may also return the background segmented area to the mobile terminal to be used as a mask to separate the foreground area and the background area in the black and white image.
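As a sketch of how a returned segmentation area could act as a mask, the hypothetical example below splits the pixel coordinates of a black-and-white image into foreground and background; the boolean mask array stands in for the server's segmentation result:

```python
import numpy as np

def split_regions(image: np.ndarray, foreground_mask: np.ndarray):
    """Split the pixel coordinates of an image into foreground and background
    using a segmentation mask (True where the foreground is).
    The mask plays the role of the segmentation area returned by the server;
    here it is just a made-up example array."""
    fg_coords = np.argwhere(foreground_mask)   # coordinates inside the mask
    bg_coords = np.argwhere(~foreground_mask)  # everything else
    return fg_coords, bg_coords

mask = np.array([[True, False],
                 [True, False]])
fg, bg = split_regions(np.zeros((2, 2, 3), dtype=np.uint8), mask)
```

Either the foreground mask or its complement can be used, matching the text's note that the server may return either the foreground or the background segmentation area.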
  • Step 130: Based on the processing order of the foreground area and the background area, iteratively restore the color of the pixels in the black-and-white image, and store the image obtained after each restoration as a picture frame to obtain multiple frames of images.
  • the processing sequence includes processing the foreground area first and then processing the background area, or processing the background area first and then processing the foreground area.
  • Color restoration can be understood as restoring the color of the pixels in the black and white image to the color of the original picture.
  • The color of a pixel in the black-and-white image can be restored to the color of the original picture either by directly replacing the color value of the pixel in the black-and-white image with the color value of the corresponding pixel in the original picture, or by using mask filtering to restore the color of the black-and-white image.
  • In one embodiment, the color of the pixels in the black-and-white image may be restored iteratively at a progress of a set number of pixels, or a set proportion of pixels, per iteration.
  • The proportion may be relative to all pixels contained in the black-and-white image, or to all pixels contained in the current area.
  • For example, color may be restored iteratively at a progress of 100 pixels per iteration; or at a progress of 5% of the total pixels per iteration; or, when restoring the foreground area, at a progress of 10% of the total pixels of the foreground area per iteration, and when restoring the background area, at a progress of 10% of the total pixels of the background area per iteration.
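The set-number or set-proportion progress can be expressed as a simple batching of pixel indices. This helper is illustrative, not part of the disclosure:

```python
def restoration_batches(num_pixels: int, proportion: float):
    """Yield (start, end) index ranges over a region's pixels so that roughly
    `proportion` of the pixels is restored per iteration (e.g. 10% per frame).
    The final batch may be smaller when the counts do not divide evenly."""
    step = max(1, int(num_pixels * proportion))
    for start in range(0, num_pixels, step):
        yield start, min(start + step, num_pixels)

# 10% of 100 pixels per iteration -> ten batches of ten pixels.
batches = list(restoration_batches(100, 0.10))
```

Running the same helper per region (foreground, then background) reproduces the per-area proportions described above; a fixed pixel count per iteration would simply replace `int(num_pixels * proportion)` with a constant.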
  • In one embodiment, the process of iteratively restoring the color of the pixels in the black-and-white image may be as follows: if the processing order is to process the foreground area first and then the background area, iterative mask filtering is performed on the pixels in the foreground area of the black-and-white picture layer according to a first direction, and then iterative mask filtering is performed on the pixels in the background area of the black-and-white picture layer according to a second direction.
  • If the processing order is to process the background area first and then the foreground area, iterative mask filtering is performed on the pixels in the background area of the black-and-white picture layer according to a third direction, and then iterative mask filtering is performed on the pixels in the foreground area of the black-and-white picture layer according to a fourth direction.
  • Each of the first direction, the second direction, the third direction, and the fourth direction can be from top to bottom, from bottom to top, from left to right, from right to left, from the upper left corner to the lower right corner, from the upper right corner to the lower left corner, from the lower left corner to the upper right corner, or from the lower right corner to the upper left corner, etc.
  • the first direction and the second direction may be the same or different, and the third direction and the fourth direction may be the same or different.
  • Alternatively, iterative mask filtering may be performed randomly on the pixels in the foreground area of the black-and-white picture layer, and iterative mask filtering may be performed randomly on the pixels in the background area of the black-and-white picture layer.
  • the process of mask processing can be understood as the process of erasing the pixels in the black and white picture layer, so that the original picture layer located under the black and white picture layer is displayed.
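A minimal sketch of this erase-to-reveal composition, assuming a left-to-right direction and NumPy arrays for the two layers (both hypothetical choices):

```python
import numpy as np

def reveal_left_to_right(bw_layer: np.ndarray,
                         original_layer: np.ndarray,
                         erased_cols: int) -> np.ndarray:
    """Compose the two layers of the black-and-white image: columns of the
    black-and-white picture layer up to `erased_cols` are erased, letting the
    original picture layer underneath show through."""
    frame = bw_layer.copy()
    frame[:, :erased_cols] = original_layer[:, :erased_cols]
    return frame

original = np.full((2, 4, 3), 200, dtype=np.uint8)  # color layer underneath
bw = np.full((2, 4, 3), 90, dtype=np.uint8)         # gray layer on top
half_revealed = reveal_left_to_right(bw, original, erased_cols=2)
```

Calling this once per iteration with a growing `erased_cols` yields the sequence of picture frames; other directions amount to slicing along different axes or corners.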
  • the method of iteratively restoring the color of the pixels in the black-and-white image may be: acquiring the first color value of the pixel in the original image; and performing iteratively restoring the color of the pixel in the black-and-white image according to the first color value.
  • the black and white image is composed of a single black and white picture.
  • In this case, the color values of the pixels in the original picture can be read and stored in advance, and during iterative restoration the color value of each pixel in the black-and-white image is replaced with the stored color value of the corresponding pixel in the original picture.
  • If the processing order is to process the foreground area first and then the background area, the color values of the pixels in the foreground area of the black-and-white image are first iteratively replaced with the color values of the corresponding pixels in the foreground area of the original picture according to a preset direction and preset progress; then the color values of the pixels in the background area of the black-and-white image are iteratively replaced with the color values of the corresponding pixels in the background area of the original picture according to the preset direction and preset progress.
  • If the processing order is to process the background area first and then the foreground area, the color values of the pixels in the background area of the black-and-white image are first iteratively replaced with the color values of the corresponding pixels in the background area of the original picture according to a preset direction and preset progress; then the color values of the pixels in the foreground area of the black-and-white image are iteratively replaced with the color values of the corresponding pixels in the foreground area of the original picture according to the preset direction and preset progress.
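Putting the pieces together, here is a simplified sketch of the iterative restoration loop (foreground first, direct color-value replacement, with row-major order standing in for a "preset direction"; all of these implementation choices are assumptions, not the disclosure's exact method):

```python
import numpy as np

def restore_frames(bw, original, fg_mask, proportion=0.5):
    """Restore color region by region (foreground first, then background),
    in row-major order, saving a picture frame after every iteration.
    Color values are copied directly from the original picture."""
    frames = []
    current = bw.copy()
    for region_mask in (fg_mask, ~fg_mask):
        coords = np.argwhere(region_mask)          # row-major: top to bottom
        step = max(1, int(len(coords) * proportion))
        for start in range(0, len(coords), step):
            for r, c in coords[start:start + step]:
                current[r, c] = original[r, c]     # replace with original color
            frames.append(current.copy())          # one picture frame per step
    return frames

original = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
bw = np.zeros_like(original)
fg_mask = np.array([[True, False], [False, False]])
frames = restore_frames(bw, original, fg_mask, proportion=0.5)
```

The last frame equals the original picture, so the resulting sequence plays as the color "flowing" into the foreground first and then into the background.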
  • Step 140: Splice the multiple frames of images to obtain a target video.
  • The multiple frames of images may be spliced according to their time stamps.
  • the target video is a foreground color album.
  • In one embodiment, splicing the multiple frames of images to obtain the target video may include: splicing the multiple frames of images by adding set transition effects between adjacent images; and rendering the spliced multiple frames of images to obtain the target video.
  • the set transition effects between adjacent images can be the same or different.
  • The set transition effects can be preset effects or user-selected effects.
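As one hypothetical "set transition effect", a short cross-fade can be inserted between adjacent picture frames before the sequence is handed to a video encoder; this sketch only builds the frame sequence, not the encoded video:

```python
import numpy as np

def splice_with_crossfade(frames, transition_len=2):
    """Splice picture frames into one sequence, inserting `transition_len`
    cross-faded frames (one possible 'set transition effect') between each
    pair of adjacent frames. A real implementation would then hand the
    result to a video encoder for rendering."""
    out = [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        for i in range(1, transition_len + 1):
            alpha = i / (transition_len + 1)       # blend weight ramps up
            blend = ((1 - alpha) * prev.astype(np.float32)
                     + alpha * nxt.astype(np.float32))
            out.append(blend.astype(np.uint8))
        out.append(nxt)
    return out

a = np.zeros((1, 1, 3), dtype=np.uint8)
b = np.full((1, 1, 3), 90, dtype=np.uint8)
video = splice_with_crossfade([a, b], transition_len=2)
```

Different transitions between different pairs of frames (as the text allows) would just mean choosing a different blend function per pair.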
  • In this embodiment, the original picture is first faded to obtain a black-and-white image; the foreground area and the background area of the black-and-white image are then determined; based on the processing order of the foreground area and the background area, the color of the pixels in the black-and-white image is iteratively restored, and each restored image is stored as a picture frame to obtain multiple frames of images; finally, the multiple frames of images are spliced to obtain the target video.
  • By iteratively restoring the color of the pixels in the black-and-white image, storing each restored image as a picture frame to obtain multiple frames of images, and splicing the multiple frames of images into the target video, the method can convert a static picture into a dynamic video and realize the automatic production of albums in which the foreground area is colored, without requiring users to create videos manually, which improves the convenience of album production.
  • In one embodiment, the method for converting a picture to a video can be launched as a function of a video application (APP), and the function can realize automatic editing, creation, and sharing of a video.
  • The user selects the picture-to-video function in the video APP and selects the picture to be converted into a video; the client uploads the picture to the server; the server obtains the picture uploaded by the client, generates the mask area of the portrait segmentation according to the algorithm, and returns the generated result to the client; the client downloads the result, implements the mask animation, renders the content of the occluded/displayed area, generates a video after adding transition effects, and automatically plays a preview of the video; the user can then share or publish the video.
  • the solution of the present application does not require users to manually create videos, but only needs to upload pictures, which greatly reduces the cost of generating videos from pictures.
  • FIG. 2 is a schematic structural diagram of an apparatus for converting a picture to a video according to an embodiment of the present disclosure.
  • the apparatus includes: a black and white image acquisition module 210 , an image region determination module 220 , a color restoration module 230 and a target video acquisition module 240 .
  • The black-and-white image acquisition module 210 is configured to fade the original picture to obtain a black-and-white image; the image area determination module 220 is configured to determine the foreground area and the background area of the black-and-white image; the color restoration module 230 is configured to iteratively restore the color of the pixels in the black-and-white image based on the processing order of the foreground area and the background area, and store the image obtained after each restoration as a picture frame to obtain multiple frames of images; the target video acquisition module 240 is configured to splice the multiple frames of images to obtain the target video.
  • the black and white image includes a black and white picture layer and an original picture layer
  • the black-and-white image acquisition module 210 is configured to:
  • the image area determination module 220 is configured to:
  • the color restoration module 230 is configured to perform color iterative restoration on the pixels in the black and white image based on the processing order of the foreground area and the background area in the following manner:
  • If the processing order is to process the foreground area first and then the background area, iterative mask filtering is performed on the pixels in the foreground area of the black-and-white picture layer according to the first direction, and then iterative mask filtering is performed on the pixels in the background area of the black-and-white picture layer according to the second direction.
  • the color restoration module 230 is configured to perform color iterative restoration on the pixels in the black and white image based on the processing order of the foreground area and the background area in the following manner:
  • If the processing order is to process the background area first and then the foreground area, iterative mask filtering is performed on the pixels in the background area of the black-and-white picture layer according to the third direction, and then iterative mask filtering is performed on the pixels in the foreground area of the black-and-white picture layer according to the fourth direction.
  • the color restoration module 230 is configured to perform color iterative restoration on the pixels in the black and white image in the following manner:
  • the color restoration module 230 is configured to iteratively restore the color of the pixels in the black and white image according to the first color value in the following manner:
  • the color restoration module 230 is configured to perform color iterative restoration on the pixels in the black and white image in the following manner:
  • the target video acquisition module 240 is configured to:
  • Multi-frame images are spliced by adding set transition effects between adjacent images; the spliced multi-frame images are rendered to obtain the target video.
  • the foreground area includes a portrait area.
  • the foregoing apparatus can execute the methods provided by all the foregoing embodiments of the present disclosure, and has functional modules and effects corresponding to executing the foregoing methods.
  • Referring to FIG. 3, a schematic structural diagram of an electronic device 300 suitable for implementing an embodiment of the present disclosure is shown.
  • The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and in-vehicle terminals (such as in-vehicle navigation terminals), fixed terminals such as digital televisions (TVs) and desktop computers, or various forms of servers, such as independent servers or server clusters.
  • The electronic device 300 may include a processing apparatus (e.g., a central processing unit, a graphics processor) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage apparatus 308 into a random access memory (RAM) 303.
  • In the RAM 303, various programs and data required for the operation of the electronic device 300 are also stored.
  • the processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304.
  • An input/output (I/O) interface 305 is also connected to the bus 304.
  • The following apparatuses can be connected to the I/O interface 305: input apparatuses 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 308 including, for example, a magnetic tape and a hard disk; and a communication apparatus 309.
  • Communication means 309 may allow electronic device 300 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 3 shows the electronic device 300 having various apparatuses, it is not required to implement or have all of the illustrated apparatuses. More or fewer apparatuses may alternatively be implemented or provided.
  • The embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 309, or from the storage device 308, or from the ROM 302.
  • When the computer program is executed by the processing apparatus 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • Examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the program code embodied on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
  • Clients and servers can communicate using any currently known or future developed network protocol, such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: fades the original picture to obtain a black-and-white image; determines the foreground of the black-and-white image area and background area; based on the processing order of the foreground area and the background area, iteratively restore the color of the pixels in the black and white image, and store the image obtained each time as a picture frame to obtain multiple frames image; splicing the multiple frames of images to obtain a target video.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a LAN or a WAN, or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or in a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself.
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, RAM, ROM, EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the embodiment of the present disclosure discloses a method for converting a picture to a video, including:
  • fading the original picture to obtain a black-and-white image; determining the foreground area and background area of the black-and-white image; based on the processing order of the foreground area and the background area, performing iterative color restoration on the pixels in the black-and-white image, and storing the image obtained at each restoration as a picture frame to obtain multiple frames of images; and splicing the multiple frames of images to obtain a target video.
  • the black-and-white image includes a black-and-white picture layer and an original picture layer; fading the original picture to obtain the black-and-white image includes:
  • if the processing order is to process the foreground area first and then the background area, iterative mask filtering is performed on the pixels in the foreground area of the black-and-white picture layer in a first direction, and on the pixels in the background area of the black-and-white picture layer in a second direction.
  • if the processing order is to process the background area first and then the foreground area, iterative mask filtering is performed on the pixels in the background area of the black-and-white picture layer in a third direction, and on the pixels in the foreground area of the black-and-white picture layer in a fourth direction.
  • iteratively restoring the color of the pixels in the black-and-white image includes:
  • iteratively restoring the color of the pixels in the black-and-white image according to the first color values includes:
  • iteratively restoring the color of the pixels in the black-and-white image includes:
  • performing iterative color restoration on the pixels in the black-and-white image at a progress of a set number or a set proportion of pixels per iteration.
  • splicing the multiple frames of images to obtain a target video includes:
  • splicing the multiple frames of images by adding a set transition effect between adjacent images; and rendering the spliced multiple frames of images to obtain the target video.
  • the foreground area includes a portrait area.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method, apparatus, device, and storage medium for converting a picture into a video. The method for converting a picture into a video includes: fading an original picture to obtain a black-and-white image; determining a foreground area and a background area of the black-and-white image; based on a processing order of the foreground area and the background area, iteratively restoring the color of the pixels in the black-and-white image, and storing the image obtained at each restoration as a picture frame to obtain multiple frames of images; and splicing the multiple frames of images to obtain a target video.

Description

Method, apparatus and device for converting a picture into a video, and storage medium
This application claims priority to Chinese Patent Application No. 202011052771.7, filed with the Chinese Patent Office on September 29, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of picture processing, and relates, for example, to a method, apparatus, device, and storage medium for converting a picture into a video.
Background
With the continuous popularization of smart devices, the photographing function has become indispensable in mobile phones. However, a photo taken by a mobile phone is only a static picture, which offers little entertainment value.
Summary
The present disclosure provides a method, apparatus, device, and storage medium for converting a picture into a video, which can convert a static picture into a dynamic video, realize the automatic production of a photo album with a color-retained foreground, eliminate the need to produce the video manually, and improve the convenience of album production.
The present disclosure provides a method for converting a picture into a video, including:
fading an original picture to obtain a black-and-white image;
determining a foreground area and a background area of the black-and-white image;
based on a processing order of the foreground area and the background area, iteratively restoring the color of the pixels in the black-and-white image, and storing the image obtained at each restoration as a picture frame to obtain multiple frames of images; and
splicing the multiple frames of images to obtain a target video.
The present disclosure further provides an apparatus for converting a picture into a video, including:
a black-and-white image acquisition module configured to fade an original picture to obtain a black-and-white image;
an image area determination module configured to determine a foreground area and a background area of the black-and-white image;
a color restoration module configured to, based on a processing order of the foreground area and the background area, iteratively restore the color of the pixels in the black-and-white image, and store the image obtained at each restoration as a picture frame to obtain multiple frames of images; and
a target video acquisition module configured to splice the multiple frames of images to obtain a target video.
The present disclosure further provides an electronic device, including:
one or more processing apparatuses; and
a storage apparatus configured to store one or more instructions,
wherein the one or more instructions, when executed by the one or more processing apparatuses, cause the one or more processing apparatuses to implement the above method for converting a picture into a video.
The present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processing apparatus, implements the above method for converting a picture into a video.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for converting a picture into a video according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of an apparatus for converting a picture into a video according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, the present disclosure may be implemented in many forms and should not be construed as limited to the embodiments set forth herein; these embodiments are provided for a more thorough and complete understanding of the present disclosure.
The steps recorded in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit performing the steps shown. The scope of the present disclosure is not limited in this respect.
The term "include" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the following description.
Concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.
The modifiers "a/an" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive and, unless the context indicates otherwise, should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
FIG. 1 is a flowchart of a method for converting a picture into a video according to an embodiment of the present disclosure. This embodiment is applicable to the case of converting a static picture into a dynamic video. The method may be executed by an apparatus for converting a picture into a video; the apparatus may be composed of hardware and/or software and may generally be integrated in a device with a picture-to-video function, which may be an electronic device such as a mobile terminal, a server, or a server cluster. As shown in FIG. 1, the method includes the following steps:
Step 110: fading an original picture to obtain a black-and-white image.
The black-and-white image may consist of a single black-and-white picture, or may be formed by superimposing a black-and-white picture on a color picture.
Optionally, the black-and-white image includes a black-and-white picture layer and an original picture layer. The process of fading the original picture to obtain the black-and-white image may be: binarizing the original picture to obtain a black-and-white picture, and superimposing the black-and-white picture on the original picture to obtain the black-and-white image. In this embodiment of the present disclosure, the black-and-white picture layer is located above the original picture layer.
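The binarization and layering step described above can be sketched in plain Python. The helper names, the image-as-nested-lists format, and the BT.601 luma weights are illustrative assumptions; the patent does not prescribe a concrete implementation.

```python
def to_black_and_white(rgb_pixels, threshold=128):
    """Binarize an image given as a 2D list of (R, G, B) tuples.

    Each pixel is reduced to a luma value and thresholded to pure black
    or pure white, mirroring the binarization step described above.
    """
    bw = []
    for row in rgb_pixels:
        bw_row = []
        for r, g, b in row:
            luma = 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 weights
            bw_row.append((255, 255, 255) if luma >= threshold else (0, 0, 0))
        bw.append(bw_row)
    return bw


def layer_over(original, bw):
    """Stack the black-and-white layer above the original picture layer."""
    return {"top": bw, "bottom": original}
```

Erasing pixels from the returned top layer later reveals the colored original underneath, which is the mask-animation effect used in the following steps.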
Step 120: determining a foreground area and a background area of the black-and-white image.
The foreground area may be a target area to be recognized, such as a portrait area, an animal area, or a building area; the background area is the area other than the foreground area. In this embodiment, a pre-cropped foreground segmentation area or background segmentation area may be used as a mask to determine the foreground area and the background area of the black-and-white image.
The foreground area and the background area of the black-and-white image may be determined as follows: sending the original picture to a server so that the server performs foreground cropping on the original picture to obtain a foreground segmentation area; receiving the foreground segmentation area returned by the server; and determining the foreground area and the background area of the black-and-white image according to the foreground segmentation area.
The server may perform foreground cropping on the original picture by first recognizing the foreground image in the original picture and then cropping out the area where the foreground image is located, obtaining a foreground segmentation area and a background segmentation area. In this embodiment of the present disclosure, a recognition model may be used to recognize the foreground image in the original picture; for example, if the foreground image is a portrait, a portrait recognition model is used, and if the foreground image is an animal, an animal recognition model is used. This embodiment does not limit the type of recognition target.
In this embodiment of the present disclosure, the server may return either the foreground segmentation area or the background segmentation area to the mobile terminal, to be used as a mask for separating the foreground area and the background area of the black-and-white image.
Step 130: based on the processing order of the foreground area and the background area, iteratively restoring the color of the pixels in the black-and-white image, and storing the image obtained at each restoration as a picture frame to obtain multiple frames of images.
The processing order includes processing the foreground area first and then the background area, or processing the background area first and then the foreground area. Color restoration may be understood as restoring the color of the pixels in the black-and-white image to the color of the original picture. This may be done by directly replacing the color values of the pixels in the black-and-white image with the color values of the corresponding pixels in the original picture, or by restoring the color of the black-and-white image through mask filtering.
Optionally, the color of the pixels in the black-and-white image may be iteratively restored at a progress of a set number or a set proportion of pixels per iteration.
The set proportion may be a proportion of all the pixels contained in the black-and-white image, or a proportion of all the pixels contained in the current area. For example, color may be iteratively restored at a progress of 100 pixels per restoration; or at a progress of 5% of the total pixels per restoration; or, when restoring the foreground area, at a progress of 10% of the total pixels in the foreground area per restoration, and when restoring the background area, at a progress of 10% of the total pixels in the background area per restoration.
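A minimal sketch of this progress schedule, assuming images are nested lists of RGB tuples and that a snapshot of the whole image is stored after each batch (the function and parameter names are hypothetical, not taken from the patent):

```python
def iterative_restore(bw, original, order, fraction=0.10):
    """Restore color to `bw` in place, one batch of pixels per iteration.

    `order` lists (y, x) coordinates in the traversal order (e.g. all
    foreground pixels first, then all background pixels); `fraction`
    controls how large a share of `order` is restored per iteration.
    After each batch the current image is stored as one picture frame.
    """
    frames = []
    step = max(1, int(len(order) * fraction))
    for start in range(0, len(order), step):
        for y, x in order[start:start + step]:
            bw[y][x] = original[y][x]  # replace with the original color value
        frames.append([row[:] for row in bw])  # snapshot this picture frame
    return frames
```

The final frame equals the fully colored original, and the sequence of snapshots becomes the multiple frames of images that are later spliced into the target video.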
Based on the processing order of the foreground area and the background area, the process of iteratively restoring the color of the pixels in the black-and-white image may be as follows. If the processing order is to process the foreground area first and then the background area, iterative mask filtering is performed on the pixels in the foreground area of the black-and-white picture layer in a first direction, and on the pixels in the background area of the black-and-white picture layer in a second direction. If the processing order is to process the background area first and then the foreground area, iterative mask filtering is performed on the pixels in the background area of the black-and-white picture layer in a third direction, and on the pixels in the foreground area of the black-and-white picture layer in a fourth direction.
The first, second, third, and fourth directions may each be from top to bottom, from bottom to top, from left to right, from right to left, from the upper-left corner to the lower-right corner, from the upper-right corner to the lower-left corner, from the lower-left corner to the upper-right corner, from the lower-right corner to the upper-left corner, and so on. The first direction and the second direction may be the same or different, and the third direction and the fourth direction may be the same or different. In this embodiment, iterative mask filtering may also be performed on the pixels in the foreground area of the black-and-white picture layer in random order, and likewise on the pixels in the background area of the black-and-white picture layer. This is not limited here.
The masking process may be understood as erasing the pixels in the black-and-white picture layer, thereby revealing the original picture layer located beneath it.
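One way to realize such a directional erasing sweep is to sort the region's pixel coordinates with a direction-dependent key; the direction names below are illustrative assumptions, not terms from the patent:

```python
def directional_order(coords, direction):
    """Sort (y, x) coordinates so that iterating over them sweeps the
    region in the requested direction; diagonal sweeps order pixels by
    the sum or difference of their coordinates."""
    keys = {
        "top_to_bottom": lambda p: (p[0], p[1]),
        "bottom_to_top": lambda p: (-p[0], p[1]),
        "left_to_right": lambda p: (p[1], p[0]),
        "right_to_left": lambda p: (-p[1], p[0]),
        "tl_to_br": lambda p: (p[0] + p[1],),   # upper-left to lower-right
        "tr_to_bl": lambda p: (p[0] - p[1],),   # upper-right to lower-left
    }
    return sorted(coords, key=keys[direction])
```

Feeding the sorted coordinates into the batch-wise restoration loop erases the black-and-white layer in the chosen direction; shuffling them instead yields the random variant mentioned above.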
The color of the pixels in the black-and-white image may be iteratively restored as follows: obtaining first color values of the pixels in the original picture, and iteratively restoring the color of the pixels in the black-and-white image according to the first color values.
In this embodiment, the black-and-white image consists of a single black-and-white picture. The color values of the pixels in the original picture may be read and stored in advance; during iterative color restoration, the color values of the pixels in the black-and-white image are replaced with the stored color values of the pixels in the original picture.
If the processing order is to process the foreground area first and then the background area, the color values of the pixels in the foreground area of the black-and-white image are first iteratively replaced, in a preset direction and at a preset progress, with the color values of the pixels in the foreground area of the original picture; the color values of the pixels in the background area of the black-and-white image are then iteratively replaced, in a preset direction and at a preset progress, with the color values of the pixels in the background area of the original picture.
If the processing order is to process the background area first and then the foreground area, the color values of the pixels in the background area of the black-and-white image are first iteratively replaced, in a preset direction and at a preset progress, with the color values of the pixels in the background area of the original picture; the color values of the pixels in the foreground area of the black-and-white image are then iteratively replaced, in a preset direction and at a preset progress, with the color values of the pixels in the foreground area of the original picture.
Step 140: splicing the multiple frames of images to obtain a target video.
The multiple frames of images may be spliced according to their timestamps. The target video is a photo album in which the foreground retains its color.
The multiple frames of images may be spliced to obtain the target video as follows: splicing the multiple frames of images by adding a set transition effect between adjacent images, and rendering the spliced multiple frames of images to obtain the target video.
The set transition effects between adjacent images may be the same or different. A set transition effect may be a preconfigured effect or an arbitrarily selected effect.
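Splicing with a transition between adjacent frames can be sketched as a linear crossfade; the blend weights, frame format, and function names are illustrative assumptions, since the patent leaves the concrete transition effect open:

```python
def stitch_with_transitions(frames, blend_steps=3):
    """Concatenate frames in timestamp order, inserting `blend_steps`
    linearly blended transition frames between each adjacent pair."""
    def blend(a, b, t):
        # Per-pixel linear interpolation between two same-sized frames.
        return [[tuple(round((1 - t) * pa + t * pb)
                       for pa, pb in zip(px_a, px_b))
                 for px_a, px_b in zip(row_a, row_b)]
                for row_a, row_b in zip(a, b)]

    video = [frames[0]]
    for prev, nxt in zip(frames, frames[1:]):
        for i in range(1, blend_steps + 1):
            video.append(blend(prev, nxt, i / (blend_steps + 1)))
        video.append(nxt)
    return video
```

The resulting frame list would then be handed to a renderer/encoder to produce the target video file.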
In the technical solution of this embodiment of the present disclosure, the original picture is first faded to obtain a black-and-white image; the foreground area and the background area of the black-and-white image are then determined; next, based on the processing order of the foreground area and the background area, the color of the pixels in the black-and-white image is iteratively restored, and the image obtained at each restoration is stored as a picture frame to obtain multiple frames of images; finally, the multiple frames of images are spliced to obtain a target video. The picture-to-video method provided by this embodiment of the present disclosure iteratively restores the color of the pixels in the black-and-white image, stores the image obtained at each restoration as a picture frame to obtain multiple frames of images, and splices the multiple frames of images to obtain a target video; it can convert a static picture into a dynamic video, realizes the automatic production of a photo album with a color-retained foreground, eliminates the need to produce the video manually, and improves the convenience of album production.
The method for converting a picture into a video provided by this embodiment of the present disclosure may be launched as a function of a video application (APP), and this function can realize automatic editing, creation, and sharing of videos. In this application scenario, the user selects the picture-to-video function in the video APP and selects the picture to be converted into a video; the client uploads the picture to the server; the server obtains the picture uploaded by the client, generates the mask area of the portrait segmentation of the picture according to an algorithm, and returns the generated result to the client; the client downloads the result, completes the mask animation, renders the content of the occluded/displayed areas, and generates a video after adding transition effects; the video is automatically played as a preview, and the user can share or publish it. The solution of this application requires no manual video production by the user; the user only needs to upload the picture, which greatly reduces the cost of generating a video from a picture.
FIG. 2 is a schematic structural diagram of an apparatus for converting a picture into a video according to an embodiment of the present disclosure. As shown in FIG. 2, the apparatus includes: a black-and-white image acquisition module 210, an image area determination module 220, a color restoration module 230, and a target video acquisition module 240.
The black-and-white image acquisition module 210 is configured to fade an original picture to obtain a black-and-white image; the image area determination module 220 is configured to determine a foreground area and a background area of the black-and-white image; the color restoration module 230 is configured to, based on the processing order of the foreground area and the background area, iteratively restore the color of the pixels in the black-and-white image, and store the image obtained at each restoration as a picture frame to obtain multiple frames of images; the target video acquisition module 240 is configured to splice the multiple frames of images to obtain a target video.
Optionally, the black-and-white image includes a black-and-white picture layer and an original picture layer, and the black-and-white image acquisition module 210 is configured to:
binarize the original picture to obtain a black-and-white picture; and superimpose the black-and-white picture on the original picture to obtain the black-and-white image.
Optionally, the image area determination module 220 is configured to:
send the original picture to a server so that the server performs foreground cropping on the original picture to obtain a foreground segmentation area; receive the foreground segmentation area returned by the server; and determine the foreground area and the background area of the black-and-white image according to the foreground segmentation area.
Optionally, the color restoration module 230 is configured to iteratively restore the color of the pixels in the black-and-white image, based on the processing order of the foreground area and the background area, in the following manner:
if the processing order is to process the foreground area first and then the background area, performing iterative mask filtering on the pixels in the foreground area of the black-and-white picture layer in a first direction, and on the pixels in the background area of the black-and-white picture layer in a second direction.
Optionally, the color restoration module 230 is configured to iteratively restore the color of the pixels in the black-and-white image, based on the processing order of the foreground area and the background area, in the following manner:
if the processing order is to process the background area first and then the foreground area, performing iterative mask filtering on the pixels in the background area of the black-and-white picture layer in a third direction, and on the pixels in the foreground area of the black-and-white picture layer in a fourth direction.
Optionally, the color restoration module 230 is configured to iteratively restore the color of the pixels in the black-and-white image in the following manner:
obtaining first color values of the pixels in the original picture; and iteratively restoring the color of the pixels in the black-and-white image according to the first color values.
Optionally, the color restoration module 230 is configured to iteratively restore the color of the pixels in the black-and-white image according to the first color values in the following manner:
iteratively replacing second color values of the pixels in the black-and-white image with the first color values.
Optionally, the color restoration module 230 is configured to iteratively restore the color of the pixels in the black-and-white image in the following manner:
performing iterative color restoration on the pixels in the black-and-white image at a progress of a set number or a set proportion of pixels per iteration.
Optionally, the target video acquisition module 240 is configured to:
splice the multiple frames of images by adding a set transition effect between adjacent images; and render the spliced multiple frames of images to obtain the target video.
Optionally, the foreground area includes a portrait area.
The above apparatus can execute the methods provided by all the foregoing embodiments of the present disclosure and has the corresponding functional modules and effects for executing those methods. For technical details not described in detail in this embodiment, refer to the methods provided by all the foregoing embodiments of the present disclosure.
Referring now to FIG. 3, a schematic structural diagram of an electronic device 300 suitable for implementing an embodiment of the present disclosure is shown. The electronic device in this embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), fixed terminals such as digital televisions (TVs) and desktop computers, and various forms of servers, such as stand-alone servers or server clusters. The electronic device shown in FIG. 3 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 3, the electronic device 300 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 301, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage apparatus 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the electronic device 300. The processing apparatus 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following apparatuses may be connected to the I/O interface 305: an input apparatus 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 308 including, for example, a magnetic tape and a hard disk; and a communication apparatus 309. The communication apparatus 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 3 shows the electronic device 300 with various apparatuses, it is not required to implement or have all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
According to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 309, or installed from the storage apparatus 308, or installed from the ROM 302. When the computer program is executed by the processing apparatus 301, the above functions defined in the methods of the embodiments of the present disclosure are performed.
The above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.
In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: fade an original picture to obtain a black-and-white image; determine a foreground area and a background area of the black-and-white image; based on the processing order of the foreground area and the background area, iteratively restore the color of the pixels in the black-and-white image, and store the image obtained at each restoration as a picture frame to obtain multiple frames of images; and splice the multiple frames of images to obtain a target video.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a LAN or a WAN, or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with dedicated hardware-based systems that perform the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself.
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, RAM, ROM, EPROM or flash memory, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure discloses a method for converting a picture into a video, including:
fading an original picture to obtain a black-and-white image; determining a foreground area and a background area of the black-and-white image; based on the processing order of the foreground area and the background area, iteratively restoring the color of the pixels in the black-and-white image, and storing the image obtained at each restoration as a picture frame to obtain multiple frames of images; and splicing the multiple frames of images to obtain a target video.
The black-and-white image includes a black-and-white picture layer and an original picture layer, and fading the original picture to obtain the black-and-white image includes:
binarizing the original picture to obtain a black-and-white picture; and superimposing the black-and-white picture on the original picture to obtain the black-and-white image.
Determining the foreground area and the background area of the black-and-white image includes:
sending the original picture to a server so that the server performs foreground cropping on the original picture to obtain a foreground segmentation area; receiving the foreground segmentation area returned by the server; and determining the foreground area and the background area of the black-and-white image according to the foreground segmentation area.
Iteratively restoring the color of the pixels in the black-and-white image based on the processing order of the foreground area and the background area includes:
if the processing order is to process the foreground area first and then the background area, performing iterative mask filtering on the pixels in the foreground area of the black-and-white picture layer in a first direction, and on the pixels in the background area of the black-and-white picture layer in a second direction.
Iteratively restoring the color of the pixels in the black-and-white image based on the processing order of the foreground area and the background area includes:
if the processing order is to process the background area first and then the foreground area, performing iterative mask filtering on the pixels in the background area of the black-and-white picture layer in a third direction, and on the pixels in the foreground area of the black-and-white picture layer in a fourth direction.
Iteratively restoring the color of the pixels in the black-and-white image includes:
obtaining first color values of the pixels in the original picture; and iteratively restoring the color of the pixels in the black-and-white image according to the first color values.
Iteratively restoring the color of the pixels in the black-and-white image according to the first color values includes:
iteratively replacing second color values of the pixels in the black-and-white image with the first color values.
Iteratively restoring the color of the pixels in the black-and-white image includes:
performing iterative color restoration on the pixels in the black-and-white image at a progress of a set number or a set proportion of pixels per iteration.
Splicing the multiple frames of images to obtain the target video includes:
splicing the multiple frames of images by adding a set transition effect between adjacent images; and rendering the spliced multiple frames of images to obtain the target video.
The foreground area includes a portrait area.

Claims (13)

  1. A method for converting a picture into a video, comprising:
    fading an original picture to obtain a black-and-white image;
    determining a foreground area and a background area of the black-and-white image;
    based on a processing order of the foreground area and the background area, iteratively restoring the color of the pixels in the black-and-white image, and storing the image obtained at each restoration as a picture frame to obtain multiple frames of images; and
    splicing the multiple frames of images to obtain a target video.
  2. The method according to claim 1, wherein the black-and-white image comprises a black-and-white picture layer and an original picture layer, and fading the original picture to obtain the black-and-white image comprises:
    binarizing the original picture to obtain a black-and-white picture; and
    superimposing the black-and-white picture on the original picture to obtain the black-and-white image.
  3. The method according to claim 2, wherein determining the foreground area and the background area of the black-and-white image comprises:
    sending the original picture to a server so that the server performs foreground cropping on the original picture to obtain a foreground segmentation area;
    receiving the foreground segmentation area returned by the server; and
    determining the foreground area and the background area of the black-and-white image according to the foreground segmentation area.
  4. The method according to claim 3, wherein iteratively restoring the color of the pixels in the black-and-white image based on the processing order of the foreground area and the background area comprises:
    in a case where the processing order is to process the foreground area first and then the background area, performing iterative mask filtering on the pixels in the foreground area of the black-and-white picture layer in a first direction, and performing iterative mask filtering on the pixels in the background area of the black-and-white picture layer in a second direction.
  5. The method according to claim 3, wherein iteratively restoring the color of the pixels in the black-and-white image based on the processing order of the foreground area and the background area comprises:
    in a case where the processing order is to process the background area first and then the foreground area, performing iterative mask filtering on the pixels in the background area of the black-and-white picture layer in a third direction, and performing iterative mask filtering on the pixels in the foreground area of the black-and-white picture layer in a fourth direction.
  6. The method according to claim 1, wherein iteratively restoring the color of the pixels in the black-and-white image comprises:
    obtaining first color values of the pixels in the original picture; and
    iteratively restoring the color of the pixels in the black-and-white image according to the first color values.
  7. The method according to claim 6, wherein iteratively restoring the color of the pixels in the black-and-white image according to the first color values comprises:
    iteratively replacing second color values of the pixels in the black-and-white image with the first color values.
  8. The method according to any one of claims 4-6, wherein iteratively restoring the color of the pixels in the black-and-white image comprises:
    performing iterative color restoration on the pixels in the black-and-white image at a progress of a set number or a set proportion of pixels per iteration.
  9. The method according to claim 1, wherein splicing the multiple frames of images to obtain the target video comprises:
    splicing the multiple frames of images by adding a set transition effect between adjacent images; and
    rendering the spliced multiple frames of images to obtain the target video.
  10. The method according to claim 1, wherein the foreground area comprises a portrait area.
  11. An apparatus for converting a picture into a video, comprising:
    a black-and-white image acquisition module configured to fade an original picture to obtain a black-and-white image;
    an image area determination module configured to determine a foreground area and a background area of the black-and-white image;
    a color restoration module configured to, based on a processing order of the foreground area and the background area, iteratively restore the color of the pixels in the black-and-white image, and store the image obtained at each restoration as a picture frame to obtain multiple frames of images; and
    a target video acquisition module configured to splice the multiple frames of images to obtain a target video.
  12. An electronic device, comprising:
    at least one processing apparatus; and
    a storage apparatus configured to store at least one instruction,
    wherein the at least one instruction, when executed by the at least one processing apparatus, causes the at least one processing apparatus to implement the method for converting a picture into a video according to any one of claims 1-10.
  13. A computer-readable storage medium storing a computer program, wherein the program, when executed by a processing apparatus, implements the method for converting a picture into a video according to any one of claims 1-10.
PCT/CN2021/119409 2020-09-29 2021-09-18 Method, apparatus and device for converting a picture into a video, and storage medium WO2022068631A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023507403A JP7471510B2 (ja) 2020-09-29 2021-09-18 ピクチャのビデオへの変換の方法、装置、機器および記憶媒体
EP21874292.2A EP4181517A4 (en) 2020-09-29 2021-09-18 METHOD AND APPARATUS FOR CONVERTING IMAGE INTO VIDEO, APPARATUS AND STORAGE MEDIUM
US18/148,372 US11893770B2 (en) 2020-09-29 2022-12-29 Method for converting a picture into a video, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011052771.7 2020-09-29
CN202011052771.7A CN114339447B (zh) 2020-09-29 2020-09-29 Method, apparatus, device and storage medium for converting a picture into a video

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/148,372 Continuation US11893770B2 (en) 2020-09-29 2022-12-29 Method for converting a picture into a video, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022068631A1 true WO2022068631A1 (zh) 2022-04-07

Family

ID=80949544

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119409 WO2022068631A1 (zh) 2020-09-29 2021-09-18 图片转视频的方法、装置、设备及存储介质

Country Status (5)

Country Link
US (1) US11893770B2 (zh)
EP (1) EP4181517A4 (zh)
JP (1) JP7471510B2 (zh)
CN (1) CN114339447B (zh)
WO (1) WO2022068631A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797494A (zh) * 2023-02-06 2023-03-14 武汉精臣智慧标识科技有限公司 Picture binarization processing method and apparatus, electronic device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823973B (zh) * 2023-08-25 2023-11-21 湖南快乐阳光互动娱乐传媒有限公司 Black-and-white video colorization method and apparatus, and computer-readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299277A (zh) * 2008-06-25 2008-11-05 北京中星微电子有限公司 Method and system for colorization processing of a black-and-white image
US20130141439A1 (en) * 2011-12-01 2013-06-06 Samsung Electronics Co., Ltd. Method and system for generating animated art effects on static images
CN104202540A (zh) * 2014-09-28 2014-12-10 北京金山安全软件有限公司 Method and system for generating a video using pictures
CN108322837A (zh) * 2018-01-10 2018-07-24 链家网(北京)科技有限公司 Picture-based video generation method and apparatus
CN109741408A (zh) * 2018-11-23 2019-05-10 成都品果科技有限公司 Real-time rendering method for cartoon effects in images and videos

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69308024T2 (de) * 1993-11-23 1997-08-14 Agfa Gevaert Nv Method and arrangement for localizing saturated picture elements on an X-ray image display device
US5668596A (en) * 1996-02-29 1997-09-16 Eastman Kodak Company Digital imaging device optimized for color performance
JP4158332B2 (ja) 2000-02-03 2008-10-01 コニカミノルタビジネステクノロジーズ株式会社 Color image processing apparatus
JP3830350B2 (ja) * 2001-01-31 2006-10-04 株式会社リコー Color image processing method, color image processing apparatus, program, and recording medium
JP2004142423A (ja) * 2002-08-29 2004-05-20 Seiko Epson Corp Tone setting for monochrome image printing
US7728844B2 (en) * 2004-07-09 2010-06-01 Nokia Corporation Restoration of color components in an image model
JP2009141678A (ja) 2007-12-06 2009-06-25 Fujifilm Corp Digital photo frame and image display method thereof
CN101325664A (zh) * 2008-07-25 2008-12-17 航天恒星科技股份有限公司 Method for processing the color of superimposed information in a digital image
JP2010092094A (ja) 2008-10-03 2010-04-22 Nikon Corp Image processing apparatus, image processing program, and digital camera
JP2010101924A (ja) * 2008-10-21 2010-05-06 Sony Corp Image processing apparatus, image processing method, and program
JP5004309B2 (ja) * 2009-02-18 2012-08-22 ソニーモバイルコミュニケーションズ, エービー Moving image output method and moving image output apparatus
JP2012068713A (ja) 2010-09-21 2012-04-05 Sony Corp Information processing apparatus and information processing method
CN102542526B (zh) * 2011-11-10 2014-04-16 浙江大学 Image decolorization method
CN102547063B (zh) * 2012-02-08 2014-06-11 南京航空航天大学 Natural-sense color fusion method based on color contrast enhancement
US8897548B2 (en) * 2013-04-02 2014-11-25 National Chung Cheng University Low-complexity method of converting image/video into 3D from 2D
WO2016029395A1 (en) 2014-08-28 2016-03-03 Qualcomm Incorporated Temporal saliency map
JP2017112550A (ja) 2015-12-18 2017-06-22 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
CN107968946B (zh) * 2016-10-18 2021-09-21 深圳万兴信息科技股份有限公司 Video frame rate up-conversion method and apparatus
WO2018132760A1 (en) 2017-01-13 2018-07-19 Warner Bros. Entertainment, Inc. Adding motion effects to digital still images
US20200221165A1 (en) 2019-01-07 2020-07-09 NoviSign Ltd Systems and methods for efficient video content transition effects generation
CN109729286B (zh) * 2019-01-28 2021-08-17 北京晶品特装科技股份有限公司 Method for realizing dynamic graphics by superimposition in a video


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4181517A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797494A (zh) * 2023-02-06 2023-03-14 武汉精臣智慧标识科技有限公司 Picture binarization processing method and apparatus, electronic device, and storage medium
CN115797494B (zh) * 2023-02-06 2023-05-23 武汉精臣智慧标识科技有限公司 Picture binarization processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
EP4181517A4 (en) 2023-11-29
EP4181517A1 (en) 2023-05-17
US11893770B2 (en) 2024-02-06
JP7471510B2 (ja) 2024-04-19
CN114339447A (zh) 2022-04-12
CN114339447B (zh) 2023-03-21
US20230140558A1 (en) 2023-05-04
JP2023538825A (ja) 2023-09-12

Similar Documents

Publication Publication Date Title
WO2021196903A1 (zh) Video processing method and apparatus, readable medium, and electronic device
CN112380379B (zh) Lyric special-effect display method and apparatus, electronic device, and computer-readable medium
CN110070496B (zh) Image special-effect generation method and apparatus, and hardware apparatus
CN112165632B (zh) Video processing method, apparatus, and device
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
CN111629151B (zh) Video co-shooting method and apparatus, electronic device, and computer-readable medium
EP4131983A1 (en) Method and apparatus for processing three-dimensional video, readable storage medium, and electronic device
FR3047579A1 (zh)
WO2022095840A1 (zh) Live-streaming room creation method and apparatus, electronic device, and storage medium
GB2594214A (en) Image display method and apparatus
US11871137B2 (en) Method and apparatus for converting picture into video, and device and storage medium
CN112418249A (zh) Mask image generation method and apparatus, electronic device, and computer-readable medium
WO2023138441A1 (zh) Video generation method and apparatus, device, and storage medium
WO2023098576A1 (zh) Image processing method and apparatus, device, and medium
WO2022237435A1 (zh) Method for replacing the background in a picture, and device, storage medium, and program product
WO2023035973A1 (zh) Video processing method and apparatus, device, and medium
WO2022213801A1 (zh) Video processing method, apparatus, and device
CN113473236A (zh) Screen-recording video processing method and apparatus, readable medium, and electronic device
CN113837918A (zh) Method and apparatus for implementing rendering isolation with multiple processes
WO2023185511A1 (zh) Video generation method and apparatus, electronic device, and storage medium
WO2022213798A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2023016150A1 (zh) Image processing method and apparatus, device, and storage medium
WO2024082863A1 (zh) Image processing method and electronic device
WO2023098609A1 (zh) Image special-effect package generation method, apparatus, device, and storage medium
CN117692699A (zh) Video generation method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21874292

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023507403

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021874292

Country of ref document: EP

Effective date: 20230209

NENP Non-entry into the national phase

Ref country code: DE