WO2021135864A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2021135864A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
video frame
frame image
video
target object
Prior art date
Application number
PCT/CN2020/134683
Other languages
English (en)
French (fr)
Inventor
李小奇
倪光耀
Original Assignee
北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority to EP20910034.6A priority Critical patent/EP4068794A4/en
Priority to KR1020227026365A priority patent/KR20220123073A/ko
Priority to JP2022540466A priority patent/JP7467642B2/ja
Priority to BR112022012896A priority patent/BR112022012896A2/pt
Publication of WO2021135864A1 publication Critical patent/WO2021135864A1/zh
Priority to US17/849,859 priority patent/US11798596B2/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/036 - Insert-editing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 - Mixing
    • H04N5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 - Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4318 - Generation of visual interfaces by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 - Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44016 - Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Definitions

  • the present disclosure relates to the field of image processing technology, and in particular, to an image processing method, device, and computer-readable storage medium.
  • the technical problem solved by the present disclosure is to provide an image processing method to at least partially solve the technical problem that the display effect of the image special effect function in the prior art is not flexible enough.
  • an image processing device, an image processing hardware device, a computer-readable storage medium, and an image processing terminal are also provided.
  • An image processing method, including: acquiring an original video; selecting a first video frame image from the original video; selecting a second video frame image containing a target object from the original video, and separating, from the second video frame image, the image portion occupied by the target object; transparentizing that image portion to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special effect image; and replacing the second video frame image with the special effect image in the original video to form a target video.
  • An image processing device including:
  • the video acquisition module is used to acquire the original video
  • An image selection module for selecting a first video frame image from the original video
  • the image selection module is further configured to select a second video frame image containing a target object from the original video, and to separate, from the second video frame image, the image portion occupied by the target object;
  • a transparentization processing module, configured to transparentize the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and to superimpose the transparentized image on the first video frame image to obtain a special effect image;
  • the image replacement module is used to replace the second video frame image with the special effect image in the original video to form a target video.
  • An electronic device including:
  • Memory for storing non-transitory computer readable instructions
  • the processor is configured to run the computer-readable instructions, so that when they are executed, the processor implements the image processing method described in any one of the foregoing.
  • a computer-readable storage medium for storing non-transitory computer-readable instructions.
  • the computer is caused to execute the image processing method described in any one of the above.
  • An image processing terminal includes any image processing device described above.
  • In the embodiments of the present disclosure, the original video is obtained, and the first video frame image is selected from the original video; the second video frame image containing the target object is selected from the original video, and the image portion occupied by the target object in the second video frame image is separated from it; that image portion is transparentized to obtain a transparentized image, which is superimposed on the first video frame image to obtain a special effect image; and the special effect image replaces the second video frame image in the original video to form a target video. The user can thus see the same target object in different video frames at the same time, so the display effect is more flexible.
  • Fig. 1a is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • Fig. 1b is a schematic diagram of a first video frame image in an image processing method according to an embodiment of the present disclosure
  • FIG. 1c is a schematic diagram of a "soul out of body" special effect image in the image processing method according to an embodiment of the present disclosure;
  • Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • Fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • Fig. 4 is a schematic structural diagram of an image processing device according to an embodiment of the present disclosure.
  • Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides an image processing method. As shown in Fig. 1a, the image processing method mainly includes the following steps S11 to S15.
  • Step S11 Obtain the original video.
  • the original video may be a live video, a video pre-stored locally in the terminal, a dynamic image, or an image sequence composed of a series of static pictures.
  • the video image can be obtained in real time through the camera or video camera of the terminal.
  • the terminal may be a mobile terminal (for example, a smart phone, an iPhone, a tablet computer, a notebook or a wearable device), or a fixed terminal (for example, a desktop computer).
  • Step S12 Select a first video frame image from the original video.
  • the first video frame image may or may not include the target object.
  • the first video frame image may be the first frame image, an intermediate frame image, or the last frame image of the original video.
  • the target object can be preset, for example, it can be a portrait of a person, an animal (for example, a cat, a dog, etc.), etc.
  • the existing target detection algorithm can be used to detect the video image to obtain the target object.
  • the target detection algorithm used may be a deep-learning-based target detection algorithm or a neural-network-based image recognition algorithm.
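The disclosure names deep-learning detection but does not fix a specific algorithm. As a hypothetical, much simpler stand-in (not the approach named above), a background-difference mask can illustrate what "separating the image portion occupied by the target object" produces; the frame representation and function name here are illustrative assumptions:

```python
# Hypothetical stand-in for the target-object detector: a simple
# background-difference mask. Frames are lists of rows of (r, g, b) tuples.
def segment_target(frame, background, threshold=30):
    """Return a boolean mask that is True where `frame` differs from
    `background` by more than `threshold` in any color channel."""
    mask = []
    for frame_row, bg_row in zip(frame, background):
        mask.append([
            any(abs(f - b) > threshold for f, b in zip(fp, bp))
            for fp, bp in zip(frame_row, bg_row)
        ])
    return mask

background = [[(0, 0, 0)] * 4 for _ in range(3)]  # all-black background
frame = [row[:] for row in background]
frame[1][2] = (200, 180, 160)                     # one "target object" pixel
mask = segment_target(frame, background)
```

A real implementation would replace this with the segmentation output of a detection network; only the resulting per-pixel mask matters for the later transparentization step.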
  • Step S13 Select a second video frame image containing a target object from the original video, and separate the image portion occupied by the target object in the second video frame image from the second video frame image.
  • the second video frame image contains a target object
  • the target object may be a portrait of a person or an animal (for example, a cat, a dog, etc.).
  • the first video frame image and the second video frame image may contain the same target object.
  • the second video frame image may be the first frame image, or the intermediate frame image, or the last frame image of the original video.
  • the first video frame image and the second video frame image are two different images (for example, the position and/or posture of the target object in the video are different). Except for the different image parts of the target object in the video, the remaining image parts in the first video frame image and the second video frame image, that is, the background image part, may be the same or different.
  • the second video frame image may be an image that is played after the first video frame image in the original video, that is, the playback time of the first video frame image in the original video is earlier than the playback time of the second video frame image.
  • the latest collected video frame is taken as the second video frame image, and one frame is selected as the first video frame image from historical video frame images that are the same as or similar to the second video frame image.
  • Step S14 Perform transparentization processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special effect image.
  • Fig. 1b is the first video frame image
  • Fig. 1c is the special effect image, which presents the "soul out of body" effect.
  • Step S15 Replace the second video frame image with the special effect image in the original video to form a target video.
  • When the special effect production scene is post-production of an existing video, the second video frame image is replaced by the special effect image in the original video, so that the target video is obtained.
  • When the special effect production scene is real-time special effect processing of a video being shot and played, the special effect image is displayed in place of the second video frame image during playback to form the target video; here the original video is the captured video, and the target video is the displayed video.
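Steps S11 to S15 can be sketched end to end as follows. This is an illustrative outline under simplifying assumptions (frames as nested lists of RGB tuples, a caller-supplied `segment` function standing in for target-object separation, the first frame used as the first video frame image), not the patented implementation:

```python
def make_target_video(original_video, segment, alpha=0.5):
    """Sketch of steps S11-S15: take the first frame as the first video
    frame image, and for every later frame (the second video frame image)
    overlay its target-object pixels on the first frame with transparency
    `alpha`, replacing that frame in the output."""
    first = original_video[0]                 # S12: first video frame image
    target_video = [first]
    for frame in original_video[1:]:          # S13: second video frame image
        mask = segment(frame)                 # image portion of target object
        effect = [row[:] for row in first]
        for y, row in enumerate(frame):
            for x, pixel in enumerate(row):
                if mask[y][x]:                # S14: transparentize + overlay
                    effect[y][x] = tuple(
                        round(alpha * p0 + (1 - alpha) * p1)
                        for p0, p1 in zip(pixel, effect[y][x])
                    )
        target_video.append(effect)           # S15: replace in original video
    return target_video
```

In the real-time scene described above, the loop body would instead run once per newly captured frame, displaying `effect` immediately rather than accumulating a list.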
  • In this embodiment, the first video frame image is selected from the original video; the second video frame image containing the target object is selected from the original video, and the image portion occupied by the target object is separated from it; that image portion is transparentized to obtain a transparentized image, which is superimposed on the first video frame image to obtain a special effect image; and the special effect image replaces the second video frame image in the original video to form a target video. This enables the user to see the same target object in different video frames at the same time, so the display effect is more flexible.
  • Optionally, the first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
  • For example, the target object contained in the first video frame image may be an image of a person squatting down, and the target object contained in the second video frame image may be an image of the same person after standing up.
  • step S14 specifically includes:
  • Step S141 Obtain the first pixel values of the red, blue and green three color channels corresponding to the first video frame image.
  • the first pixel values include the pixel values of at least one of the red, green, and blue color channels.
  • If the first video frame image is not an RGB (red, green, blue) image, it can be converted into an RGB image through color space conversion, and the pixel values of the red, green, and blue color channels can then be obtained.
  • Step S142 Obtain the second pixel values of the red, blue and green three-color channels corresponding to the image portion occupied by the target object in the second video frame image.
  • the second pixel values include the pixel values of at least one of the red, green, and blue color channels.
  • If the image portion occupied by the target object in the second video frame image is not an RGB image, it can be converted into an RGB image through color space conversion, and the pixel values of the red, green, and blue color channels can then be obtained.
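As one hedged example of such a color space conversion (assuming full-range BT.601 YUV input; the disclosure does not specify the source color space), a single sample can be converted to RGB as:

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample (0-255, chroma centered
    at 128) to an (r, g, b) tuple, clamping each channel to 0-255."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(r), clamp(g), clamp(b)
```

Other source formats (e.g. limited-range YUV or CMYK) would need different coefficients; the point is only that RGB channel values are recovered before blending.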
  • Step S143 Determine a special effect image according to the first pixel value, the second pixel value and the preset transparency.
  • the preset transparency can be customized, and the value can be 0.5.
  • step S143 specifically includes: determining each color channel of the special effect image from the corresponding channels of the two images and the preset transparency, wherein:
  • r, g, b are the pixel values of the red, green, and blue channels corresponding to the special effect image;
  • r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image, respectively;
  • r0, g0, and b0 are respectively the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image.
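Reading the definitions above as a standard per-channel alpha blend (an assumption; the disclosure's literal weighting is not reproduced in this text), each special-effect channel can be sketched as r = a*r0 + (1 - a)*r1, with a the preset transparency (for example 0.5):

```python
def blend_pixel(first_pixel, target_pixel, alpha=0.5):
    """Blend one target-object pixel (r0, g0, b0) over the corresponding
    first-video-frame pixel (r1, g1, b1) with preset transparency `alpha`,
    giving the special-effect pixel (r, g, b)."""
    r1, g1, b1 = first_pixel
    r0, g0, b0 = target_pixel
    return tuple(round(alpha * c0 + (1 - alpha) * c1)
                 for c0, c1 in zip((r0, g0, b0), (r1, g1, b1)))
```

With alpha = 0.5 the target object appears half-transparent over the first frame, which matches the "soul out of body" effect shown in Fig. 1c.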
  • Compared with Embodiment 1, this embodiment further defines how the first video frame image is selected from the original video. Based on this limitation, the specific implementation of this embodiment is shown in FIG. 2 and includes the following steps S21 to S25.
  • Step S21 Obtain the original video.
  • Step S22 In response to determining that the image portion occupied by the target object does not change within a video clip of a preset duration in the original video, select the first video frame image from the video clip.
  • the first video frame image includes the target object.
  • When the target object (for example, the user) maintains the same pose in the original video for a preset time (for example, 2s), one frame image from that period is used as the first video frame image.
  • For example, the last frame image of the original video in the time period is used as the first video frame image.
  • In a real-time scene, the currently captured image is acquired as the first video frame image, that is, the last frame image within the 2s window.
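The pose-hold trigger described above (an unchanged target-object region for a preset duration, e.g. 2s) can be sketched as follows. The frame representation and function name are hypothetical, frames are compared for exact equality for simplicity, and the hold window is assumed to span at least two frames:

```python
def select_first_frame(frames, fps, hold_seconds=2):
    """Return the index of the frame that ends a run of `hold_seconds`
    worth of consecutive identical target-object snapshots (i.e. the last
    frame of the unchanged window, used as the first video frame image),
    or None if no such run exists. `frames` holds comparable snapshots."""
    needed = int(fps * hold_seconds)   # frames the pose must be held for
    run = 1
    for i in range(1, len(frames)):
        run = run + 1 if frames[i] == frames[i - 1] else 1
        if run >= needed:
            return i                   # last frame within the hold window
    return None
```

A production version would compare segmentation masks with a tolerance rather than requiring bit-exact frames.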
  • Step S23 Select a second video frame image containing a target object from the original video, and separate the image portion occupied by the target object in the second video frame image from the second video frame image.
  • The second video frame image is the latest video frame collected after the playback time of the original video exceeds the preset duration (for example, 2s).
  • The image portion occupied by the target object does not change within the video clip of the preset duration, and changes once the preset duration is exceeded. That is, the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
  • Step S24 Perform transparentization processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special effect image.
  • the target object is separated from the current image.
  • the image part occupied by the target object in the second video frame image is superimposed on the first video frame image with a preset transparency.
  • Step S25 Replace the second video frame image with the special effect image in the original video to form a target video.
  • When the target object (for example, the user) holds a pose for a preset time (for example, 2s), a translucent portrait effect as shown in FIG. 1c is triggered.
  • For step S21 and steps S23 to S25, please refer to Embodiment 1 above; details are not repeated here.
  • Compared with Embodiment 1, this embodiment further defines how the first video frame image is selected from the original video. Based on this limitation, the specific implementation of this embodiment is shown in FIG. 3 and includes the following steps S31 to S35.
  • Step S31 Obtain the original video.
  • Step S32 In response to detecting a preset action of the target object in the original video, select from the original video a video frame image in which the target object performs the preset action as the first video frame image.
  • The preset action can be a preset posture (for example, an OK gesture, hand waving, or head shaking), or a change in the posture of the target object, for example, from standing to squatting, from squatting to standing up, or from bowing the head to looking up.
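As an illustrative sketch of the preset-action trigger (assuming a hypothetical upstream pose classifier that labels each frame, which the disclosure does not specify), the first video frame image can be chosen as the frame where the posture switches:

```python
def frame_of_pose_change(pose_labels, before="squat", after="stand"):
    """Return the index of the first frame whose pose label switches from
    `before` to `after` (a hypothetical stand-in for detecting a preset
    action such as standing up), or None if the change never occurs."""
    for i in range(1, len(pose_labels)):
        if pose_labels[i - 1] == before and pose_labels[i] == after:
            return i
    return None
```

Gesture-style preset actions (OK gesture, hand waving) would instead match a single label rather than a transition between two labels.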
  • Step S33 Select a second video frame image containing a target object from the original video, and separate the image portion occupied by the target object in the second video frame image from the second video frame image.
  • The first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
  • Step S34 Perform transparentization processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special effect image.
  • Step S35 Replace the second video frame image with the special effect image in the original video to form a target video.
  • For step S31 and steps S33 to S35, please refer to Embodiment 1 above; details are not repeated here.
  • the device embodiments of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure.
  • an embodiment of the present disclosure provides an image processing device.
  • the device can execute the steps in the image processing method embodiments described in the first embodiment, the second embodiment, and the third embodiment.
  • the device mainly includes: a video acquisition module 41, an image selection module 42, a transparency processing module 43, and an image replacement module 44; among them,
  • the video acquisition module 41 is used to acquire the original video
  • the image selection module 42 is configured to select a first video frame image from the original video
  • the image selection module 42 is further configured to select a second video frame image containing a target object from the original video, and to separate, from the second video frame image, the image portion occupied by the target object;
  • the transparentization processing module 43 is configured to transparentize the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and to superimpose the transparentized image on the first video frame image to obtain a special effect image;
  • the image replacement module 44 is configured to replace the second video frame image with the special effect image in the original video to form a target video.
  • Optionally, the first video frame image includes the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
  • Optionally, the image selection module 42 is specifically configured to: in response to the image portion occupied by the target object not changing within a video clip of a preset duration in the original video, select the first video frame image from the video clip.
  • the first video frame image is the last frame image in the time period of the original video.
  • the image selection module 42 is specifically configured to: in response to a preset action of the target object in the original video, select a video frame image in which the target object has a preset action from the original video as The first video frame image.
  • the target object is a portrait.
  • The transparentization processing module 43 is specifically configured to: obtain the first pixel values of the red, green, and blue color channels corresponding to the first video frame image; obtain the second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image; and determine the special effect image according to the first pixel values, the second pixel values, and the preset transparency.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 5 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • The electronic device 500 may include a processing device (such as a central processing unit or a graphics processor) 501, which may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • The following devices can be connected to the I/O interface 505: input devices 506 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 507 such as a liquid crystal display (LCD), speakers, and vibrators; storage devices 508 such as a magnetic tape or a hard disk; and a communication device 509.
  • the communication device 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data.
  • Although FIG. 5 shows an electronic device 500 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502.
  • When the computer program is executed by the processing device 501, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • The client and server can communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • when the one or more programs are executed by the electronic device, they cause the electronic device to: obtain an original video; select a first video frame image from the original video; select, from the original video, a second video frame image containing a target object, and separate, from the second video frame image, the image portion occupied by the target object in the second video frame image;
  • perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special-effect image; and replace the second video frame image with the special-effect image in the original video to form a target video.
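The sequence these bullets describe (pick a base frame, separate the target object from a later frame, blend it semi-transparently onto the base frame, and substitute the result back into the video) can be sketched in Python with NumPy. This is a minimal illustration, not the patent's implementation: the frame shapes, the boolean `mask` input, and the function names are assumptions, and the segmentation step that produces the mask is left to any external method.

```python
import numpy as np

def special_effect_frame(first_frame, second_frame, mask, alpha=0.5):
    """Superimpose the target-object pixels of `second_frame` onto
    `first_frame` with the preset transparency `alpha`.

    first_frame, second_frame: HxWx3 uint8 RGB arrays.
    mask: HxW boolean array, True where the target object occupies
          pixels in `second_frame` (from any segmentation step).
    """
    out = first_frame.astype(np.float32).copy()
    fg = second_frame.astype(np.float32)
    # per channel, inside the mask only: c = c1*(1-a) + c0*a
    out[mask] = out[mask] * (1.0 - alpha) + fg[mask] * alpha
    return out.round().astype(np.uint8)

def apply_effect(frames, first_idx, second_idx, mask, alpha=0.5):
    """Replace the second frame with the special-effect image,
    forming the target video (returned as a new list of frames)."""
    effect = special_effect_frame(frames[first_idx], frames[second_idx],
                                  mask, alpha)
    target = list(frames)
    target[second_idx] = effect
    return target
```

Outside the mask the special-effect frame equals the first video frame image, so only the target object appears "ghosted" over the base frame.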
  • the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • the above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, program segment, or part of code, which contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown one after another can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure can be implemented in software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
  • for example, the first obtaining unit can also be described as "a unit that obtains at least two Internet protocol addresses".
  • exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • more specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • an image processing method including: obtaining an original video; selecting a first video frame image from the original video; selecting, from the original video, a second video frame image containing a target object, and separating, from the second video frame image, the image portion occupied by the target object in the second video frame image; performing transparency processing on that image portion to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image; and replacing the second video frame image with the special-effect image in the original video to form a target video.
  • the first video frame image includes the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
  • the selection of the first video frame image from the original video is specifically: in response to no change in the image portion occupied by the target object within a video clip of a preset duration in the original video, the first video frame image is selected from the video clip.
  • the first video frame image is the last frame image in the time period of the original video.
  • the selection of the first video frame image from the original video is specifically: in response to the target object performing a preset action in the original video, a video frame image in which the target object performs the preset action is selected from the original video as the first video frame image.
  • the target object is a portrait.
  • the performing transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image, includes:
  • obtaining first pixel values of the red, green, and blue color channels corresponding to the first video frame image; obtaining second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image; and determining the special-effect image according to the first pixel values, the second pixel values, and a preset transparency.
  • the determining a special-effect image according to the first pixel values, the second pixel values, and a preset transparency includes: determining the special-effect image according to the formulas r = r1×(1-a)+r0×a, g = g1×(1-a)+g0×a, and b = b1×(1-a)+b0×a;
  • where r, g, and b are the pixel values of the red, green, and blue channels corresponding to the special-effect image; r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image; r0, g0, and b0 are the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image; and a is the preset transparency.
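The three channel equations above can be checked with a small scalar sketch. The function names are illustrative assumptions; `a` is the preset transparency, which the description elsewhere suggests may be 0.5.

```python
def blend_channel(c1, c0, a):
    """c = c1*(1-a) + c0*a for one color channel, clamped to 0..255."""
    return max(0, min(255, round(c1 * (1 - a) + c0 * a)))

def blend_pixel(p1, p0, a=0.5):
    """Apply the formula to an (r, g, b) pixel pair: p1 comes from the
    first video frame image, p0 from the image portion occupied by the
    target object in the second video frame image."""
    return tuple(blend_channel(c1, c0, a) for c1, c0 in zip(p1, p0))
```

With a = 0.5, a pure-white target pixel over a black background blends to (128, 128, 128), i.e. the half-transparent "ghost" appearance.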
  • an image processing apparatus including:
  • the video acquisition module is used to acquire the original video
  • An image selection module for selecting a first video frame image from the original video
  • the image selection module is further configured to select, from the original video, a second video frame image containing a target object, and to separate, from the second video frame image, the image portion occupied by the target object in the second video frame image;
  • a transparency processing module, configured to perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and to superimpose the transparentized image on the first video frame image to obtain a special-effect image;
  • the image replacement module is used to replace the second video frame image with the special effect image in the original video to form a target video.
  • the first video frame image includes the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
  • the image selection module is specifically configured to: in response to no change in the image portion occupied by the target object within a video clip of a preset duration in the original video, select the first video frame image from the video clip.
  • the first video frame image is the last frame image in the time period of the original video.
  • the image selection module is specifically configured to: in response to the target object performing a preset action in the original video, select a video frame image in which the target object performs the preset action from the original video as the first video frame image.
  • the target object is a portrait.
  • the transparency processing module is specifically configured to: obtain first pixel values of the red, green, and blue color channels corresponding to the first video frame image; obtain second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image; and determine the special-effect image according to the first pixel values, the second pixel values, and a preset transparency.
  • the transparency processing module determines the special-effect image according to the formulas r = r1×(1-a)+r0×a, g = g1×(1-a)+g0×a, and b = b1×(1-a)+b0×a, where r, g, and b are the pixel values of the red, green, and blue channels corresponding to the special-effect image; r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image; r0, g0, and b0 are the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image; and a is the preset transparency.
  • an electronic device including:
  • Memory for storing non-transitory computer readable instructions
  • the processor is configured to run the computer-readable instructions such that, when executed, the processor implements the above-mentioned image processing method.
  • a computer-readable storage medium for storing non-transitory computer-readable instructions.
  • when the non-transitory computer-readable instructions are executed by a computer, they cause the computer to execute the above-mentioned image processing method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure discloses an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining an original video; selecting a first video frame image from the original video; selecting, from the original video, a second video frame image containing a target object, and separating, from the second video frame image, the image portion occupied by the target object in the second video frame image; performing transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image; and replacing the second video frame image with the special-effect image in the original video to form a target video. Embodiments of the present disclosure allow a user to see images of the same target object in different video frames at the same time, making the display effect more flexible.

Description

Image processing method and apparatus
This application claims priority to Chinese patent application No. 201911397521.4, titled "Image Processing Method and Apparatus" and filed with the China National Intellectual Property Administration on December 30, 2019, the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method, an image processing apparatus, and a computer-readable storage medium.
Background
With the continuous development of Internet technology and image processing technology, adding special effects to images during image capture has gradually become popular. By selecting a corresponding special-effect function, users can add their favorite effects to captured images, making image capture more entertaining.
The image special-effect functions in the prior art are not flexible enough.
Summary
This Summary is provided to introduce concepts in a simplified form; the concepts are described in detail in the Detailed Description below. This Summary is not intended to identify key or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.
The technical problem solved by the present disclosure is to provide an image processing method that at least partially solves the technical problem in the prior art that the display effect of image special-effect functions is not flexible enough. In addition, an image processing apparatus, an image processing hardware apparatus, a computer-readable storage medium, and an image processing terminal are also provided.
To achieve the above object, according to one aspect of the present disclosure, the following technical solution is provided:
An image processing method, including:
obtaining an original video;
selecting a first video frame image from the original video;
selecting, from the original video, a second video frame image containing a target object, and separating, from the second video frame image, the image portion occupied by the target object in the second video frame image;
performing transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image; and
replacing the second video frame image with the special-effect image in the original video to form a target video.
To achieve the above object, according to one aspect of the present disclosure, the following technical solution is provided:
An image processing apparatus, including:
a video obtaining module, configured to obtain an original video;
an image selection module, configured to select a first video frame image from the original video;
the image selection module being further configured to select, from the original video, a second video frame image containing a target object, and to separate, from the second video frame image, the image portion occupied by the target object in the second video frame image;
a transparency processing module, configured to perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and to superimpose the transparentized image on the first video frame image to obtain a special-effect image; and
an image replacement module, configured to replace the second video frame image with the special-effect image in the original video to form a target video.
To achieve the above object, according to one aspect of the present disclosure, the following technical solution is provided:
An electronic device, including:
a memory, configured to store non-transitory computer-readable instructions; and
a processor, configured to run the computer-readable instructions such that, when executed, the processor implements the image processing method according to any one of the above.
To achieve the above object, according to one aspect of the present disclosure, the following technical solution is provided:
A computer-readable storage medium, configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to execute the image processing method according to any one of the above.
To achieve the above object, according to yet another aspect of the present disclosure, the following technical solution is further provided:
An image processing terminal, including any one of the image processing apparatuses described above.
In the embodiments of the present disclosure, an original video is obtained; a first video frame image is selected from the original video; a second video frame image containing a target object is selected from the original video, and the image portion occupied by the target object in the second video frame image is separated from the second video frame image; transparency processing is performed on that image portion to obtain a transparentized image, which is superimposed on the first video frame image to obtain a special-effect image; and the second video frame image is replaced with the special-effect image in the original video to form a target video. This allows a user to see images of the same target object in different video frames at the same time, making the display effect more flexible.
The above description is only an overview of the technical solution of the present disclosure. In order to understand the technical means of the present disclosure more clearly so that it can be implemented in accordance with the contents of the specification, and to make the above and other objects, features, and advantages of the present disclosure more obvious and understandable, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
Fig. 1a is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 1b is a schematic diagram of a first video frame image in an image processing method according to an embodiment of the present disclosure;
Fig. 1c is a schematic diagram of an out-of-body special-effect image in an image processing method according to an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
It should be understood that the steps recorded in the method embodiments of the present disclosure can be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the following description.
Embodiment 1
In order to solve the technical problem in the prior art that the display effect of image special-effect functions is not flexible enough, an embodiment of the present disclosure provides an image processing method. As shown in Fig. 1a, the image processing method mainly includes the following steps S11 to S15.
Step S11: obtain an original video.
The original video may be a live-streamed video, a video pre-stored locally on a terminal, a dynamic image, or an image sequence composed of a series of static pictures. Specifically, the video images may be acquired in real time through the camera of the terminal. The terminal may be a mobile terminal (for example, a smartphone, an iPhone, a tablet computer, a notebook, or a wearable device) or a fixed terminal (for example, a desktop computer).
Step S12: select a first video frame image from the original video.
The first video frame image may or may not contain the target object. The first video frame image may be the first frame image, an intermediate frame image, or the last frame image of the original video.
The target object may be preset, for example, a portrait or an animal image (for example, a cat or a dog). Specifically, an existing target detection algorithm may be used to detect the video images to obtain the target object. The target detection algorithm used may be a deep-learning-based target detection algorithm or a neural-network-based image recognition algorithm.
Step S13: select, from the original video, a second video frame image containing a target object, and separate, from the second video frame image, the image portion occupied by the target object in the second video frame image.
The second video frame image contains the target object, which may be a portrait or an animal image (for example, a cat or a dog). When the first video frame image also contains the target object, the first video frame image and the second video frame image may contain the same target object. The second video frame image may be the first frame image, an intermediate frame image, or the last frame image of the original video.
The first video frame image and the second video frame image are two different images (for example, the position and/or posture of the target object in the video differs). Apart from the image portion of the target object, the remaining image portions of the first and second video frame images, i.e., the background image portions, may be the same or different.
Specifically, when every frame image of the original video contains the target object, two identical or similar images may be randomly selected as the first video frame image and the second video frame image respectively. Alternatively, the second video frame image may be an image played after the first video frame image in the original video, i.e., the playback time of the first video frame image in the original video is earlier than that of the second video frame image. For example, the most recently captured video frame is taken as the second video frame image, and one frame is selected as the first video frame image from historical video frame images that are the same as or similar to the second video frame image.
Step S14: perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special-effect image.
Specifically, as shown in Fig. 1b, the special effect of this embodiment is described by taking a portrait as the target object, where Fig. 1b is the first video frame image and Fig. 1c is the special-effect image, i.e., the special-effect image realizing the out-of-body effect.
Step S15: replace the second video frame image with the special-effect image in the original video to form a target video.
Specifically, if the special-effect production scenario is post-production of an existing video, the special-effect image replaces the second video frame image in the original video to obtain the target video. If the scenario is real-time special-effect processing of a video that is being shot and played, the second video frame image can be replaced with the special-effect image for playback at the moment the second video frame image is captured; that is, what is displayed during playback of the original video forms the target video, where the original video is the captured video and the target video is the displayed, played video.
In this embodiment, a first video frame image is selected from the original video; a second video frame image containing a target object is selected from the original video, and the image portion occupied by the target object in the second video frame image is separated from it; transparency processing is performed on that image portion to obtain a transparentized image, which is superimposed on the first video frame image to obtain a special-effect image; and the second video frame image is replaced with the special-effect image in the original video to form a target video. This allows a user to see images of the same target object in different video frames at the same time, making the display effect more flexible.
In an optional embodiment, the first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
For example, the target object contained in the first video frame image may be an image of a person fully squatting, while the target object contained in the second video frame image may be an image of the person half-squatting or standing upright.
In an optional embodiment, step S14 specifically includes:
Step S141: obtain first pixel values of the red, green, and blue color channels corresponding to the first video frame image.
The first pixel values include pixel values of at least one of the red, green, and blue color channels.
If the first video frame image is not an RGB image, it can be converted into an RGB image through color-space conversion, and the pixel values of the red, green, and blue color channels are then obtained.
Step S142: obtain second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image.
The second pixel values include pixel values of at least one of the red, green, and blue color channels.
If the image portion occupied by the target object in the second video frame image is not an RGB image, it can be converted into an RGB image through color-space conversion, and the pixel values of the red, green, and blue color channels are then obtained.
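The "convert to RGB first" step can be sketched as follows. Only the grayscale case is handled directly here; this is an assumption for illustration, since the document says only that a color-space conversion is applied — a real pipeline would convert YUV or BGR frames with a library routine such as OpenCV's `cvtColor`.

```python
import numpy as np

def to_rgb(frame):
    """Ensure a frame is a 3-channel RGB array before reading its
    red/green/blue pixel values."""
    frame = np.asarray(frame)
    if frame.ndim == 2:
        # grayscale: replicate the single channel into R, G, and B
        return np.repeat(frame[:, :, None], 3, axis=2)
    if frame.ndim == 3 and frame.shape[2] == 3:
        return frame  # assumed to be RGB already
    raise ValueError("unsupported frame layout")
```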
Step S143: determine a special-effect image according to the first pixel values, the second pixel values, and a preset transparency.
The preset transparency can be customized, and may take the value 0.5.
In an optional embodiment, step S143 specifically includes:
determining the special-effect image according to the formulas r = r1×(1-a)+r0×a, g = g1×(1-a)+g0×a, and b = b1×(1-a)+b0×a, where r, g, and b are the pixel values of the red, green, and blue channels corresponding to the special-effect image; r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image; r0, g0, and b0 are the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image; and a is the preset transparency.
Embodiment 2
On the basis of the above embodiment, this embodiment further defines the step of selecting the first video frame image from the original video. Based on this definition, the specific implementation of this embodiment is shown in Fig. 2, with the following steps S21 to S25.
Step S21: obtain an original video.
Step S22: in response to no change in the image portion occupied by the target object within a video clip of a preset duration in the original video, select the first video frame image from the video clip.
The first video frame image contains the target object. Specifically, taking a portrait as the target object, in this step the target object (for example, a user) holds the same posture in the original video for a preset time (for example, 2 s); any frame image acquired within this time period is then taken as the first video frame image. For example, the last frame image of the original video within the time period is taken as the first video frame image. For example, in a scenario where the video is played while being shot, when it is detected that the target object has not changed within 2 s, the currently captured image, i.e., the last frame image within the 2 s, is taken as the first video frame image.
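One way to realize the "no change within a preset duration" trigger is to compare the target object's segmentation mask across the frames of the clip. This is an illustrative sketch under assumptions the document does not make: the masks are boolean arrays of equal shape, and `tol` (the fraction of pixels allowed to differ while still counting as "no change") is a hypothetical parameter.

```python
import numpy as np

def object_unchanged(masks, tol=0.0):
    """Return True if the image portion occupied by the target object
    (one HxW boolean mask per frame of the clip) does not change
    across the clip, up to a tolerated fraction `tol` of pixels."""
    first = masks[0]
    n = first.size
    for m in masks[1:]:
        if np.count_nonzero(m != first) / n > tol:
            return False
    return True
```

When this returns True for the frames spanning the preset duration, the last frame of the clip can be taken as the first video frame image.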
Step S23: select, from the original video, a second video frame image containing the target object, and separate, from the second video frame image, the image portion occupied by the target object in the second video frame image.
The second video frame image is the latest video image acquired when the playback time of the original video exceeds the preset duration (for example, 2 s). Within the video clip of the preset duration, the image portion occupied by the target object does not change; when the preset duration is exceeded, the image portion occupied by the target object changes. That is, the image portion occupied by the target object in the finally obtained first video frame image is different from the image portion occupied by the target object in the second video frame image.
Step S24: perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special-effect image.
Within the preset time, the target object is separated from the current image. When the preset time is reached, the image portion occupied by the target object in the second video frame image is superimposed on the first video frame image with a preset transparency.
Step S25: replace the second video frame image with the special-effect image in the original video to form a target video.
Specifically, taking a portrait as the target object, in this step the target object (for example, a user) needs to hold the same posture in the video images for a preset time (for example, 2 s). When the preset time is reached, the out-of-body special effect shown in Fig. 1c, i.e., a semi-transparent moving portrait, is triggered.
For a detailed description of the above steps S21 and S23 to S25, refer to Embodiment 1 above, which will not be repeated here.
Embodiment 3
On the basis of the above embodiments, this embodiment further defines the step of selecting the first video frame image from the original video. Based on this definition, the specific implementation of this embodiment is shown in Fig. 3, with the following steps S31 to S35.
Step S31: obtain an original video.
Step S32: in response to the target object performing a preset action in the original video, select a video frame image in which the target object performs the preset action from the original video as the first video frame image.
The preset action may be a preset gesture (for example, an OK gesture, waving a hand, or shaking the head), or a change in the posture of the target object, for example, from squatting to half-squatting, from half-squatting to standing up, or from lowering the head to raising the head.
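Recognizing gestures such as an OK sign would normally rely on a pose- or gesture-recognition model, which is outside this document's scope. As a minimal, purely illustrative heuristic (not the patent's method), a squat/stand-type posture change can be flagged by comparing the height of the target object's bounding box between two frames; the `ratio` threshold and mask representation are assumptions.

```python
import numpy as np

def mask_height(mask):
    """Height in pixels of the target object's bounding box."""
    rows = np.flatnonzero(mask.any(axis=1))
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

def posture_changed(prev_mask, cur_mask, ratio=0.8):
    """Report a squat/stand-type posture change when the bounding-box
    height shrinks or grows past `ratio` of its previous value."""
    h0, h1 = mask_height(prev_mask), mask_height(cur_mask)
    if h0 == 0 or h1 == 0:
        return False
    return min(h0, h1) / max(h0, h1) < ratio
```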
Step S33: select, from the original video, a second video frame image containing the target object, and separate, from the second video frame image, the image portion occupied by the target object in the second video frame image.
The first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
Step S34: perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special-effect image.
Step S35: replace the second video frame image with the special-effect image in the original video to form a target video.
For a detailed description of the above steps S31 and S33 to S35, refer to Embodiment 1 above, which will not be repeated here.
Those skilled in the art should understand that, on the basis of the above embodiments, obvious modifications (for example, combining the listed modes) or equivalent substitutions can also be made.
Although the steps in the image processing method embodiments above are described in the stated order, those skilled in the art should understand that the steps in the embodiments of the present disclosure are not necessarily executed in that order; they may also be executed in other orders such as reverse, parallel, or interleaved order. Moreover, on the basis of the above steps, those skilled in the art may also add other steps. These obvious modifications or equivalent substitutions should also be included within the protection scope of the present disclosure and will not be repeated here.
The following are apparatus embodiments of the present disclosure, which can be used to execute the steps implemented by the method embodiments of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown; for specific technical details not disclosed, refer to the method embodiments of the present disclosure.
Embodiment 4
In order to solve the technical problem in the prior art that the display effect of image special-effect functions is not flexible enough, an embodiment of the present disclosure provides an image processing apparatus. The apparatus can execute the steps in the image processing method embodiments described in Embodiment 1, Embodiment 2, and Embodiment 3 above. As shown in Fig. 4, the apparatus mainly includes: a video obtaining module 41, an image selection module 42, a transparency processing module 43, and an image replacement module 44, where
the video obtaining module 41 is configured to obtain an original video;
the image selection module 42 is configured to select a first video frame image from the original video;
the image selection module 42 is further configured to select, from the original video, a second video frame image containing a target object, and to separate, from the second video frame image, the image portion occupied by the target object in the second video frame image;
the transparency processing module 43 is configured to perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and to superimpose the transparentized image on the first video frame image to obtain a special-effect image; and
the image replacement module 44 is configured to replace the second video frame image with the special-effect image in the original video to form a target video.
Further, the first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
Further, the image selection module 42 is specifically configured to: in response to no change in the image portion occupied by the target object within a video clip of a preset duration in the original video, select the first video frame image from the video clip.
Further, the first video frame image is the last frame image of the original video within the time period.
Further, the image selection module 42 is specifically configured to: in response to the target object performing a preset action in the original video, select a video frame image in which the target object performs the preset action from the original video as the first video frame image.
Further, the target object is a portrait.
Further, the transparency processing module 43 is specifically configured to: obtain first pixel values of the red, green, and blue color channels corresponding to the first video frame image; obtain second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image; and determine a special-effect image according to the first pixel values, the second pixel values, and a preset transparency.
Further, the transparency processing module 43 is specifically configured to: determine the special-effect image according to the formulas r = r1×(1-a)+r0×a, g = g1×(1-a)+g0×a, and b = b1×(1-a)+b0×a, where r, g, and b are the pixel values of the red, green, and blue channels corresponding to the special-effect image; r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image; r0, g0, and b0 are the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image; and a is the preset transparency.
For detailed descriptions of the working principles and technical effects of the image processing apparatus embodiment, refer to the relevant descriptions in the foregoing image processing method embodiments, which will not be repeated here.
Embodiment 5
Referring now to Fig. 5, it shows a schematic structural diagram of an electronic device 500 suitable for implementing the embodiments of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 5, the electronic device 500 may include a processing apparatus (for example, a central processing unit or a graphics processor) 501, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage apparatus 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing apparatus 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following apparatuses can be connected to the I/O interface 505: input apparatuses 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 507 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 508 including, for example, a magnetic tape and a hard disk; and a communication apparatus 509. The communication apparatus 509 can allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data. Although Fig. 5 shows the electronic device 500 with various apparatuses, it should be understood that it is not required to implement or have all of the illustrated apparatuses; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication apparatus 509, or installed from the storage apparatus 508, or installed from the ROM 502. When the computer program is executed by the processing apparatus 501, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the above-mentioned computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), and the like, or any suitable combination of the above.
In some embodiments, the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or it may exist alone without being assembled into the electronic device.
The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain an original video; select a first video frame image from the original video; select, from the original video, a second video frame image containing a target object, and separate, from the second video frame image, the image portion occupied by the target object in the second video frame image; perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimpose the transparentized image on the first video frame image to obtain a special-effect image; and replace the second video frame image with the special-effect image in the original video to form a target video.
The computer program code for executing the operations of the present disclosure can be written in one or more programming languages or a combination thereof. The above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure can be implemented in software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself; for example, the first obtaining unit can also be described as "a unit that obtains at least two Internet protocol addresses".
The functions described herein above can be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, an image processing method is provided, including:
obtaining an original video;
selecting a first video frame image from the original video;
selecting, from the original video, a second video frame image containing a target object, and separating, from the second video frame image, the image portion occupied by the target object in the second video frame image;
performing transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image; and
replacing the second video frame image with the special-effect image in the original video to form a target video.
Further, the first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
Further, the selecting a first video frame image from the original video is specifically:
in response to no change in the image portion occupied by the target object within a video clip of a preset duration in the original video, selecting the first video frame image from the video clip.
Further, the first video frame image is the last frame image of the original video within the time period.
Further, the selecting a first video frame image from the original video is specifically:
in response to the target object performing a preset action in the original video, selecting a video frame image in which the target object performs the preset action from the original video as the first video frame image.
Further, the target object is a portrait.
Further, the performing transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image, includes:
obtaining first pixel values of the red, green, and blue color channels corresponding to the first video frame image;
obtaining second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image; and
determining a special-effect image according to the first pixel values, the second pixel values, and a preset transparency.
Further, the determining a special-effect image according to the first pixel values, the second pixel values, and a preset transparency includes:
determining the special-effect image according to the formulas r = r1×(1-a)+r0×a, g = g1×(1-a)+g0×a, and b = b1×(1-a)+b0×a, where r, g, and b are the pixel values of the red, green, and blue channels corresponding to the special-effect image; r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image; r0, g0, and b0 are the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image; and a is the preset transparency.
According to one or more embodiments of the present disclosure, an image processing apparatus is provided, including:
a video obtaining module, configured to obtain an original video;
an image selection module, configured to select a first video frame image from the original video;
the image selection module being further configured to select, from the original video, a second video frame image containing a target object, and to separate, from the second video frame image, the image portion occupied by the target object in the second video frame image;
a transparency processing module, configured to perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and to superimpose the transparentized image on the first video frame image to obtain a special-effect image; and
an image replacement module, configured to replace the second video frame image with the special-effect image in the original video to form a target video.
Further, the first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
Further, the image selection module is specifically configured to: in response to no change in the image portion occupied by the target object within a video clip of a preset duration in the original video, select the first video frame image from the video clip.
Further, the first video frame image is the last frame image of the original video within the time period.
Further, the image selection module is specifically configured to: in response to the target object performing a preset action in the original video, select a video frame image in which the target object performs the preset action from the original video as the first video frame image.
Further, the target object is a portrait.
Further, the transparency processing module is specifically configured to: obtain first pixel values of the red, green, and blue color channels corresponding to the first video frame image; obtain second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image; and determine a special-effect image according to the first pixel values, the second pixel values, and a preset transparency.
Further, the transparency processing module is specifically configured to: determine the special-effect image according to the formulas r = r1×(1-a)+r0×a, g = g1×(1-a)+g0×a, and b = b1×(1-a)+b0×a, where r, g, and b are the pixel values of the red, green, and blue channels corresponding to the special-effect image; r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image; r0, g0, and b0 are the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image; and a is the preset transparency.
According to one or more embodiments of the present disclosure, an electronic device is provided, including:
a memory, configured to store non-transitory computer-readable instructions; and
a processor, configured to run the computer-readable instructions such that, when executed, the processor implements the above image processing method.
According to one or more embodiments of the present disclosure, a computer-readable storage medium is provided, configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to execute the above image processing method.
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the disclosure scope involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring these operations to be executed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely exemplary forms of implementing the claims.

Claims (12)

  1. An image processing method, comprising:
    obtaining an original video;
    selecting a first video frame image from the original video;
    selecting, from the original video, a second video frame image containing a target object, and separating, from the second video frame image, the image portion occupied by the target object in the second video frame image;
    performing transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image; and
    replacing the second video frame image with the special-effect image in the original video to form a target video.
  2. The method according to claim 1, wherein the first video frame image contains the target object, and the image portion occupied by the target object in the first video frame image is different from the image portion occupied by the target object in the second video frame image.
  3. The method according to claim 2, wherein the selecting a first video frame image from the original video is specifically:
    in response to no change in the image portion occupied by the target object within a video clip of a preset duration in the original video, selecting the first video frame image from the video clip.
  4. The method according to claim 3, wherein the first video frame image is the last frame image within the time period of the original video.
  5. The method according to claim 1, wherein the selecting a first video frame image from the original video is specifically:
    in response to the target object performing a preset action in the original video, selecting a video frame image in which the target object performs the preset action from the original video as the first video frame image.
  6. The method according to any one of claims 1-5, wherein the target object is a portrait.
  7. The method according to any one of claims 1-5, wherein the performing transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and superimposing the transparentized image on the first video frame image to obtain a special-effect image, comprises:
    obtaining first pixel values of the red, green, and blue color channels corresponding to the first video frame image;
    obtaining second pixel values of the red, green, and blue color channels corresponding to the image portion occupied by the target object in the second video frame image; and
    determining a special-effect image according to the first pixel values, the second pixel values, and a preset transparency.
  8. The method according to claim 7, wherein the determining a special-effect image according to the first pixel values, the second pixel values, and a preset transparency comprises:
    determining the special-effect image according to the formulas r = r1×(1-a)+r0×a, g = g1×(1-a)+g0×a, and b = b1×(1-a)+b0×a, where r, g, and b are the pixel values of the red, green, and blue channels corresponding to the special-effect image; r1, g1, and b1 are the pixel values of the red, green, and blue channels corresponding to the first video frame image; r0, g0, and b0 are the pixel values of the red, green, and blue channels corresponding to the image portion occupied by the target object in the second video frame image; and a is the preset transparency.
  9. An image processing apparatus, comprising:
    a video obtaining module, configured to obtain an original video;
    an image selection module, configured to select a first video frame image from the original video;
    the image selection module being further configured to select, from the original video, a second video frame image containing a target object, and to separate, from the second video frame image, the image portion occupied by the target object in the second video frame image;
    a transparency processing module, configured to perform transparency processing on the image portion occupied by the target object in the second video frame image to obtain a transparentized image, and to superimpose the transparentized image on the first video frame image to obtain a special-effect image; and
    an image replacement module, configured to replace the second video frame image with the special-effect image in the original video to form a target video.
  10. An electronic device, comprising:
    a memory, configured to store non-transitory computer-readable instructions; and
    a processor, configured to run the computer-readable instructions such that, when executed, the processor implements the image processing method according to any one of claims 1-8.
  11. A computer-readable storage medium, configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to execute the image processing method according to any one of claims 1-8.
  12. A computer program product, wherein, when the computer program product runs on an electronic device, the electronic device is caused to execute the method according to any one of claims 1-8.
PCT/CN2020/134683 2019-12-30 2020-12-08 图像处理方法及装置 WO2021135864A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP20910034.6A EP4068794A4 (en) 2019-12-30 2020-12-08 IMAGE PROCESSING METHOD AND APPARATUS
KR1020227026365A KR20220123073A (ko) 2019-12-30 2020-12-08 이미징 프로세싱 방법 및 장치
JP2022540466A JP7467642B2 (ja) 2019-12-30 2020-12-08 画像処理方法及び装置
BR112022012896A BR112022012896A2 (pt) 2019-12-30 2020-12-08 Método de processamento de imagem, aparelho de processamento de imagem, dispositivo eletrônico e meio de armazenamento legível por computador
US17/849,859 US11798596B2 (en) 2019-12-30 2022-06-27 Image processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911397521.4A CN113132795A (zh) 2019-12-30 2019-12-30 图像处理方法及装置
CN201911397521.4 2019-12-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/849,859 Continuation US11798596B2 (en) 2019-12-30 2022-06-27 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2021135864A1 true WO2021135864A1 (zh) 2021-07-08

Family

ID=76686436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134683 WO2021135864A1 (zh) 2019-12-30 2020-12-08 图像处理方法及装置

Country Status (7)

Country Link
US (1) US11798596B2 (zh)
EP (1) EP4068794A4 (zh)
JP (1) JP7467642B2 (zh)
KR (1) KR20220123073A (zh)
CN (1) CN113132795A (zh)
BR (1) BR112022012896A2 (zh)
WO (1) WO2021135864A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923391A (zh) * 2021-09-08 2022-01-11 荣耀终端有限公司 视频处理的方法、设备、存储介质和程序产品
CN114827754A (zh) * 2022-02-23 2022-07-29 阿里巴巴(中国)有限公司 视频首帧时间检测方法及装置

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040129B (zh) * 2021-11-30 2023-12-05 北京字节跳动网络技术有限公司 视频生成方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590810A (zh) * 2017-09-22 2018-01-16 北京奇虎科技有限公司 实现双重曝光的视频数据处理方法及装置、计算设备
CN107665482A (zh) * 2017-09-22 2018-02-06 北京奇虎科技有限公司 实现双重曝光的视频数据实时处理方法及装置、计算设备
CN107705279A (zh) * 2017-09-22 2018-02-16 北京奇虎科技有限公司 实现双重曝光的图像数据实时处理方法及装置、计算设备
US20180174370A1 (en) * 2015-09-11 2018-06-21 Intel Corporation Scalable real-time face beautification of video images
CN108702463A (zh) * 2017-10-30 2018-10-23 深圳市大疆创新科技有限公司 一种图像处理方法、装置以及终端
CN108933905A (zh) * 2018-07-26 2018-12-04 努比亚技术有限公司 视频拍摄方法、移动终端和计算机可读存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4682928B2 (ja) 2005-06-17 2011-05-11 富士ゼロックス株式会社 アクションキーフレーム内における複数のビデオフレーム全体にわたる動作を視覚化する装置とそのためのプログラム
US20080183822A1 (en) 2007-01-25 2008-07-31 Yigang Cai Excluding a group member from receiving an electronic message addressed to a group alias address
CN100505707C (zh) 2007-03-30 2009-06-24 腾讯科技(深圳)有限公司 一种即时通信中群组邮件通信的方法、装置及系统
US20100250693A1 (en) 2007-12-29 2010-09-30 Tencent Technology (Shenzhen) Company Ltd. Method, apparatus for converting group message and system for exchanging group message
US8904031B2 (en) 2007-12-31 2014-12-02 Genesys Telecommunications Laboratories, Inc. Federated uptake throttling
US8103134B2 (en) 2008-02-20 2012-01-24 Samsung Electronics Co., Ltd. Method and a handheld device for capturing motion
CN102025514A (zh) 2009-09-11 2011-04-20 中兴通讯股份有限公司 即时消息与电子邮件互通的方法与系统
JP2013186521A (ja) 2012-03-06 2013-09-19 Casio Comput Co Ltd 画像処理装置、画像処理方法及びプログラム
CN104346157A (zh) * 2013-08-06 2015-02-11 腾讯科技(深圳)有限公司 一种图片处理方法及装置、终端设备
KR20150025214A (ko) * 2013-08-28 2015-03-10 삼성전자주식회사 동영상에 비주얼 객체를 중첩 표시하는 방법, 저장 매체 및 전자 장치
WO2015029392A1 (ja) 2013-08-30 2015-03-05 パナソニックIpマネジメント株式会社 メイクアップ支援装置、メイクアップ支援方法、およびメイクアップ支援プログラム
JP2015114694A (ja) 2013-12-09 2015-06-22 ソニー株式会社 画像処理装置、画像処理方法およびプログラム
CN110062269A (zh) * 2018-01-18 2019-07-26 腾讯科技(深圳)有限公司 附加对象显示方法、装置及计算机设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180174370A1 (en) * 2015-09-11 2018-06-21 Intel Corporation Scalable real-time face beautification of video images
CN107590810A (zh) * 2017-09-22 2018-01-16 北京奇虎科技有限公司 实现双重曝光的视频数据处理方法及装置、计算设备
CN107665482A (zh) * 2017-09-22 2018-02-06 北京奇虎科技有限公司 实现双重曝光的视频数据实时处理方法及装置、计算设备
CN107705279A (zh) * 2017-09-22 2018-02-16 北京奇虎科技有限公司 实现双重曝光的图像数据实时处理方法及装置、计算设备
CN108702463A (zh) * 2017-10-30 2018-10-23 深圳市大疆创新科技有限公司 一种图像处理方法、装置以及终端
CN108933905A (zh) * 2018-07-26 2018-12-04 努比亚技术有限公司 视频拍摄方法、移动终端和计算机可读存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113923391A (zh) * 2021-09-08 2022-01-11 荣耀终端有限公司 视频处理的方法、设备、存储介质和程序产品
CN113923391B (zh) * 2021-09-08 2022-10-14 荣耀终端有限公司 视频处理的方法、设备和存储介质
CN114827754A (zh) * 2022-02-23 2022-07-29 阿里巴巴(中国)有限公司 视频首帧时间检测方法及装置
CN114827754B (zh) * 2022-02-23 2023-09-12 阿里巴巴(中国)有限公司 视频首帧时间检测方法及装置

Also Published As

Publication number Publication date
US20220328072A1 (en) 2022-10-13
JP7467642B2 (ja) 2024-04-15
KR20220123073A (ko) 2022-09-05
EP4068794A4 (en) 2022-12-28
BR112022012896A2 (pt) 2022-09-06
EP4068794A1 (en) 2022-10-05
CN113132795A (zh) 2021-07-16
US11798596B2 (en) 2023-10-24
JP2023509429A (ja) 2023-03-08

Similar Documents

Publication Publication Date Title
WO2021135864A1 (zh) 图像处理方法及装置
WO2021031850A1 (zh) 图像处理的方法、装置、电子设备及存储介质
US10181203B2 (en) Method for processing image data and apparatus for the same
WO2021027631A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质
WO2023051185A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2021139372A1 (zh) 图像的处理方法、装置、可读介质和电子设备
CN110070496B (zh) 图像特效的生成方法、装置和硬件装置
WO2021170013A1 (zh) 图像特效处理方法及装置
WO2021218318A1 (zh) 视频传输方法、电子设备和计算机可读介质
WO2021197024A1 (zh) 视频特效配置文件生成方法、视频渲染方法及装置
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
WO2021031847A1 (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
WO2023040749A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2023109829A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2021129628A1 (zh) 视频特效处理方法及装置
WO2023165515A1 (zh) 拍摄方法、装置、电子设备和存储介质
WO2021227953A1 (zh) 图像特效配置方法、图像识别方法、装置及电子设备
CN111352560B (zh) 分屏方法、装置、电子设备和计算机可读存储介质
WO2021027547A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质
WO2023231918A1 (zh) 图像处理方法、装置、电子设备及存储介质
WO2023143240A1 (zh) 图像处理方法、装置、设备、存储介质和程序产品
WO2021027597A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质
WO2021073204A1 (zh) 对象的显示方法、装置、电子设备及计算机可读存储介质
WO2021052095A1 (zh) 图像处理方法及装置
WO2021027632A1 (zh) 图像特效处理方法、装置、电子设备和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910034

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022540466

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020910034

Country of ref document: EP

Effective date: 20220630

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112022012896

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 20227026365

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 112022012896

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20220628