CN111327840A - Multi-frame special-effect video acquisition method, terminal and computer readable storage medium - Google Patents


Info

Publication number: CN111327840A
Authority: CN (China)
Prior art keywords: frame, image, image frame, transformation matrix, effect video
Legal status: Pending
Application number: CN202010125516.4A
Other languages: Chinese (zh)
Inventor: 陈国庭
Current assignee: Nubia Technology Co Ltd
Original assignee: Nubia Technology Co Ltd
Application filed by Nubia Technology Co Ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • G06T 3/14

Abstract

The invention discloses a multi-frame special-effect video acquisition method, a terminal, and a computer-readable storage medium. The method comprises: obtaining a transformation matrix by image registration between acquired image frames; according to the transformation matrix, cumulatively superimposing, in sequence, the previous composite frame obtained by superposition with the current image frame; and obtaining a special-effect video from the superimposed images. By implementing this scheme, the raw data frames of the previous composite photo are reused, which shortens shooting time, so that a video in which every frame carries a multi-frame special effect can be captured in no more time than an ordinary video.

Description

Multi-frame special-effect video acquisition method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, a terminal, and a computer-readable storage medium for acquiring a multi-frame special-effect video.
Background
Existing "multi-frame special effect" photo-shooting methods continuously capture multiple image frames and synthesize them into a single photo through certain algorithms, so that special effects such as optical flow and motion smear of moving objects appear on the composite photo. The special effect on the composite photo is ultimately static; to obtain a special-effect video, many composite photos must be shot in succession and then combined into a video, which takes a long time, and a tripod is also required to assist shooting in order to achieve the video effect.
Disclosure of Invention
The technical problem to be solved by the invention is that producing a special-effect video takes a long time in the prior art. To address this problem, a multi-frame special-effect video acquisition method, a terminal, and a computer-readable storage medium are provided.
In order to solve the technical problem, the invention provides a method for acquiring a multi-frame special effect video, which comprises the following steps:
obtaining a transformation matrix according to image registration between the acquired image frames;
according to the transformation matrix, cumulatively superimposing in sequence the previous composite frame obtained by superposition with the current image frame;
and obtaining a special-effect video from the superimposed images.
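The three claimed steps can be rendered as a short loop. This is an illustrative sketch only: `register`, `warp`, and the simple averaging blend are placeholder assumptions, not the patented algorithm.

```python
import numpy as np

def register(frame_a, frame_b):
    """Step 1: image registration between two frames yields a 3x3
    transformation matrix. Placeholder: identity (no motion)."""
    return np.eye(3)

def warp(frame, matrix):
    """Align `frame` using the transformation matrix. With the identity
    placeholder this is a no-op."""
    return frame

def superimpose(composite, frame, matrix):
    """Step 2: cumulatively superimpose the previous composite frame
    with the aligned current frame (plain average as the blend)."""
    return (composite + warp(frame, matrix)) / 2.0

def special_effect_video(frames):
    """Step 3: collect each superimposed image as one frame of the
    special-effect video."""
    composite = frames[0].astype(float)
    video = []
    for prev, cur in zip(frames, frames[1:]):
        matrix = register(prev, cur)
        composite = superimpose(composite, cur.astype(float), matrix)
        video.append(composite)
    return video
```

Each output frame reuses the previous composite rather than re-blending all raw frames, which is the source of the claimed time saving.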
Optionally, the obtaining a transformation matrix according to the image registration between the acquired image frames includes:
ordering the image frames by their acquisition time;
performing image registration on any at least two adjacent image frames to obtain the corresponding transformation matrix;
and establishing the correspondence between those adjacent image frames and the transformation matrix.
Optionally, before the previous composite frame obtained by superposition is cumulatively superimposed in sequence with the current image frame according to the transformation matrix, the method includes:
superimposing the target image frame with the adjacent first image frame according to the transformation matrix corresponding to those two frames, to obtain the previous composite frame.
Optionally, the target image frame is the first image frame, or the last image frame, or an image frame determined according to an image determination instruction.
Optionally, cumulatively superimposing in sequence the previous composite frame obtained by superposition with the current image frame according to the transformation matrix includes:
superimposing the previous composite frame with the current image frame according to the transformation matrix corresponding to the first image frame and the adjacent current image frame, to obtain a new composite frame;
and continuing to superimpose the new composite frames in sequence.
Optionally, cumulatively superimposing in sequence the previous composite frame obtained by superposition with the current image frame according to the transformation matrix includes:
determining the current image frame according to an image selection instruction;
performing image registration on the first image frame and the current image frame to obtain a transformation matrix;
superimposing the previous composite frame with the current image frame according to the transformation matrix, to obtain a new composite frame;
and continuing to superimpose the new composite frames in sequence.
Optionally, superimposing the new composite frames in sequence includes:
superimposing the newly obtained composite frame with the next image frame according to the transformation matrix, until the total number of superimposed image frames reaches a threshold.
Optionally, obtaining a special-effect video from the superimposed images includes:
transmitting at least two superimposed images to an encoder to obtain the special-effect video;
after the special-effect video is obtained from the superimposed images, the method further includes:
performing multi-frame noise reduction on the special-effect video and then previewing it in real time.
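The text does not specify the multi-frame noise-reduction scheme. A common simple choice is temporal averaging over neighboring frames, sketched below; the `window` parameter and the averaging approach are assumptions for illustration.

```python
import numpy as np

def multiframe_denoise(video, window=3):
    """Temporal multi-frame noise reduction: replace each frame with the
    mean of itself and its neighbors within `window` (clipped at the
    sequence ends). Random per-frame noise averages out; static content
    is preserved."""
    out = []
    n = len(video)
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        out.append(np.mean(video[lo:hi], axis=0))
    return out
```

A real pipeline would apply this before handing the frames to the encoder for the real-time preview.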
Furthermore, the invention also provides a terminal, which comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the multi-frame special effect video acquisition method as described above.
Further, the present invention also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the multi-frame special effect video acquisition method as described above.
Advantageous effects
The invention provides a multi-frame special-effect video acquisition method, a terminal, and a computer-readable storage medium to address the long production time of existing special-effect videos. A transformation matrix is obtained by image registration between acquired image frames; according to the transformation matrix, the previous composite frame obtained by superposition is cumulatively superimposed in sequence with the current image frame; and a special-effect video is obtained from the superimposed images. By accurately obtaining the transformation matrix between image frames through image registration, cumulatively superimposing the image frames according to the transformation matrix, and reusing the raw data frames of the previous composite photo during superposition, shooting time is reduced, so that a video in which every frame carries a multi-frame special effect can be captured in no more time than an ordinary video.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing various embodiments of the present invention;
Fig. 2 is a diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a basic flowchart of a multi-frame special-effect video acquisition method according to a first embodiment of the present invention;
Fig. 4 is a detailed flowchart of a multi-frame special-effect video acquisition method according to a second embodiment of the present invention;
Fig. 5 is a flowchart of a registration thread according to a third embodiment of the present invention;
Fig. 6 is a flowchart of a multi-frame synthesis thread according to the third embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a terminal according to a fourth embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and carry no specific meaning of their own. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), and TDD-LTE (Time Division duplex-Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and the mobile terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. It may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in phone call mode, recording mode, voice recognition mode, and the like, and can process such sound into audio data. In phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system, which includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and charging functions Entity) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
First embodiment
To address the problem that producing an existing special-effect video requires continuously shooting many composite photos and then combining them into a video, which takes a long time, this embodiment provides a multi-frame special-effect video acquisition method. As shown in Fig. 3, the basic flowchart of the method provided by this embodiment, the method includes:
s301, obtaining a transformation matrix according to the image registration between the acquired image frames.
In this embodiment, multiple image frames may be acquired in real time by the terminal camera, or the terminal may receive multiple image frames sent by other terminals; image registration is then performed on the acquired image frames to obtain a transformation matrix. It should be understood that image registration is the process of transforming different images of the same scene into the same coordinate system. These images may be taken at different times (multi-temporal registration), by different sensors (multi-modal registration), or from different viewpoints. The spatial relationship between the images may be rigid (translation and rotation), affine (e.g., shear), homographic, or a complex large-deformation model. The registration flow is as follows: first, extract features from the two images to obtain feature points; find matched feature-point pairs through similarity measurement; then obtain the spatial coordinate transformation parameters of the images from the matched feature-point pairs; and finally perform image registration with the coordinate transformation parameters so that the two images are aligned.
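The last step of this flow, deriving transformation parameters from matched feature-point pairs, can be sketched as a least-squares affine fit. This is an illustrative assumption: the patent does not name a particular estimation method, and feature extraction, description, and matching are taken as already done.

```python
import numpy as np

def affine_from_matches(src_pts, dst_pts):
    """Estimate a 2x3 affine transformation matrix from one-to-one
    matched feature-point pairs by least squares."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Design matrix: one [x, y, 1] row per source point.
    design = np.hstack([src, np.ones((len(src), 1))])
    # Solve design @ params = dst in the least-squares sense,
    # one column of params per output coordinate.
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return params.T  # 2x3 matrix mapping [x, y, 1] -> [x', y']
```

For a pure translation by (2, 3), for instance, the recovered matrix is [[1, 0, 2], [0, 1, 3]].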
In this embodiment, a transformation matrix is obtained from image registration between the acquired image frames. Usually, one-to-one matched point pairs are found between the two images (feature-point extraction, description, and point-pair matching), and the transformation matrix is then derived from the matched point pairs, for example using a similarity-based center-symmetric local binary pattern descriptor. Specifically, two image frames are obtained from the camera in real time, e.g., the current image frame and the previous image frame, and registered to obtain a transformation matrix; then two more image frames are acquired from the camera in real time to obtain another transformation matrix. In some embodiments, after multiple image frames are acquired, they are ordered by acquisition time, any at least two adjacent image frames are registered to obtain the corresponding transformation matrix, and the correspondence between those adjacent image frames and the transformation matrix is established. For example, with image frames 1-6 acquired in sequence: registering the 1st and 2nd image frames gives the corresponding transformation matrix 1-2; registering the 2nd and 3rd image frames gives matrix 2-3; registering the 3rd and 4th image frames gives matrix 3-4; and registering the 5th and 6th image frames gives matrix 5-6.
In some embodiments, image registration may also be performed on any three adjacent image frames to obtain a corresponding transformation matrix; for example, registering the 1st, 2nd, and 3rd image frames yields the corresponding transformation matrix 1-2-3.
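The bookkeeping described above, registering each adjacent pair and recording the frame-pair-to-matrix correspondence (1-2, 2-3, 3-4, ...), might look like the sketch below; the `register` callback is a stand-in for the registration step.

```python
def build_transform_table(frames, register):
    """Frames are assumed already ordered by acquisition time. Register
    each adjacent pair and record the correspondence
    (i, i+1) -> transformation matrix."""
    table = {}
    for i in range(len(frames) - 1):
        table[(i, i + 1)] = register(frames[i], frames[i + 1])
    return table
```

Keeping the matrices keyed by frame-index pairs is what lets later superposition steps look up matrix 2-3 or 3-4 directly instead of re-registering.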
S302, according to the transformation matrix, cumulatively superimposing in sequence the previous composite frame obtained by superposition with the current image frame.
In this embodiment, before the previous composite frame obtained by superposition is cumulatively superimposed in sequence with the current image frame according to the transformation matrix, the previous composite frame is first determined. Specifically, the target image frame and the adjacent first image frame are superimposed according to their corresponding transformation matrix to obtain the previous composite frame; for example, the target image frame and the first image frame undergo a rigid-body or affine transformation using the transformation matrix and are then superimposed to obtain the previous composite frame.
It should be noted that the target image frame may by default be the first or last image frame. For example, when the target image frame is the first acquired image frame, the first image frame (i.e., the 1st image frame) and the adjacent 2nd image frame are superimposed according to the transformation matrix 1-2 of the 1st and 2nd image frames to obtain the previous composite frame; when the target image frame is the last image frame, the last image frame (i.e., the 6th image frame) and the adjacent 5th image frame are superimposed according to the transformation matrix 5-6 of the 5th and 6th image frames. The target image frame may also be an image frame determined by an image determination instruction: when the user specifies a particular image frame through the instruction, the terminal further determines the adjacent first image frame accordingly. For example, if the 3rd image frame is specified, the adjacent first image frame is determined to be the 4th image frame, and the 3rd and 4th image frames are superimposed according to their transformation matrix 3-4. In some embodiments, the target image frame may also be an image frame chosen at random by the terminal, and the adjacent first image frame is not limited.
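Target-frame selection and the first superposition can be sketched as follows. The default choices, the plain-average blend, and the elided warp are assumptions for illustration, not the patent's implementation.

```python
def pick_target_index(n_frames, choice=None):
    """Target image frame: the first frame by default, the last frame,
    or an index supplied by the user's image determination instruction."""
    if choice is None:
        return 0                  # default: first image frame
    if choice == "last":
        return n_frames - 1       # last image frame
    return choice                 # user-specified index

def first_composite(frames, transforms, target):
    """Superimpose the target frame with its adjacent frame using their
    transformation matrix to obtain the previous composite frame."""
    neighbor = target + 1 if target + 1 < len(frames) else target - 1
    matrix = transforms[(min(target, neighbor), max(target, neighbor))]
    _ = matrix  # a real implementation would warp with the matrix first
    return [(a + b) / 2.0 for a, b in zip(frames[target], frames[neighbor])]
```

Frames here are flat pixel lists to keep the sketch dependency-free; the matrix lookup mirrors the 1-2 / 5-6 / 3-4 pairings in the text.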
In this embodiment, sequentially and cumulatively superimposing the last composite frame obtained by superposition with the current image frame according to the transformation matrix includes: superimposing the last composite frame with the current image frame according to the transformation matrix corresponding to the first image frame and the adjacent current image frame, to obtain a new composite frame; and continuing to superimpose the new composite frames in order. Here the current image frame is the image frame adjacent to the first image frame in the image-frame acquisition order, such as the next or the previous image frame of the first image frame. For example, the 1st image frame is superimposed with the adjacent 2nd image frame according to the transformation matrix 1-2 of the 1st and 2nd image frames, and the resulting last composite frame can be used as a new 2nd image frame. If the current image frame is then the 3rd image frame, the new 2nd image frame is superimposed with the 3rd image frame according to the transformation matrix 2-3 corresponding to the 2nd and 3rd image frames, to obtain a new composite frame. The superposition then continues according to the transformation matrices: the newly obtained composite frame is superimposed with the next image frame, i.e., the image frame following the current image frame, to obtain a new composite frame again, and this step is repeated until the total number of superimposed image frames reaches a threshold. For example, after the new 2nd image frame and the 3rd image frame are superimposed to obtain a new composite frame, that composite frame can be used as a new 3rd image frame and superimposed with the 4th image frame according to the transformation matrix 3-4 corresponding to the 3rd and 4th image frames, yielding a new composite frame again; and so on, until the total number of superimposed image frames reaches the threshold. The threshold may be set by the user and may be, but is not limited to, the last frame. In other words, the raw data frames of the previous composite photo are reused, and the image frames are cumulatively superimposed according to transformation matrices with continuity, so that the superimposed image has a multi-frame special effect.
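The cumulative superposition described above can be sketched as follows. This is an illustrative sketch only: the patent does not specify the blending function or the warping routine, so a running average and a minimal nearest-neighbour homography warp (a stand-in for a production routine such as OpenCV's `warpPerspective`) are assumed here.

```python
import numpy as np

def warp(frame, H):
    """Nearest-neighbour inverse warp of a 2-D frame by a 3x3 homography.
    A minimal stand-in for a production routine such as cv2.warpPerspective."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ coords
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = frame[sy[ok], sx[ok]]
    return out

def cumulative_overlay(frames, matrices, threshold=None):
    """Cumulatively superimpose frames: frames[0] is the target image frame,
    and matrices[i] aligns the running composite with frames[i + 1].
    The previous composite's raw data is reused at every step, so each new
    composite is the blend of all frames superimposed so far."""
    if threshold is None:
        threshold = len(frames)
    composite = frames[0].astype(np.float64)
    count = 1
    for frame, H in zip(frames[1:], matrices):
        if count >= threshold:
            break
        # warp the previous composite into the current frame's coordinates,
        # then blend with a running average
        composite = (warp(composite, H) * count + frame) / (count + 1)
        count += 1
    return composite
```

With identity matrices the loop reduces to a plain running average, which makes the reuse of the previous composite's data easy to verify.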
In some embodiments, the last composite frame and the current image frame may be cumulatively superimposed according to the user's selection order. Specifically, the current image frame is determined according to an image selection instruction; image registration is performed on the first image frame and the current image frame to obtain a transformation matrix; the last composite frame is superimposed with the current image frame according to that transformation matrix to obtain a new composite frame; and the new composite frames continue to be superimposed in order. For example, the 1st image frame is superimposed with the adjacent 2nd image frame according to the transformation matrix 1-2 to obtain the last composite frame, which can be used as a new 2nd image frame. When the terminal follows the image-frame order, the current image frame would be the 3rd image frame; but if an image selection instruction is received indicating that the 4th image frame is to be superimposed with the last composite frame, then, following the user's selection order, the current image frame is the 4th image frame. If no transformation matrix corresponding to the 2nd and 4th image frames exists, image registration is performed on the 2nd and 4th image frames to obtain a transformation matrix 2-4; the last composite frame is then superimposed with the 4th image frame based on the transformation matrix 2-4 to obtain a new composite frame, and the newly obtained composite frame continues to be superimposed with the next image frame according to the transformation matrix until the total number of superimposed image frames reaches the threshold.
S303, obtaining a special effect video according to the superimposed images.
It can be understood that the special effect video is made of a plurality of composite images. In this embodiment, the superimposed image obtained through steps S301 and S302 is an image 1 with a multi-frame special effect; steps S301 and S302 are then repeated on newly acquired image frames to obtain an image 2 with a multi-frame special effect. In this embodiment, the user can also view the superimposed image on the terminal interface in real time. In some embodiments, after the user confirms that the superimposed images are acceptable, the terminal transmits at least two superimposed images to an encoder to obtain the special effect video; that is, image 1 and image 2 are transmitted to the encoder, which encodes and synthesizes them into the special effect video. In this embodiment, after the special effect video is obtained from the superimposed images, it may be subjected to multi-frame noise reduction and then previewed in real time.
In this embodiment, the terminal may also establish two threads and obtain the special effect video through them: in one thread, a transformation matrix is obtained by image registration of the image frames; in the other thread, the last composite frame obtained by superposition is sequentially and cumulatively superimposed with the current image frame according to the transformation matrix, and the special effect video is obtained from the superimposed images.
This embodiment provides a multi-frame special-effect video acquisition method. Aiming at the problem that existing methods for producing a special-effect video take a long time, a transformation matrix is obtained by image registration between the acquired image frames; the last composite frame obtained by superposition is sequentially and cumulatively superimposed with the current image frame according to the transformation matrix; and a special effect video is obtained from the superimposed images. The transformation matrix of each image frame pair is obtained accurately through image registration between image frames, the image frames are then cumulatively superimposed according to the transformation matrices, and the raw data frames of the previous composite photo are reused during superposition. This reduces shooting time, so that a video with a multi-frame special effect in every frame can be shot in the same time as an ordinary video.
Second embodiment
For ease of understanding, this embodiment describes the multi-frame special-effect video acquisition method with a specific example. As shown in fig. 4, which is a detailed flowchart of the multi-frame special-effect video acquisition method according to the second embodiment of the present invention, the method includes:
S401, acquiring a plurality of continuously shot image frames from a terminal camera.
S402, carrying out image registration on any two adjacent image frames to obtain a corresponding transformation matrix.
Assuming that 8 image frames are included, image registration is performed on the 1st and 2nd image frames, and a transformation matrix 1-2 is obtained using a similarity-based centrosymmetric local binary pattern; in the same way, transformation matrix 2-3, transformation matrix 3-4, ..., and transformation matrix 7-8 are obtained. All the transformation matrices are stored, and the correspondence between the image frames and the transformation matrices is established.
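The centrosymmetric local binary pattern (CS-LBP) mentioned above compares the four center-symmetric pixel pairs of each 8-neighbourhood, producing a 4-bit code per interior pixel; matching these codes between adjacent frames is what ultimately yields the transformation matrix. A minimal sketch of the descriptor itself follows; the threshold `t` and the subsequent matching step are not specified in the patent and are assumptions here.

```python
import numpy as np

def cs_lbp(image, t=0.01):
    """Center-symmetric LBP: for each interior pixel, compare the four
    center-symmetric neighbour pairs of its 8-neighbourhood, yielding a
    4-bit code in 0..15 per pixel."""
    img = image.astype(np.float64)
    # the four center-symmetric neighbour pairs around each interior pixel
    pairs = [
        (img[:-2, :-2],  img[2:, 2:]),    # NW vs SE
        (img[:-2, 1:-1], img[2:, 1:-1]),  # N  vs S
        (img[:-2, 2:],   img[2:, :-2]),   # NE vs SW
        (img[1:-1, 2:],  img[1:-1, :-2]), # E  vs W
    ]
    code = np.zeros(img[1:-1, 1:-1].shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        # set the bit when the first member of the pair exceeds the second
        # by more than the robustness threshold t
        code |= ((a - b) > t).astype(np.uint8) << bit
    return code
```

In a registration pipeline, per-block histograms of such codes would be compared between adjacent frames to estimate the transformation matrix; that matching step is omitted from this sketch.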
S403, superimposing the target image frame with the first image frame according to the transformation matrix corresponding to the target image frame and the adjacent first image frame, to obtain a composite frame.
In this embodiment, the target image frame is taken as the tail image frame for description: the first image frame adjacent to the target image frame is the 7th image frame, and the 8th image frame is superimposed with the 7th image frame according to the transformation matrix 7-8 to obtain a composite frame.
S404, superimposing the composite frame with the current image frame according to the transformation matrix corresponding to the first image frame and the adjacent current image frame, to obtain a new composite frame.
Since the first image frame is the 7th image frame, the adjacent current image frame is the 6th image frame. After the composite frame is obtained, it is used as a new 7th image frame and superimposed with the 6th image frame according to the transformation matrix 6-7, to obtain a new composite frame.
S405, superimposing the newly obtained composite frame with the next image frame according to the transformation matrix, until the total number of superimposed image frames reaches a threshold.
In this embodiment, the next image frame is the 5th image frame. Similarly, the new composite frame is used as a new 6th image frame and superimposed with the 5th image frame according to the transformation matrix 5-6; the resulting composite frame is used as a new 5th image frame and superimposed with the 4th image frame according to the transformation matrix 4-5; and so on, until the total number of superimposed image frames reaches the threshold.
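The tail-first order of superposition in steps S403-S405 is pure bookkeeping: starting from the tail frame, each step consumes the stored matrix keyed by the adjacent pair. A sketch of just that bookkeeping follows; the frame indices and `"7-8"`-style matrix keys mirror the naming used above, and the function name is illustrative.

```python
def reverse_overlay_order(n_frames, threshold):
    """Return the (frame_index, matrix_key) pairs used when the tail frame is
    the target: n over n-1 via matrix "(n-1)-n", then over n-2, and so on,
    until `threshold` frames in total have been superimposed (or the 1st
    frame is reached)."""
    ops = []
    current = n_frames       # the composite starts as the tail frame
    superimposed = 1         # the tail frame itself counts as one
    while superimposed < threshold and current > 1:
        ops.append((current - 1, f"{current - 1}-{current}"))
        current -= 1         # composite becomes the "new" previous frame
        superimposed += 1
    return ops
```

For 8 frames and a threshold of 5, this yields the sequence of matrices 7-8, 6-7, 5-6, 4-5, matching the walk-through in the text.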
S406, transmitting the at least two superimposed images to an encoder to obtain a special effect video.
At least two different superimposed images, obtained by performing steps S401-S405 twice, are transmitted to the encoder, which encodes and synthesizes them to obtain the special effect video.
S407, performing multi-frame noise reduction on the special-effect video and then previewing it in real time.
This embodiment describes the multi-frame special-effect video acquisition method with a specific example. The raw data frames of the previous composite photo can be reused, which reduces shooting time, so that a video with a multi-frame special effect in every frame can be shot in the same time as an ordinary video. Moreover, because a continuous registration method is used, the video can be shot handheld, and the superimposed result can be seen on the interface in real time, improving user satisfaction.
Third embodiment
This embodiment provides a method for acquiring a multi-frame special-effect video. The method comprises two processing flows completed in two separate threads: a registration thread and a multi-frame synthesis thread. As shown in fig. 5, the registration thread processing flow includes:
S501, acquiring a current image frame from the camera.
And S502, carrying out image registration by using the current image frame and the previous image frame to obtain a transformation matrix.
S503, storing the transformation matrix in a shared buffer queue.
Steps S501 to S503 are repeated.
As shown in fig. 6, the processing flow of the multi-frame synthesis thread is as follows:
S601, acquiring a plurality of image frames from the camera and storing them in an image frame buffer queue.
S602, superimposing the current image (assumed to be the 10th frame) with the 9th frame image according to the transformation matrix of the 10th and 9th frame images.
The composite image is taken as the new current frame, whose sequence number is 9.
S603, superimposing the superimposed image with the 8th frame image according to the transformation matrix of the 9th and 8th frame images.
S604, repeating step S603 by analogy until the number of superimposed frames reaches a set threshold or the 1st frame is reached.
S605, transmitting the superimposed images to an encoder.
S606, releasing redundant image frames from the image frame buffer queue.
Steps S601 to S606 are repeated.
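The registration thread and the multi-frame synthesis thread communicate only through the shared buffer queue of matrices, so synthesis can proceed while registration of later frames is still running. A minimal sketch with Python threads and a queue follows; the `register` and `compose` callables are placeholders for the actual registration and superposition routines, which the patent does not pin down.

```python
import queue
import threading

def run_pipeline(frames, register, compose):
    """Two-thread sketch of the third embodiment: the registration thread
    computes one matrix per adjacent frame pair and stores it in a shared
    buffer queue (S501-S503); the synthesis thread pulls matrices from the
    queue and cumulatively superimposes the frames (S601-S605)."""
    shared = queue.Queue()
    result = []

    def registration():
        for prev, cur in zip(frames, frames[1:]):
            shared.put(register(prev, cur))

    def synthesis():
        composite = frames[0]
        for frame in frames[1:]:
            matrix = shared.get()  # blocks until registration catches up
            composite = compose(composite, frame, matrix)
        result.append(composite)

    threads = [threading.Thread(target=registration),
               threading.Thread(target=synthesis)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result[0]
```

Because `queue.Queue.get` blocks, the synthesis thread naturally waits for the registration thread without explicit locking, which is the point of the shared buffer queue in step S503.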
Fourth embodiment
The present embodiment further provides a terminal, as shown in fig. 7, which includes a processor 701, a memory 702, and a communication bus 703, where:
the communication bus 703 is used for realizing connection communication between the processor 701 and the memory 702;
the processor 701 is configured to execute one or more programs stored in the memory 702 to implement the following steps:
specifically, a transformation matrix is obtained according to the image registration between the acquired image frames;
according to the transformation matrix, performing accumulative superposition on the last synthesized frame obtained by superposition and the current image frame in sequence;
and obtaining a special effect video according to the superposed images.
In this embodiment, the processor 701 performs image registration between the acquired image frames to obtain transformation matrices, which specifically includes: ordering the image frames by their acquisition time; performing image registration on any at least two adjacent image frames to obtain the corresponding transformation matrix; and establishing the correspondence between any at least two adjacent image frames and the transformation matrix.
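The ordering-and-correspondence step can be sketched as building a lookup table from each adjacent index pair to its matrix; later superposition steps then fetch matrices by pair instead of re-registering. The field names `time` and `data`, and the function name, are illustrative assumptions.

```python
def build_matrix_map(frames, register):
    """Order frames by capture time, then map each adjacent pair (i, i + 1)
    to the transformation matrix produced by the `register` callable."""
    frames = sorted(frames, key=lambda f: f["time"])
    return {
        (i, i + 1): register(frames[i]["data"], frames[i + 1]["data"])
        for i in range(len(frames) - 1)
    }
```

With this correspondence established, the user-selection path described later (e.g. needing a 2-4 matrix that was never computed) reduces to a dictionary miss followed by one extra registration call.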
In this embodiment, before sequentially and cumulatively superimposing the last composite frame obtained by superposition with the current image frame according to the transformation matrix, the processor 701 further superimposes the target image frame with the first image frame according to the transformation matrix corresponding to the target image frame and the adjacent first image frame, to obtain the last composite frame. The target image frame includes the first image frame, the tail image frame, or an image frame determined according to an image determination instruction. Specifically, sequentially and cumulatively superimposing the last composite frame obtained by superposition with the current image frame according to the transformation matrix includes: superimposing the last composite frame with the current image frame according to the transformation matrix corresponding to the first image frame and the adjacent current image frame, to obtain a new composite frame; and continuing to superimpose the new composite frames in order.
In some embodiments, sequentially and cumulatively superimposing the last composite frame with the current image frame according to the transformation matrix includes: determining the current image frame according to an image selection instruction; performing image registration on the first image frame and the current image frame to obtain a transformation matrix; superimposing the last composite frame with the current image frame according to the transformation matrix to obtain a new composite frame; and continuing to superimpose the new composite frames in order.
In this embodiment, the processor 701 implements the continued sequential superposition of the new composite frames by superimposing the newly obtained composite frame with the next image frame according to the transformation matrix, until the total number of superimposed image frames reaches a threshold.
In this embodiment, the processor 701 obtains the special effect video from the superimposed images by transmitting at least two superimposed images to an encoder to obtain the special effect video; after the special effect video is obtained from the superimposed images, the special effect video is subjected to multi-frame noise reduction and then previewed in real time.
The present embodiment also provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the steps of the multi-frame special-effect video acquisition method in the above embodiments; for details of those steps, refer to the first to third embodiments, which are not repeated here.
This embodiment provides a terminal and a computer-readable storage medium to implement the multi-frame special-effect video acquisition method of the above embodiments: a transformation matrix is obtained by image registration between the acquired image frames; the last composite frame obtained by superposition is sequentially and cumulatively superimposed with the current image frame according to the transformation matrix; and a special effect video is obtained from the superimposed images. The transformation matrix of each image frame pair is obtained accurately through image registration between image frames, the image frames are then cumulatively superimposed according to the transformation matrices, and the raw data frames of the previous composite photo are reused during superposition, thereby reducing shooting time, so that a video with a multi-frame special effect in every frame can be shot in the same time as an ordinary video.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A multi-frame special effect video obtaining method is characterized by comprising the following steps:
obtaining a transformation matrix according to the image registration between the obtained image frames;
according to the transformation matrix, performing accumulative superposition on the last synthesized frame obtained by superposition and the current image frame in sequence;
and obtaining a special effect video according to the superposed images.
2. The multi-frame special effect video acquisition method according to claim 1, wherein the obtaining of the transformation matrix according to the image registration between the acquired image frames comprises:
sequencing the image frames in sequence according to the acquisition time of the image frames;
carrying out image registration on at least two image frames which are adjacent to each other to obtain a corresponding transformation matrix;
and establishing the corresponding relation between any at least two adjacent image frames and the transformation matrix.
3. The multi-frame special effect video obtaining method according to claim 2, wherein before performing the cumulative overlapping of the previous synthesized frame obtained by the overlapping and the current image frame in sequence according to the transformation matrix, the method comprises:
and superposing the target image frame and the first image frame according to a transformation matrix corresponding to the target image frame and the adjacent first image frame to obtain the previous composite frame.
4. The multi-frame special effect video acquisition method according to claim 3, wherein the target image frame comprises a first image frame, or a last image frame, or a certain image frame determined according to an image determination instruction.
5. The multi-frame special effect video obtaining method according to claim 3, wherein the performing, according to the transformation matrix, cumulative overlapping of the previous synthesized frame obtained by overlapping and the current image frame in sequence comprises:
according to the first image frame and a transformation matrix corresponding to the current image frame adjacent to the first image frame, overlapping the previous synthesized frame and the current image frame to obtain a new synthesized frame;
and sequentially superposing the new synthesized frames.
6. The multi-frame special effect video obtaining method according to claim 3, wherein the performing, according to the transformation matrix, cumulative overlapping of the previous synthesized frame obtained by overlapping and the current image frame in sequence comprises:
determining a current image frame according to the image selection instruction;
carrying out image registration on the first image frame and the current image frame to obtain a transformation matrix;
according to the transformation matrix, overlapping the previous synthesized frame and the current image frame to obtain a new synthesized frame;
and sequentially superposing the new synthesized frames.
7. The multi-frame special effect video acquisition method according to claim 5 or 6, wherein said sequentially continuing to superimpose said new composite frame comprises:
and according to the transformation matrix, overlapping the obtained new composite frame with the next image frame until the total number of the overlapped image frames reaches a threshold value.
8. The multi-frame special effect video acquisition method according to claim 7, wherein obtaining a special effect video according to the superimposed image includes:
transmitting at least two superposed images to an encoder to obtain the special effect video;
after obtaining the special effect video according to the superposed images, the method comprises the following steps:
and performing multi-frame noise reduction on the special-effect video and then previewing in real time.
9. A terminal, characterized in that the terminal comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the multi-frame special effects video acquisition method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the multi-frame special effects video acquisition method according to any one of claims 1 to 8.
CN202010125516.4A 2020-02-27 2020-02-27 Multi-frame special-effect video acquisition method, terminal and computer readable storage medium Pending CN111327840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010125516.4A CN111327840A (en) 2020-02-27 2020-02-27 Multi-frame special-effect video acquisition method, terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111327840A true CN111327840A (en) 2020-06-23

Family

ID=71167320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010125516.4A Pending CN111327840A (en) 2020-02-27 2020-02-27 Multi-frame special-effect video acquisition method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111327840A (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101082988A (en) * 2007-06-19 2007-12-05 北京航空航天大学 Automatic deepness image registration method
JP2010233001A (en) * 2009-03-27 2010-10-14 Casio Computer Co Ltd Image compositing apparatus, image reproducing apparatus, and program
KR101007409B1 (en) * 2010-05-26 2011-01-14 삼성탈레스 주식회사 Apparatus and method for processing image fusion signal for improvement of target detection
CN102201115A (en) * 2011-04-07 2011-09-28 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos shot by unmanned plane
JP2013101552A (en) * 2011-11-09 2013-05-23 Nippon Telegr & Teleph Corp <Ntt> Object coordinate system conversion matrix estimation success/failure determination device and object coordinate system conversion matrix estimation success/failure determination method, and program therefor
CN104574329A (en) * 2013-10-09 2015-04-29 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic fusion imaging method and ultrasonic fusion imaging navigation system
CN104966318A (en) * 2015-06-18 2015-10-07 清华大学 A reality augmenting method having image superposition and image special effect functions
CN105046699A (en) * 2015-07-09 2015-11-11 硅革科技(北京)有限公司 Motion video superposition contrast method
US9215382B1 (en) * 2013-07-25 2015-12-15 The United States Of America As Represented By The Secretary Of The Navy Apparatus and method for data fusion and visualization of video and LADAR data
US20160048945A1 (en) * 2014-08-05 2016-02-18 Hitachi, Ltd. Method and Apparatus of Generating Image
CN108282612A (en) * 2018-01-12 2018-07-13 广州市百果园信息技术有限公司 Method for processing video frequency and computer storage media, terminal
CN109636714A (en) * 2018-08-30 2019-04-16 沈阳聚声医疗系统有限公司 A kind of image split-joint method of ultrasonic wide-scene imaging
CN110390688A (en) * 2019-07-23 2019-10-29 中国人民解放军国防科技大学 Steady video SAR image sequence registration method
US20200410641A1 (en) * 2018-03-15 2020-12-31 Murakami Corporation Composite video image creation apparatus, composite video image creation method, and composite video image creation program

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022027447A1 (en) * 2020-08-06 2022-02-10 深圳市大疆创新科技有限公司 Image processing method, and camera and mobile terminal
CN113841112A (en) * 2020-08-06 2021-12-24 深圳市大疆创新科技有限公司 Image processing method, camera and mobile terminal
CN112087582A (en) * 2020-09-14 2020-12-15 努比亚技术有限公司 Special effect video generation method, mobile terminal and computer readable storage medium
CN112135045A (en) * 2020-09-23 2020-12-25 努比亚技术有限公司 Video processing method, mobile terminal and computer storage medium
CN112508773A (en) * 2020-11-20 2021-03-16 小米科技(武汉)有限公司 Image processing method and device, electronic device and storage medium
CN112508773B (en) * 2020-11-20 2024-02-09 小米科技(武汉)有限公司 Image processing method and device, electronic equipment and storage medium
CN113240577A (en) * 2021-05-13 2021-08-10 北京达佳互联信息技术有限公司 Image generation method and device, electronic equipment and storage medium
CN113240577B (en) * 2021-05-13 2024-03-15 北京达佳互联信息技术有限公司 Image generation method and device, electronic equipment and storage medium
CN113393505B (en) * 2021-06-25 2023-11-03 浙江商汤科技开发有限公司 Image registration method, visual positioning method, related device and equipment
CN113393505A (en) * 2021-06-25 2021-09-14 浙江商汤科技开发有限公司 Image registration method, visual positioning method, related device and equipment
WO2023030176A1 (en) * 2021-09-03 2023-03-09 上海商汤智能科技有限公司 Video processing method and apparatus, computer-readable storage medium, and computer device
CN114401360A (en) * 2021-12-07 2022-04-26 影石创新科技股份有限公司 Multi-frame delay special effect generation method, device, equipment and medium of video
WO2023103944A1 (en) * 2021-12-07 2023-06-15 影石创新科技股份有限公司 Video multi-frame delay special effect generation method and apparatus, device, and medium

Similar Documents

Publication Publication Date Title
CN111327840A (en) Multi-frame special-effect video acquisition method, terminal and computer readable storage medium
CN108900790B (en) Video image processing method, mobile terminal and computer readable storage medium
CN108259781B (en) Video synthesis method, terminal and computer-readable storage medium
CN110072061B (en) Interactive shooting method, mobile terminal and storage medium
CN107105166B (en) Image photographing method, terminal, and computer-readable storage medium
CN111654628B (en) Video shooting method and device and computer readable storage medium
CN109120858B (en) Image shooting method, device, equipment and storage medium
CN112188082A (en) High dynamic range image shooting method, shooting device, terminal and storage medium
CN111885307A (en) Depth-of-field shooting method and device and computer readable storage medium
CN112995467A (en) Image processing method, mobile terminal and storage medium
CN107896304B (en) Image shooting method and device and computer readable storage medium
CN112511741A (en) Image processing method, mobile terminal and computer storage medium
CN110086993B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN112367443A (en) Photographing method, mobile terminal and computer-readable storage medium
CN109710159B (en) Flexible screen response method and device and computer readable storage medium
CN107395971B (en) Image acquisition method, image acquisition equipment and computer-readable storage medium
CN113179369A (en) Shot picture display method, mobile terminal and storage medium
CN112135045A (en) Video processing method, mobile terminal and computer storage medium
CN111614902A (en) Video shooting method and device and computer readable storage medium
CN115134527B (en) Processing method, intelligent terminal and storage medium
CN108495033B (en) Photographing regulation and control method and device and computer readable storage medium
CN108282608B (en) Multi-region focusing method, mobile terminal and computer readable storage medium
CN111866388B (en) Multiple exposure shooting method, equipment and computer readable storage medium
CN112532838B (en) Image processing method, mobile terminal and computer storage medium
CN112087582A (en) Special effect video generation method, mobile terminal and computer readable storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200623)