CN108933881B - Video processing method and device - Google Patents


Info

Publication number: CN108933881B
Application number: CN201710362041.9A
Authority: CN (China)
Prior art keywords: video, target position, playing, input devices, video input
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN108933881A
Inventor: 丁乐乐 (Ding Lele)
Current and original assignee: ZTE Corp
Application filed by ZTE Corp; priority to CN201710362041.9A; publication of CN108933881A; application granted; publication of CN108933881B

Classifications

    • H — ELECTRICITY › H04 — ELECTRIC COMMUNICATION TECHNIQUE › H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/951 — Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 23/63 — Control of cameras or camera modules by using electronic viewfinders
    • H04N 5/04 — Details of television systems; Synchronising
    • H04N 5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

The document discloses a video processing method and device, comprising the following steps: calling at least two video input devices, and controlling the shooting times of the other video input devices to fall, evenly spaced, between every two frame images of one of the video input devices; the at least two video input devices simultaneously acquiring frame images; and synthesizing the frame images acquired by the at least two video input devices to form a video. The shooting effect of the electronic device can thus be improved without increasing hardware cost.

Description

Video processing method and device
Technical Field
The invention relates to the technical field of intelligent terminals, in particular to a video processing method and device.
Background
At present, electronic devices (such as mobile phones) generally have two or more cameras to enhance their camera functions and meet user requirements.
In the related art, however, even an electronic device with two or more cameras mainly shoots in single-camera mode. In single-camera mode only one camera can be called for the current shot, and a single camera can capture at most 30-60 frames per second, so the electronic device as a whole can shoot at most 30-60 frames per second; achieving effects such as clearer picture quality, high-speed video exceeding 60 frames per second, or quadruple (or greater) slow-motion playback would require additional shooting equipment.
For the problem in the related art that an electronic device cannot call two or more cameras simultaneously to shoot, and therefore cannot achieve the shooting effect required by the user, no effective solution has yet been proposed.
Disclosure of Invention
In order to solve the above technical problem, embodiments of the present invention provide a video processing method and apparatus.
The present application provides:
a video processing method, comprising:
calling at least two video input devices, and controlling the shooting times of the other video input devices to fall, evenly spaced, between every two frame images of one of the video input devices;
the at least two video input devices simultaneously acquire frame images;
and synthesizing the frame images acquired by the at least two video input devices to form a video.
Wherein, in the process in which the at least two video input devices simultaneously acquire frame images, the method further comprises: detecting a target position selected by a user during shooting; calling one of the other video input devices to perform zoom-in shooting of the target position; and synthesizing the frame images obtained by zoom-in shooting to form an enlarged video at the target position.
The calling of one of the other video input devices to perform zoom-in shooting of the target position comprises: sending a zoom-in shooting instruction to one of the other video input devices; the video input device receiving the zoom-in shooting instruction, zooming according to predetermined zoom-in shooting parameters, and shooting with the target position as the focus at the magnification after zooming.
Wherein, after the frame images obtained by zoom-in shooting are synthesized to form the enlarged video at the target position, the method further comprises: recording the target position and the frame image corresponding to the target position.
Wherein after the forming the video, further comprising: and when a playing instruction is received, popping up a playing window and playing the video through the playing window.
Wherein, the playing the video through the playing window further comprises: and marking the target position with the amplified video in the presented frame image when the video is played.
Wherein, in the process of playing the video through the playing window, the method further comprises: in the video playing process, when the operation of a user for the target position is detected, acquiring an amplified video at the target position; and popping up a floating window in the playing window, and playing the amplified video at the target position in the floating window.
Wherein, when playing the amplified video at the target position in the floating window, the method further comprises: and pausing the video currently played by the playing window and fading the playing window.
A video processing apparatus, comprising: a CPU, at least two video input devices, and an application module;
the CPU is used for calling the at least two video input devices and controlling the shooting times of the other video input devices to fall, evenly spaced, between every two frame images of one of the video input devices;
the video input devices are used for simultaneously acquiring frame images of a current video and respectively transmitting the acquired frame images to the application module through the CPU;
and the application module is used for synthesizing the frame images acquired by the at least two video input devices to form a video.
The CPU is also used for detecting a target position selected by a user during shooting, and calling one of the other video input devices to perform zoom-in shooting of the target position;
one of the other video input devices is also used for performing zoom-in shooting of the target position under the control of the CPU, and transmitting the frame images obtained by zoom-in shooting to the application module through the CPU;
the application module is further configured to synthesize the frame images obtained by zoom-in shooting to form an enlarged video at the target position.
Wherein, still include: a display device; the CPU is further used for acquiring a video from the application module and controlling the display equipment to play the video when a playing instruction is received; and the display equipment is used for popping up a play window under the control of the CPU and playing the video through the play window.
The display device is further configured to transmit an instruction to play the enlarged video to the CPU when detecting an operation of the user on the target position during video playing, and to pop up a floating window in the playing window and play the enlarged video at the target position in the floating window; the CPU is further configured to receive the instruction to play the enlarged video, acquire the enlarged video at the target position from the application module, and control the display device to play it.
An electronic device comprising at least:
at least two video input devices configured to simultaneously capture frame images;
a memory storing a video processing program;
a processor configured to execute the video processing program to perform the following operations: calling the at least two video input devices, and controlling the shooting times of the other video input devices to fall, evenly spaced, between every two frame images of one of the video input devices; and synthesizing the frame images acquired by the at least two video input devices to form a video.
Wherein the processor, in executing the video processing program during the process in which the at least two video input devices simultaneously acquire frame images, is further configured to perform: detecting a target position selected by a user during shooting; calling one of the other video input devices to perform zoom-in shooting of the target position; and synthesizing the frame images obtained by zoom-in shooting to form an enlarged video at the target position.
Wherein, still include: the display equipment is configured to pop up a play window and play the video through the play window; the processor, after executing the video processing program to perform operations of forming a video, is further configured to perform operations of: and when a playing instruction is received, controlling the display equipment to play the video.
Wherein, the processor is configured to execute the video processing program to execute the operation of playing the video through the playing window, and further execute the following operations: in the video playing process, when the operation of a user for the target position is detected, acquiring an amplified video at the target position, and controlling the display equipment to play the amplified video;
the display device is further configured to pop up a floating window in the playing window, and play the amplified video at the target position in the floating window.
A computer-readable storage medium having stored thereon a video processing program which, when executed by a processor, implements the steps of the video processing method described above.
In the embodiment of the invention, at least two video input devices are called simultaneously when a video is shot, the shooting times of the other video input devices are controlled to fall, evenly spaced, between every two frame images of one of the video input devices, and finally the frame images collected by the at least two video input devices are synthesized to form the video, so that the shooting effect of the electronic device is improved without increasing hardware cost.
In the embodiment of the invention, at least two video input devices are called simultaneously when the video is shot, one of them is controlled to perform zoom-in shooting of the target position selected by the user, and the enlarged video at the target position is finally obtained by synthesizing the frame images obtained by zoom-in shooting, so that the shooting effect of the electronic device is improved without increasing hardware cost.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and together with the description serve to explain the principles of the invention, and are not intended to limit the invention.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a video playing window according to an embodiment of the present invention;
FIG. 3 is a diagram of a playback window including a floating window according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Example one
A video processing method, as shown in fig. 1, comprising:
step 101, calling at least two video input devices, and controlling the shooting times of the other video input devices to fall, evenly spaced, between every two frame images of one of the video input devices;
102, simultaneously collecting frame images by the at least two video input devices;
and 103, synthesizing the frame images acquired by the at least two video input devices to form a video.
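The staggering in step 101 can be sketched as a small offset computation. This is an illustrative sketch, not code from the patent; `capture_offsets` is a name assumed here, and a real implementation would drive the platform's multi-camera API with these offsets.

```python
def capture_offsets(num_cameras: int, fps: float) -> list:
    """Start-time offsets (in seconds) that spread the shooting times of
    the other cameras evenly between every two frames of the first one."""
    frame_period = 1.0 / fps  # time between two consecutive frames of camera 0
    return [k * frame_period / num_cameras for k in range(num_cameras)]

# With two 30 fps cameras, camera 1 shoots exactly half a frame period
# after camera 0, doubling the effective frame rate to 60 fps.
offsets = capture_offsets(2, 30.0)
```

With three cameras the offsets split the frame period into thirds, which is the even division described for the multi-camera case below.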
In the method, at least two video input devices are called simultaneously when a video is shot, the shooting times of the other video input devices are controlled to fall, evenly spaced, between every two frame images of one of the video input devices, and finally the frame images collected by the at least two video input devices are synthesized to form the video. In this way at least two video input devices can shoot at the same time: for example, when the video input devices are cameras and one camera records 30 frames per second, calling two cameras and placing the second camera's shooting times halfway between two frames of the first camera yields double the shooting effect.
For example, when the electronic device has two cameras, the execution process may be: after the shooting function of the mobile terminal is turned on, the first camera and the second camera are both started, and each frame of the second camera is shot halfway between two frames of data of the first camera, so that within each second twice as many frames are obtained as usual (and, with more cameras, a corresponding multiple of the data). After the doubled data is acquired, the doubled frames are synthesized into the video, so that more frames can be played in the same time, and the images displayed and processed by the electronic device are clearer and smoother.
In practical application, when the electronic device has more than two cameras, the shooting times of the second camera, the third camera, and so on are distributed evenly between two frames of the first camera, ensuring that the time interval between consecutive frames is the same, thereby obtaining multiplied image data.
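The synthesis of step 103 then amounts to merging the per-camera frame streams by timestamp. A minimal sketch follows; the `(timestamp, frame)` pairs are an assumed representation, not a format specified by the patent.

```python
import heapq

def merge_streams(*streams):
    """Merge per-camera frame streams, each a time-ordered list of
    (timestamp, frame) pairs, into one sequence ordered by timestamp."""
    return list(heapq.merge(*streams, key=lambda pair: pair[0]))

# Camera 0 shoots at t = 0, 1/30, 2/30 ...; camera 1 is offset by half
# a frame period, so the merged sequence alternates between the two.
cam0 = [(0.0, "A0"), (1 / 30, "A1"), (2 / 30, "A2")]
cam1 = [(1 / 60, "B0"), (3 / 60, "B1"), (5 / 60, "B2")]
merged = [frame for _, frame in merge_streams(cam0, cam1)]
```

Because each stream is already time-ordered, a heap merge interleaves them in linear time, producing the "multiplied image data" with a uniform inter-frame interval.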
After the high-speed shooting function is started, the slow-playback multiple of the synthesized data can be raised well beyond the 2x slow playback of a single camera. Because of hardware limits, a single camera can collect at most about 30-60 frames per second; with several cameras started simultaneously during shooting, the frames collected per second are N times the original. Since high-speed video is played back at a fixed rate of about 20 frames per second, N cameras allow a slow-playback speed of 2 x N times by the method of the present application. The shooting quality is therefore improved without adding hardware devices or hardware cost.
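The slow-playback arithmetic above can be checked numerically. The sketch below assumes an illustrative 40 fps per camera (a mid-range value within the stated 30-60 fps) and the fixed ~20 fps playback rate mentioned in the text; both defaults are assumptions, not values fixed by the patent.

```python
def slow_motion_factor(num_cameras: int,
                       per_camera_fps: float = 40.0,
                       playback_fps: float = 20.0) -> float:
    """Slow-playback multiple: N cameras capturing in interleave yield
    N * per_camera_fps frames per second, played back at playback_fps."""
    return num_cameras * per_camera_fps / playback_fps

# One camera gives 2x slow motion; three cameras give 6x — the
# "2 x N" relationship stated in the text.
```
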
In this application, the process in which the at least two video input devices simultaneously acquire frame images may further include: detecting a target position selected by a user during shooting; calling one of the other video input devices to perform zoom-in shooting of the target position; and synthesizing the frame images obtained by zoom-in shooting to form an enlarged video at the target position. Specifically, when an operation of the user on the target position is detected, a zoom-in shooting instruction is sent to one of the other video input devices; that video input device receives the zoom-in shooting instruction, zooms according to predetermined zoom-in shooting parameters, and shoots with the target position as the focus at the magnification after zooming.
For example, the video input devices may be cameras, and the electronic device captures images through two cameras at the same time. If the user is interested in a certain point in the picture during shooting, that point can be clicked. The touch display of the electronic device detects the user's operation on the target position and sends an operation instruction to the CPU; the CPU then sends a zoom-in shooting instruction to the second camera. The second camera receives the instruction, zooms according to predetermined zoom-in shooting parameters (which may include a focal length, a magnification factor and the like, and may be set by the user or configured by default), and shoots with the target position as the focus at the magnification after zooming. Meanwhile the first camera continues to shoot the whole current picture with its shooting parameters unchanged, while the second camera, adjusted through zoom, shoots the target position selected by the user separately at an enlarged magnification and adjusted focal distance. The enlarged video at the target position can then be formed by synthesizing the frame images shot by the second camera, and the video of the whole picture by synthesizing the frame images shot by the first camera. Thereafter, while viewing the whole picture, the user can click the target position to view the enlarged video at that position.
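The control flow just described can be sketched as follows. `Camera` and `DualCameraController` are hypothetical stand-ins for illustration only, not the patent's implementation or any real camera API; the 4x default magnification is likewise an assumed placeholder for the predetermined zoom-in parameters.

```python
from dataclasses import dataclass, field

@dataclass
class Camera:
    """Hypothetical stand-in for a platform camera handle."""
    name: str
    focus: tuple = None          # (x, y) focus point; None = full scene
    magnification: float = 1.0

@dataclass
class DualCameraController:
    """Camera 0 keeps shooting the whole scene; a tap on a target
    position re-tasks camera 1 for zoom-in shooting."""
    main: Camera = field(default_factory=lambda: Camera("cam0"))
    aux: Camera = field(default_factory=lambda: Camera("cam1"))
    zoom_magnification: float = 4.0  # predetermined parameter (user or default)

    def on_target_selected(self, x: float, y: float) -> None:
        # The CPU sends the zoom-in instruction to the second camera only;
        # the first camera's shooting parameters are left unchanged.
        self.aux.focus = (x, y)
        self.aux.magnification = self.zoom_magnification

ctrl = DualCameraController()
ctrl.on_target_selected(0.4, 0.6)
# cam0 still covers the full scene; cam1 now zooms on (0.4, 0.6).
```
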
After the frame images obtained by zoom-in shooting are synthesized to form the enlarged video at the target position, the target position and the frame image corresponding to it may be recorded. In this way, the target position can be marked in the corresponding frame image during playback, so that the user can accurately find it and operate on it to view the enlarged video at the target position.
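The recording of target positions against frame images can be as simple as an index; the names below are assumptions for illustration.

```python
# Map each (x, y) target position to the frame indexes that carry an
# enlarged video, so the position can be marked during playback.
enlarged_index: dict = {}

def record_target(position: tuple, frame_index: int) -> None:
    """Record that the frame at frame_index has an enlarged video
    available at the given target position."""
    enlarged_index.setdefault(position, []).append(frame_index)

record_target((0.4, 0.6), 120)
record_target((0.4, 0.6), 121)
# During playback, frames 120-121 are marked at position (0.4, 0.6).
```
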
After the video is formed, the method may further include: when a playing instruction is received, popping up a playing window and playing the video through the playing window. In practical application, when a video needs to be played, the user can input a playing instruction to the electronic device by operating an input device (such as a touch device, a key, or a voice input device); the input device transmits the playing instruction to the CPU of the electronic device, and after receiving it the CPU obtains the corresponding video and controls the display device of the electronic device to pop up a playing window and play the video through it. In this way video playing is realized. Because at least two video input devices shoot during recording, the frame images in the played video are denser, so the played pictures are clearer and smoother, and slow playback is more convenient for the user.
Here, the video is played through the playing window, and a target position with an amplified video may be marked in a presented frame image when the video is played, so that a user can accurately find the target position, and the amplified video at the target position is conveniently viewed.
Wherein, in the process of playing the video through the playing window, the method may further include: in the video playing process, when the operation of a user for the target position is detected, acquiring an amplified video at the target position; and popping up a floating window in the playing window, and playing the amplified video at the target position in the floating window. Therefore, the amplified video at the target position is played by popping up the floating window, the amplified video playing at a certain position can be executed on the premise of not stopping the original video playing, the user can watch the local amplified video while watching the video conveniently, and the user experience can be effectively improved. Here, when the enlarged video at the target position is played in the floating window, the currently played video in the playing window can be paused, and the playing window is faded, so as to achieve a better visual effect.
For example, during shooting, when a user is interested in a certain point in the picture, the user can click the picture, and at this time, other cameras except the first camera can be activated, and by separately processing other cameras, such as the zoom function and the focusing function of the cameras, the other cameras can zoom in on the point, focus separately, and shoot the point separately. After the shooting is finished, the images at the position are synthesized to form an enlarged video at the position. When playing, the whole video is normally played at the moment. When the picture with the enlarged video is played, the playing window of the video can be marked at the target position of the corresponding picture, as shown in fig. 2, which is an exemplary diagram of the playing window marked with the target position.
When the video is played, a user clicks the target position, the playing window of the current video is faded, a floating window pops up at the upper right corner, and the amplified video at the target position is played in the floating window. Fig. 3 is a diagram showing an example of a playing window after the floating window pops up.
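The playback behaviour of Figs. 2 and 3 can be sketched as a small state transition; `PlaybackWindow` and its fields are assumed names for illustration, not part of the patent's disclosure.

```python
class PlaybackWindow:
    """Clicking a marked target position pauses and fades the main
    window and pops up a floating window for the enlarged video."""

    def __init__(self, targets: set):
        self.targets = targets        # positions that have an enlarged video
        self.paused = False
        self.faded = False
        self.floating_window = None   # position being played enlarged, if any

    def on_click(self, position: tuple) -> None:
        if position in self.targets:
            self.paused = True            # pause the currently played video
            self.faded = True             # fade the play window
            self.floating_window = position  # play the enlarged video here

win = PlaybackWindow(targets={(0.4, 0.6)})
win.on_click((0.4, 0.6))
# Main video is paused and faded; the floating window plays the clip.
```

A click anywhere without an enlarged video leaves the main playback untouched, matching the description that only marked target positions trigger the floating window.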
Example two
A video processing apparatus, as shown in fig. 4, comprising: a CPU 41, at least two video input devices 421/422, an application module 43;
the CPU 41 is used for calling the at least two video input devices and controlling the shooting times of the other video input devices to fall, evenly spaced, between every two frame images of one of the video input devices;
at least two video input devices 421/422, configured to capture frame images of a current video at the same time, and transmit the respective captured frame images to the application module 43 through the CPU 41;
and an application module 43, configured to combine the frame images captured by the at least two video input devices 421/422 to form a video.
The CPU 41 may be further configured to detect a target position selected by a user in a shooting process, and call one of the other video input devices to perform enlarged shooting on the target position;
one of the other video input devices (e.g., the video input device 422) is further configured to perform zoom-in shooting for the target position under the control of the CPU 41, and transmit a frame image obtained by zoom-in shooting to the application module 43 through the CPU 41;
the application module 43 is further configured to combine the frame images obtained by enlarged shooting to form an enlarged video at the target position.
Here, the CPU 41 is specifically configured to detect a target position selected by a user during a shooting process, and send an enlarged shooting instruction to one of the other video input devices; the video input device 422 is specifically configured to receive the zoom-in shooting instruction, zoom according to a predetermined zoom-in shooting parameter, and zoom in and shoot with the target position as a focus and the zoom-in magnification after zooming.
Here, the application module 43 may be further configured to record the target position and a frame image corresponding to the target position.
Wherein, the video processing apparatus may further include: a display device 44; the CPU 41 may be further configured to, when receiving a play instruction, acquire a video from the application module and control the display device to play the video; and the display device 44 is configured to pop up a play window under the control of the CPU, and play the video through the play window. Here, the display device 44 may be further configured to mark the target position with the enlarged video in the frame image presented when the video is played.
The display device 44 may be further configured to transmit an instruction to play an enlarged video to the CPU 41 when detecting an operation of the user on the target position in the video playing process; and the floating window is also used for popping up a floating window in the playing window, and the amplified video at the target position is played in the floating window; the CPU 41 may be further configured to receive the instruction for playing the amplified video, acquire the amplified video at the target position from the application module, and control the display device to play the amplified video. Here, the display device 44 is further configured to pause the video currently played in the playing window and fade the playing window when the enlarged video at the target position is played in the floating window.
In this embodiment, the display device 44 may be a display, a touch display, or the like; the video input device 421/422 may be a camera or the like, and the application module 43 may be software, hardware, or a combination of both, responsible for image composition. The video processing apparatus of the present embodiment may be applied to electronic devices, which may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palm computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation apparatus, a wearable device, a smart band, a pedometer, and fixed terminals such as a Digital TV, a desktop computer, and the like.
EXAMPLE III
An electronic device comprising at least:
at least two video input devices configured to simultaneously capture frame images;
a memory storing a video processing program;
a processor configured to execute the video processing program to perform the following operations: calling the at least two video input devices, and controlling the shooting times of the other video input devices to fall, evenly spaced, between every two frame images of one of the video input devices; and synthesizing the frame images acquired by the at least two video input devices to form a video.
Wherein the processor, in executing the video processing program during the process in which the at least two video input devices simultaneously acquire frame images, is further configured to perform: detecting a target position selected by a user during shooting; calling one of the other video input devices to perform zoom-in shooting of the target position; and synthesizing the frame images obtained by zoom-in shooting to form an enlarged video at the target position.
The above electronic device may further include: a display device configured to pop up a play window and play the video through the play window; the processor, after executing the video processing program to perform the operation of forming a video, is further configured to perform: when a playing instruction is received, controlling the display device to play the video.
Wherein, the processor is configured to execute the video processing program to execute the operation of playing the video through the playing window, and further execute the following operations: in the video playing process, when the operation of a user for the target position is detected, acquiring an amplified video at the target position, and controlling the display equipment to play the amplified video; the display device is further configured to pop up a floating window in the playing window, and play the amplified video at the target position in the floating window.
In this embodiment, the display device may be a touch display, and the video input device may be a camera or the like. The electronic device may be implemented in various forms. For example, the electronic devices described in the present application may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which stores computer-executable instructions, and when executed, the computer-executable instructions implement the video processing method described above.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Optionally, in this embodiment, the processor executes the method steps of the above embodiments according to the program code stored in the storage medium.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by a program instructing associated hardware (e.g., a processor); the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk. Alternatively, all or part of the steps of the above embodiments may be implemented by one or more integrated circuits. Accordingly, the modules/units in the above embodiments may be implemented in hardware (for example, by an integrated circuit) or in software (for example, by a processor executing programs/instructions stored in a memory). The present application is not limited to any specific combination of hardware and software.
The foregoing shows and describes the general principles and features of the present application, together with its advantages. The present application is not limited to the above-described embodiments, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the present application, and all such changes and modifications are intended to fall within the scope of the application as claimed.

Claims (14)

1. A video processing method, comprising:
calling at least two video input devices, and controlling the shooting times of the other video input devices to be evenly distributed between every two frame images of one video input device;
the at least two video input devices simultaneously acquiring frame images;
wherein, in the process in which the at least two video input devices simultaneously acquire the frame images, the method further comprises:
detecting a target position selected by a user during shooting;
calling one of the other video input devices to perform magnified shooting of the target position;
synthesizing the magnified frame images to form a magnified video at the target position;
and synthesizing the frame images acquired by the at least two video input devices to form a video.
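The timing scheme of claim 1 can be sketched as follows. This is a hypothetical illustration, not code from the patent: `staggered_timestamps` and `synthesize` are invented names, and frames are modeled as `(timestamp, camera)` pairs. With a base frame interval T and N cameras, camera k is offset by k·T/N, so the other cameras' shots fall evenly between every two frames of camera 0, and merging all frames by timestamp yields a stream with N times the frame rate.

```python
def staggered_timestamps(num_cameras, frame_interval, num_frames):
    """Shooting times per camera: camera 0 shoots at t = 0, T, 2T, ...;
    camera k (k >= 1) is offset by k * T / num_cameras, so the other
    cameras' shots fall evenly between every two frames of camera 0."""
    schedule = []
    for cam in range(num_cameras):
        offset = cam * frame_interval / num_cameras
        schedule.append([offset + i * frame_interval for i in range(num_frames)])
    return schedule

def synthesize(schedule):
    """Merge all cameras' frames into one stream ordered by shooting time,
    mimicking 'synthesizing the frame images ... to form a video'."""
    frames = [(t, cam) for cam, times in enumerate(schedule) for t in times]
    return sorted(frames)

# Two cameras each shooting at a 1/30 s interval: the merged stream is
# effectively double the single-camera frame rate.
merged = synthesize(staggered_timestamps(2, 1 / 30, 3))
```

For two 30 fps cameras the merged stream alternates between the two sources, which is the claimed effect of distributing the other devices' shooting times between consecutive frames of the first device.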
2. The method of claim 1, wherein calling one of the other video input devices to perform magnified shooting of the target position comprises:
sending a magnified-shooting instruction to one of the other video input devices;
and the video input device receiving the magnified-shooting instruction, zooming according to preset zoom-shooting parameters, and shooting with the target position as the focus at the post-zoom magnification.
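Claim 2 describes zooming with preset parameters; the geometric meaning of "shooting with the target position as the focus at the post-zoom magnification" can be illustrated with a digital-zoom analogue. This is a hypothetical sketch (`zoom_on_target` and its parameters are illustrative, not from the patent): magnifying by m is equivalent to capturing a crop whose width and height are 1/m of the full frame, centered on the target position and clamped to the frame boundary.

```python
def zoom_on_target(frame_w, frame_h, target_x, target_y, magnification):
    """Return the crop rectangle (left, top, width, height) a camera would
    effectively capture when zooming in on (target_x, target_y) by the
    given magnification, clamped so the rectangle stays inside the frame."""
    crop_w = frame_w / magnification
    crop_h = frame_h / magnification
    left = min(max(target_x - crop_w / 2, 0), frame_w - crop_w)
    top = min(max(target_y - crop_h / 2, 0), frame_h - crop_h)
    return (left, top, crop_w, crop_h)
```

A target near a frame corner simply shifts the crop inward, so the magnified view never extends past the sensor's field of view.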
3. The video processing method according to claim 1, wherein after synthesizing the magnified frame images to form the magnified video at the target position, the method further comprises: recording the target position and the frame image corresponding to the target position.
4. The video processing method according to any one of claims 1 to 3, wherein after forming the video, the method further comprises: when a playing instruction is received, popping up a playing window and playing the video through the playing window.
5. The video processing method of claim 4, wherein playing the video through the playing window further comprises:
marking, in the presented frame image while the video is played, the target position having the magnified video.
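Claims 3 and 5 together imply a lookup from recorded target positions to the frames they cover, so that playback can mark each presented frame. A minimal sketch under that assumption (all names are illustrative; `recorded` maps a target position to the set of frame indices that have a magnified video):

```python
def markers_for_frame(recorded, frame_index):
    """Target positions to mark on one presented frame: a position is
    marked exactly when its magnified video covers this frame, per the
    position-to-frame record kept at capture time."""
    return [pos for pos, frames in recorded.items() if frame_index in frames]
```

At playback time the player would call this per frame and draw a highlight at each returned position, giving the user a tap target for the magnified video.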
6. The video processing method according to claim 4, wherein, in the process of playing the video through the playing window, the method further comprises:
during video playback, when a user operation on the target position is detected, acquiring the magnified video at the target position;
and popping up a floating window in the playing window, and playing the magnified video at the target position in the floating window.
7. The video processing method according to claim 6, wherein when the magnified video at the target position is played in the floating window, the method further comprises: pausing the video currently played in the playing window and dimming the playing window.
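The playback behaviour of claims 6 and 7 amounts to a small state change on a user tap: pause the main video, dim the playing window, and open a floating window with the magnified clip. The following sketch models only that state machine (class and attribute names are illustrative, not from the patent):

```python
class Player:
    """Minimal model of claims 6-7: tapping a marked target position
    pauses the main video, dims the playing window, and opens a floating
    window that plays the magnified clip for that position."""

    def __init__(self, magnified_clips):
        self.magnified_clips = magnified_clips  # target position -> clip id
        self.main_paused = False
        self.window_dimmed = False
        self.floating_clip = None

    def on_tap(self, position):
        clip = self.magnified_clips.get(position)
        if clip is None:           # no magnified video recorded here
            return False
        self.main_paused = True    # pause the video in the playing window
        self.window_dimmed = True  # dim (fade) the playing window
        self.floating_clip = clip  # floating window plays the magnified clip
        return True

    def on_close_floating(self):
        self.floating_clip = None
        self.main_paused = False
        self.window_dimmed = False
```

Closing the floating window restores the main playback state, which matches the playing window resuming after the magnified view is dismissed.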
8. A video processing apparatus, comprising: a CPU, at least two video input devices, and an application module; wherein
the CPU is configured to call the at least two video input devices and control the shooting times of the other video input devices to be evenly distributed between every two frame images of one video input device; and is further configured to detect a target position selected by a user during shooting and call one of the other video input devices to perform magnified shooting of the target position;
the at least two video input devices are configured to simultaneously acquire frame images of the current video and respectively transmit the acquired frame images to the application module through the CPU; one of the other video input devices is further configured to perform magnified shooting of the target position under control of the CPU and transmit the magnified frame images to the application module through the CPU;
the application module is configured to synthesize the frame images acquired by the at least two video input devices to form a video, and is further configured to synthesize the magnified frame images to form a magnified video at the target position.
9. The video processing apparatus according to claim 8, further comprising: a display device; wherein
the CPU is further configured to, when a playing instruction is received, acquire the video from the application module and control the display device to play the video;
and the display device is configured to pop up a playing window under control of the CPU and play the video through the playing window.
10. The video processing apparatus according to claim 9, wherein
the display device is further configured to transmit an instruction for playing the magnified video to the CPU upon detecting a user operation on the target position during video playback; and is further configured to pop up a floating window in the playing window and play the magnified video at the target position in the floating window;
and the CPU is further configured to receive the instruction for playing the magnified video, acquire the magnified video at the target position from the application module, and control the display device to play the magnified video.
11. An electronic device, comprising at least:
at least two video input devices configured to simultaneously capture frame images;
a memory storing a video processing program; and
a processor configured to execute the video processing program to perform the following operations: calling the at least two video input devices, and controlling the shooting times of the other video input devices to be evenly distributed between every two frame images of one video input device; and synthesizing the frame images acquired by the at least two video input devices to form a video;
wherein the processor is further configured to execute the video processing program to perform, while the at least two video input devices simultaneously acquire the frame images, the following operations:
detecting a target position selected by a user during shooting;
calling one of the other video input devices to perform magnified shooting of the target position;
and synthesizing the magnified frame images to form a magnified video at the target position.
12. The electronic device of claim 11,
further comprising: a display device configured to pop up a playing window and play the video through the playing window;
wherein the processor, after executing the video processing program to form the video, is further configured to control the display device to play the video when a playing instruction is received.
13. The electronic device of claim 12, wherein
the processor is further configured to execute the video processing program to perform, during playing of the video through the playing window, the following operations: during video playback, when a user operation on the target position is detected, acquiring the magnified video at the target position and controlling the display device to play the magnified video;
and the display device is further configured to pop up a floating window in the playing window and play the magnified video at the target position in the floating window.
14. A computer-readable storage medium, having stored thereon a video processing program which, when executed by a processor, implements the steps of the video processing method according to any one of claims 1 to 7.
CN201710362041.9A 2017-05-22 2017-05-22 Video processing method and device Active CN108933881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710362041.9A CN108933881B (en) 2017-05-22 2017-05-22 Video processing method and device


Publications (2)

Publication Number Publication Date
CN108933881A CN108933881A (en) 2018-12-04
CN108933881B (en) 2022-05-27

Family

ID=64450014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710362041.9A Active CN108933881B (en) 2017-05-22 2017-05-22 Video processing method and device

Country Status (1)

Country Link
CN (1) CN108933881B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113794923A (en) * 2021-09-16 2021-12-14 维沃移动通信(杭州)有限公司 Video processing method and device, electronic equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281658A (en) * 2008-04-25 2008-10-08 清华大学 Variation illumination dynamic scene three-dimensional capturing system
WO2012082127A1 (en) * 2010-12-16 2012-06-21 Massachusetts Institute Of Technology Imaging system for immersive surveillance
JP2016038428A (en) * 2014-08-06 2016-03-22 キヤノン株式会社 Focus detection device and method, and program as well as imaging device
CN106131416A (en) * 2016-07-19 2016-11-16 广东欧珀移动通信有限公司 Zoom processing method, device and the mobile terminal of dual camera

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100749337B1 (en) * 2006-06-13 2007-08-14 삼성전자주식회사 Method for photography using a mobile terminal with plural lenses and apparatus therefor
KR101551800B1 (en) * 2010-05-12 2015-09-09 라이카 게오시스템스 아게 Surveying instrument
US8331760B2 (en) * 2010-06-02 2012-12-11 Microsoft Corporation Adaptive video zoom
US9025024B2 (en) * 2011-09-28 2015-05-05 Xerox Corporation System and method for object identification and tracking
JP5884421B2 (en) * 2011-11-14 2016-03-15 ソニー株式会社 Image processing apparatus, image processing apparatus control method, and program
CN105991915B (en) * 2015-02-03 2020-06-09 中兴通讯股份有限公司 Photographing method and device and terminal
CN105657299A (en) * 2015-07-14 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Method and system for processing shot data based on double cameras
CN106131397A (en) * 2016-06-21 2016-11-16 维沃移动通信有限公司 A kind of method that multi-medium data shows and electronic equipment
CN106254765B (en) * 2016-07-19 2019-04-12 Oppo广东移动通信有限公司 Zoom processing method, device and the terminal device of dual camera
CN106210584A (en) * 2016-08-02 2016-12-07 乐视控股(北京)有限公司 A kind of video recording method and device
CN106303258A (en) * 2016-09-19 2017-01-04 深圳市金立通信设备有限公司 A kind of image pickup method based on dual camera and terminal



Similar Documents

Publication Publication Date Title
WO2021175055A1 (en) Video processing method and related device
CN106937039B (en) Imaging method based on double cameras, mobile terminal and storage medium
CN106375674B (en) Method and apparatus for finding and using video portions related to adjacent still images
CN109167931B (en) Image processing method, device, storage medium and mobile terminal
EP3545686B1 (en) Methods and apparatus for generating video content
TW201251443A (en) Video summary including a feature of interest
CN111698553A (en) Video processing method and device, electronic equipment and readable storage medium
CN110493526A (en) Image processing method, device, equipment and medium based on more photographing modules
CN112637500B (en) Image processing method and device
CN112261218B (en) Video control method, video control device, electronic device and readable storage medium
CN113014798A (en) Image display method and device and electronic equipment
CN111064930B (en) Split screen display method, display terminal and storage device
CN114422692B (en) Video recording method and device and electronic equipment
US9325776B2 (en) Mixed media communication
CN108933881B (en) Video processing method and device
US9137446B2 (en) Imaging device, method of capturing image, and program product for capturing image
CN108174112B (en) Processing method and device in camera shooting
JP2018137797A (en) Imaging apparatus, imaging method and program
CN114143471B (en) Image processing method, system, mobile terminal and computer readable storage medium
CN114500821B (en) Photographing method and device, terminal and storage medium
CN115278047A (en) Shooting method, shooting device, electronic equipment and storage medium
CN114245018A (en) Image shooting method and device
CN113315903A (en) Image acquisition method and device, electronic equipment and storage medium
WO2013084422A1 (en) Information processing device, communication terminal, information search method, and non-temporary computer-readable medium
CN107431756B (en) Method and apparatus for automatic image frame processing possibility detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant