WO2021036659A1 - Video recording method and electronic device - Google Patents
Video recording method and electronic device
Info
- Publication number
- WO2021036659A1 (PCT application PCT/CN2020/105526)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- video
- recording
- frame image
- jitter
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6811—Motion detection based on the image signal
Definitions
- the embodiments of the present disclosure relate to the field of image technology, and in particular, to a video recording method and electronic equipment.
- video anti-shake is a direction for the development of mobile phone video recording.
- Conventional anti-shake methods are implemented through anti-shake algorithms or hardware design.
- however, anti-shake algorithms need to crop the video frame, resulting in a smaller field of view (FOV).
- hardware-based anti-shake, on the other hand, requires specific hardware devices; for example, optical image stabilization (OIS) requires a gyroscope and a compensation lens group.
- the embodiments of the present disclosure provide a video recording method and an electronic device to solve the problem that the video anti-shake method in the related art will either cause the angle of view to be reduced or require a specific hardware device.
- embodiments of the present disclosure provide a video recording method applied to an electronic device, including: determining a target jitter part in a recorded video; receiving a re-recording input for the target jitter part; in response to the re-recording input, displaying a video recording interface and re-recording a video; and replacing the target jitter part with the re-recorded video.
- the embodiments of the present disclosure also provide an electronic device, including:
- a determination module, configured to determine the target jitter part in a recorded video;
- a receiving module, configured to receive a re-recording input for the target jitter part;
- a response module, configured to display a video recording interface in response to the re-recording input, and re-record a video;
- a video processing module, configured to replace the target jitter part with the re-recorded video.
- embodiments of the present disclosure also provide an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the steps of the video recording method described above are implemented when the computer program is executed by the processor.
- embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program, where the steps of the video recording method described above are implemented when the computer program is executed by a processor.
- the jitter problem in the video recording process is solved by re-recording the jittered part of the video and replacing the jittered part with the re-recorded video, so that the final output video content is stable and of high quality.
- moreover, this video recording method does not lose the field of view, and its hardware requirements are relatively low.
- FIG. 1 is a schematic flowchart of a video recording method in Embodiment 1 of the present disclosure
- FIG. 2 is a schematic diagram of a video recording interface when re-recording a video in Embodiment 1 of the present disclosure
- FIG. 3 is the second schematic diagram of the video recording interface when re-recording the video in the first embodiment of the disclosure
- FIG. 4 is the third schematic diagram of a video recording interface when re-recording a video in Embodiment 1 of the present disclosure
- FIG. 5 is a schematic diagram of an original recording track corresponding to a jittered part of a target in Embodiment 1 of the present disclosure
- FIG. 6 is a schematic diagram of optimizing the original recording track in Embodiment 1 of the disclosure.
- FIG. 7 is a schematic diagram of displaying a target recording track in Embodiment 1 of the present disclosure.
- FIG. 8 is a schematic diagram of displaying a preview image on a video recording interface in Embodiment 1 of the present disclosure.
- FIG. 9 is a schematic diagram of displaying prompt information used to prompt that there is a jitter part in the video in Embodiment 1 of the present disclosure.
- FIG. 10 is a schematic diagram of a time axis display in Embodiment 1 of the present disclosure.
- FIG. 11 is a schematic diagram of a re-recording input in Embodiment 1 of the present disclosure.
- FIG. 12 is a schematic structural diagram of an electronic device in the second embodiment of the disclosure.
- FIG. 13 is a schematic structural diagram of an electronic device in the third embodiment of the disclosure.
- FIG. 14 is a schematic diagram of the hardware structure of an electronic device that implements various embodiments of the present disclosure.
- FIG. 1 is a schematic flowchart of a video recording method provided in Embodiment 1 of the present disclosure. The method is applied to an electronic device and includes:
- Step 11: Determine the target jitter part in the recorded video;
- Step 12: Receive a re-recording input for the target jitter part;
- Step 13: In response to the re-recording input, display the video recording interface, and re-record the video;
- Step 14: Replace the target jitter part with the re-recorded video.
- the jitter problem existing in the video recording process is solved, so that the final output video content is stable and high-quality.
- this video recording method does not lose the angle of view, and the hardware requirements are relatively small.
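- as a rough illustration of the replacement in Step 14, the sketch below splices a re-recorded clip into the original frame sequence in place of the jittered segment; the in-memory frame lists and half-open index ranges are assumptions made only for this illustration, since the publication does not describe an internal data layout.

```python
# Minimal sketch of replacing a jittered segment with a re-recorded clip.
# Assumption (not specified by the publication): videos are handled as
# lists of frames, and the jittered segment is a half-open index range.

def replace_segment(original_frames, jitter_range, rerecorded_frames):
    """Return a new frame list with the jittered range replaced."""
    start, end = jitter_range
    if not (0 <= start <= end <= len(original_frames)):
        raise ValueError("jitter_range is out of bounds")
    return original_frames[:start] + list(rerecorded_frames) + original_frames[end:]

# Example with dummy frames represented as strings:
video = [f"frame_{i}" for i in range(10)]
stable_retake = ["retake_0", "retake_1", "retake_2"]
print(replace_segment(video, (4, 7), stable_retake))
```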
- the following example illustrates the above video recording method.
- the step of determining the target jitter part in the recorded video includes: detecting the jitter amplitude of the recorded video; and determining that a part of the recorded video with a jitter amplitude greater than a preset threshold is a jitter part, the target jitter part being one of the jitter parts.
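- one plausible way to obtain such per-frame jitter amplitudes is from motion-sensor samples recorded alongside the video; the sketch below simply thresholds a per-frame amplitude signal and groups consecutive over-threshold frames into jitter segments, and both the synthetic amplitude values and the threshold are illustrative assumptions rather than values given by the publication.

```python
import numpy as np

def find_jitter_segments(amplitude, threshold):
    """Group consecutive frames whose jitter amplitude exceeds the threshold.

    amplitude: 1-D array with one jitter-amplitude value per frame
    threshold: preset amplitude threshold
    returns: list of (start_frame, end_frame) half-open index ranges
    """
    over = amplitude > threshold
    segments, start = [], None
    for i, flag in enumerate(over):
        if flag and start is None:
            start = i                      # a jitter segment begins
        elif not flag and start is not None:
            segments.append((start, i))    # the segment ends
            start = None
    if start is not None:
        segments.append((start, len(amplitude)))
    return segments

# Synthetic example: mostly steady footage, with a shaky burst around frames 40-60.
rng = np.random.default_rng(0)
amp = rng.normal(0.02, 0.01, 100)
amp[40:60] += 0.3
print(find_jitter_segments(amp, threshold=0.1))
```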
- the first frame image of the re-recorded video matches the first frame image of the target shaking part, and the last frame image of the re-recorded video matches the last frame image of the target shaking part.
- the step of re-recording the video specifically includes: starting to record the video when the preview image in the video recording interface matches the first frame image of the target shaking part; and stopping recording the video when the current frame image of the re-recorded video matches the last frame image of the target shaking part.
- the embodiment of the present disclosure adopts an image matching method so that the first frame image of the re-recorded video matches the first frame image of the target shaking part and the last frame image of the re-recorded video matches the last frame image of the target shaking part; when performing the image matching, feature points can be used to identify the same scene.
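- building on the feature-point idea mentioned above, a minimal sketch of such a check, assuming OpenCV is available, extracts ORB features from the live preview frame and the stored first frame of the jittered part and declares a match when enough correspondences survive a ratio test; the feature count, ratio, and match threshold below are illustrative assumptions rather than values given by the publication.

```python
import cv2

def frames_match(frame_a, frame_b, min_good_matches=40, ratio=0.75):
    """Return True when two BGR frames appear to show the same scene."""
    orb = cv2.ORB_create(nfeatures=1000)
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    _, des_a = orb.detectAndCompute(gray_a, None)
    _, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = 0
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        # Lowe-style ratio test to keep only distinctive correspondences.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good >= min_good_matches

# In a capture loop one might start recording once the live preview frame
# matches the stored first frame of the jittered segment, e.g.:
#     if frames_match(preview_frame, jitter_first_frame):
#         start_recording()
```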
- the step of displaying the video recording interface specifically includes:
- the first frame image of the target shaking part and/or the last frame image of the target shaking part are displayed on the video recording interface.
- in the embodiment of the present disclosure, by displaying the first frame image of the target shaking part on the video recording interface, it is convenient for the user to determine how to move the electronic device so that the preview image in the video recording interface matches the first frame image of the target shaking part, saving the time required for re-recording.
- similarly, by displaying the last frame image of the target shaking part on the video recording interface, it is also convenient for the user to determine how to move the electronic device so that the current frame image of the re-recorded video matches the last frame image of the target shaking part, saving the time required for re-recording.
- specifically, as shown in FIG. 2 to FIG. 4, both the first frame image and the last frame image of the target shaking part can be displayed on the video recording interface throughout the process of re-recording the video.
- in other optional embodiments, before re-recording starts, only the first frame image of the target shaking part and the preview image are displayed on the video recording interface; after re-recording starts, the first frame image of the target shaking part is no longer displayed, and only the last frame image of the target shaking part and the current frame image of the re-recorded video are displayed.
- the method further includes: in the case of displaying the first frame image of the target shaking part, if the preview image in the video recording interface matches the first frame image of the target shaking part, displaying first prompt information; and in the case of displaying the last frame image of the target shaking part, if the current frame image of the re-recorded video matches the last frame image of the target shaking part, displaying second prompt information.
- the first prompt information may be displayed by showing the text "matched", or by changing the display color of the first frame image of the target shaking part, for example turning it green; other methods may also be used, as long as the user is reminded accordingly, which will not be detailed here.
- as another optional specific implementation, after the step of receiving the re-recording input for the target jitter part, the method further includes: displaying a target recording track and a mark point on the video recording interface, the target recording track being an optimized track of the original recording track of the target jitter part.
- the step of re-recording the video then specifically includes: starting to record the video when the mark point coincides with the starting point of the target recording track; and stopping recording the video when the mark point coincides with the end point of the target recording track.
- specifically, before the target recording track is displayed on the video recording interface, the method may further include: obtaining the original recording track of the target jitter part, the original recording track being the movement track of the electronic device in space during the recording of the target jitter part; and optimizing the original recording track to obtain a smooth target recording track.
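- the publication only states that the original recording track is optimized into a smooth one and does not name a particular filter; as one illustrative assumption, the sketch below smooths an (N, 3) array of recorded device positions with a simple moving average whose window length is arbitrary.

```python
import numpy as np

def smooth_track(track, window=9):
    """Smooth an (N, 3) array of device positions with a moving average.

    The window length is an illustrative choice; the publication does not
    specify how the 'optimized' target track is computed.
    """
    track = np.asarray(track, dtype=float)
    kernel = np.ones(window) / window
    # Pad at both ends so the smoothed track keeps its original length and
    # its start and end points stay close to the original ones.
    pad = (window // 2, window - 1 - window // 2)
    padded = np.pad(track, (pad, (0, 0)), mode="edge")
    return np.column_stack(
        [np.convolve(padded[:, axis], kernel, mode="valid") for axis in range(track.shape[1])]
    )

# Example: a straight path with shaky noise added, then smoothed.
t = np.linspace(0.0, 1.0, 200)
noisy = np.column_stack([t, 0.1 * np.sin(40 * t), np.zeros_like(t)])
noisy += np.random.default_rng(1).normal(0.0, 0.01, noisy.shape)
target_track = smooth_track(noisy)
```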
- the target recording track may be displayed on the video recording interface in the form of augmented reality (AR) to indicate that the electronic device needs to move along the target recording track in the process of re-recording the video.
- the mark point is used to indicate whether the electronic device moves according to the target recording track.
- it should be noted that, in the process of re-recording the video, the target recording track is always displayed on the video recording interface by way of AR, and the mark point is also always displayed on the video recording interface; the user needs to keep the mark point coincident with the target recording track and keep moving in the direction from the start point to the end point.
- this embodiment of the present disclosure adopts a movement-track matching method for the electronic device to make the first frame image of the re-recorded video match the first frame image of the target shaking part and the last frame image of the re-recorded video match the last frame image of the target shaking part.
- moreover, since the target recording track includes not only the start point and the end point but also the movement track of the intermediate re-recording process, it can also keep the content of the re-recorded video basically consistent with the content of the target jitter part.
- in addition, the user only needs to align the mark point with the target recording track to achieve the matching, so the operation is simpler and less time-consuming.
- specifically, while recording the recorded video, the electronic device records its movement track in space (that is, the recording track).
- when determining the target jitter part in the recorded video, the electronic device also cuts out the original recording track corresponding to the target jitter part (as shown in FIG. 5) from the entire original recording track corresponding to the recorded video.
- then, as shown in FIG. 6, the original recording track corresponding to the target jitter part is optimized, mainly to eliminate the jittery portions of the track, to obtain a smooth optimized recording track, that is, the target recording track.
- as shown in FIG. 7, the electronic device displays the target recording track on the video recording interface by means of AR, and displays a mark point 05 on the video recording interface.
- in the process of re-recording the video, the user first needs to align the mark point 05 with the starting point of the target recording track to trigger recording, that is, when the electronic device detects that the mark point 05 coincides with the starting point of the target recording track, re-recording of the video starts automatically; then the user needs to move the mark point 05 along the target recording track from the starting point to the end point to end the recording, that is, when the electronic device detects that the mark point 05 coincides with the end point of the target recording track, re-recording of the video stops automatically.
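- coincidence between the on-screen mark point and the start or end point of the track could be detected with a simple distance threshold; the small state machine below sketches the resulting auto start/stop behaviour, with the pixel tolerance being an assumed value not given by the publication.

```python
import math

COINCIDE_TOLERANCE = 12.0  # assumed tolerance in pixels; not specified by the publication

def coincides(point_a, point_b, tolerance=COINCIDE_TOLERANCE):
    """Two screen points are treated as coincident when close enough."""
    return math.dist(point_a, point_b) <= tolerance

class ReRecordController:
    """Start recording when the mark point reaches the track start,
    stop when it reaches the track end."""

    def __init__(self, track_start, track_end):
        self.track_start = track_start
        self.track_end = track_end
        self.recording = False

    def on_mark_point(self, mark_point):
        if not self.recording and coincides(mark_point, self.track_start):
            self.recording = True          # auto-start re-recording
        elif self.recording and coincides(mark_point, self.track_end):
            self.recording = False         # auto-stop re-recording
        return self.recording

controller = ReRecordController(track_start=(120.0, 300.0), track_end=(540.0, 310.0))
print(controller.on_mark_point((121.0, 302.0)))  # True: recording starts
print(controller.on_mark_point((539.0, 309.0)))  # False: recording stops
```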
- optionally, before the step of receiving the re-recording input for the target jitter part, the method further includes: displaying the time axis of the recorded video, and displaying the target time axis corresponding to the target jitter part according to a preset display format.
- specifically, the aforementioned preset display format may be marking in black, marking in red, highlighting, or the like.
- in other optional implementations, before the step of receiving the re-recording input for the target jitter part, the method may further include: displaying third prompt information, where the third prompt information is used to prompt that there is a jitter part in the recorded video; and receiving a second input, and in response to the second input, displaying the time axis of the recorded video and displaying the time axis corresponding to the jitter part according to a preset display format.
- the target jitter part is one of the jitter parts.
- the third prompt information is displayed to facilitate the user to know in time that there is a jitter part in the recorded video and to process it in time.
- displaying the time axis of the recorded video and indicating, in the preset display format, the part of the time axis corresponding to the jitter part helps the user understand the approximate position of the jitter part in the recorded video.
- for example, as shown in FIG. 8, after the user opens the camera application of the electronic device and enters the video recording interface, a preview image is displayed on the video recording interface; the user taps the record button 01 to start recording a video and taps the stop button to stop recording.
- after recording stops, as shown in FIG. 9, if the electronic device determines from the result of the jitter-amplitude detection that there is a jitter part in the currently recorded video, it displays prompt information (that is, the third prompt information) used to prompt that there is a jitter part in the video.
- a button 03 may be displayed in the prompt box 02 where the prompt information is located, or the prompt box itself is a button, and the first input may be an input by clicking the button.
- in response to the first input, the electronic device displays the time axis 04 of the recorded video as shown in FIG. 10, where the blackened part is the part of the time axis corresponding to the jitter part.
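- to render such a time axis, the frame ranges of the jittered segments can be mapped to normalized positions along the bar; the helper below is a minimal sketch of that mapping, and the frame rate and layout values are assumptions for illustration.

```python
def timeline_marks(jitter_segments, total_frames, fps=30.0):
    """Convert jittered frame ranges into normalized timeline spans and times.

    jitter_segments: list of (start_frame, end_frame) half-open ranges
    returns: list of dicts with 0..1 positions along the bar and second offsets
    """
    marks = []
    for start, end in jitter_segments:
        marks.append({
            "from_fraction": start / total_frames,
            "to_fraction": end / total_frames,
            "from_seconds": start / fps,
            "to_seconds": end / fps,
        })
    return marks

# A 30 s clip at 30 fps with two shaky segments, matching the figure in
# which two parts of the time axis are blackened:
print(timeline_marks([(120, 210), (600, 660)], total_frames=900))
```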
- the re-recording input is an input of dragging the target time axis.
- for example, as shown in FIG. 11, there are two blackened parts in the time axis 04 of the recorded video, indicating that the recorded video contains two jitter parts; if the user wants to re-record the first jitter part (that is, the target jitter part), the user drags the first blackened part of the time axis (that is, the target time axis) upward. Further optionally, the re-recording input may be an input of dragging the target time axis to a preset position.
- in the embodiment of the present disclosure, the re-recording input operation is simple, and it directly makes clear which jitter part of the recorded video it is aimed at.
- the input of dragging the target time axis is only an example of the re-recording input, and not as a limitation on the re-recording input.
- the re-recording input can also be other inputs, such as tapping the target time axis, or first selecting the target time axis and then tapping a preset button, and so on, which will not be detailed here.
- optionally, after the step of displaying the target time axis corresponding to the target jitter part according to the preset display format, the method further includes: receiving a first input for the target time axis; and in response to the first input, playing the target jitter part.
- the user can click on one of the blacked parts of the time axis in FIG. 11, and then the electronic device will play the jitter part corresponding to the black part of the time axis, specifically playing the video content of the jittered part.
- in the embodiment of the present disclosure, after the electronic device displays the time axis of the recorded video and indicates the time axes corresponding to the jitter parts, the user can view the video content of a jitter part; after viewing it, the user can decide whether to re-record, skipping the part if not, or entering the re-recording input if so.
- in addition, the parts of the recorded video or of the re-recorded video whose jitter amplitude is less than the preset threshold may be left unprocessed, or an electronic image stabilization (EIS) algorithm with a small crop may be applied to them; although this loses some field of view, the loss is small.
- FIG. 12 is a schematic structural diagram of an electronic device according to Embodiment 2 of the present disclosure.
- the electronic device includes:
- the determining module 121 is used to determine the target jitter part in the recorded video
- the receiving module 122 is configured to receive re-recording input for the jitter part of the target;
- the response module 123 is configured to display a video recording interface in response to the re-recording input, and re-record the video;
- the video processing module 124 is configured to replace the target jitter part with the re-recorded video.
- the jitter problem in the video recording process is solved by re-recording the jittered part of the video and replacing the jittered part with the re-recorded video, so that the final output video content is stable and of high quality, the field of view is not lost, and the hardware requirements are relatively low.
- the first frame image of the re-recorded video matches the first frame image of the target shaking part, and the last frame image of the re-recorded video matches the last frame image of the target shaking part.
- the response module 123 includes:
- the first opening unit is configured to start video recording when the preview image in the video recording interface matches the first frame image of the target jitter part;
- the first stopping unit is configured to stop recording the video when the current frame image of the re-recorded video matches the last frame image of the target shaking part.
- the response module 123 includes:
- the display unit is configured to display the first frame image of the target shaking part and/or the last frame image of the target shaking part on the video recording interface.
- the electronic device further includes:
- the first prompting module is configured to display first prompt information if, in the case where the first frame image of the target shaking part is displayed, the preview image in the video recording interface matches the first frame image of the target shaking part;
- the second prompting module is configured to display second prompt information if, in the case where the last frame image of the target shaking part is displayed, the current frame image of the re-recorded video matches the last frame image of the target shaking part.
- the electronic device further includes:
- a display module, configured to display a target recording track and a mark point on the video recording interface, the target recording track being an optimized track of the original recording track of the target jitter part;
- the response module 123 includes:
- the second opening unit is configured to start recording a video when the mark point coincides with the starting point of the target recording track;
- the second stop unit is used to stop recording the video when the mark point coincides with the end point of the target recording track.
- the electronic device further includes:
- the time axis display module is used to display the time axis of the recorded video and display the target time axis corresponding to the target jitter part according to a preset display format.
- the re-recording input is an input of dragging the target time axis.
- the electronic device further includes:
- the first input receiving module is configured to receive the first input for the target time axis
- the playing module is configured to play the target jitter part in response to the first input.
- the electronic device provided by the embodiment of the present disclosure can implement each process in the method embodiment corresponding to FIG. 1 to FIG. 11. To avoid repetition, the details are not repeated here.
- FIG. 13 is a schematic structural diagram of an electronic device according to Embodiment 3 of the present disclosure.
- the electronic device 130 includes a processor 131, a memory 132, and a computer program stored in the memory 132 and executable on the processor 131; when the computer program is executed by the processor 131, the following steps are implemented: determining a target jitter part in a recorded video; receiving a re-recording input for the target jitter part; in response to the re-recording input, displaying a video recording interface and re-recording a video; and replacing the target jitter part with the re-recorded video.
- the jitter problem in the video recording process is solved by re-recording the jittered part of the video and replacing the jittered part with the re-recorded video, so that the final output video content is stable and of high quality, the field of view is not lost, and the hardware requirements are relatively low.
- the first frame image of the re-recorded video matches the first frame image of the target shaking part, and the last frame image of the re-recorded video matches the last frame image of the target shaking part.
- optionally, the step of re-recording the video specifically includes: starting to record the video when the preview image in the video recording interface matches the first frame image of the target jitter part; and stopping recording the video when the current frame image of the re-recorded video matches the last frame image of the target jitter part.
- optionally, the step of displaying the video recording interface specifically includes: displaying the first frame image of the target jitter part and/or the last frame image of the target jitter part on the video recording interface.
- optionally, in the case of displaying the first frame image of the target jitter part, if the preview image in the video recording interface matches the first frame image of the target jitter part, first prompt information is displayed; and in the case of displaying the last frame image of the target jitter part, if the current frame image of the re-recorded video matches the last frame image of the target jitter part, second prompt information is displayed.
- optionally, after the step of receiving the re-recording input for the target jitter part, the method further includes: displaying a target recording track and a mark point on the video recording interface, the target recording track being an optimized track of the original recording track of the target jitter part; and the step of re-recording the video then specifically includes: starting to record the video when the mark point coincides with the starting point of the target recording track, and stopping recording the video when the mark point coincides with the end point of the target recording track.
- optionally, before the step of receiving the re-recording input for the target jitter part, the method further includes: displaying the time axis of the recorded video and displaying the target time axis corresponding to the target jitter part according to a preset display format; the re-recording input is an input of dragging the target time axis.
- optionally, after the step of displaying the target time axis corresponding to the target jitter part according to the preset display format, the method further includes: receiving a first input for the target time axis; and in response to the first input, playing the target jitter part.
- the electronic device can implement each process of the first embodiment of the above method, and can achieve the same technical effect. In order to avoid repetition, details are not described herein again.
- the electronic device 1400 includes but is not limited to: a radio frequency unit 1401, a network module 1402, an audio output unit 1403, an input unit 1404, a sensor 1405, and a display unit 1406, a user input unit 1407, an interface unit 1408, a memory 1409, a processor 1410, a power supply 1411 and other components.
- those skilled in the art can understand that the structure shown in the figure does not constitute a limitation on the electronic device, and the electronic device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
- electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, in-vehicle terminals, wearable devices, and pedometers.
- the processor 1410 is configured to determine the target jitter part in the recorded video; the user input unit 1407 is configured to receive a re-recording input for the target jitter part; and the processor 1410 is further configured to, in response to the re-recording input, display the video recording interface through the display unit 1406, re-record the video, and replace the target jitter part with the re-recorded video.
- the jitter problem in the video recording process is solved by re-recording the jittered part of the video and replacing the jittered part with the re-recorded video, so that the final output video content is stable and of high quality, the field of view is not lost, and the hardware requirements are relatively low.
- the radio frequency unit 1401 can be used for receiving and sending signals in the process of sending and receiving information or during a call; specifically, downlink data from the base station is received and then sent to the processor 1410 for processing, and uplink data is sent to the base station.
- the radio frequency unit 1401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio frequency unit 1401 can also communicate with the network and other devices through a wireless communication system.
- the electronic device provides users with wireless broadband Internet access through the network module 1402, such as helping users to send and receive emails, browse web pages, and access streaming media.
- the audio output unit 1403 may convert the audio data received by the radio frequency unit 1401 or the network module 1402 or stored in the memory 1409 into audio signals and output them as sounds. Moreover, the audio output unit 1403 may also provide audio output related to a specific function performed by the electronic device 1400 (for example, call signal reception sound, message reception sound, etc.).
- the audio output unit 1403 includes a speaker, a buzzer, a receiver, and the like.
- the input unit 1404 is used to receive audio or video signals.
- the input unit 1404 may include a graphics processing unit (GPU) 14041 and a microphone 14042.
- the graphics processor 14041 processes the image data of still pictures or videos obtained by an image capture device (such as a camera) in the video capture mode or the image capture mode.
- the processed image frame can be displayed on the display unit 1406.
- the image frame processed by the graphics processor 14041 may be stored in the memory 1409 (or other storage medium) or sent via the radio frequency unit 1401 or the network module 1402.
- the microphone 14042 can receive sound, and can process such sound into audio data.
- in the case of the telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1401 and output.
- the electronic device 1400 further includes at least one sensor 1405, such as a light sensor, a motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display panel 14061 according to the brightness of the ambient light.
- the proximity sensor can turn off the display panel 14061 and/or the backlight when the electronic device 1400 is moved to the ear.
- as a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the electronic device (such as switching between portrait and landscape screens, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer and tapping); the sensor 1405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be repeated here.
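- as one illustration of the posture recognition mentioned here, the gravity direction reported by a three-axis accelerometer at rest can be used to decide between portrait and landscape orientation; the axis convention and the simple comparisons below are assumptions for illustration, not behaviour specified by the publication.

```python
def orientation_from_gravity(gx, gy, gz):
    """Classify device posture from a gravity vector (m/s^2).

    Assumes a typical phone axis convention: +x to the right of the screen,
    +y toward the top of the screen, +z out of the screen.
    """
    if abs(gz) > max(abs(gx), abs(gy)):
        return "flat"                      # lying face up or face down
    if abs(gy) >= abs(gx):
        return "portrait" if gy > 0 else "portrait_upside_down"
    return "landscape_left" if gx > 0 else "landscape_right"

print(orientation_from_gravity(0.2, 9.7, 0.5))   # portrait
print(orientation_from_gravity(9.6, 0.3, 0.8))   # landscape_left
```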
- the display unit 1406 is used to display information input by the user or information provided to the user.
- the display unit 1406 may include a display panel 14061, and the display panel 14061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
- the user input unit 1407 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
- the user input unit 1407 includes a touch panel 14071 and other input devices 14072.
- the touch panel 14071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 14071 with a finger, a stylus, or any other suitable object or accessory).
- the touch panel 14071 may include two parts, a touch detection device and a touch controller.
- the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1410, and receives and executes commands sent by the processor 1410.
- the touch panel 14071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
- the user input unit 1407 may also include other input devices 14072.
- other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
- further, the touch panel 14071 can cover the display panel 14061; when the touch panel 14071 detects a touch operation on or near it, it transmits the operation to the processor 1410 to determine the type of the touch event, and the processor 1410 then provides corresponding visual output on the display panel 14061 according to the type of the touch event.
- although in FIG. 14 the touch panel 14071 and the display panel 14061 are used as two independent components to realize the input and output functions of the electronic device, in some embodiments the touch panel 14071 and the display panel 14061 may be integrated to realize the input and output functions of the electronic device, which is not specifically limited here.
- the interface unit 1408 is an interface for connecting an external device and the electronic device 1400.
- the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (input/output, I/O) port, video I/O port, headphone port, etc.
- the interface unit 1408 can be used to receive input (for example, data information, power, and so on) from an external device and transmit the received input to one or more elements in the electronic device 1400, or can be used to transfer data between the electronic device 1400 and the external device.
- the memory 1409 can be used to store software programs and various data.
- the memory 1409 may mainly include a storage program area and a storage data area.
- the storage program area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the storage data area may store data created according to the use of the mobile phone (such as audio data and a phone book), and the like.
- the memory 1409 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
- the processor 1410 is the control center of the electronic device; it connects the various parts of the entire electronic device by using various interfaces and lines, and, by running or executing the software programs and/or modules stored in the memory 1409 and calling the data stored in the memory 1409, performs the various functions of the electronic device and processes data, so as to monitor the electronic device as a whole.
- the processor 1410 may include one or more processing units; optionally, the processor 1410 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication; it can be understood that the modem processor may alternatively not be integrated into the processor 1410.
- the electronic device 1400 may also include a power source 1411 (such as a battery) for supplying power to the various components; optionally, the power source 1411 may be logically connected to the processor 1410 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
- the electronic device 1400 includes some functional modules not shown, which will not be repeated here.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, each process of the above video recording method embodiments is implemented and the same technical effect can be achieved, which will not be repeated here to avoid repetition.
- the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk, or optical disk, etc.
- the technical solution of the present disclosure, in essence or the part contributing to the related art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to make an electronic device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) execute the methods described in the various embodiments of the present disclosure.
- the disclosed device and method may be implemented in other ways.
- the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the present disclosure can be embodied in the form of a software product in essence or a part that contributes to the related technology.
- the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, a network device, or the like) execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage media include: U disk, mobile hard disk, ROM, RAM, magnetic disk or optical disk and other media that can store program codes.
- all or part of the processes in the above method embodiments may be implemented by a computer program controlling related hardware; the program may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
- for a hardware implementation, the modules, units, and sub-units can be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to perform the functions described in the present disclosure, or combinations thereof.
- the technology described in the embodiments of the present disclosure can be implemented through modules (for example, procedures, functions, etc.) that perform the functions described in the embodiments of the present disclosure.
- the software codes can be stored in the memory and executed by the processor.
- the memory can be implemented in the processor or external to the processor.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Television Signal Processing For Recording (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present disclosure provide a video recording method and an electronic device. The method includes: determining a target jitter part in a recorded video; receiving a re-recording input for the target jitter part; in response to the re-recording input, displaying a video recording interface and re-recording a video; and replacing the target jitter part with the re-recorded video.
Description
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201910802034.5 filed in China on August 28, 2019, the entire contents of which are incorporated herein by reference.
The embodiments of the present disclosure relate to the field of image technology, and in particular, to a video recording method and an electronic device.
In the related art, video anti-shake is a development direction of mobile phone video recording, and conventional anti-shake is implemented through anti-shake algorithms or hardware design. However, algorithm-based anti-shake needs to crop the video frame, resulting in a smaller field of view (FOV), while hardware-based anti-shake requires specific hardware devices; for example, optical image stabilization (OIS) requires a gyroscope and a compensation lens group.
Summary
The embodiments of the present disclosure provide a video recording method and an electronic device, to solve the problem in the related art that video anti-shake methods either reduce the field of view or require specific hardware devices.
To solve the above technical problem, the present disclosure is implemented as follows:
In a first aspect, the embodiments of the present disclosure provide a video recording method, applied to an electronic device, including:
determining a target jitter part in a recorded video;
receiving a re-recording input for the target jitter part;
in response to the re-recording input, displaying a video recording interface, and re-recording a video;
replacing the target jitter part with the re-recorded video.
In a second aspect, the embodiments of the present disclosure further provide an electronic device, including:
a determining module, configured to determine a target jitter part in a recorded video;
a receiving module, configured to receive a re-recording input for the target jitter part;
a response module, configured to display a video recording interface in response to the re-recording input, and re-record a video;
a video processing module, configured to replace the target jitter part with the re-recorded video.
In a third aspect, the embodiments of the present disclosure further provide an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above video recording method.
In a fourth aspect, the embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the above video recording method.
In the embodiments of the present disclosure, the jitter problem existing in the video recording process is solved by re-recording the content of the jittered part of the video and replacing the jittered part with the re-recorded video, so that the final output video content is stable and of high quality; moreover, this video recording method does not lose the field of view and places relatively low demands on hardware.
FIG. 1 is a schematic flowchart of a video recording method in Embodiment 1 of the present disclosure;
FIG. 2 is the first schematic diagram of a video recording interface when re-recording a video in Embodiment 1 of the present disclosure;
FIG. 3 is the second schematic diagram of a video recording interface when re-recording a video in Embodiment 1 of the present disclosure;
FIG. 4 is the third schematic diagram of a video recording interface when re-recording a video in Embodiment 1 of the present disclosure;
FIG. 5 is a schematic diagram of an original recording track corresponding to a target jitter part in Embodiment 1 of the present disclosure;
FIG. 6 is a schematic diagram of optimizing an original recording track in Embodiment 1 of the present disclosure;
FIG. 7 is a schematic diagram of displaying a target recording track in Embodiment 1 of the present disclosure;
FIG. 8 is a schematic diagram of displaying a preview image on a video recording interface in Embodiment 1 of the present disclosure;
FIG. 9 is a schematic diagram of displaying prompt information used to prompt that there is a jitter part in a video in Embodiment 1 of the present disclosure;
FIG. 10 is a schematic diagram of displaying a time axis in Embodiment 1 of the present disclosure;
FIG. 11 is a schematic diagram of a re-recording input in Embodiment 1 of the present disclosure;
FIG. 12 is a schematic structural diagram of an electronic device in Embodiment 2 of the present disclosure;
FIG. 13 is a schematic structural diagram of an electronic device in Embodiment 3 of the present disclosure;
FIG. 14 is a schematic diagram of the hardware structure of an electronic device that implements various embodiments of the present disclosure.
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings of the embodiments of the present disclosure. Obviously, the described embodiments are some rather than all of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the described embodiments of the present disclosure fall within the protection scope of the present disclosure.
请参阅图1,图1是本公开实施例一提供的一种视频录制方法的流程示意图,该方法应用于电子设备,包括:
步骤11:确定已录制视频中的目标抖动部分;
步骤12:接收针对所述目标抖动部分的重录输入;
步骤13:响应于所述重录输入,显示视频录制界面,并重新录制视频;
步骤14:将所述目标抖动部分替换为所述重新录制的视频。
本公开实施例,通过重新录制视频中抖动部分的内容,并将抖动部分替换为重新录制的视频的方式,来解决视频录制过程中存在的抖动问题,使得最终输出的视频内容稳定、质量高。且该视频录制方式,不会损失视场角,对硬件的需求也比较小。
下面举例说明上述视频录制方法。
可选的,所述确定已录制视频中的目标抖动部分的步骤包括:
检测所述已录制视频的抖动幅度;
确定所述已录制视频中抖动幅度大于预设阈值的部分为抖动部分,所述目标抖动部分为其中一个抖动部分。
可选的,所述重新录制的视频的第一帧图像与所述目标抖动部分的第一帧图像匹配,且所述重新录制的视频的最后一帧图像与所述目标抖动部分的最后一帧图像匹配。
本公开实施例中,由于需要将所述已录制视频中的目标抖动部分替换为 重新录制的视频,因此为保证视频的连贯性,需要使得所述重新录制的视频的第一帧图像与所述目标抖动部分的第一帧图像匹配,且所述重新录制的视频的最后一帧图像与所述目标抖动部分的最后一帧图像匹配。
作为其中一种可选的具体实施方式,所述重新录制视频的步骤,具体包括:
在所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像相匹配时,开始录制视频;
在所述重新录制的视频的当前帧图像与所述目标抖动部分的最后一帧图像相匹配时,停止录制视频。
本公开实施例采用图像匹配的方式来使所述重新录制的视频的第一帧图像与所述目标抖动部分的第一帧图像匹配,且所述重新录制的视频的最后一帧图像与所述目标抖动部分的最后一帧图像匹配。
具体的,在所述电子设备响应于所述重录输入,显示视频录制界面之后,用户可以通过移动电子设备的方式使得所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像匹配,而一旦匹配上,电子设备就自动开始录制。在录制过程中,所述电子设备还实时地将录制的当前帧图像与所述目标抖动部分的最后一帧图像进行匹配,一旦匹配上,则所述电子设备自动停止录制。在进行图像匹配时,可以通过特征点来匹配,识别相同场景。
可选的,所述显示视频录制界面的步骤具体包括:
在所述视频录制界面上显示所述目标抖动部分的第一帧图像和/或所述目标抖动部分的最后一帧图像。
本公开实施例中,通过在所述视频录制界面上显示所述目标抖动部分的第一帧图像,可以方便用户判断如何移动电子设备来使得所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像匹配上,节省重新录制所需的时间。同样的,通过在所述视频录制界面上显示所述目标抖动部分的最后一帧图像,也可以方便用户判断如何移动电子设备来使得重新录制的视频的当前帧图像与所述目标抖动部分的最后一帧图像匹配上,节省重新录制所需的时间。
具体的,如图2-4所示,在重新录制视频的整个过程中,都可以在所述 视频录制界面上显示所述目标抖动部分的第一帧图像和所述目标抖动部分的最后一帧图像。当然,在其他的可选实施例中,也可以在开始重新录制视频之前,只将所述目标抖动部分的第一帧图像和预览图像一起显示在所述视频录制界面上;且在开始重新录制视频之后,不再显示所述目标抖动部分的第一帧图像,而只显示所述目标抖动部分的最后一帧图像和重新录制的视频的当前帧图像。
可选的,所述方法还包括:
在显示所述目标抖动部分的第一帧图像的情况下,若所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像相匹配时,显示第一提示信息;
在显示所述目标抖动部分的最后一帧图像的情况下,若所述重新录制的视频的当前帧图像与所述目标抖动部分的最后一帧图像相匹配时,显示第二提示信息。
具体的,显示所述第一提示信息的方式,可以是显示文字“已匹配”,或者改变所述目标抖动部分的第一帧图像的显示颜色,例如所述目标抖动部分的第一帧图像变绿,还可以是其他的方式,只要可以对用户进行相应提醒即可,此处不再详举。
作为另一种可选的具体实施方式,所述接收针对所述目标抖动部分的重录输入的步骤之后,还包括:
在所述视频录制界面显示目标录制轨迹和标志点,所述目标录制轨迹为对所述目标抖动部分的原始录制轨迹进行优化后的轨迹;
所述重新录制视频的步骤具体包括:
在所述标志点与所述目标录制轨迹的起点重合时,开始录制视频;
在所述标志点与所述目标录制轨迹的终点重合时,停止录制视频。
具体的,在所述视频录制界面显示目标录制轨迹之前,还可以包括:
获取所述目标抖动部分的原始录制轨迹,所述原始录制轨迹为录制所述目标抖动部分的过程中,所述电子设备在空间中的移动轨迹;
对所述原始录制轨迹进行优化,得到平滑的目标录制轨迹。
本公开实施例中,具体可以是通过增强现实(Augmented Reality,AR)的 方式,将所述目标录制轨迹显示在所述视频录制界面,以指示所述电子设备在重新录制视频的过程中需要按照所述目标录制轨迹移动。所述标志点用于指示所述电子设备是否按照所述目标录制轨迹移动。
需要说明的是,在重新录制视频的过程中,所述目标录制轨迹始终通过AR的方式显示在所述视频录制界面,所述标志点也始终显示在所述视频录制界面,用户需要始终保持所述标志点与所述目标录制轨迹重合,且保持沿着从起点到终点的方向移动。
本公开实施例是采用电子设备的移动轨迹的匹配方式来实现所述重新录制的视频的第一帧图像与所述目标抖动部分的第一帧图像匹配,且所述重新录制的视频的最后一帧图像与所述目标抖动部分的最后一帧图像匹配。而且,在本公开实施例中,由于所述目标录制轨迹不仅包括起点和终点,还包括重新录制视频中间过程的移动轨迹,因此还可以使得所述重新录制的视频的内容与所述目标抖动部分的内容基本保持一致。另外,只需要将标志点与目标录制轨迹对准就能实现图像的匹配,操作更加简单,耗费的时间更少。
具体的,在录制所述已录制视频的过程中,所述电子设备就记录录制过程中所述电子设备在空间中的移动轨迹(也即录制轨迹)。所述电子设备在确定所述已录制视频中的目标抖动部分时,也将所述目标抖动部分对应的原始录制轨迹(如图5所示)从所述已录制视频对应的全部原始录制轨迹中截取出来。然后,如图6所示,对所述目标抖动部分对应的原始录制轨迹进行优化,主要是消除抖动的轨迹,得到平滑的优化录制轨迹,也就是目标录制轨迹。如图7所示,所述电子设备通过AR的方式,将所述目标录制轨迹显示在所述视频录制界面,并在所述视频录制界面显示一标志点05。在重新录制视频的过程中,用户首先需要将所述标志点05对到所述目标录制轨迹的起点,触发录制,也即所述电子设备检测到所述标志点05与所述目标录制轨迹的起点重合时自动开始视频的重新录制,然后用户需要将所述标志点05沿着所述目标录制轨迹从起点移动到终点,结束录制,也即所述电子设备检测到所述标志点05与所述目标录制轨迹的终点重合时自动停止视频的重新录制。
可选的,所述接收针对目标抖动部分的重录输入的步骤之前,还包括:
显示所述已录制视频的时间轴,并按照预设显示格式显示所述目标抖动 部分对应的目标时间轴。
本公开实施例中,通过显示所述已录制视频的时间轴,并按照预设显示格式显示所述目标抖动部分对应的目标时间轴,可以方便用户了解目标抖动部分在所述已录制视频中的大概位置。
具体的,上述预设显示格式可以是标黑、标红或高亮等。
在其他的可选实施方式中,所述接收针对目标抖动部分的重录输入的步骤之前,还可以包括:显示第三提示信息,所述第三提示信息用于提示所述已录制视频中存在抖动部分;
接收第二输入,响应于所述第二输入,显示所述已录制视频的时间轴,并按照预设显示格式显示所述抖动部分对应的时间轴。
本公开实施例中,目标抖动部分是其中的一个抖动部分。
本公开实施例中,显示第三提示信息是为了方便用户及时知晓所述已录制视频中存在抖动部分,以及时进行处理。显示所述已录制视频的时间轴,并采用使用预设显示格式进行显示的方式对所述抖动部分对应的部分时间轴进行指示,是为了方便用户了解抖动部分在所述已录制视频中的大概位置。
例如,如图8所示,用户打开电子设备的相机应用进入视频录制界面后,视频录制界面上显示预览图像,用户点击录像按钮01即可开始录制一段视频,在录制过程中,用户按结束按钮即可停止录像。在停止录像之后,如图9所示,如果电子设备根据抖动幅度检测结果判断当前录制的视频中存在抖动部分,就会显示用来提示视频中存在抖动部分的提示信息(也即第三提示信息)。进一步的,该提示信息所在的提示框02中可以显示一按钮03,或者该提示框本身就是一按钮,所述第一输入可以是点击该按钮的输入。所述电子设备响应于所述第一输入,如图10所示,显示已录制视频的时间轴04,其中标黑的部分为抖动部分对应的部分时间轴。
上述通过标黑的方式对所述抖动部分对应的部分时间轴进行指示只是举例说明,在其他的具体实施方式中,也可以是通过标红、高亮等方式来指示,这里不详细列举。
进一步可选的,所述重录输入为拖动所述目标时间轴的输入。
例如,如图11所示,所述已录制视频的时间轴04中有两处标黑的部分, 表示该已录制视频中存在两个抖动部分,若用户想要对第一个抖动部分(也即目标抖动部分)进行重新录制,那么就将第一个标黑的部分时间轴(也即目标时间轴)向上拖动。进一步可选的,所述重录输入可以是将所述目标时间轴拖动至预设位置的输入。
本公开实施例中,所述重录输入操作简单,而且可以直接明确是针对所述已录制视频中的哪个抖动部分。
上述拖动目标时间轴的输入也只是所述重录输入的举例说明,并不作为对所述重录输入的限定,所述重录输入还可以是其他的输入,例如点击目标时间轴,或者先选中目标时间轴然后点击一预设按钮,等等,此处也不再详举。
可选的,所述按照预设显示格式显示所述目标抖动部分对应的目标时间轴的步骤之后,还包括:
接收针对所述目标时间轴的第一输入;
响应于所述第一输入,播放所述目标抖动部分。
例如,用户可以点击图11中的其中一个标黑的部分时间轴,然后电子设备就播放该标黑的部分时间轴对应的抖动部分,具体是播放该抖动部分的视频内容。
本公开实施例中,在所述电子设备显示所述已录制视频的时间轴,并对抖动部分对应的时间轴进行指示之后,用户可以查看抖动部分的视频内容。在查看完之后,用户可以自行判断是否重新录制,若不想重新录制,则跳过此部分,若想重新录制则输入所述重录输入。
另外,对于所述已录制视频中或者所述重新录制的视频中抖动幅度小于预设阈值的部分,可以不做处理,也可以使用裁剪较小的电子防抖(electronic image stabilization,EIS)算法,这样虽然会损失部分视场角,但是损失的视场角较小。
请参阅图12,图12是本公开实施例二提供的一种电子设备的结构示意图,该电子设备包括:
确定模块121,用于确定已录制视频中的目标抖动部分;
接收模块122,用于接收针对所述目标抖动部分的重录输入;
响应模块123,用于响应于所述重录输入,显示视频录制界面,并重新录制视频;
视频处理模块124,用于将所述目标抖动部分替换为所述重新录制的视频。
本公开实施例中,通过重新录制视频中抖动部分的内容,并将抖动部分替换为重新录制的视频的方式,来解决视频录制过程中存在的抖动问题,使得最终输出的视频内容稳定、质量高,且不会损失视场角,对硬件的需求也比较小。
下面举例说明上述电子设备。
可选的,所述重新录制的视频的第一帧图像与所述目标抖动部分的第一帧图像匹配,且所述重新录制的视频的最后一帧图像与所述目标抖动部分的最后一帧图像匹配。
可选的,所述响应模块123包括:
第一开启单元,用于在所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像相匹配时,开始录制视频;
第一停止单元,用于在所述重新录制的视频的当前帧图像与所述目标抖动部分的最后一帧图像相匹配时,停止录制视频。
可选的,所述响应模块123包括:
显示单元,用于在所述视频录制界面上显示所述目标抖动部分的第一帧图像和/或所述目标抖动部分的最后一帧图像。
可选的,所述电子设备还包括:
第一提示模块,用于在显示所述目标抖动部分的第一帧图像的情况下,若所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像相匹配时,显示第一提示信息;
第二提示模块,用于在显示所述目标抖动部分的最后一帧图像的情况下,若所述重新录制的视频的当前帧图像与所述目标抖动部分的最后一帧图像相匹配时,显示第二提示信息。
可选的,所述电子设备还包括:
显示模块,用于在所述视频录制界面显示目标录制轨迹和标志点,所述 目标录制轨迹为对所述目标抖动部分的原始录制轨迹进行优化后的轨迹;
所述响应模块123包括:
第二开启单元,用于在所述标志点与所述目标录制轨迹的起点重合时,开始录制视频;
第二停止单元,用于在所述标志点与所述目标录制轨迹的终点重合时,停止录制视频。
可选的,所述电子设备还包括:
时间轴显示模块,用于显示所述已录制视频的时间轴,并按照预设显示格式显示所述目标抖动部分对应的目标时间轴。
可选的,所述重录输入为拖动所述目标时间轴的输入。
可选的,所述电子设备还包括:
第一输入接收模块,用于接收针对所述目标时间轴的第一输入;
播放模块,用于响应于所述第一输入,播放所述目标抖动部分。
本公开实施例提供的电子设备,能够实现图1至图11对应的方法实施例中的各个过程,为避免重复,这里不再赘述。
请参考图13,图13为本公开实施例三提供的一种电子设备的结构示意图,该电子设备130包括处理器131,存储器132,存储在存储器132上并可在所述处理器131上运行的计算机程序,该计算机程序被处理器131执行时实现如下步骤:
确定已录制视频中的目标抖动部分;
接收针对所述目标抖动部分的重录输入;
响应于所述重录输入,显示视频录制界面,并重新录制视频;
将所述目标抖动部分替换为所述重新录制的视频。
在本公开实施例中,通过重新录制视频中抖动部分的内容,并将抖动部分替换为重新录制的视频的方式,来解决视频录制过程中存在的抖动问题,使得最终输出的视频内容稳定、质量高,且不会损失视场角,对硬件的需求也比较小。
可选的,所述重新录制的视频的第一帧图像与所述目标抖动部分的第一帧图像匹配,且所述重新录制的视频的最后一帧图像与所述目标抖动部分的 最后一帧图像匹配。
可选的,计算机程序被处理器131执行时还可实现如下步骤:
所述重新录制视频的步骤,具体包括:
在所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像相匹配时,开始录制视频;
在所述重新录制的视频的当前帧图像与所述目标抖动部分的最后一帧图像相匹配时,停止录制视频。
可选的,计算机程序被处理器131执行时还可实现如下步骤:
所述显示视频录制界面的步骤,具体包括:
在所述视频录制界面上显示所述目标抖动部分的第一帧图像和/或所述目标抖动部分的最后一帧图像。
可选的,计算机程序被处理器131执行时还可实现如下步骤:
在显示所述目标抖动部分的第一帧图像的情况下,若所述视频录制界面中的预览图像与所述目标抖动部分的第一帧图像相匹配时,显示第一提示信息;
在显示所述目标抖动部分的最后一帧图像的情况下,若所述重新录制的视频的当前帧图像与所述目标抖动部分的最后一帧图像相匹配时,显示第二提示信息。
可选的,计算机程序被处理器131执行时还可实现如下步骤:
所述接收针对所述目标抖动部分的重录输入的步骤之后,所述方法还包括:
在所述视频录制界面显示目标录制轨迹和标志点,所述目标录制轨迹为对所述目标抖动部分的原始录制轨迹进行优化后的轨迹;
所述重新录制视频的步骤具体包括:
在所述标志点与所述目标录制轨迹的起点重合时,开始录制视频;
在所述标志点与所述目标录制轨迹的终点重合时,停止录制视频。
可选的,计算机程序被处理器131执行时还可实现如下步骤:
所述接收针对所述目标抖动部分的重录输入的步骤之前,还包括:
显示所述已录制视频的时间轴,并按照预设显示格式显示所述目标抖动 部分对应的目标时间轴。
可选的,所述重录输入为拖动所述目标时间轴的输入。
可选的,计算机程序被处理器131执行时还可实现如下步骤:
所述按照预设显示格式显示所述目标抖动部分对应的目标时间轴的步骤之后,还包括:
接收针对所述目标时间轴的第一输入;
响应于所述第一输入,播放所述目标抖动部分。
该电子设备能够实现上述方法实施例一的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
图14为实现本公开各个实施例的一种电子设备的硬件结构示意图,该电子设备1400包括但不限于:射频单元1401、网络模块1402、音频输出单元1403、输入单元1404、传感器1405、显示单元1406、用户输入单元1407、接口单元1408、存储器1409、处理器1410、以及电源1411等部件。本领域技术人员可以理解,图14中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本公开实施例中,电子设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。
其中,处理器1410,用于确定已录制视频中的目标抖动部分;用户输入单元1407,用于接收针对所述目标抖动部分的重录输入;处理器1410,还用于响应于所述重录输入,通过显示单元1406显示视频录制界面,并重新录制视频;将所述目标抖动部分替换为所述重新录制的视频。
本公开实施例中,通过重新录制视频中抖动部分的内容,并将抖动部分替换为重新录制的视频的方式,来解决视频录制过程中存在的抖动问题,使得最终输出的视频内容稳定、质量高,且不会损失视场角,对硬件的需求也比较小。
应理解的是,本公开实施例中,射频单元1401可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器1410处理;另外,将上行的数据发送给基站。通常,射频单元1401包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工 器等。此外,射频单元1401还可以通过无线通信系统与网络和其他设备通信。
电子设备通过网络模块1402为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元1403可以将射频单元1401或网络模块1402接收的或者在存储器1409中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元1403还可以提供与电子设备1400执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元1403包括扬声器、蜂鸣器以及受话器等。
输入单元1404用于接收音频或视频信号。输入单元1404可以包括图形处理器(Graphics Processing Unit,GPU)14041和麦克风14042,图形处理器14041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元1406上。经图形处理器14041处理后的图像帧可以存储在存储器1409(或其它存储介质)中或者经由射频单元1401或网络模块1402进行发送。麦克风14042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元1401发送到移动通信基站的格式输出。
电子设备1400还包括至少一种传感器1405,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板14061的亮度,接近传感器可在电子设备1400移动到耳边时,关闭显示面板14061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别电子设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器1405还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元1406用于显示由用户输入的信息或提供给用户的信息。显示单元1406可包括显示面板14061,可以采用液晶显示器(Liquid Crystal Display, LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板14061。
用户输入单元1407可用于接收输入的数字或字符信息,以及产生与电子设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元1407包括触控面板14071以及其他输入设备14072。触控面板14071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板14071上或在触控面板14071附近的操作)。触控面板14071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1410,接收处理器1410发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板14071。除了触控面板14071,用户输入单元1407还可以包括其他输入设备14072。具体地,其他输入设备14072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板14071可覆盖在显示面板14061上,当触控面板14071检测到在其上或附近的触摸操作后,传送给处理器1410以确定触摸事件的类型,随后处理器1410根据触摸事件的类型在显示面板14061上提供相应的视觉输出。虽然在图14中,触控面板14071与显示面板14061是作为两个独立的部件来实现电子设备的输入和输出功能,但是在某些实施例中,可以将触控面板14071与显示面板14061集成而实现电子设备的输入和输出功能,具体此处不做限定。
接口单元1408为外部装置与电子设备1400连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(input/output,I/O)端口、视频I/O端口、耳机端口等等。接口单元1408可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到电子设备1400内的一个或多个元件或者可以用于在电子设备1400和外部装置之间传输数据。
存储器1409可用于存储软件程序以及各种数据。存储器1409可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1409可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器1410是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器1409内的软件程序和/或模块,以及调用存储在存储器1409内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。处理器1410可包括一个或多个处理单元;可选的,处理器1410可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1410中。
电子设备1400还可以包括给各个部件供电的电源1411(比如电池),可选的,电源1411可以通过电源管理系统与处理器1410逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,电子设备1400包括一些未示出的功能模块,在此不再赘述。
本公开实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述视频录制方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台电子设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本公开各个实施例所述的方法。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本公开的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来控制相关的硬件来完成,所述的程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储器(Read-Only Memory,ROM)或随机存取存储器(Random Access Memory,RAM)等。
可以理解的是,本公开实施例描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,模块、单元、子单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本公开所述功能的其它电子单元或其组合中。
对于软件实现,可通过执行本公开实施例所述功能的模块(例如过程、函数等)来实现本公开实施例所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。
Claims (15)
- A video recording method, applied to an electronic device, comprising: determining a target jitter part in a recorded video; receiving a re-recording input for the target jitter part; in response to the re-recording input, displaying a video recording interface and re-recording a video; and replacing the target jitter part with the re-recorded video.
- The method according to claim 1, wherein a first frame image of the re-recorded video matches a first frame image of the target jitter part, and a last frame image of the re-recorded video matches a last frame image of the target jitter part.
- The method according to claim 2, wherein the step of re-recording the video specifically comprises: starting to record the video when a preview image in the video recording interface matches the first frame image of the target jitter part; and stopping recording the video when a current frame image of the re-recorded video matches the last frame image of the target jitter part.
- The method according to claim 3, wherein the step of displaying the video recording interface specifically comprises: displaying the first frame image of the target jitter part and/or the last frame image of the target jitter part on the video recording interface.
- The method according to any one of claims 2 to 4, further comprising: in a case where the first frame image of the target jitter part is displayed, if the preview image in the video recording interface matches the first frame image of the target jitter part, displaying first prompt information; and in a case where the last frame image of the target jitter part is displayed, if the current frame image of the re-recorded video matches the last frame image of the target jitter part, displaying second prompt information.
- The method according to claim 2, wherein after the step of receiving the re-recording input for the target jitter part, the method further comprises: displaying a target recording track and a mark point on the video recording interface, the target recording track being a track obtained by optimizing an original recording track of the target jitter part; and the step of re-recording the video specifically comprises: starting to record the video when the mark point coincides with a starting point of the target recording track; and stopping recording the video when the mark point coincides with an end point of the target recording track.
- The method according to claim 1, wherein before the step of receiving the re-recording input for the target jitter part, the method further comprises: displaying a time axis of the recorded video, and displaying a target time axis corresponding to the target jitter part according to a preset display format.
- The method according to claim 7, wherein the re-recording input is an input of dragging the target time axis.
- The method according to claim 7, wherein after the step of displaying the target time axis corresponding to the target jitter part according to the preset display format, the method further comprises: receiving a first input for the target time axis; and in response to the first input, playing the target jitter part.
- An electronic device, comprising: a determining module, configured to determine a target jitter part in a recorded video; a receiving module, configured to receive a re-recording input for the target jitter part; a response module, configured to display a video recording interface in response to the re-recording input and re-record a video; and a video processing module, configured to replace the target jitter part with the re-recorded video.
- The electronic device according to claim 10, wherein a first frame image of the re-recorded video matches a first frame image of the target jitter part, and a last frame image of the re-recorded video matches a last frame image of the target jitter part.
- The electronic device according to claim 11, wherein the response module comprises: a first starting unit, configured to start recording the video when a preview image in the video recording interface matches the first frame image of the target jitter part; and a first stopping unit, configured to stop recording the video when a current frame image of the re-recorded video matches the last frame image of the target jitter part.
- The electronic device according to claim 11, further comprising: a display module, configured to display a target recording track and a mark point on the video recording interface, the target recording track being a track obtained by optimizing an original recording track of the target jitter part; wherein the response module comprises: a second starting unit, configured to start recording the video when the mark point coincides with a starting point of the target recording track; and a second stopping unit, configured to stop recording the video when the mark point coincides with an end point of the target recording track.
- An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video recording method according to any one of claims 1 to 9.
- A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the video recording method according to any one of claims 1 to 9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910802034.5A CN110602386B (zh) | 2019-08-28 | 2019-08-28 | 一种视频录制方法及电子设备 |
CN201910802034.5 | 2019-08-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021036659A1 true WO2021036659A1 (zh) | 2021-03-04 |
Family
ID=68856035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/105526 WO2021036659A1 (zh) | 2019-08-28 | 2020-07-29 | 视频录制方法及电子设备 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110602386B (zh) |
WO (1) | WO2021036659A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283220A (zh) * | 2021-05-18 | 2021-08-20 | 维沃移动通信有限公司 | 笔记记录方法、装置、设备及可读存储介质 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110602386B (zh) * | 2019-08-28 | 2021-05-14 | 维沃移动通信有限公司 | 一种视频录制方法及电子设备 |
CN113572993B (zh) * | 2020-04-27 | 2022-10-11 | 华为技术有限公司 | 一种视频处理方法及移动终端 |
CN114390341B (zh) * | 2020-10-22 | 2023-06-06 | 华为技术有限公司 | 一种视频录制方法、电子设备、存储介质及芯片 |
CN112492251A (zh) * | 2020-11-24 | 2021-03-12 | 维沃移动通信有限公司 | 视频通话方法及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150097976A1 (en) * | 2011-12-14 | 2015-04-09 | Panasonic Corporation | Image processing device and image processing method |
CN104618627A (zh) * | 2014-12-31 | 2015-05-13 | 小米科技有限责任公司 | 视频处理方法和装置 |
CN108024083A (zh) * | 2017-11-28 | 2018-05-11 | 北京川上科技有限公司 | 处理视频的方法、装置、电子设备和计算机可读存储介质 |
CN109089059A (zh) * | 2018-10-19 | 2018-12-25 | 北京微播视界科技有限公司 | 视频生成的方法、装置、电子设备及计算机存储介质 |
CN110602386A (zh) * | 2019-08-28 | 2019-12-20 | 维沃移动通信有限公司 | 一种视频录制方法及电子设备 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016025562A (ja) * | 2014-07-23 | 2016-02-08 | ソニー株式会社 | 表示制御装置、撮像装置及び表示制御方法 |
CN105898133A (zh) * | 2015-08-19 | 2016-08-24 | 乐视网信息技术(北京)股份有限公司 | 一种视频拍摄方法及装置 |
CN109905590B (zh) * | 2017-12-08 | 2021-04-27 | 腾讯科技(深圳)有限公司 | 一种视频图像处理方法及装置 |
CN108366243B (zh) * | 2018-01-23 | 2019-10-29 | 微幻科技(北京)有限公司 | 一种视频去抖方法及装置 |
CN109348125B (zh) * | 2018-10-31 | 2020-02-04 | Oppo广东移动通信有限公司 | 视频校正方法、装置、电子设备和计算机可读存储介质 |
-
2019
- 2019-08-28 CN CN201910802034.5A patent/CN110602386B/zh active Active
-
2020
- 2020-07-29 WO PCT/CN2020/105526 patent/WO2021036659A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150097976A1 (en) * | 2011-12-14 | 2015-04-09 | Panasonic Corporation | Image processing device and image processing method |
CN104618627A (zh) * | 2014-12-31 | 2015-05-13 | 小米科技有限责任公司 | 视频处理方法和装置 |
CN108024083A (zh) * | 2017-11-28 | 2018-05-11 | 北京川上科技有限公司 | 处理视频的方法、装置、电子设备和计算机可读存储介质 |
CN109089059A (zh) * | 2018-10-19 | 2018-12-25 | 北京微播视界科技有限公司 | 视频生成的方法、装置、电子设备及计算机存储介质 |
CN110602386A (zh) * | 2019-08-28 | 2019-12-20 | 维沃移动通信有限公司 | 一种视频录制方法及电子设备 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113283220A (zh) * | 2021-05-18 | 2021-08-20 | 维沃移动通信有限公司 | 笔记记录方法、装置、设备及可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN110602386B (zh) | 2021-05-14 |
CN110602386A (zh) | 2019-12-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021098678A1 (zh) | 投屏控制方法及电子设备 | |
US11689649B2 (en) | Shooting method and terminal | |
WO2021036659A1 (zh) | 视频录制方法及电子设备 | |
WO2021036536A1 (zh) | 视频拍摄方法及电子设备 | |
WO2021036542A1 (zh) | 录屏方法及移动终端 | |
WO2021078116A1 (zh) | 视频处理方法及电子设备 | |
WO2019196929A1 (zh) | 一种视频数据处理方法及移动终端 | |
WO2021159998A1 (zh) | 信息显示方法、装置及电子设备 | |
CN111010510B (zh) | 一种拍摄控制方法、装置及电子设备 | |
WO2020238497A1 (zh) | 图标移动方法及终端设备 | |
WO2020042890A1 (zh) | 视频处理方法、终端及计算机可读存储介质 | |
CN111010610B (zh) | 一种视频截图方法及电子设备 | |
WO2019223494A1 (zh) | 截屏方法及移动终端 | |
US11740769B2 (en) | Screenshot method and terminal device | |
US20200257433A1 (en) | Display method and mobile terminal | |
WO2020238449A1 (zh) | 通知消息的处理方法及终端 | |
WO2019196691A1 (zh) | 一种键盘界面显示方法和移动终端 | |
WO2021036623A1 (zh) | 显示方法及电子设备 | |
WO2021109959A1 (zh) | 应用程序分享方法及电子设备 | |
WO2019184947A1 (zh) | 图像查看方法及移动终端 | |
WO2020199988A1 (zh) | 内容复制方法及终端 | |
WO2020238445A1 (zh) | 屏幕录制方法及终端 | |
WO2020199986A1 (zh) | 视频通话方法及终端设备 | |
WO2021082772A1 (zh) | 截屏方法及电子设备 | |
WO2021082744A1 (zh) | 视频查看方法及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20857358 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20857358 Country of ref document: EP Kind code of ref document: A1 |