WO2019105441A1 - Video recording method and video recording terminal - Google Patents

Video recording method and video recording terminal

Info

Publication number
WO2019105441A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
user
recording
video file
receiving
Prior art date
Application number
PCT/CN2018/118380
Other languages
English (en)
French (fr)
Inventor
周宇涛
Original Assignee
广州市百果园信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州市百果园信息技术有限公司 filed Critical 广州市百果园信息技术有限公司
Priority to SG11202005043TA priority Critical patent/SG11202005043TA/en
Priority to EP18884133.2A priority patent/EP3709633B1/en
Priority to RU2020121732A priority patent/RU2745737C1/ru
Priority to US16/764,958 priority patent/US11089259B2/en
Publication of WO2019105441A1 publication Critical patent/WO2019105441A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/036Insert-editing
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors

Definitions

  • The present invention relates to the field of video recording technology, and in particular to a video recording method and a video recording terminal.
  • Conventional video sharing platforms allow users to add various special effects to a video while it is being recorded, making homemade videos more entertaining so as to attract viewers to click and watch.
  • Among homemade videos, one popular production technique splices two or more clips into a single video. For example, a user first records a clip of himself, changes his appearance, and then records a second clip while holding the same pose, so that the finished video looks like an instantaneous costume change. However, it is difficult for the user to align with the final pose of the previous clip when resuming, which makes the transition between the clips look unnatural.
  • Such videos are therefore not engaging enough; viewers gradually lose interest, homemade videos struggle to attract an audience, and their click-through rate drops.
  • The invention provides a video recording method comprising the following steps: recording a video upon receiving a recording instruction input by the user, then pausing recording upon receiving a pause instruction input by the user and generating a first video file; continuing to display the dynamic picture being captured by the image sensor in real time, with the transparency-processed last frame of the first video file superimposed on that dynamic picture; resuming recording upon receiving a resume instruction input by the user, then stopping recording upon receiving a stop instruction input by the user and generating a second video file; and splicing the first video file and the second video file into a target video file.
  • In one embodiment, pausing recording upon receiving the pause instruction input by the user comprises: pausing recording upon receiving the pause instruction input by the user, and acquiring and saving first posture data collected by an angular velocity sensor at the current moment;
  • superimposing the last frame on the dynamic picture then comprises: superimposing the last frame on the dynamic picture, and issuing a corresponding prompt to the user according to how well the first posture data matches second posture data detected by the angular velocity sensor in real time.
  • In one embodiment, superimposing the last frame on the dynamic picture comprises:
  • superimposing the last frame on the dynamic picture, and issuing a corresponding prompt to the user according to the similarity between any frame of the dynamic picture and the last frame.
  • In one embodiment, before the first video file and the second video file are spliced into the target video file, an animation effect is further added to the first video file and/or the second video file.
  • In one embodiment, before the first video file and the second video file are spliced into the target video file, frame-interpolation or frame-decimation processing is further applied to the first video file and/or the second video file.
  • In one embodiment, before the transparency-processed last frame is displayed, the outline of the main element of the picture content is further drawn on that last frame.
  • In one embodiment, the video is recorded after a first preset time counts down when the recording instruction input by the user is received, and/or recording resumes after a second preset time counts down when the resume instruction input by the user is received.
  • In one embodiment, a recording progress component with a time stamp is displayed while the video is being recorded.
  • In one embodiment, after the first video file is generated, the video is re-recorded and the first video file is regenerated upon receiving a re-recording instruction input by the user; and/or
  • after the second video file is generated, the video is re-recorded and the second video file is regenerated upon receiving a re-recording instruction input by the user.
  • The invention also provides a video recording terminal, comprising:
  • a display; one or more processors; a memory; and
  • one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the video recording method according to any of the above embodiments.
  • With the above video recording method and video recording terminal, a video is recorded upon receiving a recording instruction input by the user; recording is paused upon receiving a pause instruction input by the user, and a first video file is generated; the dynamic picture being captured by the image sensor continues to be displayed in real time, with the transparency-processed last frame of the first video file superimposed on it; recording resumes upon receiving a resume instruction input by the user and stops upon receiving a stop instruction input by the user, generating a second video file; and the first video file and the second video file are spliced into a target video file.
  • When the user resumes recording, the picture can therefore be aligned against the last frame retained from the first clip, so that the second clip connects more smoothly to the first clip. This "after-image" resume-recording function makes the transition more natural when the clips are finally spliced into the target video, increases the entertainment value of homemade videos, and thereby improves their click-through rate.
  • FIG. 1 is a flow chart of a video recording method of an embodiment
  • FIG. 2 is a block diagram showing a partial structure of a mobile phone related to a terminal provided by an embodiment of the present invention.
  • The terms "terminal" and "terminal device" used herein cover both devices that have only a wireless signal receiver without transmitting capability and devices whose receiving and transmitting hardware can carry out two-way communication over a two-way communication link.
  • Such devices may include: cellular or other communication devices with a single-line display, a multi-line display, or no multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, facsimile and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices that have and/or include a radio frequency receiver.
  • The "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (air, sea and/or land), or suitable for and/or configured to operate locally and/or in distributed form at any other location on Earth and/or in space.
  • The "terminal" or "terminal device" used herein may also be a communication terminal, an Internet terminal, or a music/video playing terminal, for example a PDA, a MID (Mobile Internet Device) and/or a mobile phone with music/video playback capability, or a device such as a smart TV or a set-top box.
  • FIG. 1 is a flow chart of a video recording method according to an embodiment.
  • The present invention provides a video recording method, applied to a mobile terminal, which includes the following steps S100 to S400:
  • Step S100: Record a video upon receiving a recording instruction input by the user (for example, the user taps the record button), then pause recording upon receiving a pause instruction input by the user (for example, the user taps the pause button), and generate a first video file.
  • In some embodiments, the first video file here may be a cache file held in memory.
  • In some embodiments the video is recorded immediately upon receiving the recording instruction input by the user, while in other embodiments recording only starts after a first preset time counts down from when the recording instruction is received, for example after a 3-second countdown.
  • While the video is being recorded, a recording progress component with a time stamp, such as a recording progress bar, can be displayed to show the recording progress in real time.
  • The time stamp on the progress bar helps the user know how long the video has been recorded; when recording is paused on receiving the pause instruction, the progress bar pauses with it. A small sketch of this countdown and time stamp follows.
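  • As an illustration only (not part of the patent disclosure), a minimal Python sketch of the countdown before recording and of the time stamp shown on the progress component; the 3-second default and the mm:ss format are assumptions, since the patent leaves both open.
      import time

      def countdown(seconds=3):
          """Count down before recording starts, e.g. 3, 2, 1 (assumed 3 s default)."""
          for remaining in range(seconds, 0, -1):
              print("Recording starts in %d..." % remaining)
              time.sleep(1)

      def format_progress(elapsed_seconds):
          """Time stamp for the recording progress component, e.g. 00:07."""
          return "%02d:%02d" % (elapsed_seconds // 60, elapsed_seconds % 60)

      countdown(3)
      print(format_progress(7))   # prints 00:07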
  • To help the user line up the picture when resuming later, recording can be paused upon receiving the pause instruction while first posture data collected at that moment by the angular velocity sensor (gyroscope) is acquired and saved. The first posture data records the current posture of the terminal, such as its inclination.
  • In some embodiments, direction data collected by a magnetic sensor at the same moment can also be acquired and saved.
  • When the user later resumes recording, the first posture data can be used to instruct the user to adjust the posture of the terminal, for example its inclination or orientation, as sketched below.
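  • A minimal Python sketch, assuming the saved first posture data and the live second posture data are available as (pitch, roll) angles in degrees; the tolerance, the sign conventions, and the wording of the prompts are illustrative assumptions, not values given in the patent.
      TILT_TOLERANCE_DEG = 5.0  # assumed threshold; the patent does not specify one

      def posture_prompt(first_posture, second_posture):
          """Compare the posture saved at pause time with the posture detected now
          and return a prompt for the user. Postures are (pitch, roll) in degrees."""
          d_pitch = second_posture[0] - first_posture[0]
          d_roll = second_posture[1] - first_posture[1]
          if abs(d_pitch) <= TILT_TOLERANCE_DEG and abs(d_roll) <= TILT_TOLERANCE_DEG:
              return "Postures match - you can resume recording"
          hints = []
          if abs(d_pitch) > TILT_TOLERANCE_DEG:
              # sign convention is illustrative only
              hints.append("tilt %s by about %.0f deg" % ("down" if d_pitch > 0 else "up", abs(d_pitch)))
          if abs(d_roll) > TILT_TOLERANCE_DEG:
              hints.append("rotate %s by about %.0f deg" % ("left" if d_roll > 0 else "right", abs(d_roll)))
          return "Adjust the terminal: " + ", ".join(hints)

      # Example: posture saved when pausing vs. posture detected now
      print(posture_prompt((10.0, -2.0), (22.0, -3.0)))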
  • To let the user re-record when unhappy with the captured video, in some embodiments, after the first video file is generated, the video is re-recorded and the first video file regenerated upon receiving a re-recording instruction input by the user (for example, the user taps the re-record button); the old first video file is deleted or replaced by the regenerated one.
  • In some other embodiments, after the first video file is generated, the first video file may be deleted upon receiving a deletion instruction input by the user (for example, the user taps the delete button), and the video is then re-recorded and the first video file regenerated upon receiving a re-recording instruction input by the user (for example, the user taps the record button again).
  • In some embodiments, a user instruction can be received to add animated special-effect content to the first video file, to make the video more entertaining.
  • After the first video file is generated, step S200 is performed.
  • Step S200: Continue to display the dynamic picture being captured by the image sensor in real time, and superimpose on that dynamic picture the transparency-processed last frame of the first video file.
  • After recording is paused, the image sensor keeps capturing dynamic pictures in real time, but these pictures are not stored as a video file. For example, when a mobile phone or camera is in shooting mode, the image sensor is switched on and captures images in real time, and only when the user taps the shoot or record control does the device start storing the captured image data as a photo or a video file.
  • The terminal continues to display this dynamic picture in real time, so that the user can observe the scene being framed.
  • While the dynamic picture is displayed, the last frame is superimposed on it, and that last frame has been transparency-processed; for example, it may be semi-transparent, which is equivalent to laying a translucent image over the dynamic picture. The user can therefore still see, through the translucent last frame, the dynamic picture that the image sensor is capturing in real time, and can use the translucent last frame to line up the picture when resuming recording. In other words, the user can decide when to resume by comparing the dynamic picture with the last frame, so that the transition between the first video and the second video becomes more natural.
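  • As an illustration only, a minimal Python/OpenCV sketch of this translucent overlay, assuming the live preview and the saved last frame are same-sized BGR arrays; the 0.5 transparency value and the use of the default webcam as a stand-in image sensor are assumptions.
      import cv2

      ALPHA = 0.5  # assumed degree of transparency; the patent only says "translucent"

      def overlay_last_frame(live_frame, last_frame, alpha=ALPHA):
          """Blend the semi-transparent last frame of the first clip over the live preview."""
          return cv2.addWeighted(last_frame, alpha, live_frame, 1.0 - alpha, 0)

      cap = cv2.VideoCapture(0)       # live image sensor (webcam as a stand-in)
      ok, last_frame = cap.read()     # stand-in for the saved last frame of the first video
      while ok:
          ok, live = cap.read()
          if not ok:
              break
          cv2.imshow("preview", overlay_last_frame(live, last_frame))
          if cv2.waitKey(1) & 0xFF == ord('q'):
              break
      cap.release()
      cv2.destroyAllWindows()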
  • In some embodiments, before the transparency-processed last frame is displayed, the outline of the main element of the picture content can also be drawn on that last frame. For example, if the main element of the last frame is a person, the person's outline can be drawn in the last frame, making it even easier for the user to line up the picture.
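  • One possible way to sketch such an outline, shown only as an assumption: the patent does not prescribe a segmentation method, so a simple Canny edge map (with illustrative thresholds) is drawn in white onto the last frame.
      import cv2

      def outline_main_element(last_frame):
          """Draw an edge outline of the frame content onto the last frame in white."""
          gray = cv2.cvtColor(last_frame, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 80, 160)        # thresholds are illustrative
          outlined = last_frame.copy()
          outlined[edges > 0] = (255, 255, 255)   # paint edge pixels white
          return outlined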
  • In some embodiments, while the last frame is superimposed on the dynamic picture, a corresponding prompt can be issued to the user according to how well the first posture data described above matches the second posture data currently detected by the angular velocity sensor.
  • For example, if the posture in which the user is holding the terminal does not match the posture at the moment recording was paused, the current dynamic picture will naturally not match the last frame. By comparing the first posture data with the second posture data, the degree of matching (or similarity) between the dynamic picture and the last frame can be estimated, and the user can be prompted to make a corresponding adjustment, for example to tilt the terminal in a certain direction or by a certain angle, or to turn it towards a certain orientation.
  • When the comparison of the first and second posture data indicates that the dynamic picture and the last frame match reasonably well, the user can be prompted that recording may be resumed.
  • In other embodiments, while the last frame is superimposed on the dynamic picture, the prompt can instead be based on the similarity between a frame of the dynamic picture and the last frame. That is, in these embodiments the degree of matching between the dynamic picture and the last frame is judged directly by image analysis, and when the matching degree (or similarity) between a frame of the dynamic picture (for example, a certain frame) and the last frame reaches a preset condition, the user is prompted that recording may be resumed.
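  • A sketch of the image-analysis variant, offered only as an assumption: the patent does not fix a similarity metric or threshold, so mean absolute pixel difference on grayscale frames and a 0.92 preset condition are used here for illustration.
      import cv2
      import numpy as np

      SIMILARITY_THRESHOLD = 0.92  # assumed preset condition

      def frame_similarity(frame_a, frame_b):
          """Return a 0..1 similarity score based on mean absolute pixel difference."""
          a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY).astype(np.float32)
          b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY).astype(np.float32)
          return 1.0 - float(np.mean(np.abs(a - b))) / 255.0

      def should_prompt_resume(live_frame, last_frame):
          """True when the live frame matches the last frame closely enough to prompt the user."""
          return frame_similarity(live_frame, last_frame) >= SIMILARITY_THRESHOLD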
  • Step S300: Continue recording the video upon receiving a resume instruction input by the user (for example, the user taps the record button again), then stop recording upon receiving a stop instruction input by the user, and generate a second video file.
  • In some embodiments, the second video file here may be a cache file held in memory.
  • In some embodiments recording resumes as soon as the resume instruction is received, while in other embodiments recording only resumes after a second preset time counts down from when the resume instruction is received, for example after a 3-second countdown.
  • While recording, a recording progress component with a time stamp, such as a recording progress bar, can likewise be displayed to show the recording progress in real time.
  • Likewise, to let the user re-record when unhappy with the captured video, in some embodiments, after the second video file is generated, the video is re-recorded and the second video file regenerated upon receiving a re-recording instruction input by the user (for example, the user taps the re-record button); the old second video file is deleted or replaced by the regenerated one.
  • In some other embodiments, after the second video file is generated, the second video file may be deleted upon receiving a deletion instruction input by the user (for example, the user taps the delete button), and the video is then re-recorded and the second video file regenerated upon receiving a re-recording instruction input by the user (for example, the user taps the record button again).
  • In some embodiments, a user instruction can be received to add animated special-effect content to the second video file, to make the video more entertaining.
  • Step S400: Splice the first video file and the second video file into a target video file.
  • The target video file here may also be a cache file in memory; when a sharing or storing instruction input by the user is received, the target video file is shared, stored locally, or stored on a cloud server.
  • The first video file and the second video file may be spliced into the target video file upon receiving a splicing instruction input by the user, or they may be spliced automatically after step S300; this is not limited here. A simple splicing sketch follows.
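  • For illustration only, a minimal Python/OpenCV sketch that appends the frames of the second clip after the first clip; the file names, the mp4v codec, and the assumption that both clips share the same resolution are hypothetical, and audio handling is omitted.
      import cv2

      def splice_videos(first_path, second_path, target_path):
          """Write the frames of the first clip followed by the frames of the second clip."""
          first = cv2.VideoCapture(first_path)
          fps = first.get(cv2.CAP_PROP_FPS) or 30.0
          width = int(first.get(cv2.CAP_PROP_FRAME_WIDTH))
          height = int(first.get(cv2.CAP_PROP_FRAME_HEIGHT))
          writer = cv2.VideoWriter(target_path,
                                   cv2.VideoWriter_fourcc(*"mp4v"),
                                   fps, (width, height))
          for cap in (first, cv2.VideoCapture(second_path)):
              while True:
                  ok, frame = cap.read()
                  if not ok:
                      break
                  writer.write(frame)
              cap.release()
          writer.release()

      # hypothetical file names
      splice_videos("first_clip.mp4", "second_clip.mp4", "target_video.mp4")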
  • Of course, after the target video file is generated, image processing may be applied to it, for example to add animation effects. Image signal processing includes, but is not limited to, at least one of the following operations: black-level subtraction, lens roll-off correction, channel gain adjustment, bad-pixel correction, demosaicing, cropping, scaling, white balance, color correction, brightness adaptation, color conversion, and image contrast enhancement.
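  • Purely as an illustration of one operation from this list, a short Python/OpenCV sketch of a linear brightness/contrast adjustment; the gain and offset values are assumptions, not values from the patent.
      import cv2

      def enhance_contrast(frame, alpha=1.2, beta=10):
          """Apply a simple linear contrast (alpha) and brightness (beta) adjustment."""
          return cv2.convertScaleAbs(frame, alpha=alpha, beta=beta)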
  • Alternatively, so that the content of the first or second video plays back faster or slower in the target video, frame-interpolation or frame-decimation processing may be applied to the first video file and/or the second video file before they are spliced. Frame interpolation may insert repeated frames between the frames of the video, so the video becomes correspondingly longer and the action appears slower to the viewer.
  • Frame decimation may uniformly extract and discard some of the video frames (for example, dropping the odd or the even frames), so the video becomes correspondingly shorter and the action appears faster to the viewer.
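  • A minimal Python sketch of these two simple variants (duplicate every frame to slow the action, keep only even-indexed frames to speed it up); the in-memory list-of-frames representation is an assumption made for brevity.
      def interpolate_frames(frames):
          """Duplicate every frame, roughly doubling the clip length (slower motion)."""
          doubled = []
          for frame in frames:
              doubled.extend([frame, frame])
          return doubled

      def decimate_frames(frames):
          """Keep only even-indexed frames, roughly halving the clip length (faster motion)."""
          return frames[::2]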
  • The present invention also provides a video recording terminal comprising: a display; one or more processors; a memory; and one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the video recording method according to any of the above embodiments.
  • An embodiment of the present invention further provides a mobile terminal. As shown in FIG. 2, for ease of description only the parts related to the embodiment of the present invention are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention.
  • The terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or an in-vehicle computer; the following description takes a mobile phone as an example.
  • FIG. 2 is a block diagram of part of the structure of a mobile phone related to the terminal provided by an embodiment of the present invention.
  • The mobile phone includes components such as a radio frequency (RF) circuit 1510, a memory 1520, an input unit 1530, a display unit 1540, a sensor 1550, an audio circuit 1560, a wireless fidelity (Wi-Fi) module 1570, a processor 1580, and a power supply 1590.
  • Those skilled in the art will understand that the handset structure shown in FIG. 2 does not limit the handset; it may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
  • The RF circuit 1510 can be used to receive and send signals while information is being transmitted or received or during a call; in particular, after receiving downlink information from a base station it passes the information to the processor 1580 for processing, and it also sends uplink data to the base station.
  • Typically, the RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • In addition, the RF circuit 1510 can also communicate with networks and other devices via wireless communication.
  • The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • The memory 1520 can be used to store software programs and modules, and the processor 1580 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1520.
  • The memory 1520 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created through use of the mobile phone (such as audio data or a phone book).
  • In addition, the memory 1520 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
  • The input unit 1530 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset.
  • Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532.
  • The touch panel 1531, also referred to as a touch screen, can collect touch operations performed by the user on or near it (such as operations performed on or near the touch panel 1531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program.
  • Optionally, the touch panel 1531 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 1580, and it can also receive commands from the processor 1580 and execute them.
  • In addition, the touch panel 1531 can be implemented using various types such as resistive, capacitive, infrared, and surface-acoustic-wave panels.
  • Besides the touch panel 1531, the input unit 1530 may also include other input devices 1532.
  • Specifically, the other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and an on/off key), a trackball, a mouse, a joystick, and the like.
  • The display unit 1540 can be used to display information input by the user, information provided to the user, and the various menus of the mobile phone.
  • The display unit 1540 can include a display panel 1541; optionally, the display panel 1541 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
  • Further, the touch panel 1531 may cover the display panel 1541. After detecting a touch operation on or near it, the touch panel 1531 passes the operation to the processor 1580 to determine the type of the touch event, and the processor 1580 then provides a corresponding visual output on the display panel 1541 according to the type of the touch event.
  • Although in FIG. 2 the touch panel 1531 and the display panel 1541 are shown as two separate components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1531 and the display panel 1541 can be integrated to implement the input and output functions of the phone.
  • The handset may also include at least one kind of sensor 1550, such as a light sensor, a motion sensor, or another sensor.
  • Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1541 according to the ambient light level, and the proximity sensor can switch off the display panel 1541 and/or the backlight when the mobile phone is moved to the ear.
  • As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally on three axes), can detect the magnitude and direction of gravity when at rest, and can be used in applications that recognize the phone's attitude (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection).
  • The mobile phone can also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described further here.
  • The audio circuit 1560, a speaker 1561, and a microphone 1562 can provide an audio interface between the user and the handset.
  • The audio circuit 1560 can transmit the electrical signal converted from received audio data to the speaker 1561, where the speaker 1561 converts it into a sound signal for output; conversely, the microphone 1562 converts the collected sound signal into an electrical signal, which the audio circuit 1560 receives and converts into audio data. After the audio data is processed by the processor 1580, it is sent via the RF circuit 1510 to, for example, another mobile phone, or output to the memory 1520 for further processing.
  • Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 1570, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access.
  • Although FIG. 2 shows the Wi-Fi module 1570, it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
  • The processor 1580 is the control center of the handset. It connects the various parts of the whole handset through various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 1520 and invoking the data stored in the memory 1520, thereby monitoring the phone as a whole.
  • Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, while the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 1580.
  • The handset also includes a power supply 1590 (such as a battery) that powers the various components; preferably, the power supply can be logically connected to the processor 1580 via a power management system, so that charging, discharging, and power-consumption management are handled through the power management system.
  • Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described further here.
  • In this embodiment of the present invention, the processor 1580 included in the terminal further has the following functions: recording a video upon receiving a recording instruction input by the user, then pausing recording upon receiving a pause instruction input by the user and generating a first video file; continuing to display the dynamic picture being captured by the image sensor in real time, with the transparency-processed last frame of the first video file superimposed on that dynamic picture; resuming recording upon receiving a resume instruction input by the user, then stopping recording upon receiving a stop instruction input by the user and generating a second video file; and splicing the first video file and the second video file into a target video file. That is, the processor 1580 is able to perform the video recording method of any of the above embodiments, which is not described again here.
  • In summary, with the above video recording method and video recording terminal, a video is recorded upon receiving a recording instruction input by the user; recording is paused upon receiving a pause instruction input by the user, and a first video file is generated; the dynamic picture being captured by the image sensor continues to be displayed in real time, with the transparency-processed last frame of the first video file superimposed on it; recording resumes upon receiving a resume instruction input by the user and stops upon receiving a stop instruction input by the user, generating a second video file; and the first video file and the second video file are spliced into a target video file.
  • When the user resumes recording, the picture can therefore be aligned against the last frame retained from the first clip, so that the second clip connects more smoothly to the first clip. This "after-image" resume-recording function makes the transition more natural when the clips are finally spliced into the target video, increases the entertainment value of homemade videos, and thereby improves their click-through rate.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present invention provides a video recording method and a video recording terminal. A video is recorded upon receiving a recording instruction input by the user; recording is paused upon receiving a pause instruction input by the user, and a first video file is generated. The dynamic picture being captured by the image sensor continues to be displayed in real time, with the transparency-processed last frame of the first video file superimposed on it. Recording resumes upon receiving a resume instruction input by the user and stops upon receiving a stop instruction input by the user, and a second video file is generated. The first video file and the second video file are spliced into a target video file. When the user resumes recording, the picture can be aligned against the last frame retained while the first video was being recorded, so that the second video recorded afterwards connects more smoothly to the first video recorded before, which increases the entertainment value of homemade videos and thereby improves their click-through rate.

Description

视频录制方法和视频录制终端 技术领域
本发明涉及视频录制技术领域,具体而言,本发明涉及一种视频录制方法和视频录制终端。
背景技术
随着视频技术的不断发展,涌现了各种视频分享平台,以供网友欣赏上传到平台的自创视频。传统视频分享平台,可以允许用户在摄录视频时对视频加入各种特效,使得自创的视频变得更为有趣,以吸引网友点击欣赏。自创视频中,有一种将两个视频或多个视频拼接成同一视频的视频制作方式较为受用户欢迎,例如先对自己拍一段视频,然后用户经过调整自己的外观后,再以同一姿势下续拍一段视频,看似瞬间换装,趣味十足。然而,用户在续录时难以对准上一视频的最后姿势,降低了视频过渡的自然性。这些视频显然趣味性不足,使得网友的兴趣逐渐下降,使得自创视频难以吸引网友,降低了自创视频的点击率。
发明内容
本发明的目的旨在至少能解决上述的技术缺陷之一,特别是续录视频时难以对准的技术缺陷。
本发明提供一种视频录制方法,包括如下步骤:
在接收到用户输入的录制指令时摄录视频,然后在接收到用户输入的暂停指令时暂停摄录,并生成第一视频文件;
继续展示图像传感器正在实时采集的动态画面,并在该动态画面上叠加展示所述第一视频文件的经透明化处理后的最后一帧画面;
在接收到用户输入的续录指令时继续摄录视频,然后在接收到用户输入的停止指令时停止摄录,并生成第二视频文件;
将所述第一视频文件和第二视频文件拼接成目标视频文件。
在其中一个实施例中,所述在接收到用户输入的暂停指令时暂停摄录的过程包括:在接收到用户输入的暂停指令时暂停摄录,获取并保存角速度传感器当前时刻采集的第一姿势数据;
在该动态画面上叠加展示所述最后一帧画面的过程包括:在该动态画面上叠加展示所述最后一帧画面,并根据所述第一姿势数据和所述角速度传感器实时检测的第二姿势数据的匹配关系向用户发出相应的提示。
在其中一个实施例中,在该动态画面上叠加展示所述最后一帧画面的过程包括:
在该动态画面上叠加展示所述最后一帧画面,并根据该动态画面的任意一帧画面与所述最后一帧画面之间的相似度向用户发出相应的提示。
在其中一个实施例中,将所述第一视频文件和第二视频文件拼接成目标视频文件之前,还对所述第一视频文件和/或第二视频文件添加动画特效。
在其中一个实施例中,将所述第一视频文件和第二视频文件拼接成目标视频文件之前,还对所述第一视频文件和/或第二视频文件进行插帧处理或抽帧处理。
在其中一个实施例中,在展示所述经透明化处理后的最后一帧画面之前,还在所述最后一帧画面对画面内容中的主体要素描绘轮廓。
在其中一个实施例中,在接收到用户输入的录制指令时经过倒计第一预设时间后摄录视频,和/或在接收到用户输入的续录指令时经过倒计第二预设时间后继续摄录视频。
在其中一个实施例中,在摄录视频时展示带有时间标识的摄录进度部件。
在其中一个实施例中,生成第一视频文件后,在接收到用户输入的重录指令时,重新摄录视频并重新生成第一视频文件;和/或
生成第二视频文件后,在接收到用户输入的重录指令时,重新摄录视频并重新生成第二视频文件。
本发明还提供一种视频录制终端,包括:
显示器;
一个或多个处理器;
存储器;
一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于:执行根据上述任一项实施例所述的视频录制方法。
上述的视频录制方法和视频录制终端,在接收到用户输入的录制指令时摄录视频,然后在接收到用户输入的暂停指令时暂停摄录,并生成第一视频文件;继续展示图像传感器正在实时采集的动态画面,并在该动态画面上叠加展示所述第一视频文件的经透明化处理后的最后一帧画面;在接收到用户输入的续录指令时继续摄录视频,然后在接收到用户输入的停止指令时停止摄录,并生成第二视频文件;将所述第一视频文件和第二视频文件拼接成目标视频文件。使得用户在续录视频时,可以通过前面摄录第一视频时保留的最后一帧画面进行画面定位,使得后面录制的第二视频能够更好的衔接前面录制的第一视频,实现残影续录功能,使得最后拼接生成目标视频时过渡得更加自然,提高自创视频的趣味程度,从而提高自创视频的点击率。
本发明附加的方面和优点将在下面的描述中部分给出,这些将从下面的描述中变得明显,或通过本发明的实践了解到。
附图说明
本发明上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为一个实施例的视频录制方法流程图;
图2示出的是与本发明实施例提供的终端相关的手机的部分结构的框图。
具体实施方式
下面详细描述本发明的实施例,所述实施例的示例在附图中示出,其 中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,仅用于解释本发明,而不能解释为对本发明的限制。
本技术领域技术人员可以理解,除非特意声明,这里使用的单数形式“一”、“一个”、“所述”和“该”也可包括复数形式。应该进一步理解的是,本发明的说明书中使用的措辞“包括”是指存在所述特征、整数、步骤、操作、元件和/或组件,但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作、元件、组件和/或它们的组。
本技术领域技术人员可以理解,除非另外定义,这里使用的所有术语(包括技术术语和科学术语),具有与本发明所属领域中的普通技术人员的一般理解相同的意义。还应该理解的是,诸如通用字典中定义的那些术语,应该被理解为具有与现有技术的上下文中的意义一致的意义,并且除非像这里一样被特定定义,否则不会用理想化或过于正式的含义来解释。
本技术领域技术人员可以理解,这里所使用的“终端”、“终端设备”既包括无线信号接收器的设备,其仅具备无发射能力的无线信号接收器的设备,又包括接收和发射硬件的设备,其具有能够在双向通讯链路上,执行双向通讯的接收和发射硬件的设备。这种设备可以包括:蜂窝或其他通讯设备,其具有单线路显示器或多线路显示器或没有多线路显示器的蜂窝或其他通讯设备;PCS(Personal Communications Service,个人通讯系统),其可以组合语音、数据处理、传真和/或数据通讯能力;PDA(Personal Digital Assistant 个人数字助理),其可以包括射频接收器、寻呼机、互联网/内联网访问、网络浏览器、记事本、日历和/或GPS(Global Positioning System,全球定位系统)接收器;常规膝上型和/或掌上型计算机或其他设备,其具有和/或包括射频接收器的常规膝上型和/或掌上型计算机或其他设备。这里所使用的“终端”、“终端设备”可以是便携式、可运输、安装在交通工具(航空、海运和/或陆地)中的,或者适合于和/或配置为在本地运行,和/或以分布形式,运行在地球和/或空间的任何其他位置运行。这里所使用的“终端”、“终端设备”还可以是通讯终端、上网终端、音乐/视频播放终端,例如可以是PDA、MID(Mobile Internet Device,移动互联网设 备)和/或具有音乐/视频播放功能的移动电话,也可以是智能电视、机顶盒等设备。
图1为一个实施例的视频录制方法流程图。
本发明提供一种视频录制方法,应用于移动终端,包括如下步骤S100~S400:
步骤S100:在接收到用户输入的录制指令(例如用户点击录制按键)时摄录视频,然后在接收到用户输入的暂停指令(例如用户点击暂停按键)时暂停摄录,并生成第一视频文件。在一些实施例中,此处的第一视频文件,可以是记录在内存中的缓存文件。
在一些实施例中,在接收到用户输入的录制指令时就马上摄录视频,而在另一些实施例中,在接收到用户输入的录制指令时,需要经过倒计第一预设时间后才摄录视频,例如倒计3秒后才摄录视频。
在摄录视频时,可以展示带有时间标识的摄录进度部件,例如展示摄录进度条,实时展示摄录进度。摄录进度条上带有时间标识,以方便用户知道视频已摄录的时间。当在接收到用户输入的暂停指令时暂停摄录时,摄录进度条同步暂停。
为了方便用户在后面续录时定位画面,可以考虑在接收到用户输入的暂停指令时暂停摄录,同时获取并保存角速度传感器(陀螺仪)当前时刻(即暂停摄录的时刻)采集的第一姿势数据,该第一姿势数据记录了终端当前的姿势,例如倾斜度。当然,在一些实施例中还可以同时获取并保存磁传感器当前时刻采集的方向数据。当用户在后面续录时,可以根据该第一姿势数据以指示用户调整终端的姿势,例如调整倾斜度或方向。
为了方便用户感觉摄录的视频不满意时可以进行重录,因此在一些实施例中,生成第一视频文件后,可以在接收到用户输入的重录指令(例如用户点击重录按键)时,重新摄录视频并重新生成第一视频文件,旧的第一视频文件将被删除或被重新生成的第一视频文件替换。
在其他一些实施例中,生成第一视频文件后,可以在接收到用户输入的删除指令(例如用户点击删除按键)后删除第一视频文件,然后在接收到用户输入的重录指令(例如用户再次点击录制按键)时重新摄录视频并 重新生成第一视频文件。
在一些实施例中,可以接收用户的指令为第一视频文件添加动画特效内容,以增加视频的趣味性。
生成第一视频文件后,执行步骤S200。
步骤S200:继续展示图像传感器正在实时采集的动态画面,并在该动态画面上叠加展示第一视频文件的经透明化处理后的最后一帧画面。
在暂停摄录后,图像传感器还会实时采集动态画面,但是并没有将这些动态画面存储为视频文件。例如手机、相机在拍摄时候都会开启图像传感器实时采集画面图像,当用户点击拍摄或摄录时才开始将图像传感器实时采集的画面图像数据存储为图片或视频文件。
终端继续实时展示该动态画面,该动态画面是给用户观察摄录的场景的。在展示该动态画面时,还会在该动态画面上叠加展示所述最后一帧画面,所述最后一帧画面是经过透明化处理的,例如最后一帧画面可以是半透明的,相当于在展示动态画面时在动态画面上蒙上一层半透明的图片,使得用户能透过半透明的最后一帧画面也能看到图像传感器正在实时采集的动态画面,从而方便用户在后面续录时通过半透明的最后一帧画面定位画面。即用户可以通过对比动态画面与所述最后一帧画面来确定开始续录的时刻,从而可以使得第一视频和第二视频能够过渡得更加自然。
在一些实施例中,在展示经透明化处理后的最后一帧画面之前,还可以在最后一帧画面对画面内容中的主体要素描绘轮廓。例如最后一帧画面的画面内容主要要素是人物图像,则可以在最后一帧画面中描绘出该人物图像的轮廓,使得用户在定位画面时可以更加便捷。
在一些实施例中,在动态画面上叠加展示最后一帧画面时,同时可以根据上面所述的第一姿势数据和角速度传感器当前实时检测的第二姿势数据的匹配关系向用户发出相应的提示。例如,当用户手持终端的姿势不太匹配用户上述暂停录制那个时刻的姿势时,当前的动态画面与最后一帧画面自然不会匹配,因此通过对比第一姿势数据和第二姿势数据,可以分析出动态画面与最后一帧画面的匹配程度或者说相似度,从而可以提示用户做出相应的调整动作,例如提示用户向某一方向倾斜或倾斜角度,或者 提示用户将终端调整某一朝向。当对比第一姿势数据和第二姿势数据判断动态画面与最后一帧画面较为匹配或相似时,就可以提示用户可以进行续录了。
而在另一些实施例中,在该动态画面上叠加展示最后一帧画面时,同时可以根据该动态画面的任意一帧画面与最后一帧画面之间的相似度向用户发出相应的提示。也即在这些实施例中,是通过图像分析直接判断动态画面和最后一帧画面之间的匹配度或相似度来提示用户的,当动态画面(例如某一帧)和最后一帧画面之间的匹配度或相似度达到预设条件时,就可以提示用户可以进行续录了。
步骤S300:在接收到用户输入的续录指令(例如用户再次点击录制按键)时继续摄录视频,然后在接收到用户输入的停止指令时停止摄录,并生成第二视频文件。在一些实施例中,此处的第二视频文件,可以是记录在内存中的缓存文件。
在一些实施例中,在接收到用户输入的续录指令时就马上摄录视频,而在另一些实施例中,在接收到用户输入的续录指令时,需要经过倒计第二预设时间后才摄录视频,例如倒计3秒后才摄录视频。在摄录视频时,也可以展示带有时间标识的摄录进度部件,例如展示摄录进度条,实时展示摄录进度。
同样,为了方便用户感觉摄录的视频不满意时可以进行重录,因此在一些实施例中,生成第二视频文件后,可以在接收到用户输入的重录指令(例如用户点击重录按键)时,重新摄录视频并重新生成第二视频文件,旧的第二视频文件将被删除或被重新生成的第二视频文件替换。
在其他一些实施例中,生成第二视频文件后,可以在接收到用户输入的删除指令(例如用户点击删除按键)后删除第二视频文件,然后在接收到用户输入的重录指令(例如用户再次点击录制按键)时重新摄录视频并重新生成第二视频文件。
在一些实施例中,可以接收用户的指令为第二视频文件添加动画特效内容,以增加视频的趣味性。
步骤S400:将第一视频文件和第二视频文件拼接成目标视频文件。 此处的目标视频文件,也可以是在内存中的缓存文件,当接收到用户输入的分享指令或存储指令时,再将目标视频文件进行分享,或存储到本地,或存储到云端服务器。可以是在接收到用户输入的拼接指令后才将第一视频文件和第二视频文件拼接成目标视频文件,也可以是在步骤S300之后自动将第一视频文件和第二视频文件拼接成目标视频文件,不做限定。
当然,在目标视频文件生成后,可以对目标视频文件进行图像处理,例如可以添加动画特效。图像信号处理包括但不限于以下操作中的至少一项:减黑色、透镜滚降校正、通道增益调节、坏像素校正、去马赛克、裁切、按比例缩放、白平衡、色彩校正、亮度适应、色彩转换及增强图像对比度。
又或者,为了使得目标视频文件的第一视频或第二视频的内容在播放时可以加快或减慢,在一些实施例中,将第一视频文件和第二视频文件拼接成目标视频文件之前,还可以对第一视频文件和/或第二视频文件进行插帧处理或抽帧处理。插帧处理可以是在视频各帧之间插入重复帧,视频的时间长度相应增加,在用户看来动作时变慢了。而抽帧处理可以是均匀抽出和丢弃视频帧中的部分帧(例如抽掉奇数帧或偶数帧),视频的时间长度相应减少,在用户看来动作时变快了。
当然,还可能出现续录多次然后产生多个视频文件的情况,这种情况下需要将这多个视频文件拼接,在此不赘述。
本发明还提供一种视频录制终端,包括:显示器;一个或多个处理器;存储器;一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于:执行根据上述任一项实施例所述的视频录制方法。
本发明实施例还提供了移动终端,如图2所示,为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照本发明实施例方法部分。该终端可以为包括手机、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑等任意终端设备,以终端为手机为例:
图2示出的是与本发明实施例提供的终端相关的手机的部分结构的 框图。参考图2,手机包括:射频(Radio Frequency,RF)电路1510、存储器1520、输入单元1530、显示单元1540、传感器1550、音频电路1560、无线保真(wireless fidelity,Wi-Fi)模块1570、处理器1580、以及电源1590等部件。本领域技术人员可以理解,图2中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图2对手机的各个构成部件进行具体的介绍:
RF电路1510可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器1580处理;另外,将设计上行的数据发送给基站。通常,RF电路1510包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路1510还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器1520可用于存储软件程序以及模块,处理器1580通过运行存储在存储器1520的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器1520可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声纹播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1520可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元1530可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入单元1530可包括触控面板1531以及其他输入设备1532。触控面板1531,也称为触摸 屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1531上或在触控面板1531附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板1531可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1580,并能接收处理器1580发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1531。除了触控面板1531,输入单元1530还可以包括其他输入设备1532。具体地,其他输入设备1532可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元1540可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元1540可包括显示面板1541,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板1541。进一步的,触控面板1531可覆盖显示面板1541,当触控面板1531检测到在其上或附近的触摸操作后,传送给处理器1580以确定触摸事件的类型,随后处理器1580根据触摸事件的类型在显示面板1541上提供相应的视觉输出。虽然在图2中,触控面板1531与显示面板1541是作为两个独立的部件来实现手机的输入和输入功能,但是在某些实施例中,可以将触控面板1531与显示面板1541集成而实现手机的输入和输出功能。
手机还可包括至少一种传感器1550,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板1541的亮度,接近传感器可在手机移动到耳边时,关闭显示面板1541和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、 温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路1560、扬声器1561,传声器1562可提供用户与手机之间的音频接口。音频电路1560可将接收到的音频数据转换后的电信号,传输到扬声器1561,由扬声器1561转换为声纹信号输出;另一方面,传声器1562将收集的声纹信号转换为电信号,由音频电路1560接收后转换为音频数据,再将音频数据输出处理器1580处理后,经RF电路1510以发送给比如另一手机,或者将音频数据输出至存储器1520以便进一步处理。
Wi-Fi属于短距离无线传输技术,手机通过Wi-Fi模块1570可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图2示出了Wi-Fi模块1570,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器1580是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器1520内的软件程序和/或模块,以及调用存储在存储器1520内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器1580可包括一个或多个处理单元;优选的,处理器1580可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1580中。
手机还包括给各个部件供电的电源1590(比如电池),优选的,电源可以通过电源管理系统与处理器1580逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出,手机还可以包括摄像头、蓝牙模块等,在此不再赘述。
在本发明实施例中,该终端所包括的处理器1580还具有以下功能:在接收到用户输入的录制指令时摄录视频,然后在接收到用户输入的暂停指令时暂停摄录,并生成第一视频文件;继续展示图像传感器正在实时采集的动态画面,并在该动态画面上叠加展示所述第一视频文件的经透明化处理后的最后一帧画面;在接收到用户输入的续录指令时继续摄录视频, 然后在接收到用户输入的停止指令时停止摄录,并生成第二视频文件;将所述第一视频文件和第二视频文件拼接成目标视频文件。也即处理器1580具备执行上述的任一实施例视频录制方法的功能,在此不再赘述。
上述的视频录制方法和视频录制终端,在接收到用户输入的录制指令时摄录视频,然后在接收到用户输入的暂停指令时暂停摄录,并生成第一视频文件;继续展示图像传感器正在实时采集的动态画面,并在该动态画面上叠加展示所述第一视频文件的经透明化处理后的最后一帧画面;在接收到用户输入的续录指令时继续摄录视频,然后在接收到用户输入的停止指令时停止摄录,并生成第二视频文件;将所述第一视频文件和第二视频文件拼接成目标视频文件。使得用户在续录视频时,可以通过前面摄录第一视频时保留的最后一帧画面进行画面定位,使得后面录制的第二视频能够更好的衔接前面录制的第一视频,实现残影续录功能,使得最后拼接生成目标视频时过渡得更加自然,提高自创视频的趣味程度,从而提高自创视频的点击率。
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
以上所述仅是本发明的部分实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。

Claims (10)

  1. A video recording method, wherein the method comprises the following steps:
    recording a video upon receiving a recording instruction input by a user, then pausing recording upon receiving a pause instruction input by the user, and generating a first video file;
    continuing to display a dynamic picture being captured by an image sensor in real time, and superimposing on the dynamic picture a transparency-processed last frame of the first video file;
    continuing to record the video upon receiving a resume instruction input by the user, then stopping recording upon receiving a stop instruction input by the user, and generating a second video file; and
    splicing the first video file and the second video file into a target video file.
  2. The video recording method according to claim 1, wherein pausing recording upon receiving the pause instruction input by the user comprises: pausing recording upon receiving the pause instruction input by the user, and acquiring and saving first posture data collected by an angular velocity sensor at the current moment; and
    superimposing the last frame on the dynamic picture comprises: superimposing the last frame on the dynamic picture, and issuing a corresponding prompt to the user according to a matching relationship between the first posture data and second posture data detected by the angular velocity sensor in real time.
  3. The video recording method according to claim 1, wherein superimposing the last frame on the dynamic picture comprises:
    superimposing the last frame on the dynamic picture, and issuing a corresponding prompt to the user according to a similarity between any frame of the dynamic picture and the last frame.
  4. The video recording method according to claim 1, wherein before the first video file and the second video file are spliced into the target video file, an animation effect is further added to the first video file and/or the second video file.
  5. The video recording method according to claim 1, wherein before the first video file and the second video file are spliced into the target video file, frame-interpolation processing or frame-decimation processing is further applied to the first video file and/or the second video file.
  6. The video recording method according to claim 1, wherein before the transparency-processed last frame is displayed, an outline of a main element of the picture content is further drawn on the last frame.
  7. The video recording method according to claim 1, wherein the video is recorded after a first preset time counts down upon receiving the recording instruction input by the user, and/or recording of the video continues after a second preset time counts down upon receiving the resume instruction input by the user.
  8. The video recording method according to claim 1, wherein a recording progress component with a time stamp is displayed while the video is being recorded.
  9. The video recording method according to claim 1, wherein after the first video file is generated, the video is re-recorded and the first video file is regenerated upon receiving a re-recording instruction input by the user; and/or
    after the second video file is generated, the video is re-recorded and the second video file is regenerated upon receiving a re-recording instruction input by the user.
  10. A video recording terminal, comprising:
    a display;
    one or more processors;
    a memory; and
    one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the video recording method according to any one of claims 1 to 9.
PCT/CN2018/118380 2017-11-30 2018-11-30 Video recording method and video recording terminal WO2019105441A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
SG11202005043TA SG11202005043TA (en) 2017-11-30 2018-11-30 Video recording method and video recording terminal
EP18884133.2A EP3709633B1 (en) 2017-11-30 2018-11-30 Video recording method and video recording terminal
RU2020121732A RU2745737C1 (ru) 2017-11-30 2018-11-30 Способ видеозаписи и видеозаписывающий терминал
US16/764,958 US11089259B2 (en) 2017-11-30 2018-11-30 Video recording method and video recording terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711236846.5A CN107948562B (zh) 2017-11-30 2017-11-30 视频录制方法和视频录制终端
CN201711236846.5 2017-11-30

Publications (1)

Publication Number Publication Date
WO2019105441A1 true WO2019105441A1 (zh) 2019-06-06

Family

ID=61946931

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/118380 WO2019105441A1 (zh) 2017-11-30 2018-11-30 视频录制方法和视频录制终端

Country Status (6)

Country Link
US (1) US11089259B2 (zh)
EP (1) EP3709633B1 (zh)
CN (1) CN107948562B (zh)
RU (1) RU2745737C1 (zh)
SG (1) SG11202005043TA (zh)
WO (1) WO2019105441A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278341A (zh) * 2022-06-30 2022-11-01 海信视像科技股份有限公司 显示设备及视频处理方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107948562B (zh) 2017-11-30 2020-05-15 广州市百果园信息技术有限公司 视频录制方法和视频录制终端
CN109167950B (zh) * 2018-10-25 2020-05-12 腾讯科技(深圳)有限公司 视频录制方法、视频播放方法、装置、设备及存储介质
CN109640019B (zh) * 2018-12-13 2021-09-07 广州艾美网络科技有限公司 一种通过移动终端录制编辑长视频的方法
CN111214829A (zh) * 2019-12-30 2020-06-02 咪咕视讯科技有限公司 一种教学方法、电子设备及存储介质
KR102558294B1 (ko) * 2020-12-31 2023-07-24 한국과학기술연구원 임의 시점 영상 생성 기술을 이용한 다이나믹 영상 촬영 장치 및 방법
CN115379107B (zh) * 2021-06-07 2023-04-21 云南力衡医疗技术有限公司 工具使用视频处理方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002084488A (ja) * 2000-07-18 2002-03-22 Fuji Xerox Co Ltd ビデオ生成システム及びカスタムビデオ生成方法
CN103347155A (zh) * 2013-06-18 2013-10-09 北京汉博信息技术有限公司 实现两个视频流不同过渡效果切换的转场特效模块及方法
CN103702041A (zh) * 2013-12-30 2014-04-02 乐视网信息技术(北京)股份有限公司 一种视频暂停续拍的方法及装置
CN105245810A (zh) * 2015-10-08 2016-01-13 广东欧珀移动通信有限公司 一种视频转场的处理方法及装置
CN106210531A (zh) * 2016-07-29 2016-12-07 广东欧珀移动通信有限公司 视频生成方法、装置和移动终端
CN107948562A (zh) * 2017-11-30 2018-04-20 广州市百果园信息技术有限公司 视频录制方法和视频录制终端

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6466224B1 (en) * 1999-01-19 2002-10-15 Matsushita Electric Industrial Co., Ltd. Image data composition and display apparatus
JP2005341416A (ja) * 2004-05-28 2005-12-08 Toshiba Corp 撮像機能付き電子機器およびその画像表示方法
JP4820136B2 (ja) * 2005-09-22 2011-11-24 パナソニック株式会社 映像音声記録装置及び映像音声記録方法
KR101520659B1 (ko) * 2008-02-29 2015-05-15 엘지전자 주식회사 개인용 비디오 레코더를 이용한 영상 비교 장치 및 방법
KR101506488B1 (ko) * 2008-04-04 2015-03-27 엘지전자 주식회사 근접센서를 이용하는 휴대 단말기 및 그 제어방법
KR101576969B1 (ko) * 2009-09-08 2015-12-11 삼성전자 주식회사 영상처리장치 및 영상처리방법
JP2011259365A (ja) * 2010-06-11 2011-12-22 Sony Corp カメラシステム、映像選択装置及び映像選択方法
US9554049B2 (en) * 2012-12-04 2017-01-24 Ebay Inc. Guided video capture for item listings
CN103546698B (zh) * 2013-10-31 2016-08-17 广东欧珀移动通信有限公司 一种移动终端录制视频保存方法和装置
CN105872700A (zh) * 2015-11-30 2016-08-17 乐视网信息技术(北京)股份有限公司 开机视频无缝循环的实现方法及装置
US10438630B2 (en) * 2017-02-10 2019-10-08 Canon Kabushiki Kaisha Display control apparatus that performs time-line display, method of controlling the same, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002084488A (ja) * 2000-07-18 2002-03-22 Fuji Xerox Co Ltd ビデオ生成システム及びカスタムビデオ生成方法
CN103347155A (zh) * 2013-06-18 2013-10-09 北京汉博信息技术有限公司 实现两个视频流不同过渡效果切换的转场特效模块及方法
CN103702041A (zh) * 2013-12-30 2014-04-02 乐视网信息技术(北京)股份有限公司 一种视频暂停续拍的方法及装置
CN105245810A (zh) * 2015-10-08 2016-01-13 广东欧珀移动通信有限公司 一种视频转场的处理方法及装置
CN106210531A (zh) * 2016-07-29 2016-12-07 广东欧珀移动通信有限公司 视频生成方法、装置和移动终端
CN107948562A (zh) * 2017-11-30 2018-04-20 广州市百果园信息技术有限公司 视频录制方法和视频录制终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3709633A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115278341A (zh) * 2022-06-30 2022-11-01 海信视像科技股份有限公司 显示设备及视频处理方法

Also Published As

Publication number Publication date
US20200351467A1 (en) 2020-11-05
EP3709633B1 (en) 2021-10-27
SG11202005043TA (en) 2020-06-29
EP3709633A4 (en) 2020-09-16
CN107948562B (zh) 2020-05-15
RU2745737C1 (ru) 2021-03-31
EP3709633A1 (en) 2020-09-16
CN107948562A (zh) 2018-04-20
US11089259B2 (en) 2021-08-10

Similar Documents

Publication Publication Date Title
WO2019105441A1 (zh) 视频录制方法和视频录制终端
WO2021098678A1 (zh) 投屏控制方法及电子设备
WO2021036536A1 (zh) 视频拍摄方法及电子设备
WO2019137429A1 (zh) 图片处理方法及移动终端
JP7062092B2 (ja) 表示制御方法及び端末
WO2019223494A1 (zh) 截屏方法及移动终端
WO2019137248A1 (zh) 视频插帧方法、存储介质以及终端
WO2016177296A1 (zh) 一种生成视频的方法和装置
WO2021104236A1 (zh) 一种共享拍摄参数的方法及电子设备
WO2021036542A1 (zh) 录屏方法及移动终端
WO2018059352A1 (zh) 直播视频流远程控制方法及装置
CN111010510B (zh) 一种拍摄控制方法、装置及电子设备
WO2020042890A1 (zh) 视频处理方法、终端及计算机可读存储介质
WO2019196929A1 (zh) 一种视频数据处理方法及移动终端
WO2020233323A1 (zh) 显示控制方法、终端设备及计算机可读存储介质
WO2021043121A1 (zh) 一种图像换脸的方法、装置、系统、设备和存储介质
WO2021104230A1 (zh) 同步方法及电子设备
WO2019214502A1 (zh) 多媒体内容的操作方法和移动终端
CN111147779B (zh) 视频制作方法、电子设备及介质
CN111597370B (zh) 一种拍摄方法及电子设备
WO2019085774A1 (zh) 应用程序控制方法和移动终端
WO2015131768A1 (en) Video processing method, apparatus and system
WO2021036659A1 (zh) 视频录制方法及电子设备
CN109618218B (zh) 一种视频处理方法及移动终端
WO2020228538A1 (zh) 截图的方法和移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18884133

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018884133

Country of ref document: EP

Effective date: 20200608