CN108012101B - Video recording method and video recording terminal - Google Patents
- Publication number: CN108012101B (application CN201711238019.XA)
- Authority
- CN
- China
- Prior art keywords
- video
- recording
- user
- video file
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The invention provides a video recording method and a video recording terminal, applied to a mobile terminal. Video recording starts when a recording instruction input by the user is received and pauses when a pause instruction input by the user is received; at the moment of pausing, first posture data collected by a posture sensor is acquired and stored, and a first video file is generated. The terminal then sends the user a prompt for adjusting the terminal posture according to how the first posture data matches second posture data detected by the posture sensor in real time. Recording resumes when a continue-recording instruction input by the user is received and stops when a stop instruction input by the user is received, generating a second video file. The first video file and the second video file are then spliced into a target video file. Because the user can adjust the terminal's posture before resuming, guided by the prompts the terminal sends, the transition in the finally spliced target video is more natural.
Description
Technical Field
The invention relates to the technical field of video recording, in particular to a video recording method and a video recording terminal.
Background
With the continuing development of video technology, various video sharing platforms have emerged on which users view self-created videos uploaded by others. A typical video sharing platform lets a user add special effects while shooting, making self-created videos more engaging and more likely to attract viewers. One popular production style splices two or more clips into a single video: for example, the user shoots one clip, changes their appearance, and then shoots a second clip in the same pose, so that in the finished video they appear to transform instantly, which is very entertaining. However, it is difficult for the user to reproduce the final pose of the previous clip when resuming recording, which makes the transition of the video look less natural.
Disclosure of Invention
The present invention aims to solve at least one of the above technical drawbacks, in particular the difficulty of aligning the camera pose when resuming a video recording.
The invention provides a video recording method, which is applied to a mobile terminal and comprises the following steps:
recording a video when a recording instruction input by the user is received, pausing the recording when a pause instruction input by the user is received, acquiring and storing first posture data collected by a posture sensor at that moment, and then generating a first video file;
sending the user a prompt for adjusting the terminal posture according to how the first posture data matches second posture data detected by the posture sensor in real time;
resuming recording when a continue-recording instruction input by the user is received, stopping recording when a stop instruction input by the user is received, and generating a second video file;
and splicing the first video file and the second video file into a target video file.
In one embodiment, the recording instruction, the pause instruction, the continue-recording instruction, and the stop instruction are all input through the same key.
In one embodiment, the prompt for adjusting the terminal posture comprises an image prompt and/or a voice prompt instructing the user to adjust the terminal's lateral tilt angle and longitudinal tilt angle.
In one embodiment, the posture sensor comprises an angular velocity sensor and/or a magnetic sensor.
In one embodiment, before the first video file and the second video file are spliced into the target video file, an animated special effect is further added to the first video file and/or the second video file.
In one embodiment, before the first video file and the second video file are spliced into the target video file, the first video file and/or the second video file are/is subjected to frame insertion processing or frame extraction processing.
In one embodiment, recording starts after a countdown of a first preset time when the recording instruction input by the user is received, and/or recording resumes after a countdown of a second preset time when the continue-recording instruction input by the user is received.
In one embodiment, a recording-progress element with a time indicator is displayed while the video is being recorded.
In one embodiment, after the first video file is generated, when a re-recording instruction input by a user is received, the video is re-shot and the first video file is re-generated; and/or
After the second video file is generated, when a re-recording instruction input by a user is received, the video is recorded again, and the second video file is generated again.
The present invention also provides a video recording terminal, comprising:
a display;
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the video recording method of any of the above embodiments.
In the video recording method and video recording terminal above, applied to a mobile terminal, recording starts when a recording instruction input by the user is received and pauses when a pause instruction input by the user is received; first posture data collected by the posture sensor at that moment is acquired and stored, and a first video file is generated. The terminal then sends the user a prompt for adjusting the terminal posture according to how the first posture data matches second posture data detected by the posture sensor in real time. Recording resumes when a continue-recording instruction input by the user is received and stops when a stop instruction input by the user is received, generating a second video file; the first video file and the second video file are then spliced into a target video file. Before resuming, the user can adjust the terminal's posture, such as its lateral or longitudinal tilt angle, according to the prompts the terminal sends, so that the second video connects smoothly with the first. The transition in the finally spliced target video is therefore more natural, which makes the self-created video more engaging and improves its click-through rate.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a video recording method according to an embodiment;
FIG. 2 is a diagram illustrating an exemplary image prompt for adjusting a terminal pose;
FIG. 3 is a schematic diagram of an image prompt for adjusting the terminal pose according to another embodiment;
fig. 4 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be understood by those skilled in the art, a "terminal" as used herein includes both devices with only a wireless signal receiver and no transmit capability, and devices with receive and transmit hardware capable of two-way communication over a bidirectional communication link. Such a device may include: a cellular or other communication device, with or without a multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; or a conventional laptop and/or palmtop computer or other device that has and/or includes a radio frequency receiver. As used herein, a "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. As used herein, the "terminal device" may also be a communication terminal, a web terminal, or a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a mobile phone with music/video playing function, or a smart TV, a set-top box, etc.
Fig. 1 is a flowchart of a video recording method according to an embodiment.
The invention provides a video recording method, which is applied to a mobile terminal and comprises the following steps of S100-S300:
step S100: the method comprises the steps of recording a video when a recording instruction input by a user is received, then pausing the recording when a pause instruction input by the user is received, acquiring and storing first posture data acquired by a posture sensor at the current moment, and then generating a first video file. In some embodiments, the first video file may be a cache file recorded in the memory.
To make shooting and recording more convenient for the user, the same key can be used to input the recording instruction, the pause instruction, the subsequent continue-recording instruction, and the stop instruction, i.e., the key is multiplexed. The key may be a virtual key or a physical key. Two specific examples follow:
example 1: the user can send a recording instruction or a recording continuing instruction by long-pressing the recording key to record the video, and the terminal continuously records the video in the process of identifying the long-pressing of the user; when the fact that the finger of the user is lifted up and leaves the recording key is recognized, the terminal judges that the user inputs a pause instruction or a stop instruction, and therefore recording is immediately paused or stopped.
Example 2: the user can click the recording key to send a recording instruction, the terminal starts to record the video, and in the recording process, the user clicks the recording key again to send a pause instruction, and the terminal pauses the recording; and when the user clicks the recording key again to send a continuous recording instruction, the terminal continues recording the video, and in the recording process, the user clicks the recording key last time to send a stop instruction, the terminal stops recording, and the video recording is completed.
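The tap-to-toggle scheme of Example 2 is essentially a four-step state machine on a single key. A minimal Python sketch of that interpretation follows; the state names and return values are illustrative, not from the patent:

```python
from enum import Enum, auto

class RecState(Enum):
    IDLE = auto()         # before any recording
    RECORDING_1 = auto()  # shooting the first clip
    PAUSED = auto()       # first clip finished, awaiting resume
    RECORDING_2 = auto()  # shooting the second clip
    DONE = auto()         # recording complete

def on_record_key(state: RecState) -> tuple[RecState, str]:
    """Interpret one tap of the single multiplexed record key and
    return the new state plus the instruction that tap represents."""
    transitions = {
        RecState.IDLE:        (RecState.RECORDING_1, "record"),
        RecState.RECORDING_1: (RecState.PAUSED,      "pause"),
        RecState.PAUSED:      (RecState.RECORDING_2, "resume"),
        RecState.RECORDING_2: (RecState.DONE,        "stop"),
    }
    if state not in transitions:
        raise ValueError("recording already finished")
    return transitions[state]
```

Because each tap's meaning is derived purely from the current state, no second key is needed, which is the point of the multiplexing described above.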
In some embodiments, recording starts immediately upon receiving the recording instruction input by the user; in other embodiments, recording starts after a first preset time, for example a 3-second countdown, has elapsed following the instruction.
While the video is being shot, a recording-progress element with a time indicator can be displayed, for example a progress bar that shows recording progress in real time. The time indicator lets the user see how long the video has been recording. When recording pauses on a pause instruction input by the user, the progress bar pauses in sync.
Of course, the recording of the video may be paused and resumed many times, which is not described herein.
The posture sensor includes an angular velocity sensor (gyroscope), a magnetic sensor, or both. The angular velocity sensor determines the terminal's tilt posture, and the magnetic sensor determines its orientation posture. The tilt posture is characterized mainly by the terminal's lateral and longitudinal tilt angles, and the orientation posture by the terminal's heading; for example, the two directions along the terminal's long side may be taken as longitudinal, the two along its short side as lateral, and the direction of one end of the long side as the terminal's heading. Accordingly, the first posture data may include the terminal's lateral tilt angle, longitudinal tilt angle, and heading, i.e., data reflecting the terminal's posture at that moment.
Magnetic sensors are widely used in modern industry and electronics to sense magnetic field strength and thereby measure physical parameters such as current, position, and direction. Many different sensor types exist for measuring magnetic fields, for example sensors using a Hall element, an anisotropic magnetoresistive (AMR) element, or a giant magnetoresistive (GMR) element as the sensitive element. TMR (tunnel magnetoresistance) elements, which sense the magnetic field through the tunnel magnetoresistive effect of a magnetic multilayer film, have entered industrial use in recent years and exhibit a larger rate of resistance change than the earlier AMR and GMR elements.
In order to facilitate the user to re-record the recorded video when the recorded video is not satisfactory, in some embodiments, after the first video file is generated, the video can be re-recorded and the first video file can be re-generated when a re-recording instruction input by the user is received (for example, the user clicks a re-recording button), and the old first video file is deleted or replaced by the re-generated first video file.
In other embodiments, after the first video file is generated, the first video file may be deleted after receiving a deletion instruction input by the user (e.g., the user clicks a delete button), and then the video may be re-recorded and the first video file may be regenerated when a re-recording instruction input by the user is received (e.g., the user clicks a record button again).
In some embodiments, an instruction of a user can be further received to add animation special effect content to the first video file so as to increase the interestingness of the video.
After the first video file is generated, step S200 is executed.
Step S200: and sending a corresponding prompt for adjusting the terminal posture to the user according to the matching relation between the first posture data and the second posture data detected by the posture sensor in real time.
After the first video file is generated, as the user adjusts the terminal toward the starting posture of the next clip before resuming, the posture sensor detects and collects, in real time, second posture data reflecting the terminal's current posture. The terminal then sends the user a prompt for adjusting the terminal posture according to how the first and second posture data match. The prompt comprises an image prompt and/or a voice prompt instructing the user to adjust the terminal's lateral and longitudinal tilt angles or its heading.
If the posture in which the user holds the terminal does not match the posture at the moment recording was paused, the live picture will not connect naturally with the last recorded frame. By comparing the first posture data with the second posture data, the terminal can analyze the degree of match or similarity between the current posture and the paused posture (which may be determined from the difference or ratio between the two sets of data) and prompt the user to make a corresponding adjustment: tilt in a given direction (forward, backward, left, right), rotate the terminal toward a given heading (east, west, south, north) by a given angle, or rotate it clockwise or counterclockwise by a given angle. When the comparison shows that the current posture matches or is sufficiently similar to the paused posture, the user can be prompted to continue recording.
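As an illustrative sketch only (not the patent's implementation), the comparison described above can be expressed as differencing the paused pose against the live pose and emitting direction prompts. The pose tuple layout `(lateral_tilt, longitudinal_tilt, heading)` in degrees, the tolerance value, and the direction wording are all assumptions:

```python
def pose_delta(first, second):
    """Signed differences between the paused pose and the live pose.
    Heading difference is wrapped into [-180, 180)."""
    d_lat = second[0] - first[0]
    d_lon = second[1] - first[1]
    d_head = (second[2] - first[2] + 180.0) % 360.0 - 180.0
    return d_lat, d_lon, d_head

def adjustment_prompts(first, second, tol_deg=2.0):
    """Return human-readable adjustment prompts; an empty list means the
    current posture already matches the paused posture within tolerance."""
    d_lat, d_lon, d_head = pose_delta(first, second)
    prompts = []
    if abs(d_lat) > tol_deg:
        prompts.append(f"tilt {'left' if d_lat > 0 else 'right'} by {abs(d_lat):.0f} deg")
    if abs(d_lon) > tol_deg:
        prompts.append(f"tilt {'backward' if d_lon > 0 else 'forward'} by {abs(d_lon):.0f} deg")
    if abs(d_head) > tol_deg:
        prompts.append(f"rotate {'counterclockwise' if d_head > 0 else 'clockwise'} by {abs(d_head):.0f} deg")
    return prompts
```

An empty prompt list corresponds to the "prompt the user to continue recording" condition; the sign conventions for left/right and clockwise/counterclockwise would depend on the actual sensor axes.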
For the image prompt, the terminal display can show a horizontal indicator bar carrying a horizontal floating point and a vertical indicator bar carrying a vertical floating point. The floating points move along their bars as the terminal's posture changes, i.e., according to the data collected by the posture sensor in real time. The posture determined by the first posture data serves as the standard posture, in which the horizontal floating point sits at a standard position (e.g., the midpoint) of the horizontal bar and the vertical floating point at a standard position (e.g., the midpoint) of the vertical bar. When the second posture data detected in real time does not match the first posture data, the floating points are naturally away from their standard positions, and the user adjusts the terminal posture until both floating points return to them. Referring to fig. 2, a schematic diagram of an image prompt for adjusting the terminal posture according to an embodiment, the vertical floating point needs to move down to its standard position and the horizontal floating point needs to move left to its standard position; the user tilts the terminal up and down to move the vertical floating point, and left and right to move the horizontal floating point.
Alternatively, a single movable floating point and a standard position may be used: the floating point moves within a designated area (e.g., a circular area) according to the terminal's posture, and the standard position is the center of that area (e.g., the circle's center). When the second posture data detected in real time does not match the first posture data, the floating point is naturally away from the standard position, and the user adjusts the terminal posture until it returns. Referring to fig. 3, a schematic diagram of an image prompt for adjusting the terminal posture according to another embodiment, the floating point needs to move from the lower right corner to the standard position, so the user tilts the terminal up/down and left/right to move it.
The image prompt can be displayed at a designated position on the terminal screen, for example a small area in the middle, so that it does not obscure the shooting preview. In some embodiments, when the first and second posture data match or are similar (which may be decided by checking whether their difference or ratio lies within a preset threshold), the image prompt disappears to avoid affecting the preview; when the two no longer match, the image prompt reappears so the user can adjust again.
Step S300: and continuing to record the video when receiving a recording continuation instruction input by the user (for example, the user clicks the recording button again), and then stopping recording when receiving a stopping instruction input by the user and generating a second video file. In some embodiments, the second video file may be a cache file recorded in the memory.
In some embodiments, recording resumes immediately upon receiving the continue-recording instruction; in other embodiments, it resumes after a second preset time, for example a 3-second countdown, has elapsed. During recording, a recording-progress element with a time indicator can be displayed, for example a progress bar showing recording progress in real time.
Likewise, for convenience, after the second video file is generated, the user can click a re-record button to re-shoot the video and regenerate the second video file; the old second video file is deleted or replaced by the regenerated one.
In other embodiments, after the second video file is generated, the second video file may be deleted after receiving a deletion instruction input by the user (e.g., the user clicks a delete button), and then the video may be re-recorded and the second video file may be regenerated when a re-recording instruction input by the user is received (e.g., the user clicks a record button again).
In some embodiments, an instruction of the user may be further received to add animation special effect content to the second video file, so as to increase the interest of the video.
Step S400: and splicing the first video file and the second video file into a target video file. The target video file may also be a cache file in a memory, and when a sharing instruction or a storage instruction input by a user is received, the target video file is shared, or stored locally, or stored in a cloud server. The first video file and the second video file may be spliced into the target video file after receiving a splicing instruction input by the user, or the first video file and the second video file may be automatically spliced into the target video file after step S300, without limitation.
Of course, after the target video file is generated, image processing such as adding an animated special effect may be applied to it. Image signal processing includes, but is not limited to, at least one of: black-level subtraction, lens roll-off correction, channel gain adjustment, bad-pixel correction, demosaicing, cropping, scaling, white balance, color correction, luma adaptation, color conversion, and image contrast enhancement.
Alternatively, so that the first or second portion of the target video plays faster or slower, in some embodiments the first video file and/or the second video file may undergo frame-insertion or frame-extraction processing before splicing. Frame insertion inserts repeated frames between existing frames of the video; the video's duration increases accordingly, and motion appears slower to the viewer. Frame extraction uniformly discards some of the video's frames (e.g., every odd or even frame); the duration decreases accordingly, and motion appears faster.
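Treating a clip as a sequence of frames, the two operations just described reduce to duplicating or decimating list elements. A minimal sketch (frame objects and factors are illustrative):

```python
def interpolate_frames(frames, factor=2):
    """Frame insertion: repeat each frame `factor` times, slowing
    playback by that factor at a fixed frame rate."""
    return [f for f in frames for _ in range(factor)]

def extract_frames(frames, keep_every=2):
    """Frame extraction: keep every `keep_every`-th frame, speeding
    playback by that factor at a fixed frame rate."""
    return frames[::keep_every]
```

Real implementations would operate on decoded frames and may use motion-compensated interpolation instead of plain duplication, but the duration arithmetic is the same: insertion multiplies the frame count, extraction divides it.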
Of course, pausing and resuming several times may generate more than two video files; in that case, all of the video files are spliced in sequence, which is not elaborated here.
The present invention also provides a video recording terminal, comprising: a display; one or more processors; a memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to: a video recording method according to any of the above embodiments is performed.
As shown in fig. 4, for convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments. The terminal may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or a vehicle-mounted computer; the mobile phone is taken as an example:
fig. 4 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present invention. Referring to fig. 4, the handset includes: radio Frequency (RF) circuitry 1510, memory 1520, input unit 1530, display unit 1540, sensor 1550, audio circuitry 1560, wireless fidelity (Wi-Fi) module 1570, processor 1580, and power supply 1590. Those skilled in the art will appreciate that the handset configuration shown in fig. 4 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 4:
the RF circuit 1510 may be configured to receive and transmit signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and forwards it to the processor 1580 for processing, and transmits uplink data to the base station. In general, the RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1510 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, and the like); the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, and the like). Further, the memory 1520 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 1530 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations of a user on or near it (for example, operations performed on or near the touch panel 1531 using any suitable object or accessory such as a finger or a stylus) and drive corresponding connection devices according to a preset program. Optionally, the touch panel 1531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 1580, and can receive and execute commands sent by the processor 1580. The touch panel 1531 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch panel 1531, the input unit 1530 may include other input devices 1532, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 1540 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 1540 may include a Display panel 1541, and optionally, the Display panel 1541 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541, and when the touch panel 1531 detects a touch operation on or near the touch panel 1531, the touch operation is transmitted to the processor 1580 to determine the type of the touch event, and then the processor 1580 provides a corresponding visual output on the display panel 1541 according to the type of the touch event. Although in fig. 4, the touch panel 1531 and the display panel 1541 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the mobile phone.
The handset can also include at least one sensor 1550, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1541 according to the brightness of ambient light and a proximity sensor that turns off the display panel 1541 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
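Since the accelerometer reads the direction of gravity when the device is stationary, the lateral and longitudinal tilt angles used for the posture prompt can be estimated directly from it. A minimal sketch under the standard roll/pitch convention (the function name and axis convention are assumptions for illustration, not taken from the patent):

```python
import math


def tilt_angles(ax, ay, az):
    """Estimate lateral tilt (roll) and longitudinal tilt (pitch), in
    degrees, from a stationary accelerometer's gravity reading (m/s^2)."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    return roll, pitch
```

For a phone lying flat, gravity falls entirely on the z axis and both angles are zero; tilting the device shifts gravity onto the x/y axes and the angles change accordingly.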
Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 1570, the mobile phone can help the user receive and send e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 4 shows the Wi-Fi module 1570, it is understood that the module is not an essential part of the handset and may be omitted as needed without changing the essence of the invention.
The processor 1580 is the control center of the mobile phone: it connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes its data by running or executing the software programs and/or modules stored in the memory 1520 and calling the data stored in the memory 1520, thereby monitoring the mobile phone as a whole. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor need not be integrated into the processor 1580.
The handset also includes a power supply 1590 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 1580 via a power management system to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment of the present invention, the processor 1580 included in the terminal further has the following functions: recording a video when a recording instruction input by a user is received, pausing the recording when a pause instruction input by the user is received, acquiring and storing first posture data acquired by a posture sensor at the current moment, and then generating a first video file; sending a corresponding prompt for adjusting the terminal posture to a user according to the matching relation between the first posture data and the second posture data detected by the posture sensor in real time; continuously shooting and recording the video when a recording continuing instruction input by a user is received, stopping shooting and recording when a stopping instruction input by the user is received, and generating a second video file; and splicing the first video file and the second video file into a target video file. That is, the processor 1580 has a function of executing the video recording method according to any of the embodiments described above, and details are not described herein.
The video recording method and the video recording terminal are applied to a mobile terminal. The video is recorded when a recording instruction input by the user is received; the recording is paused when a pause instruction input by the user is received, first posture data acquired by the posture sensor at the current moment is acquired and stored, and a first video file is generated; a corresponding prompt for adjusting the terminal posture is sent to the user according to the matching relation between the first posture data and second posture data detected by the posture sensor in real time; the video recording is resumed when a continue-recording instruction input by the user is received, the recording is stopped when a stop instruction input by the user is received, and a second video file is generated; and the first video file and the second video file are spliced into the target video file. Following the prompt issued by the terminal before resuming, the user can adjust the posture of the terminal, such as its lateral or longitudinal tilt angle, so that the second video recorded later connects smoothly with the first video recorded earlier. This achieves a natural, seemingly continuous recording, makes the transition more natural when the target video is finally generated by splicing, and improves both the appeal and the click-through rate of the user-created video.
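The posture-matching prompt at the heart of this method can be sketched as a simple comparison between the stored pause-time pose and the live pose. The tolerance value, prompt wording, and (lateral, longitudinal) tuple representation are illustrative assumptions; the patent does not specify a particular matching rule.

```python
def pose_prompt(first_pose, second_pose, tolerance_deg=2.0):
    """Compare the pose stored at pause time with the live pose and
    return either an adjustment hint or a resume-recording prompt.

    Poses are (lateral_tilt, longitudinal_tilt) tuples in degrees.
    """
    d_lat = first_pose[0] - second_pose[0]
    d_lon = first_pose[1] - second_pose[1]
    if abs(d_lat) <= tolerance_deg and abs(d_lon) <= tolerance_deg:
        return "Pose matched: you may resume recording"
    hints = []
    if abs(d_lat) > tolerance_deg:
        direction = "left" if d_lat < 0 else "right"
        hints.append("tilt %s laterally by %.1f deg" % (direction, abs(d_lat)))
    if abs(d_lon) > tolerance_deg:
        direction = "down" if d_lon < 0 else "up"
        hints.append("tilt %s longitudinally by %.1f deg" % (direction, abs(d_lon)))
    return "Adjust: " + ", ".join(hints)
```

The terminal would call this in a loop against live sensor readings and surface the returned string as the visual or voice prompt described above.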
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (10)
1. A video recording method is characterized in that the video recording method is applied to a mobile terminal and comprises the following steps:
recording a video when a recording instruction input by a user is received, pausing the recording when a pause instruction input by the user is received, acquiring and storing first posture data acquired by a posture sensor at the current moment, and then generating a first video file;
sending a corresponding prompt for adjusting the terminal posture to the user according to the matching relation between the first posture data and second posture data detected by the posture sensor in real time, so that the user adjusts the terminal posture according to the prompt;
when the first posture data and the second posture data are matched or similar, sending a prompt of continuous shooting and recording to a user, continuously shooting and recording videos when a continuous recording instruction input by the user according to the prompt of continuous shooting and recording is received, stopping shooting and recording when a stopping instruction input by the user is received, and generating a second video file;
and splicing the first video file and the second video file into a target video file.
2. The video recording method according to claim 1, wherein the recording instruction, the pause instruction, the continue-recording instruction, and the stop instruction are all input via the same key.
3. The video recording method according to claim 1, wherein the prompt for adjusting the terminal posture comprises a visual prompt and/or a voice prompt for instructing a user to adjust the terminal lateral tilt angle and the terminal longitudinal tilt angle.
4. The video recording method according to claim 1, wherein the attitude sensor comprises an angular velocity sensor and/or a magnetic sensor.
5. The video recording method according to claim 1, wherein before the first video file and the second video file are spliced into the target video file, an animated special effect is further added to the first video file and/or the second video file.
6. The video recording method according to claim 1, wherein before the first video file and the second video file are spliced into the target video file, the first video file and/or the second video file are/is further subjected to frame interpolation processing or frame extraction processing.
7. The video recording method according to claim 1, wherein the video is recorded after a first predetermined time elapses following receipt of the recording instruction input by the user, and/or the recording is continued after a second predetermined time elapses following receipt of the continue-recording instruction input by the user.
8. The video recording method according to claim 1, wherein a recording progress component with a time scale is displayed while the video is recorded.
9. The video recording method according to claim 1, wherein after the first video file is generated, when a re-recording instruction input by a user is received, the video is re-recorded and the first video file is re-generated; and/or
After the second video file is generated, when a re-recording instruction input by a user is received, the video is recorded again, and the second video file is generated again.
10. A video recording terminal, comprising:
a display;
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors so as to perform the video recording method according to any of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711238019.XA CN108012101B (en) | 2017-11-30 | 2017-11-30 | Video recording method and video recording terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108012101A CN108012101A (en) | 2018-05-08 |
CN108012101B true CN108012101B (en) | 2020-11-03 |
Family
ID=62055163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711238019.XA Active CN108012101B (en) | 2017-11-30 | 2017-11-30 | Video recording method and video recording terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108012101B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109032560A (en) * | 2018-07-26 | 2018-12-18 | 歌尔股份有限公司 | A kind of parameter adjusting method based on rotary encoder, device and electronic equipment |
CN109600661B (en) * | 2018-08-01 | 2022-06-28 | 北京微播视界科技有限公司 | Method and apparatus for recording video |
CN109005359B (en) * | 2018-10-31 | 2020-11-03 | 广州酷狗计算机科技有限公司 | Video recording method, apparatus and storage medium |
CN109729408B (en) * | 2018-12-19 | 2022-03-11 | 四川坤和科技有限公司 | Mobile terminal high-definition online video scaling method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103546698B (en) * | 2013-10-31 | 2016-08-17 | 广东欧珀移动通信有限公司 | A kind of mobile terminal recorded video store method and device |
CN105872700A (en) * | 2015-11-30 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | Method and device for realizing seamless circulation of startup video |
CN106139564B (en) * | 2016-08-01 | 2018-11-13 | 纳恩博(北京)科技有限公司 | Image processing method and device |
CN106657774A (en) * | 2016-11-25 | 2017-05-10 | 杭州联络互动信息科技股份有限公司 | Method and device for recording video |
- 2017-11-30: CN CN201711238019.XA patent application filed (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107948562B (en) | Video recording method and video recording terminal | |
WO2021036536A1 (en) | Video photographing method and electronic device | |
CN108089788B (en) | Thumbnail display control method and mobile terminal | |
CN111010510B (en) | Shooting control method and device and electronic equipment | |
CN108012101B (en) | Video recording method and video recording terminal | |
US8847878B2 (en) | Environment sensitive display tags | |
CN109597556B (en) | Screen capturing method and terminal | |
WO2016177296A1 (en) | Video generation method and apparatus | |
US20160112632A1 (en) | Method and terminal for acquiring panoramic image | |
CN108038825B (en) | Image processing method and mobile terminal | |
WO2019196929A1 (en) | Video data processing method and mobile terminal | |
US11785331B2 (en) | Shooting control method and terminal | |
CN108616771B (en) | Video playing method and mobile terminal | |
CN108307106B (en) | Image processing method and device and mobile terminal | |
CN111147779B (en) | Video production method, electronic device, and medium | |
CN110602386B (en) | Video recording method and electronic equipment | |
WO2015131768A1 (en) | Video processing method, apparatus and system | |
CN108124059B (en) | Recording method and mobile terminal | |
CN111597370B (en) | Shooting method and electronic equipment | |
CN109618218B (en) | Video processing method and mobile terminal | |
CN109922294B (en) | Video processing method and mobile terminal | |
CN110650294A (en) | Video shooting method, mobile terminal and readable storage medium | |
CN108132749B (en) | Image editing method and mobile terminal | |
CN109361864B (en) | Shooting parameter setting method and terminal equipment | |
CN107734269B (en) | Image processing method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20211203 Address after: 31a, 15 / F, building 30, maple mall, bangrang Road, Brazil, Singapore Patentee after: Baiguoyuan Technology (Singapore) Co.,Ltd. Address before: Building B-1, North District, Wanda Commercial Plaza, Wanbo business district, No. 79, Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province Patentee before: GUANGZHOU BAIGUOYUAN INFORMATION TECHNOLOGY Co.,Ltd. |
TR01 | Transfer of patent right |