CN114143398A - Video playing method and device - Google Patents


Info

Publication number
CN114143398A
CN114143398A (application CN202111362634.8A)
Authority
CN
China
Prior art keywords
video
target
input
fusion
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111362634.8A
Other languages
Chinese (zh)
Other versions
CN114143398B (en)
Inventor
兰天成 (Lan Tiancheng)
黄春成 (Huang Chuncheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Weiwo Software Technology Co ltd
Original Assignee
Xi'an Weiwo Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Weiwo Software Technology Co ltd
Priority to CN202111362634.8A
Publication of CN114143398A
Application granted
Publication of CN114143398B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403: User interfaces with means for local support of applications that increase the functionality
    • H04M 1/7243: User interfaces with interactive means for internal management of messages
    • H04M 1/72439: User interfaces with interactive means for internal management of messages, for image or video messaging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0487: Interaction techniques using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F 3/16: Sound input; Sound output
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video playing method and device, belongs to the technical field of communications, and can solve the problem that the playing mode of a video is rigid and inflexible. The method comprises the following steps: receiving a first input on a first video, where the first video comprises N target moving objects; and in response to the first input, playing a second video. The second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, the motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N. The method is applied to video playing scenarios.

Description

Video playing method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video playing method and device.
Background
With the development of communication technology, electronic devices are used ever more widely in users' daily lives; for example, a user can record videos with an electronic device, thereby capturing interesting or memorable moments in life.
Generally, when a user wants to photograph a shooting object in motion with an electronic device, the user may trigger the electronic device to record a video while tracking the shooting object. After the recording is finished, the user can trigger the electronic device to play the video. However, the motion trajectory and motion speed of the shooting object in the video are fixed by the trajectory and speed of the object at the time the video was shot. The playing mode of the video is therefore rigid and inflexible.
Disclosure of Invention
The embodiments of the application aim to provide a video playing method and device, which can solve the problem that the playing mode of a video is rigid and inflexible.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a video playing method, where the method includes: receiving a first input on a first video, where the first video comprises N target moving objects; and in response to the first input, playing a second video. The second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, the motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N.
In a second aspect, an embodiment of the present application provides a video playing apparatus, including a receiving module and a playing module. The receiving module is configured to receive a first input on a first video, where the first video comprises N target moving objects. The playing module is configured to play a second video in response to the first input received by the receiving module. The second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, the motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method as in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method as in the first aspect described above.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method as in the first aspect.
In the embodiments of the application, a first input on a first video is received, where the first video comprises N target moving objects; in response to the first input, a second video is played, where the second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, the motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N. With this scheme, while a video including at least one moving object is being played, the user can trigger, through one input on that video, the playing of a fused video synthesized from that video and other videos each including one of the moving objects, and the motion parameters of some of the moving objects change according to the input parameters of that input. That is, the user can freely change the motion parameters (for example, the motion speed and/or the motion trajectory) of the moving objects in the played video according to the user's own wishes. This greatly improves the flexibility of video playing and makes it more user-friendly.
Drawings
Fig. 1 is a schematic diagram of a video playing method according to an embodiment of the present application;
fig. 2 is a schematic interface diagram of video playing according to an embodiment of the present application;
fig. 3 is a first schematic interface diagram of video frame composition according to an embodiment of the present application;
fig. 4 is a second schematic interface diagram of video frame composition according to an embodiment of the present application;
fig. 5 is a third schematic interface diagram of video frame composition according to an embodiment of the present application;
fig. 6 is a schematic interface diagram of video recording according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, instance, or illustration. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as more preferred or advantageous than other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
Generally, when a user triggers an electronic device to play a video, the motion trajectory and motion speed of a shooting object in the video are determined by the trajectory and speed of that object when the video was shot, so the shooting object in the video can only move according to the motion parameters at the time of shooting. The playing mode of the video is therefore rigid and inflexible.
In view of the above technical problem, the embodiments of the present application provide a video playing method. While a video including at least one moving object is being played, the user can trigger, through an input on that video, the playing of a fused video synthesized from that video and other videos each including one of the moving objects, and the motion parameters of some of the moving objects change according to the input parameters of that input. That is, the user can freely change the motion parameters (for example, the motion speed and/or the motion trajectory) of the moving objects in the played video according to the user's own wishes. This greatly improves the flexibility of video playing and makes it more user-friendly.
The following describes in detail a video playing method, a video playing device, and an electronic device provided in the embodiments of the present application with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides a video playing method, which includes the following steps S101 and S102.
S101, the video playing device receives a first input of a first video.
The first video comprises N target moving objects.
Optionally, in this embodiment of the application, the first video is the currently played video.
Optionally, the first input may be a touch input, a voice input, or a gesture input performed by the user on the currently played first video. For example, the touch input is a sliding input on the video screen of the currently playing first video. Of course, the first input may also be another possible input, which is not limited in this embodiment of the application.
Optionally, the target moving object may be a living body or a non-living body. For example, the target moving object is a flying wild goose; as another example, the target moving object is a soccer ball that is kicked away.
Further, the N target moving objects may be moving objects of the same type or moving objects of different types, which is not limited in this application.
It can be understood that, when S101 is executed, the video playing apparatus is already playing the first video. Therefore, before S101, the video playing method provided in the embodiment of the present application may further include: the video playing apparatus receives an input of the user on a first thumbnail and, in response to that input, plays the first video. The first thumbnail is used to indicate the first video.
S102, the video playing device responds to the first input and plays the second video.
The second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, the motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N.
Optionally, in this embodiment of the present application, the motion parameters may include at least one of: a motion trajectory and a motion speed, where the motion trajectory also determines the direction of motion.
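For concreteness, the motion parameters described above can be modeled as a small record. The following Python sketch is an editorial illustration only and is not part of the original disclosure; all names are hypothetical:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class MotionParams:
        # Ordered (x, y) positions of the object, one per fused frame;
        # the ordering implicitly fixes the direction of motion.
        trajectory: List[Tuple[int, int]] = field(default_factory=list)
        # Relative speed factor: > 1 plays the object's motion faster,
        # < 1 slower (1.0 keeps the recorded speed).
        speed: float = 1.0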
Optionally, in this embodiment of the application, the first video and the N third videos may be videos recorded by the electronic device and stored in its local storage space; or videos downloaded online by the electronic device; or videos received by the electronic device from other devices. This is determined by the actual use situation and is not limited in the embodiment of the application.
Illustratively, the video playing device is a mobile phone, and N is 1. The mobile phone plays the first video and, as shown in fig. 2(a), displays one video frame of the first video, where the first video comprises one target moving object. Fig. 2(b) shows a plurality of video frames of the third video containing that target moving object. As can be seen, the target moving object in the first video played by the mobile phone walks to the right in a straight line.
Optionally, in this embodiment of the present application, when P = N, the motion parameters of each of the N target moving objects are determined according to the input parameters of the first input; when P < N, the motion parameters of only P of the N target moving objects are determined according to the input parameters of the first input. Different second videos are thus obtained in these two possible cases. The specific implementations are as follows:
First possible implementation
Optionally, P = N; after S101 and before S102, the video playing method provided in the embodiment of the present application may further include S103 and S104.
S103, in response to the first input, the video playing apparatus synthesizes a first target video frame of the first video with a second target video frame of each third video according to the input parameters, to obtain M first fusion video frames.
The first target video frame and the second target video frame are video frames recorded at the same time.
Optionally, the first target video frame is any one video frame in the first video, and the second target video frame is any one video frame in each of the N third videos. Specifically, a video frame recorded at the same time is obtained from the first video and from each third video, and these video frames are synthesized to obtain one first fusion video frame; this is repeated to obtain M first fusion video frames.
Further, assume that the first video includes S video frames and each third video includes M video frames, where S and M are positive integers. Since the number of the M first fusion video frames is determined by the number of video frames in each third video, there are two possible scenarios, and a small frame-alignment sketch follows the second scenario below:
The first possible scenario:
When S > M, the first video includes more video frames than each third video. In this case, M video frames recorded at the same times as the M video frames of each third video may be determined from the S video frames, and the first target video frame among those M video frames is then synthesized with the second target video frame among the M video frames of each third video, to obtain M first fusion video frames.
The second possible scenario:
When S = M, the first video and each third video include the same number of video frames; that is, the first target video frame is one of the S video frames, and the second target video frame is one of the M video frames of each third video.
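As an illustrative sketch of this frame alignment (not part of the original disclosure; the (timestamp, image) frame representation is an assumption):

    def align_frames(first_video_frames, third_video_frames):
        # Each frame is assumed to be a (timestamp_ms, image) tuple.
        # For every one of the M frames of a third video, pick the frame
        # of the first video recorded at (nearest to) the same time.
        # Covers both scenarios above: when S == M this degenerates to a
        # one-to-one pairing; when S > M it selects M of the S frames.
        pairs = []
        for ts, obj_img in third_video_frames:                  # M frames
            bg = min(first_video_frames, key=lambda f: abs(f[0] - ts))
            pairs.append((bg, (ts, obj_img)))
        return pairs                                            # M aligned pairs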
Optionally, in the case that the input parameters include an input track, S103 may be specifically implemented by the following S103A and S103B.
S103A, in response to the first input, the video playing apparatus determines, according to the input track, a first fusion position, in the first target video frame, of the second target video frame of each third video.
Optionally, the input track may include a track direction and a track shape.
It should be noted that, when the first target video frame is the video frame recorded first in the first video, the first fusion position is the composition position of the P target moving objects in that first target video frame; that is, the first fusion position at this moment is the starting position of the P target moving objects. The first fusion positions of the second target video frames in the other video frames of the first video are then distributed sequentially according to the track direction and track shape of the input track.
S103B, the video playing apparatus synthesizes the first target video frame and the second target video frame in each third video according to the first fusion position, and obtains M first fusion video frames.
It can be understood that the fusion positions in adjacent ones of the M first fusion video frames are connected to form a fusion track that is consistent with the input track of the first input. In this way, the motion trajectory of each target moving object in the second video coincides with the input track of the first input.
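The following Python/NumPy sketch illustrates one plausible way to realize S103A and S103B: resampling the input track into M evenly spaced first fusion positions and pasting the object patch of each second target video frame into the first target video frame. It is an editorial illustration under stated assumptions, not the patent's implementation; the function names and the top-left-corner pasting convention are hypothetical, and the paste stands in for real segmentation/matting:

    import numpy as np

    def fusion_positions(input_track, m):
        # Resample the user's stroke (a polyline of (x, y) touch points)
        # into m evenly spaced first fusion positions, so that adjacent
        # fused frames trace out the input track.
        track = np.asarray(input_track, dtype=float)
        seg = np.linalg.norm(np.diff(track, axis=0), axis=1)    # segment lengths
        s = np.concatenate([[0.0], np.cumsum(seg)])             # arc length
        targets = np.linspace(0.0, s[-1], m)
        xs = np.interp(targets, s, track[:, 0])
        ys = np.interp(targets, s, track[:, 1])
        return [(int(round(x)), int(round(y))) for x, y in zip(xs, ys)]

    def fuse(bg_frame, obj_patch, pos):
        # Paste the object patch (h x w x 3 array) into a copy of the
        # background frame with its top-left corner at pos, clipped to
        # the frame bounds.
        out = bg_frame.copy()
        x, y = pos
        h = min(obj_patch.shape[0], out.shape[0] - y)
        w = min(obj_patch.shape[1], out.shape[1] - x)
        if h > 0 and w > 0:
            out[y:y + h, x:x + w] = obj_patch[:h, :w]
        return out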
Illustratively, in conjunction with the description of fig. 2 above, the mobile phone displays a first target video frame, as shown in fig. 3(a), where the first target video frame includes one moving object: a person. Since the track direction of the finger sliding track is upward and the track shape is a vertical line, the first fusion positions of the 5 second target video frames shown in fig. 3(b) are determined according to the sliding track. According to these first fusion positions, the 5 second target video frames are synthesized in sequence, each with the first target video frame recorded at the same time, to obtain 5 first fusion video frames. The fusion track formed by the fusion positions of these first fusion video frames is as shown in fig. 3(b), so that after the 5 first fusion video frames are synthesized, the moving object in the obtained second video moves straight upward along the sliding track.
Illustratively, in conjunction with the description of fig. 2 above, the mobile phone displays a first target video frame, as shown in fig. 4(a), where the first target video frame includes one moving object: a person. Since the track direction of the finger sliding track is rightward and the track shape is an arc, the first fusion positions of the 5 second target video frames shown in fig. 4(b) are determined according to the sliding track. According to these first fusion positions, the 5 second target video frames are synthesized in sequence, each with the first target video frame recorded at the same time, to obtain 5 first fusion video frames. The fusion track formed by these first fusion video frames is as shown in fig. 4(b), so that after the 5 first fusion video frames are synthesized, the moving object in the obtained second video moves in an arc to the right along the sliding track.
S104, the video playing device synthesizes the M first fusion video frames according to the input parameters to obtain a second video.
Optionally, in the case that the input parameters include an input force, S104 may be specifically implemented by the following S104A and S104B.
S104A, the video playing device determines a first time interval between two adjacent first fusion video frames in the M first fusion video frames according to the input force.
Optionally, when the first input is a sliding input, the electronic device may determine the input force according to the touch pressure of the user's finger on the screen of the electronic device, or according to the line thickness of the input track.
Further, when the input force is determined according to the line thickness of the input track, the thicker the line of the input track, the larger the input force; conversely, the thinner the line of the input track, the smaller the input force.
It should be noted that, in the embodiment of the present application, the larger the input force, the larger the first time interval; the smaller the input force, the smaller the first time interval.
S104B, the video playing apparatus synthesizes the M first fusion video frames according to the first time interval to obtain the second video.
It should be noted that, in the embodiment of the present application, when the first time interval is smaller, more first fusion video frames are needed for video synthesis per unit time, so the motion speed of each target moving object in the obtained second video is lower; that is, each target moving object in the second video shows a slow-motion effect. When the first time interval is larger, fewer first fusion video frames are needed for video synthesis per unit time, so the motion speed of each target moving object in the obtained second video is higher; that is, each target moving object in the second video shows an accelerated-motion effect. The user can therefore trigger adjustment of the motion speed of the target moving objects in the video according to actual requirements.
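An illustrative sketch of S104A and S104B follows, assuming OpenCV for writing the output file. The linear force-to-interval mapping and all parameter names are assumptions, chosen only to respect the monotonic rule stated above (larger force, larger interval, fewer fused frames per unit time, faster apparent motion):

    import cv2

    def assemble_second_video(fused_frames, input_force, out_path,
                              fps=30.0, base_interval_ms=33.3):
        # The first time interval grows with the input force (S104A);
        # a larger interval means fewer fused frames are used per unit
        # time, so the objects appear to move faster.  Forces below 1.0
        # would need frame duplication/interpolation for true slow
        # motion; that is omitted in this sketch.
        interval_ms = base_interval_ms * max(input_force, 0.1)  # assumed mapping
        step = max(1, round(interval_ms / base_interval_ms))    # frames to skip
        kept = fused_frames[::step]
        h, w = kept[0].shape[:2]
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (w, h))
        for frame in kept:
            writer.write(frame)
        writer.release()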
Illustratively, in conjunction with the description of fig. 4 above, since the trajectory line of the finger sliding track shown in fig. 5(a) is thicker than that shown in fig. 4(a), the input force of the first input is larger. The number of second target video frames that need to be synthesized per unit time is therefore reduced from the 5 second target video frames shown in fig. 4(b) to the 3 second target video frames shown in fig. 5(b), and the 3 second target video frames are synthesized in sequence, each with the first target video frame recorded at the same time, to obtain 3 first fusion video frames. The fusion track formed by these 3 first fusion video frames is as shown in fig. 5(b), so that after the 3 first fusion video frames are synthesized, the moving object in the obtained second video moves faster than the target moving object in fig. 4.
Second possible implementation
Optionally, P < N; after S101 and before S102, the video playing method provided in the embodiment of the present application may further include S105 to S107.
S105, in response to the first input, the video playing apparatus synthesizes a third target video frame of the first video with a fourth target video frame of each of P third videos according to the input parameters, to obtain M second fusion video frames.
The third target video frame and the fourth target video frame are video frames recorded at the same time. The P third videos are the videos corresponding to the P target moving objects.
Optionally, the input parameters may include at least one of: inputting track and inputting force.
Optionally, the third target video frame is any one video frame in the first video, and the fourth target video frame is any one video frame in each of P third videos.
Optionally, for the description of obtaining the M second fusion video frames, reference may be made to the detailed description of the M first fusion video frames in the above embodiment; details are not repeated here.
Optionally, in the case that P < N, before S101, the video playing method provided in this embodiment of the present application may further include: the video playing apparatus receives a user input on P target moving objects among the N target moving objects included in the first video and, in response to that input, selects the P target moving objects. In this way, when the user performs the first input, the input parameters of the first input trigger adjustment of the motion parameters of only the P target moving objects.
S106, the video playing device synthesizes each second fusion video frame and (N-P) fifth target video frames respectively to obtain M third fusion video frames.
Each fifth target video frame is a video frame of one of (N-P) third videos, and each fifth target video frame and the third target video frame are video frames recorded at the same time.
Optionally, S106 may be specifically implemented by S106A and S106B described below.
S106A, the video playing apparatus determines second fusion positions of the (N-P) fifth target video frames in each second fusion video frame according to the composition positions of the (N-P) target moving objects in the third target video frame.
Specifically, the (N-P) composition positions of the (N-P) target moving objects in the third target video frame are obtained, and each composition position is used as the second fusion position, in each second fusion video frame, of one of the (N-P) fifth target video frames.
S106B, the video playing apparatus synthesizes the (N-P) fifth target video frames with each second fusion video frame according to the second fusion positions, to obtain M third fusion video frames.
It should be noted that, since the second fusion positions of the (N-P) fifth target video frames in each second fusion video frame are determined by the composition positions of the (N-P) target moving objects in the third target video frames, the second fusion positions in the M third fusion video frames are connected to form a fusion track that is consistent with the motion trajectories (e.g., track shape and track direction) of the (N-P) target moving objects in the first video. That is, the motion trajectories of the (N-P) target moving objects are unchanged.
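A minimal sketch of S106A and S106B, under the assumption that per-frame composition positions and object crops for the (N-P) unselected objects are already available (both container structures below are hypothetical):

    def fuse_unselected(second_fused_frames, unselected_tracks, patches):
        # unselected_tracks[i][k]: (x, y) composition position of the i-th
        # unselected object in the k-th frame of the first video;
        # patches[i][k]: the matching object crop from its third video.
        # Pasting each object back at its original per-frame position
        # leaves its motion trajectory unchanged.
        third_fused = []
        for k, frame in enumerate(second_fused_frames):         # M frames
            out = frame.copy()
            for track, pats in zip(unselected_tracks, patches):
                x, y = track[k]                                 # original position
                patch = pats[k]
                h, w = patch.shape[:2]
                out[y:y + h, x:x + w] = patch
            third_fused.append(out)
        return third_fused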
S107, the video playing apparatus synthesizes the M third fusion video frames to obtain the second video.
Optionally, S107 may be specifically implemented by S107A described below.
S107A, the video playing device synthesizes M third fusion video frames according to the second time interval to obtain a second video.
The second time interval is the time interval between any two adjacent video frames in the first video.
It should be noted that, since the time interval between any two adjacent video frames in the first video is fixed, when the second video is obtained by synthesizing the M third fusion video frames according to the second time interval, the motion speed of the (N-P) target moving objects in the second video is consistent with their motion speed in the first video. That is, the motion speed of the (N-P) target moving objects is unchanged.
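For completeness, S107A can be illustrated by writing the M third fusion video frames at the first video's own frame rate, so the unselected objects keep their original speed. This is again an assumption-laden sketch, not the patent's implementation:

    import cv2

    def assemble_at_original_speed(third_fused_frames, first_video_fps, out_path):
        # Writing the M third fusion video frames at the first video's own
        # frame rate (the second time interval) keeps the motion speed of
        # the (N-P) unselected objects identical to the first video.
        h, w = third_fused_frames[0].shape[:2]
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                 first_video_fps, (w, h))
        for frame in third_fused_frames:
            writer.write(frame)
        writer.release()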
It can be understood that, when N is greater than 1, the input track of the first input indicates that the motion trajectories of the P target moving objects are to be adjusted to be consistent with the input track. By obtaining the input track and the input force of the first input, the electronic device obtains the fusion positions at which the video frames recorded at the same time in the first video and the N third videos are synthesized, as well as the motion speed of the P target moving objects in the fused video. In this way, moving objects that originally had different motion trajectories and speeds can be made to keep the same trajectory and speed along the motion track specified by the user's finger, and moving objects that originally shared the same trajectory can be made to move along different trajectories under the user's trigger, which greatly increases the interest of video playing.
The embodiments of the present application provide a video playing method. While a video including at least one moving object is being played, the user can trigger, through an input on that video, the playing of a fused video synthesized from that video and other videos each including one of the moving objects, and the motion parameters of some of the moving objects change according to the input parameters of that input. That is, the user can freely change the motion parameters (for example, the motion speed and/or the motion trajectory) of the moving objects in the played video according to the user's own wishes. This greatly improves the flexibility of video playing and makes it more user-friendly.
Optionally, in the case that the first video and the N third videos are videos recorded by the electronic device, before the step S101, the video playing method provided in the embodiment of the present application may further include the following steps S108 and S109.
S108, the video playing apparatus receives a third input.
Optionally, the third input may be a touch input, a gesture input, a voice input, or the like. For example, the touch input is a click input on a target control by the user, where the target control is used to trigger recording of a video. Of course, the third input may also be another possible input, which is not limited in this embodiment of the application.
Optionally, before the user triggers the cameras of the electronic device to record videos, the user needs to trigger the electronic device to run the camera application. Thus, before S108, the video playing method provided in the embodiment of the present application may further include: the video playing apparatus receives an input of the user on a camera application icon and, in response to that input, runs the camera application indicated by the icon and starts the first camera and the N second cameras of the electronic device. In this way, images of the target moving objects can be acquired by the first camera and the N second cameras.
S109, in response to the third input, the video playing apparatus controls the first camera of the electronic device to record the first video, and controls each of the N second cameras of the electronic device to record one third video.
Each third video is a video recorded by one second camera while tracking the motion of one target moving object during the recording of the first video.
For example, the first camera may be a main camera, and the second camera may be a wide-angle periscope camera or a rotatable camera.
It should be noted that, when N is greater than 1, the N second cameras may be cameras of the same type or of different types, as determined by the actual use situation; this is not limited in the embodiment of the present application.
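The dual recording of S109 can be sketched as follows. On a phone this would go through the platform camera API (e.g. Android Camera2/CameraX, with the second camera driving an object tracker); the OpenCV two-webcam version below, with its device indices, file names, and fixed duration, is only a runnable desktop stand-in:

    import cv2

    def record_dual(main_idx=0, tracking_idx=1, seconds=5, fps=30.0):
        # Capture the first video from one camera while a second camera
        # records a third video of the tracked object.
        caps = {"first_video.mp4": cv2.VideoCapture(main_idx),
                "third_video.mp4": cv2.VideoCapture(tracking_idx)}
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writers = {}
        for name, cap in caps.items():
            w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
            h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            writers[name] = cv2.VideoWriter(name, fourcc, fps, (w, h))
        for _ in range(int(seconds * fps)):
            for name, cap in caps.items():
                ok, frame = cap.read()
                if ok:
                    writers[name].write(frame)
        for cap in caps.values():
            cap.release()
        for wtr in writers.values():
            wtr.release()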
As an example, the video playing device is a mobile phone, and N is 1. As shown in fig. 6(a), the mobile phone displays a preview interface. If the user wants to trigger the electronic device to record a video, the user may click control 01 in the preview interface. After receiving the click input, the mobile phone may, in response, control the main camera to record the first video and control the periscope camera to record a third video, as shown in fig. 6(b).
It should be noted that, since each third video is recorded while a second camera tracks the motion of one target moving object during the recording of the first video, the motion details of the target moving object in each video frame of that third video are clearer.
Further, since the field of view of the first camera is large, each video frame of the first video includes, in addition to the images of the N target moving objects, a large-area background image, so that the motion states of the N target moving objects are not clear enough. Synthesizing the first video and the N third videos into the second video therefore makes the dynamic process of the moving objects in the second video clearer.
According to the video playing method provided by the embodiment of the application, the user can, through an input, trigger the first camera of the electronic device to record the first video and each of the N second cameras to record one third video, so that the electronic device stores multiple videos recorded by different cameras. When the user wants to view them, the user can view the video recorded by each individual camera according to actual needs, or trigger the synthesis of the multiple videos into a fused video (i.e., the second video).
It should be noted that, in the video playing method provided in the embodiments of the present application, the execution subject may be a video playing apparatus (the video playing apparatus being an electronic device or an external component of the electronic device), or a control module in the video playing apparatus for executing the video playing method. In the embodiments of the present application, a video playing apparatus executing the video playing method is taken as an example to describe the video playing apparatus provided in the embodiments of the present application.
As shown in fig. 7, an embodiment of the present application provides a video playing apparatus 200, which may include a receiving module 201 and a playing module 202. The receiving module 201 may be configured to receive a first input on a currently played first video, where the first video includes N target moving objects. The playing module 202 may be configured to play a second video in response to the first input received by the receiving module 201, where the second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, the motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N.
Optionally, P = N; the video playing apparatus further comprises a determining module 203. The determining module 203 may be configured to synthesize a first target video frame of the first video and a second target video frame in each third video according to the input parameters to obtain M first fusion video frames, where the first target video frame and the second target video frame are video frames recorded at the same time; and synthesize the M first fusion video frames according to the input parameters to obtain the second video.
Optionally, the input parameters include an input track. The determining module 203 may be specifically configured to determine, according to the input track, a first fusion position, in the first target video frame, of the second target video frame of each third video; and synthesize the first target video frame and the second target video frame in each third video according to the first fusion position to obtain the M first fusion video frames.
Optionally, the input parameters include an input force. The determining module 203 may be specifically configured to determine, according to the input force, a first time interval between two adjacent first fusion video frames among the M first fusion video frames; and synthesize the M first fusion video frames according to the first time interval to obtain the second video.
Optionally, P < N; the video playing apparatus may also include the determining module 203. The determining module 203 may be configured to synthesize a third target video frame of the first video and a fourth target video frame of each of the P third videos according to the input parameters to obtain M second fusion video frames, where the third target video frame and the fourth target video frame are video frames recorded at the same time; synthesize each second fusion video frame with (N-P) fifth target video frames respectively to obtain M third fusion video frames, where each fifth target video frame is a video frame of one of the (N-P) third videos and is recorded at the same time as the third target video frame; and synthesize the M third fusion video frames to obtain the second video. The P third videos are the videos corresponding to the P target moving objects.
Optionally, the determining module 203 may be specifically configured to determine, according to the composition positions of the (N-P) target moving objects in the third target video frame, second fusion positions of the (N-P) fifth target video frames in each second fusion video frame; and synthesize the (N-P) fifth target video frames with each second fusion video frame according to the second fusion positions to obtain the M third fusion video frames.
Optionally, the determining module 203 may be specifically configured to synthesize the M third fusion video frames according to a second time interval to obtain the second video, where the second time interval is the time interval between any two adjacent video frames in the first video.
Optionally, the video playing apparatus may further include a control module 204. The receiving module 201 may further be configured to receive a third input. The control module 204 may be configured to, in response to the third input received by the receiving module 201, control the first camera of the electronic device to record the first video and control each of the N second cameras of the electronic device to record one third video, where each third video is a video recorded by one second camera while tracking the motion of one target moving object during the recording of the first video.
The embodiments of the present application provide a video playing apparatus. While a video including at least one moving object is being played, the user can trigger, through an input on that video, the playing of a fused video synthesized from that video and other videos each including one of the moving objects, and the motion parameters of some of the moving objects change according to the input parameters of that input. That is, the user can freely change the motion parameters (for example, the motion speed and/or the motion trajectory) of the moving objects in the played video according to the user's own wishes. This greatly improves the flexibility of video playing and makes it more user-friendly.
The video playing apparatus in the embodiment of the present application may be an apparatus, or a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this respect.
The video playing apparatus in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video playing device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 6, and can achieve the same technical effect, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 8, an embodiment of the present application further provides an electronic device 300, including a processor 301, a memory 302, and a program or instructions stored in the memory 302 and executable on the processor 301. When executed by the processor 301, the program or instructions implement the processes of the above video playing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the components; the power source may be logically connected to the processor 410 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, which is not repeated here.
The user input unit 407 is configured to receive a first input on a currently played first video, where the first video includes N target moving objects. The processor 410 is configured to play a second video in response to the first input received by the user input unit 407, where the second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, the motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N.
Optionally, P = N. The processor 410 is further configured to synthesize a first target video frame of the first video and a second target video frame in each third video according to the input parameters to obtain M first fusion video frames, where the first target video frame and the second target video frame are video frames recorded at the same time; and synthesize the M first fusion video frames according to the input parameters to obtain the second video.
Optionally, the input parameters include an input track; the processor 410 is specifically configured to determine, according to the input track, a first fusion position, in the first target video frame, of the second target video frame of each third video; and synthesize the first target video frame and the second target video frame in each third video according to the first fusion position to obtain the M first fusion video frames.
Optionally, the input parameters include an input force; the processor 410 is specifically configured to determine, according to the input force, a first time interval between two adjacent first fusion video frames among the M first fusion video frames; and synthesize the M first fusion video frames according to the first time interval to obtain the second video.
Optionally, P < N; the processor 410 is further configured to synthesize a third target video frame of the first video and a fourth target video frame of each of the P third videos according to the input parameters to obtain M second fusion video frames, where the third target video frame and the fourth target video frame are video frames recorded at the same time; synthesize each second fusion video frame with (N-P) fifth target video frames respectively to obtain M third fusion video frames, where each fifth target video frame is a video frame of one of the (N-P) third videos and is recorded at the same time as the third target video frame; and synthesize the M third fusion video frames to obtain the second video. The P third videos are the videos corresponding to the P target moving objects.
Optionally, the processor 410 is specifically configured to determine, according to the composition positions of the (N-P) target moving objects in the third target video frame, second fusion positions of the (N-P) fifth target video frames in each second fusion video frame; and synthesize the (N-P) fifth target video frames with each second fusion video frame according to the second fusion positions to obtain the M third fusion video frames.
Optionally, the processor 410 is specifically configured to synthesize the M third fusion video frames according to a second time interval to obtain the second video, where the second time interval is the time interval between any two adjacent video frames in the first video.
Optionally, the user input unit 407 is further configured to receive a third input. The processor 410 is configured to, in response to the third input received by the user input unit 407, control the first camera of the electronic device to record the first video and control each of the N second cameras of the electronic device to record one third video, where each third video is a video recorded by one second camera while tracking the motion of one target moving object during the recording of the first video.
The embodiments of the present application provide an electronic device. While a video including at least one moving object is being played, the user can trigger, through an input on that video, the playing of a fused video synthesized from that video and other videos each including one of the moving objects, and the motion parameters of some of the moving objects change according to the input parameters of that input. That is, the user can freely change the motion parameters (for example, the motion speed and/or the motion trajectory) of the moving objects in the played video according to the user's own wishes. This greatly improves the flexibility of video playing and makes it more user-friendly.
It should be understood that, in the embodiment of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071, also referred to as a touch screen, and other input devices 4072. The touch panel 4071 may include two parts: a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 409 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 410 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 410.
The embodiment of the present application further provides a readable storage medium storing a program or instructions which, when executed by a processor, implement each process of the above video playing method embodiment and achieve the same technical effect. To avoid repetition, the details are not repeated here.
For the beneficial effects of the various implementations in this embodiment, reference may be made to the beneficial effects of the corresponding implementations in the above method embodiments; to avoid repetition, they are not described again here.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes computer-readable storage media such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, which includes a processor and a communication interface coupled to the processor. The processor is configured to run a program or instructions to implement each process of the above video playing method embodiment and achieve the same technical effect.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. It should further be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; they may also be performed substantially simultaneously or in reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and including instructions that enable a terminal (such as a mobile phone, computer, server, air conditioner, or network device) to execute the methods of the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, the application is not limited to the specific embodiments described above, which are illustrative rather than restrictive. Those skilled in the art may make various changes without departing from the spirit of the application and the scope of the appended claims, and all such changes fall within the protection of the present application.

Claims (10)

1. A video playing method, comprising:
receiving a first input of a first video, wherein the first video comprises N target moving objects;
in response to the first input, playing a second video;
the second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N.
2. The method of claim 1, wherein P = N;
before the playing the second video, the method further comprises:
synthesizing a first target video frame of the first video and a second target video frame in each third video according to the input parameters to obtain M first fusion video frames, wherein the first target video frame and the second target video frame are video frames recorded at the same time;
and synthesizing the M first fusion video frames according to the input parameters to obtain the second video.
3. The method of claim 2, wherein the input parameters include an input trajectory;
synthesizing a first target video frame of the first video and a second target video frame of each third video according to the input parameters to obtain M first fusion video frames, including:
determining a first fusion position of a second target video frame in each third video in the first target video frame according to the input track;
and synthesizing the first target video frame and the second target video frame in each third video according to the first fusion position to obtain M first fusion video frames.
4. The method of claim 2, wherein the input parameters include input force;
the synthesizing the M first fusion video frames according to the input parameters to obtain the second video includes:
determining a first time interval between two adjacent first fusion video frames in the M first fusion video frames according to the input force;
and synthesizing the M first fusion video frames according to the first time interval to obtain the second video.
5. The method of claim 1, wherein P < N;
before the playing the second video, the method further comprises:
synthesizing a third target video frame of the first video and a fourth target video frame of each of the P third videos according to the input parameters to obtain M second fusion video frames, wherein the third target video frame and the fourth target video frame are video frames recorded at the same time;
respectively synthesizing each second fusion video frame and (N-P) fifth target video frames to obtain M third fusion video frames, wherein each fifth target video frame is a video frame of one third video in the (N-P) third videos, and each fifth target video frame and the third target video frame are video frames recorded at the same time;
synthesizing the M third fusion video frames to obtain the second video;
and the P third videos are videos corresponding to the P target moving objects.
6. A video playing device, comprising a receiving module and a playing module;
the receiving module is configured to receive a first input of a first video, wherein the first video comprises N target moving objects;
the playing module is configured to play a second video in response to the first input received by the receiving module;
the second video is a video obtained by synthesizing the first video and N third videos, each third video includes one of the N target moving objects, motion parameters of P target moving objects among the N target moving objects included in the second video are determined according to input parameters of the first input, N and P are positive integers, and P is less than or equal to N.
7. The apparatus of claim 6, wherein P = N; the video playing device further comprises a determining module;
the determining module is configured to synthesize a first target video frame of the first video and a second target video frame in each third video according to the input parameters to obtain M first fused video frames, where the first target video frame and the second target video frame are video frames recorded at the same time; and synthesizing the M first fusion video frames according to the input parameters to obtain the second video.
8. The apparatus of claim 7, wherein the input parameters comprise an input trajectory;
the determining module is specifically configured to determine, according to the input trajectory, a first fusion position of a second target video frame in each third video in the first target video frame; and synthesizing the first target video frame and the second target video frame in each third video according to the first fusion position to obtain M first fusion video frames.
9. The apparatus of claim 7, wherein the input parameters comprise an input force;
the determining module is specifically configured to determine, according to the input force, a first time interval between two adjacent first fusion video frames in the M first fusion video frames; and synthesizing the M first fusion video frames according to the first time interval to obtain the second video.
10. The apparatus of claim 6, wherein P < N; the video playing device further comprises a determining module;
the determining module is configured to synthesize a third target video frame of the first video and a fourth target video frame of each of the P third videos according to the input parameters to obtain M second fusion video frames, wherein the third target video frame and the fourth target video frame are video frames recorded at the same time; and to synthesize each second fusion video frame with (N-P) fifth target video frames, respectively, to obtain M third fusion video frames, wherein each fifth target video frame is a video frame of one third video in the (N-P) third videos, and each fifth target video frame and the third target video frame are video frames recorded at the same time;
synthesizing the M third fusion video frames to obtain the second video;
and the P third videos are videos corresponding to the P target moving objects.
CN202111362634.8A 2021-11-17 2021-11-17 Video playing method and device Active CN114143398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111362634.8A CN114143398B (en) 2021-11-17 2021-11-17 Video playing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111362634.8A CN114143398B (en) 2021-11-17 2021-11-17 Video playing method and device

Publications (2)

Publication Number Publication Date
CN114143398A 2022-03-04
CN114143398B CN114143398B (en) 2023-08-25

Family

ID=80389885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111362634.8A Active CN114143398B (en) 2021-11-17 2021-11-17 Video playing method and device

Country Status (1)

Country Link
CN (1) CN114143398B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760071A (en) * 2016-01-29 2016-07-13 深圳天珑无线科技有限公司 Method and system for rapidly adjusting video play progress through pressure touch technology
US20170034449A1 (en) * 2015-07-28 2017-02-02 Lg Electronics Inc. Mobile terminal and method for controlling same
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN106507170A (en) * 2016-10-27 2017-03-15 宇龙计算机通信科技(深圳)有限公司 A kind of method for processing video frequency and device
CN107277371A (en) * 2017-07-27 2017-10-20 青岛海信移动通信技术股份有限公司 A kind of method and device in mobile terminal amplification picture region
WO2020172826A1 (en) * 2019-02-27 2020-09-03 华为技术有限公司 Video processing method and mobile device
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
WO2021136134A1 (en) * 2019-12-30 2021-07-08 维沃移动通信有限公司 Video processing method, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN114143398B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN113766129B (en) Video recording method, video recording device, electronic equipment and medium
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112954199B (en) Video recording method and device
CN112954214B (en) Shooting method, shooting device, electronic equipment and storage medium
CN113938748B (en) Video playing method, device, terminal, storage medium and program product
CN112492215B (en) Shooting control method and device and electronic equipment
CN113014801B (en) Video recording method, video recording device, electronic equipment and medium
CN113873151A (en) Video recording method and device and electronic equipment
CN112565611A (en) Video recording method, video recording device, electronic equipment and medium
CN113794829B (en) Shooting method and device and electronic equipment
CN112333382B (en) Shooting method and device and electronic equipment
WO2023134583A1 (en) Video recording method and apparatus, and electronic device
CN113259743A (en) Video playing method and device and electronic equipment
CN112784081A (en) Image display method and device and electronic equipment
CN114143398B (en) Video playing method and device
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN112367467B (en) Display control method, display control device, electronic apparatus, and medium
CN114245017A (en) Shooting method and device and electronic equipment
CN113014799B (en) Image display method and device and electronic equipment
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN114520874B (en) Video processing method and device and electronic equipment
CN114157810B (en) Shooting method, shooting device, electronic equipment and medium
CN112672059B (en) Shooting method and shooting device
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN118200674A (en) Video processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant