CN114143398B - Video playing method and device

Video playing method and device

Info

Publication number
CN114143398B
Authority
CN
China
Prior art keywords
video
target
input
fusion
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111362634.8A
Other languages
Chinese (zh)
Other versions
CN114143398A (en)
Inventor
兰天成
黄春成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Weiwo Software Technology Co ltd
Original Assignee
Xi'an Weiwo Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Weiwo Software Technology Co ltd filed Critical Xi'an Weiwo Software Technology Co ltd
Priority to CN202111362634.8A
Publication of CN114143398A
Application granted
Publication of CN114143398B

Classifications

    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video playing method and device, belonging to the field of communication technology, which can solve the problem that the video playing mode is relatively rigid and inflexible. The method comprises the following steps: receiving a first input to a first video, the first video including N target moving objects; and playing a second video in response to the first input. The second video is obtained by synthesizing the first video with N third videos, each third video including one of the N target moving objects; the second video includes P of the N target moving objects, and the motion parameters of the P target moving objects are determined according to input parameters of the first input, where N and P are positive integers and P is less than or equal to N. The method is applied to video playing scenarios.

Description

Video playing method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video playing method and device.
Background
With the development of communication technology, electronic devices are used ever more widely in users' daily lives. For example, a user can record videos with an electronic device, capturing interesting events or happy moments in life.
Typically, when a user wants to capture a moving subject with an electronic device, the user can trigger the electronic device to record a video while tracking the subject. After the recording is finished, the user can trigger the electronic device to play the video. The motion track and motion speed of the subject in the video are fixed: they are determined by the subject's motion track and motion speed at the time the video was shot. As a result, the video playing mode is relatively rigid and inflexible.
Disclosure of Invention
The embodiments of the application aim to provide a video playing method and device that can solve the problem that the video playing mode is relatively rigid and inflexible.
To solve the above technical problem, the application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video playing method, the method comprising: receiving a first input to a first video, the first video including N target moving objects; and playing a second video in response to the first input. The second video is obtained by synthesizing the first video with N third videos, each third video including one of the N target moving objects; the second video includes P of the N target moving objects, and the motion parameters of the P target moving objects are determined according to input parameters of the first input, where N and P are positive integers and P is less than or equal to N.
In a second aspect, an embodiment of the present application provides a video playing device, comprising a receiving module and a playing module. The receiving module is configured to receive a first input to a first video, the first video including N target moving objects. The playing module is configured to play a second video in response to the first input received by the receiving module. The second video is obtained by synthesizing the first video with N third videos, each third video including one of the N target moving objects; the second video includes P of the N target moving objects, and the motion parameters of the P target moving objects are determined according to input parameters of the first input, where N and P are positive integers and P is less than or equal to N.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the method as in the first aspect described above when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method as in the first aspect described above.
In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions to implement a method as in the first aspect described above.
In the embodiment of the application, a first input to a first video is received, the first video including N target moving objects; a second video is played in response to the first input; the second video is obtained by synthesizing the first video with N third videos, each third video including one of the N target moving objects; the second video includes P of the N target moving objects, and the motion parameters of the P target moving objects are determined according to input parameters of the first input, where N and P are positive integers and P is less than or equal to N. In this way, while a video including at least one moving object is played, a single input on that video triggers playback of a fused video synthesized from that video and further videos each containing one of the moving objects, and the motion parameters of the affected moving objects are changed according to the input parameters of that input. In other words, the user can freely change the motion parameters (such as the motion speed and/or the motion track) of a moving object in the played video according to his or her own wishes. This greatly improves the flexibility of video playing and makes it more user-friendly.
Drawings
Fig. 1 is a schematic diagram of a video playing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an interface for playing video according to an embodiment of the present application;
FIG. 3 is a first schematic diagram of an interface for video frame synthesis according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of an interface for video frame synthesis according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of an interface for video frame synthesis according to an embodiment of the present application;
fig. 6 is an interface schematic diagram of video recording according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a video playing device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following clearly and fully describes the embodiments of the present application with reference to the accompanying drawings. Evidently, the described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that terms so used are interchangeable where appropriate, so that embodiments of the application can be practiced in orders other than those illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In embodiments of the application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more; for example, a plurality of elements means two or more elements.
In general, when a user triggers an electronic device to play a video, the motion track and motion speed of a shooting object in the video are determined by the object's motion track and speed at the time the video was shot, so the shooting object can only move according to the motion parameters recorded at shooting time. The playing mode of the video is therefore relatively rigid and inflexible.
To address the above technical problem, embodiments of the present application provide a video playing method: while a video including at least one moving object is played, a single input on that video triggers playback of a fused video synthesized from that video and further videos each containing one of the moving objects, and the motion parameters of the affected moving objects are changed according to the input parameters of that input. In other words, the user can freely change the motion parameters (for example, the motion speed and/or the motion track) of a moving object in the played video at will. This greatly improves the flexibility of video playing and makes it more user-friendly.
The video playing method, video playing device and electronic device provided by the embodiments of the application are described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a video playing method including the following steps S101 and S102.
S101, the video playing device receives a first input of a first video.
The first video includes N target moving objects.
Optionally, in an embodiment of the present application, the first video is a currently playing video.
Optionally, the first input may be a touch input, a voice input, or a gesture input performed by the user on the currently played first video. For example, the touch input is a sliding input on a video frame of the currently played first video. Of course, the first input may also be another possible input, which is not limited in the embodiment of the present application.
Alternatively, the target moving object may be a living body or a non-living body. For example, the target moving object is a flying wild goose; for another example, the target moving object is a soccer ball being kicked.
Further, the N target moving objects may be the same type of moving object, or different types of moving objects, which is not limited in the embodiment of the present application.
It may be understood that when S101 is executed, the video playing device is playing the first video. Therefore, before S101, the video playing method provided by the embodiment of the present application may further include: the video playing device receives a user input on a first thumbnail, and plays the first video in response to that input, where the first thumbnail indicates the first video.
S102, the video playing device responds to the first input and plays the second video.
The second video is obtained by synthesizing the first video with N third videos, each third video including one of the N target moving objects; the second video includes P of the N target moving objects, and the motion parameters of the P target moving objects are determined according to input parameters of the first input, where N and P are positive integers and P is less than or equal to N.
Optionally, in an embodiment of the present application, the motion parameters may include at least one of: a motion track and a motion speed. The motion track also determines the direction of motion.
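For illustration only, the motion parameters described above can be modeled as a simple record. The following is a minimal Python sketch; the type and field names are hypothetical and are not defined by the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MotionParams:
    # Ordered 2-D positions (in pixels) forming the motion track;
    # the ordering of the points also fixes the direction of motion.
    track: List[Tuple[float, float]]
    # Motion speed, e.g. in pixels per second of played video.
    speed: float
```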
Optionally, in the embodiment of the present application, the first video and the N third videos may be videos recorded by the electronic device (i.e., the local device) and saved in its local storage space; or videos downloaded online by the electronic device; or videos received by the electronic device (locally) from other devices. This may be determined according to actual use conditions and is not limited in the embodiment of the present application.
For example, take the video playing device being a mobile phone and N = 1. The mobile phone plays the first video and, as shown in (a) of fig. 2, displays one video frame of the first video, which includes one target moving object. Shown in (b) of fig. 2 are a plurality of video frames of the third video of that target moving object. In the first video played by the mobile phone, the target moving object walks straight to the right.
Optionally, in the embodiment of the present application, when P = N, the motion parameters of each of the N target moving objects are determined according to the input parameters of the first input; when P < N, the motion parameters of P of the N target moving objects are determined according to the input parameters of the first input. Thus, for these two possible cases, different second videos are obtained. The specific implementations are as follows:
first possible implementation
Alternatively, p=n; after S101, before S102, the video playing method provided in the embodiment of the present application may further include S103 and S104 described below.
S103, the video playing device responds to the first input, and synthesizes a first target video frame of the first video and a second target video frame in each third video according to the input parameters to obtain M first fusion video frames.
The first target video frame and the second target video frame are video frames recorded at the same time.
Optionally, the first target video frame is any one video frame of the first video, and the second target video frame is any one video frame of each of the N third videos. Specifically, a pair of video frames recorded at the same time is obtained from the first video and each third video, and the two frames are synthesized into one first fusion video frame; this is repeated to obtain M first fusion video frames.
Further, assume that the first video includes S video frames and each third video includes M video frames, where S and M are positive integers. Since the number of first fusion video frames is determined by the number of video frames of each third video, there are two possible cases:
A first possible case:
When S > M, the first video includes more video frames than each third video. In this case, the M video frames recorded at the same times as the video frames of each third video are determined from the S video frames, and then each first target video frame among those M video frames is synthesized with the second target video frame recorded at the same time in each third video, to obtain M first fusion video frames.
A second possible case:
When S = M, the first video includes the same number of video frames as each third video; that is, the first target video frame is one of the S video frames and the second target video frame is one of the M video frames of each third video. In this case, each first target video frame of the S video frames can be directly synthesized with the second target video frame recorded at the same time in each third video, to obtain M first fusion video frames.
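Both cases reduce to pairing frames by recording timestamp. The following Python sketch illustrates that pairing under the assumption that each video is available as a timestamp-sorted list of (timestamp, frame) tuples; this representation is an assumption made for illustration, not something defined by the patent:

```python
def align_frames(first_video, third_video):
    """Pair each of the M frames of a third video with the frame of the
    first video (S frames, S >= M) recorded at the same time.

    Both arguments are lists of (timestamp, frame) tuples sorted by
    timestamp. Returns M (first_frame, tracking_frame) pairs for fusion.
    """
    by_time = dict(first_video)  # timestamp -> frame of the first video
    pairs = []
    for ts, tracking_frame in third_video:
        if ts in by_time:  # when S == M, every timestamp matches
            pairs.append((by_time[ts], tracking_frame))
    return pairs
```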
Alternatively, in the case where the input parameter includes an input track, the above S103 may be specifically implemented by S103A and S103B described below.
S103A, the video playing device responds to the first input, and determines a first fusion position of the second target video frame in each third video in the first target video frame according to the input track.
Alternatively, the input track may include a track direction and a track shape.
It should be noted that when the first target video frame is the video frame recorded at the first moment of the first video, the first fusion position is the composition position of the P target moving objects in that first target video frame; that is, the first fusion position at this point is the starting position of the P target moving objects. The first fusion positions of the second target video frames in the remaining video frames of the first video are then allocated in sequence according to the track direction and track shape of the input track.
S103B, the video playing device synthesizes the first target video frame with the second target video frame in each third video according to the first fusion positions, to obtain M first fusion video frames.
It can be appreciated that the fusion positions in adjacent fusion video frames among the M first fusion video frames connect to form a fusion track, and this fusion track is consistent with the input track of the first input. In this way, the motion track of each target moving object in the second video coincides with the input track of the first input.
Illustratively, in conjunction with the description of fig. 2 above, the mobile phone displays a first target video frame as shown in (a) of fig. 3, which includes one moving object: a person. Since the track direction of the finger sliding track is upward and its shape is a vertical line, the first fusion positions of the 5 second target video frames shown in (b) of fig. 3 are determined according to the sliding track. According to these first fusion positions, the 5 second target video frames are each synthesized with the first target video frame recorded at the same time, obtaining 5 first fusion video frames. The fusion track formed by the fusion positions of these frames is shown in (b) of fig. 3, so after the 5 first fusion video frames are synthesized, the moving object in the obtained second video moves straight upward along the sliding track.
Illustratively, in conjunction with the description of fig. 2 above, the mobile phone displays a first target video frame as shown in (a) of fig. 4, which includes one moving object: a person. Since the track direction of the finger sliding track is rightward and its shape is an arc, the first fusion positions of the 5 second target video frames shown in (b) of fig. 4 are determined according to the sliding track. According to these first fusion positions, the 5 second target video frames are each synthesized with the first target video frame recorded at the same time, obtaining 5 first fusion video frames. The fusion track formed by these frames is shown in (b) of fig. 4, so after the 5 first fusion video frames are synthesized, the moving object in the obtained second video moves rightward along an arc following the sliding track.
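A rough Python sketch of S103A and S103B follows. It assumes the input track is available as an ordered list of (x, y) points and that a caller-supplied compose(base, overlay, position) function performs the pixel-level compositing; both are assumptions made for illustration only:

```python
def sample_track(track_points, m):
    """Pick m roughly evenly spaced fusion positions along the input track.

    track_points: ordered (x, y) points of the user's sliding track; the
    first sampled point is the starting position of the moving object.
    """
    step = (len(track_points) - 1) / max(m - 1, 1)
    return [track_points[round(i * step)] for i in range(m)]

def fuse_along_track(first_frames, second_frames, track_points, compose):
    """S103A + S103B: synthesize each second target video frame with the
    first target video frame recorded at the same time, placing the moving
    object at the fusion position taken from the input track."""
    positions = sample_track(track_points, len(second_frames))
    return [compose(base, overlay, pos)
            for (base, overlay), pos
            in zip(zip(first_frames, second_frames), positions)]
```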
S104, the video playing device synthesizes the M first fusion video frames according to the input parameters to obtain a second video.
Alternatively, in the case where the input parameter includes an input force, S104 may be specifically implemented by S104A and S104B described below.
S104A, the video playing device determines a first time interval between two adjacent first fusion video frames in the M first fusion video frames according to the input force.
Optionally, when the first input is a sliding input, the electronic device may determine the input force according to the touch force of the user on the screen of the electronic device, or according to the line thickness of the input track.
Further, when the input force is determined according to the thickness of the line of the input track, the thicker the line of the input track is, the greater the input force is; conversely, the thinner the line of the input track, the smaller the input force.
It should be noted that, in the embodiment of the present application, the larger the input force, the larger the first time interval; the smaller the input force, the smaller the first time interval.
S104B, the video playing device synthesizes the M first fusion video frames according to the first time interval to obtain a second video.
It should be noted that, in the embodiment of the present application, the smaller the first time interval, the more first fusion video frames are consumed per unit time of the synthesized video, and thus the slower each target moving object moves in the obtained second video; that is, each target moving object in the second video shows a slow-motion effect. Conversely, the larger the first time interval, the fewer first fusion video frames are consumed per unit time, and the faster each target moving object moves in the obtained second video; that is, each target moving object shows an accelerated-motion effect. In this way, the user can adjust the motion speed of a target moving object in a video according to actual needs.
Illustratively, in conjunction with the description of fig. 4 above, the trace line of the finger sliding track shown in (a) of fig. 5 is thicker than that shown in (a) of fig. 4, so the input force of the first input is greater, and the number of second target video frames synthesized per unit time is reduced from the 5 frames shown in (b) of fig. 4 to the 3 frames shown in (b) of fig. 5. These 3 second target video frames are each synthesized with the first target video frame recorded at the same time, obtaining 3 first fusion video frames. The fusion track formed by these 3 first fusion video frames is shown in (b) of fig. 5, so after they are synthesized, the moving object in the obtained second video moves faster than the target moving object in fig. 4.
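The force-to-interval mapping of S104A can be sketched as follows. The linear formula below is an assumption chosen for illustration; the patent only fixes the direction of the relationship (the larger the input force, the larger the first time interval):

```python
def first_time_interval(input_force, base_interval=1 / 30):
    """Map a normalized input force in [0, 1] to the first time interval
    between adjacent first fusion video frames: the larger the force, the
    larger the interval, and the faster the apparent motion."""
    return base_interval * (0.5 + 1.5 * input_force)  # 0.5x .. 2x of base

def assemble_video(fused_frames, interval):
    """S104B: assign a presentation timestamp to each fusion video frame
    at the given interval, yielding the (timestamp, frame) sequence of
    the second video."""
    return [(i * interval, frame) for i, frame in enumerate(fused_frames)]
```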
Second possible implementation
Optionally, P < N; after S101, before S102, the video playing method provided in the embodiment of the present application may further include S105 to S107 described below.
S105, the video playing device responds to the first input, and synthesizes a third target video frame of the first video and a fourth target video frame of each third video in the P third videos according to the input parameters to obtain M second fusion video frames.
The third target video frame and the fourth target video frame are video frames recorded at the same time. The P third videos are videos corresponding to the P target moving objects.
Optionally, the input parameters may include at least one of: input track and input force.
Optionally, the third target video frame is any one video frame in the first video, and the fourth target video frame is any one video frame in each of the P third videos.
Optionally, for the above description of obtaining the M second fused video frames, reference may be made to the detailed description related to the M first fused video frames in the foregoing embodiment, which is not repeated in the embodiments of the present application.
Optionally, in the case of P < N, before S101, the video playing method provided by the embodiment of the present application may further include: the video playing device receives a user input on P of the N target moving objects included in the first video, and selects the P target moving objects in response to that input. In this way, when the first input is subsequently performed by the user, its input parameters adjust only the motion parameters of the P selected target moving objects.
S106, the video playing device synthesizes each second fusion video frame with the (N-P) fifth target video frames, respectively, to obtain M third fusion video frames.
Wherein each fifth target video frame is a video frame of a third video in the (N-P) third videos, and each fifth target video frame and the third target video frame are video frames recorded at the same time.
Alternatively, the above S106 may be specifically implemented by the following S106A and S106B.
S106A, the video playing device determines second fusion positions of (N-P) fifth target video frames in each second fusion video frame according to composition positions of (N-P) target moving objects in the third target video frames.
Specifically, the (N-P) composition positions of the (N-P) target moving objects in the third target video frame are obtained, and each composition position is used as the second fusion position, in each second fusion video frame, of one of the (N-P) fifth target video frames.
S106B, the video playing device synthesizes the (N-P) fifth target video frames and each second fusion video frame according to the second fusion position to obtain M third fusion video frames.
It should be noted that, since the second fusion positions of the (N-P) fifth target video frames in each second fusion video frame are determined by the composition positions of the (N-P) target moving objects in the third target video frames, the second fusion positions across the M third fusion video frames connect to form a fusion track that is consistent with the motion track (for example, track shape and track direction) of the (N-P) target moving objects in the first video. That is, the motion tracks of the (N-P) target moving objects are unchanged.
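A minimal Python sketch of S106A and S106B, continuing the conventions of the earlier sketches (per-frame lists and a caller-supplied compose function, both assumptions made for illustration):

```python
def fuse_unselected(second_fused, fifth_frames, positions, compose):
    """S106A + S106B: composite the (N - P) unselected objects onto each
    second fusion video frame at their original composition positions.

    second_fused: the M second fusion video frames.
    fifth_frames: one list of M fifth target video frames per unselected
        object (N - P lists in total).
    positions: positions[obj][i] is the (x, y) composition position of
        unselected object obj in the i-th third target video frame, so the
        resulting fusion track reproduces that object's original motion.
    """
    third_fused = []
    for i, frame in enumerate(second_fused):
        for obj, frames in enumerate(fifth_frames):
            frame = compose(frame, frames[i], positions[obj][i])
        third_fused.append(frame)
    return third_fused
```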
S107, the video playing device synthesizes the M third fusion video frames to obtain a second video.
Alternatively, the above S107 may be specifically realized by S107A described below.
S107A, the video playing device synthesizes the M third fusion video frames according to a second time interval to obtain the second video.
Wherein the second time interval is the time interval of any two video frames in the first video.
It should be noted that, since the time interval between any two adjacent video frames of the first video is fixed, when the M third fusion video frames are synthesized according to the second time interval to obtain the second video, the motion speed of the (N-P) target moving objects in the second video remains consistent with their motion speed in the first video. That is, the motion speed of the (N-P) target moving objects is unchanged.
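In terms of the earlier sketches, S107A can be illustrated as follows; treating the first video's frame spacing as constant mirrors the fixed second time interval described above:

```python
def native_interval(video):
    """The second time interval: the spacing between two adjacent frames
    of the first video, assumed constant across the whole video."""
    (t0, _), (t1, _) = video[0], video[1]
    return t1 - t0

# Reusing assemble_video from the earlier sketch keeps the (N - P)
# unselected objects moving at their original speed:
#   second_video = assemble_video(third_fused, native_interval(first_video))
```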
It will be appreciated that when N is greater than 1, the input track of the first input indicates that the motion tracks of the P target moving objects are to be adjusted to coincide with that input track. By acquiring the input track and the input force of the first input, the electronic device obtains the fusion positions of the video frames recorded at the same time in the first video and the N third videos, as well as the motion speed of the P target moving objects in the resulting fused video. In this way, moving objects that originally had different motion tracks and speeds can be made to keep the same motion track and speed along the track specified by the user's finger, while moving objects that originally shared one motion track can be made to move along different tracks under the user's triggering, which greatly increases the fun of video playing.
The embodiment of the application provides a video playing method: while a video including at least one moving object is played, a single input on that video triggers playback of a fused video synthesized from that video and further videos each containing one of the moving objects, and the motion parameters of the affected moving objects are changed according to the input parameters of that input; that is, the user can freely change the motion parameters (such as the motion speed and/or the motion track) of a moving object in the played video at will. This greatly improves the flexibility of video playing and makes it more user-friendly.
Optionally, in the case where the first video and the N third videos are videos recorded by the electronic device, before S101, the video playing method provided in the embodiment of the present application may further include the following S108 and S109.
S108, the video playing device receives a third input.
Optionally, the third input may be a touch input, a gesture input, a voice input, or the like. For example, the touch input is a user click input to a target control, which is used to trigger recording of a video. Of course, the third input may be other possible inputs, which are not limited by the embodiment of the present application.
Optionally, before triggering the camera of the electronic device to record a video, the user needs to trigger the electronic device to run the camera application. Accordingly, before S108, the video playing method provided by the embodiment of the present application may further include: the video playing device receives a user input on a camera application icon; in response to that input, it runs the camera application indicated by the icon and starts the first camera and the N second cameras of the electronic device. In this way, images of the target moving objects can be captured by the first camera and the N second cameras.
S109, the video playing device, in response to the third input, controls the first camera of the electronic device to record the first video and controls each of the N second cameras of the electronic device to record one third video.
Each third video is recorded during the recording of the first video by one second camera tracking the motion of one target moving object.
For example, the first camera may be a primary camera, and each second camera may be a high-angle periscope camera or a rotatable camera.
It should be noted that, when N is greater than 1, the N second cameras may be the same type of cameras, or different types of cameras. And in particular, according to actual use conditions, the embodiment of the present application is not limited thereto.
For example, take the video playing device being a mobile phone and N = 1. As shown in (a) of fig. 6, the mobile phone displays a preview interface. If the user wants to trigger the electronic device to record a video, the user can click control 01 in the preview interface. After receiving the click input, the mobile phone may respond to it as shown in (b) of fig. 6: it controls the main camera to record the first video and controls the periscope camera to record a third video.
It should be noted that, since a third video is recorded by one second camera tracking the motion of one target moving object during the recording of the first video, the motion details of the target moving object included in each video frame of that third video are clearer.
Further, since the first camera has a large field of view, each video frame of the first video includes, in addition to the images of the N target moving objects, a large area of background image, so the motion states of the N target moving objects are not clear enough. Synthesizing the first video with the N third videos to obtain the second video therefore renders the dynamic process of each moving object in the second video more clearly.
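As a schematic of S108 and S109 only: the Camera interface below is a placeholder introduced for illustration, and no real camera API of any platform is implied:

```python
class Camera:
    """Placeholder camera interface; no real device API is implied."""
    def start_recording(self): ...
    def track(self, target): ...

def start_all(main_cam: Camera, tracking_cams, targets):
    """S109: record the first video on the first camera while each second
    camera tracks one target moving object and records one third video."""
    main_cam.start_recording()                 # first video
    for cam, target in zip(tracking_cams, targets):
        cam.track(target)                      # follow one moving object
        cam.start_recording()                  # one third video per object
```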
According to the video playing method provided by the embodiment of the application, the user can, through an input, trigger the first camera of the electronic device to record the first video and control each of the N second cameras to record one third video. The electronic device thus stores multiple videos recorded by different cameras, so that when the user wants to view them, the user can view the video recorded by each individual camera as needed, or trigger the synthesis of the multiple videos to obtain a fused video (i.e., the second video).
It should be noted that, in the video playing method provided by the embodiment of the present application, the execution body may be a video playing device (the video playing device being an electronic device or an external device of an electronic device), or a control module in the video playing device for executing the video playing method. In the embodiments below, a video playing device executing the video playing method is taken as an example to describe the video playing device provided by the embodiment of the application.
As shown in fig. 7, an embodiment of the present application provides a video playing device 200, which may include a receiving module 201 and a playing module 202. The receiving module 201 may be configured to receive a first input to a currently played first video, the first video including N target moving objects. The playing module 202 is configured to play a second video in response to the first input received by the receiving module 201. The second video is obtained by synthesizing the first video with N third videos, each third video including one of the N target moving objects; the second video includes P of the N target moving objects, and the motion parameters of the P target moving objects are determined according to input parameters of the first input, where N and P are positive integers and P is less than or equal to N.
Optionally, P = N; the video playing device further comprises a determining module 203. The determining module 203 may be configured to synthesize, according to the input parameters, a first target video frame of the first video with a second target video frame in each third video to obtain M first fusion video frames, where the first target video frame and the second target video frame are video frames recorded at the same time; and to synthesize the M first fusion video frames according to the input parameters to obtain the second video.
Optionally, the input parameters include an input track. The determining module 203 may be specifically configured to determine, according to the input track, the first fusion position, in the first target video frame, of the second target video frame in each third video; and to synthesize the first target video frame with the second target video frame in each third video according to the first fusion positions to obtain the M first fusion video frames.
Optionally, the input parameters include an input force. The determining module 203 may be specifically configured to determine, according to the input force, a first time interval between two adjacent first fusion video frames among the M first fusion video frames; and to synthesize the M first fusion video frames according to the first time interval to obtain the second video.
Optionally, P < N; the video playing device may further comprise a determining module 203. The determining module 203 may be configured to synthesize, according to the input parameters, a third target video frame of the first video with a fourth target video frame of each of the P third videos to obtain M second fusion video frames, where the third target video frame and the fourth target video frame are video frames recorded at the same time; to synthesize each second fusion video frame with the (N-P) fifth target video frames, respectively, to obtain M third fusion video frames, where each fifth target video frame is a video frame of one of the (N-P) third videos and each fifth target video frame and the third target video frame are video frames recorded at the same time; and to synthesize the M third fusion video frames to obtain the second video, the P third videos being the videos corresponding to the P target moving objects.
Optionally, the determining module 203 may be specifically configured to determine the second fusion position of the (N-P) fifth target video frames in each second fusion video frame according to the composition positions of the (N-P) target moving objects in the third target video frames; and synthesizing (N-P) fifth target video frames and each second fusion video frame according to the second fusion positions to obtain M third fusion video frames.
Optionally, the determining module 203 may be specifically configured to synthesize M third fused video frames according to the second time interval to obtain a second video; the second time interval is the time interval of any two video frames in the first video.
Optionally, the video playing device may further include a control module 204. The receiving module 201 may also be configured to receive a third input. The control module 204 may be configured to, in response to the third input received by the receiving module 201, control the first cameras of the electronic device to record the first video, and control each of the N second cameras in the electronic device to record a third video respectively; the third video is recorded during the process of recording the first video, and a second camera is used for tracking the motion process of a target moving object.
The embodiment of the application provides a video playing device: while a video including at least one moving object is played, a single input on that video triggers playback of a fused video synthesized from that video and further videos each containing one of the moving objects, and the motion parameters of the affected moving objects are changed according to the input parameters of that input; that is, the user can freely change the motion parameters (such as the motion speed and/or the motion track) of a moving object in the played video at will. This greatly improves the flexibility of video playing and makes it more user-friendly.
The video playing device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in the terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (network attached storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and embodiments of the present application are not limited in particular.
The video playing device in the embodiment of the application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The video playing device provided by the embodiment of the application can realize each process realized by the method embodiments of fig. 1 to 6, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
Optionally, as shown in fig. 8, the embodiment of the present application further provides an electronic device 300, including a processor 301, a memory 302, and a program or an instruction stored in the memory 302 and capable of running on the processor 301, where the program or the instruction implements each process of the embodiment of the video playing method when executed by the processor 301, and the process can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The user input unit 407 is configured to receive a first input of a first video that is currently played, where the first video includes N target moving objects. A processor 410 for playing a second video in response to the first input received by the user input unit 407; the second video is obtained by synthesizing the first video and N third videos, each third video comprises one target moving object of N target moving objects, the second video comprises P target moving objects of the N target moving objects, the motion parameters of the P target moving objects are determined according to the input parameters of the first input, N and P are positive integers, and P is smaller than or equal to N.
Optionally, p=n. The processor 410 is further configured to synthesize, according to the input parameters, a first target video frame of the first video and a second target video frame in each third video, to obtain M first fused video frames, where the first target video frame and the second target video frame are video frames recorded at the same time; and synthesizing M first fusion video frames according to the input parameters to obtain a second video.
Optionally, the input parameters include an input track; the processor 410 is specifically configured to determine, according to the input track, the first fusion position, in the first target video frame, of the second target video frame in each third video; and to synthesize the first target video frame with the second target video frame in each third video according to the first fusion positions to obtain the M first fusion video frames.
Optionally, the input parameters include an input force; the processor 410 is specifically configured to determine, according to the input force, a first time interval between two adjacent first fusion video frames among the M first fusion video frames; and to synthesize the M first fusion video frames according to the first time interval to obtain the second video.
Optionally, P < N; the processor 410 is further configured to synthesize a third target video frame of the first video and a fourth target video frame of each third video of the P third videos according to the input parameters, to obtain M second fused video frames, where the third target video frame and the fourth target video frame are video frames recorded at the same time; respectively synthesizing each second fusion video frame and (N-P) fifth target video frames to obtain M third fusion video frames, wherein each fifth target video frame is a video frame of one third video in the (N-P) third videos, and each fifth target video frame and each third target video frame are video frames recorded at the same moment; synthesizing M third fusion video frames to obtain a second video; the P third videos are videos corresponding to P target moving objects.
Optionally, the processor 410 is specifically configured to determine the second fusion position of the (N-P) fifth target video frames in each second fusion video frame according to the composition position of the (N-P) target moving objects in the third target video frame; and synthesizing (N-P) fifth target video frames and each second fusion video frame according to the second fusion positions to obtain M third fusion video frames.
Optionally, the processor 410 is specifically configured to synthesize M third fused video frames according to the second time interval to obtain a second video; the second time interval is the time interval of any two video frames in the first video.
Optionally, the user input unit 407 is further configured to receive a third input. A processor 410, configured to control, in response to a third input received by the user input unit 407, the first camera of the electronic device to record a first video, and control each of the N second cameras in the electronic device to record a third video respectively; the third video is recorded during the process of recording the first video, and a second camera is used for tracking the motion process of a target moving object.
The embodiment of the application provides an electronic device: while a video including at least one moving object is played, a single input on that video triggers playback of a fused video synthesized from that video and further videos each containing one of the moving objects, and the motion parameters of the affected moving objects are changed according to the input parameters of that input; that is, the user can freely change the motion parameters (such as the motion speed and/or the motion track) of a moving object in the played video at will. This greatly improves the flexibility of video playing and makes it more user-friendly.
It should be appreciated that, in embodiments of the present application, the input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042; the graphics processor 4041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes a touch panel 4071, also referred to as a touch screen, and other input devices 4072. The touch panel 4071 may include two parts: a touch detection device and a touch controller. The other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 409 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 410 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 410.
The embodiment of the application also provides a readable storage medium on which a program or an instruction is stored; when the program or instruction is executed by a processor, the processes of the foregoing video playing method embodiment are implemented and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
For the beneficial effects of the various implementations in this embodiment, reference may be made to the beneficial effects of the corresponding implementations in the foregoing method embodiment. To avoid repetition, they are not repeated here.
The processor is the processor in the electronic device of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, which includes a processor and a communication interface coupled to the processor; the processor is configured to run programs or instructions to implement the processes of the foregoing video playing method embodiment and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative rather than restrictive. In light of the teachings of the present application, those of ordinary skill in the art may derive many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (10)

1. A video playing method, the method comprising:
receiving a first input to a first video, wherein the first video comprises N target moving objects;
playing a second video in response to the first input;
the second video is obtained by synthesizing the first video and N third videos, each third video comprises one target moving object of the N target moving objects, the second video comprises P target moving objects of the N target moving objects, the motion parameters of the P target moving objects are determined according to the input parameters of the first input, N and P are positive integers, and P is less than or equal to N;
before the receiving the first input to the first video, the method further comprises:
receiving a third input;
and in response to the third input, controlling a first camera of the electronic device to record the first video, and controlling each second camera of N second cameras of the electronic device to record one third video respectively.
2. The method of claim 1, wherein P = N;
before the playing of the second video, the method further includes:
synthesizing a first target video frame of the first video and a second target video frame in each third video according to the input parameters to obtain M first fusion video frames, wherein the first target video frame and the second target video frame are video frames recorded at the same moment;
and synthesizing the M first fusion video frames according to the input parameters to obtain the second video.
3. The method of claim 2, wherein the input parameter comprises an input trajectory;
wherein the synthesizing the first target video frame of the first video and the second target video frame in each third video according to the input parameters to obtain the M first fusion video frames comprises:
determining, according to the input trajectory, a first fusion position, in the first target video frame, of the second target video frame in each third video;
and synthesizing the first target video frame and the second target video frame in each third video according to the first fusion position, to obtain the M first fusion video frames.
4. The method of claim 2, wherein the input parameter comprises an input force;
wherein the synthesizing the M first fusion video frames according to the input parameters to obtain the second video comprises:
determining a first time interval between two adjacent first fusion video frames in the M first fusion video frames according to the input force;
and synthesizing the M first fusion video frames according to the first time interval to obtain the second video.
5. The method of claim 1, wherein P < N;
before the playing of the second video, the method further includes:
synthesizing a third target video frame of the first video and a fourth target video frame of each third video in the P third videos according to the input parameters to obtain M second fusion video frames, wherein the third target video frame and the fourth target video frame are video frames recorded at the same moment;
respectively synthesizing each second fusion video frame and (N-P) fifth target video frames to obtain M third fusion video frames, wherein each fifth target video frame is a video frame of one third video in (N-P) third videos, and each fifth target video frame and each third target video frame are video frames recorded at the same moment;
synthesizing the M third fusion video frames to obtain the second video;
the P third videos are videos corresponding to the P target moving objects.
6. A video playing device, comprising a receiving module, a playing module, and a processing module;
the receiving module is used for receiving a first input to a first video, and the first video comprises N target moving objects;
the playing module is used for responding to the first input received by the receiving module and playing a second video;
the second video is obtained by synthesizing the first video and N third videos, each third video comprises one target moving object of the N target moving objects, the second video comprises P target moving objects of the N target moving objects, the motion parameters of the P target moving objects are determined according to the input parameters of the first input, N and P are positive integers, and P is less than or equal to N;
the receiving module is further configured to receive a third input before receiving the first input to the first video;
the processing module is configured to control, in response to the third input received by the receiving module, a first camera of the electronic device to record the first video, and control each of N second cameras of the electronic device to record one third video respectively.
7. The apparatus of claim 6, wherein P = N; the video playing device further comprises a determining module;
the determining module is configured to synthesize, according to the input parameter, a first target video frame of the first video and a second target video frame in each third video, to obtain M first fused video frames, where the first target video frame and the second target video frame are video frames recorded at the same time; and synthesizing the M first fusion video frames according to the input parameters to obtain the second video.
8. The apparatus of claim 7, wherein the input parameter comprises an input trajectory;
the determining module is specifically configured to determine, according to the input trajectory, a first fusion position, in the first target video frame, of the second target video frame in each third video; and synthesize the first target video frame and the second target video frame in each third video according to the first fusion position, to obtain the M first fusion video frames.
9. The apparatus of claim 7, wherein the input parameter comprises an input force;
the determining module is specifically configured to determine a first time interval between two adjacent first fusion video frames in the M first fusion video frames according to the input force; and synthesize the M first fusion video frames according to the first time interval to obtain the second video.
10. The apparatus of claim 6, wherein P < N; the video playing device further comprises a determining module;
the determining module is configured to synthesize a third target video frame of the first video and a fourth target video frame of each third video in the P third videos according to the input parameters, to obtain M second fusion video frames, wherein the third target video frame and the fourth target video frame are video frames recorded at the same moment; and synthesize each second fusion video frame with (N-P) fifth target video frames respectively, to obtain M third fusion video frames, wherein each fifth target video frame is a video frame of one third video in the (N-P) third videos, and each fifth target video frame and the third target video frame are video frames recorded at the same moment;
and synthesize the M third fusion video frames to obtain the second video;
the P third videos are videos corresponding to the P target moving objects.
CN202111362634.8A 2021-11-17 2021-11-17 Video playing method and device Active CN114143398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111362634.8A CN114143398B (en) 2021-11-17 2021-11-17 Video playing method and device

Publications (2)

Publication Number Publication Date
CN114143398A (en) 2022-03-04
CN114143398B (en) 2023-08-25

Family

ID=80389885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111362634.8A Active CN114143398B (en) 2021-11-17 2021-11-17 Video playing method and device

Country Status (1)

Country Link
CN (1) CN114143398B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760071A (en) * 2016-01-29 2016-07-13 深圳天珑无线科技有限公司 Method and system for rapidly adjusting video play progress through pressure touch technology
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN106507170A (en) * 2016-10-27 2017-03-15 宇龙计算机通信科技(深圳)有限公司 A kind of method for processing video frequency and device
CN107277371A (en) * 2017-07-27 2017-10-20 青岛海信移动通信技术股份有限公司 A kind of method and device in mobile terminal amplification picture region
WO2020172826A1 (en) * 2019-02-27 2020-09-03 华为技术有限公司 Video processing method and mobile device
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
WO2021136134A1 (en) * 2019-12-30 2021-07-08 维沃移动通信有限公司 Video processing method, electronic device, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101678861B1 (en) * 2015-07-28 2016-11-23 엘지전자 주식회사 Mobile terminal and method for controlling the same

Also Published As

Publication number Publication date
CN114143398A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN113766129B (en) Video recording method, video recording device, electronic equipment and medium
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112954199B (en) Video recording method and device
CN113938748B (en) Video playing method, device, terminal, storage medium and program product
CN112333382B (en) Shooting method and device and electronic equipment
CN112954214B (en) Shooting method, shooting device, electronic equipment and storage medium
CN112492215B (en) Shooting control method and device and electronic equipment
CN113794829B (en) Shooting method and device and electronic equipment
CN113873151A (en) Video recording method and device and electronic equipment
WO2023134583A1 (en) Video recording method and apparatus, and electronic device
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112565611A (en) Video recording method, video recording device, electronic equipment and medium
CN114143398B (en) Video playing method and device
CN112784081A (en) Image display method and device and electronic equipment
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN112367467B (en) Display control method, display control device, electronic apparatus, and medium
CN113014799B (en) Image display method and device and electronic equipment
CN114245017A (en) Shooting method and device and electronic equipment
CN112702518B (en) Shooting method and device and electronic equipment
CN114650370A (en) Image shooting method and device, electronic equipment and readable storage medium
CN113347356A (en) Shooting method, shooting device, electronic equipment and storage medium
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN114520874B (en) Video processing method and device and electronic equipment
CN114157810B (en) Shooting method, shooting device, electronic equipment and medium
CN115134536B (en) Shooting method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant