CN105635772A - Picture-in-picture video playing method and mobile terminal supporting picture-in-picture video playing - Google Patents


Info

Publication number
CN105635772A
CN105635772A
Authority
CN
China
Prior art keywords
audio
data
image data
video
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201511027645.5A
Other languages
Chinese (zh)
Inventor
孙颖慧
杨子斌
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201511027645.5A priority Critical patent/CN105635772A/en
Publication of CN105635772A publication Critical patent/CN105635772A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/41 Structure of client; Structure of client peripherals
                • H04N 21/4104 Peripherals receiving signals from specially adapted client devices
                  • H04N 21/4126 The peripheral being portable, e.g. PDAs or mobile phones
              • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
                  • H04N 21/4312 Involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
                • H04N 21/439 Processing of audio elementary streams
                  • H04N 21/4394 Involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
                • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                  • H04N 21/44016 Involving splicing one content stream with another content stream, e.g. for substituting a video clip
              • H04N 21/47 End-user applications
                • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
                  • H04N 21/4722 For requesting additional data associated with the content
                • H04N 21/485 End-user interface for client configuration
                  • H04N 21/4858 For modifying screen layout parameters, e.g. fonts, size of the windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a picture-in-picture video playing method and a mobile terminal supporting picture-in-picture video playing. The method comprises: respectively extracting the audio data in background audio/video data and user audio/video data and synthesizing the extracted audio data into target audio data; respectively extracting the image data in the background audio/video data and the user audio/video data and synthesizing the extracted image data into target image data; and synthesizing the target audio data and the target image data into target audio/video data and playing the result. A picture-in-picture playing effect is thus achieved, an entirely new experience is brought to users, and user stickiness is improved.

Description

Picture-in-picture video playing method and mobile terminal supporting picture-in-picture video playing
Technical Field
The invention relates to the technical field of audio and video, in particular to a picture-in-picture video playing method and a mobile terminal supporting picture-in-picture video playing.
Background
At present, as audio and video playing develops, people place increasingly personalized demands on audio/video playing, audio/video synthesis and the like. Existing software with audio and video recording functions has shortcomings. For example, when a user wants to record a song, recording software can only store the user's vocals over a background accompaniment track. For another example, when a user wants to record a video, the result is usually a plain recording, which can at best be edited afterwards. Because the functions of existing recording software are so limited, a user who wants a simple, easy way to combine a recording of his own image with an existing video finds this difficult to achieve; professional synthesis and production by a professional organization is required, which is time-consuming, labor-intensive and uneconomical.
In summary, how to provide a simple and easy-to-operate picture-in-picture video playing method, and a mobile terminal supporting picture-in-picture video playing, has become a technical problem to be solved urgently.
Disclosure of Invention
The embodiments of the invention provide a picture-in-picture video playing method and a mobile terminal supporting picture-in-picture video playing, so as to overcome the defect that the prior art can only perform simple audio and video recording.
In order to solve the above problems, the present invention discloses a picture-in-picture video playing method, comprising the steps of:
respectively extracting the audio data in background audio and video data and user audio and video data, and synthesizing the extracted audio data into target audio data;
respectively extracting the image data in the background audio and video data and the user audio and video data, and synthesizing the extracted image data into target image data;
and synthesizing the target audio data and the target image data into target audio and video data and playing the target audio and video data.
In the method of the present invention, the step of respectively extracting the audio data in the background audio/video data and the user audio/video data and synthesizing the target audio data further comprises:
respectively extracting the audio data in the background audio and video data and the user audio and video data, and synthesizing the target audio data by means of a calibration time point.
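The time-point calibration and mixing described here can be illustrated with a minimal Python sketch (not part of the original disclosure; the function name, the 16-bit PCM sample representation and the simple clamped additive mix are assumptions):

```python
def mix_audio(background, user, user_offset=0):
    """Mix two sample lists into target audio data; `user_offset`
    shifts the user track so both start from the same calibrated
    time point. Samples are 16-bit PCM integers."""
    length = max(len(background), user_offset + len(user))
    mixed = []
    for i in range(length):
        b = background[i] if i < len(background) else 0
        u = user[i - user_offset] if 0 <= i - user_offset < len(user) else 0
        # Clamp the additive mix to the 16-bit PCM range.
        mixed.append(max(-32768, min(32767, b + u)))
    return mixed

# The user track enters one sample late, so sample 0 is background only.
target = mix_audio([1000, 2000, 3000], [500, 500], user_offset=1)
```

A real implementation would also resample the two tracks to a common sample rate before mixing; that step is omitted here.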
In the method of the present invention, the step of respectively extracting the image data in the background audio/video data and the user audio/video data and synthesizing the target image data further comprises:
respectively extracting the image data in the background audio and video data and the user audio and video data to obtain background image data and user image data; placing the background image data at the bottom layer of the playing picture and the user image data at one corner of the top layer of the playing picture; and combining the background image data and the user image data into the target image data.
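The layering just described, with the background picture at the bottom and the user picture in one corner of the top layer, can be sketched as follows (an illustrative sketch only; frames are modeled as plain 2-D lists of pixel values, and all names are hypothetical):

```python
def compose_pip(background, user, corner="top-right"):
    """Overlay the user frame onto one corner of the background frame.
    The background stays at the bottom layer; the user picture
    occupies the chosen corner of the top layer."""
    out = [row[:] for row in background]      # copy the bottom layer
    uh, uw = len(user), len(user[0])
    h, w = len(background), len(background[0])
    r0 = 0 if corner.startswith("top") else h - uh
    c0 = 0 if corner.endswith("left") else w - uw
    for r in range(uh):
        for c in range(uw):
            out[r0 + r][c0 + c] = user[r][c]  # user picture on top
    return out
```

Usage: `compose_pip(bg_frame, user_frame, corner="bottom-left")` produces one target frame; a full video would apply this per frame pair.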
In the method of the present invention, the step of synthesizing the target audio data and the target image data into target audio/video data and playing the target audio/video data further comprises:
synthesizing the target audio data and the target image data into target audio and video data by means of a calibration time point, and playing the target audio and video data.
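The final muxing step, pairing the synthesized audio with the synthesized frames on a shared time base, can be sketched as follows (illustrative only; the one-fixed-size-audio-chunk-per-frame record layout is an assumption):

```python
def synthesize_av(target_audio, target_frames, samples_per_frame):
    """Interleave target audio data and target image data into a
    target audio/video stream. The two tracks stay synchronized by
    the shared time base: frame i carries the audio samples for the
    same presentation interval."""
    stream = []
    for i, frame in enumerate(target_frames):
        start = i * samples_per_frame
        chunk = target_audio[start:start + samples_per_frame]
        stream.append({"pts": i, "frame": frame, "audio": chunk})
    return stream
```

A player would then render each record's frame while feeding its audio chunk to the sound device, which is what keeps picture and sound in step.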
In order to solve the above problem, the present invention also discloses a mobile terminal supporting picture-in-picture playing, which includes a storage unit, wherein the mobile terminal further includes: the device comprises a first audio data extraction unit, a second audio data extraction unit, a first image data extraction unit, a second image data extraction unit, an audio data synthesis unit, an image data synthesis unit and an audio and video data synthesis unit;
the storage unit is used for storing background audio and video data and user audio and video data;
the first audio data extraction unit is used for extracting background audio data from the background audio and video data stored in the storage unit;
the second audio data extraction unit is used for extracting user audio data from the user audio and video data stored in the storage unit;
the audio data synthesis unit is used for synthesizing the background audio data extracted by the first audio data extraction unit and the user audio data extracted by the second audio data extraction unit into target audio data;
the first image data extraction unit is used for extracting background image data from the background audio and video data stored in the storage unit;
the second image data extraction unit is used for extracting user image data from the user audio and video data stored in the storage unit;
the image data synthesis unit is used for synthesizing the background image data extracted by the first image data extraction unit and the user image data extracted by the second image data extraction unit into target image data;
and the audio and video data synthesis unit is used for synthesizing the target audio data synthesized by the audio data synthesis unit and the target image data synthesized by the image data synthesis unit into target audio and video data.
The audio data synthesis unit is further configured to synthesize the background audio data extracted by the first audio data extraction unit and the user audio data extracted by the second audio data extraction unit into target audio data by calibrating a time point.
The image data synthesis unit is further configured to place the background image data extracted by the first image data extraction unit at the bottom layer of the video, place the user image data extracted by the second image data extraction unit at one corner of the top layer of the video, and synthesize the background image data and the user image data into target image data.
The audio/video synthesis unit is further configured to synthesize the target audio data synthesized by the audio data synthesis unit and the target image data synthesized by the image data synthesis unit into target audio/video data in a time-aligned manner.
According to the picture-in-picture video playing method and the mobile terminal supporting picture-in-picture video playing, the background audio data and the user audio data are synthesized by means of time calibration; the background image data is placed at the bottom layer of the playing picture, the user image data is placed at one corner of the top layer of the playing picture, and the two are combined into the target image data; the target audio data and the target image data are then synthesized into target audio and video data, again by means of time calibration. A picture-in-picture playing effect is thus achieved, a new and more convenient experience is brought to users, and user stickiness is enhanced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart illustrating steps of an embodiment of a picture-in-picture video playing method according to the present invention;
FIG. 2 is a flowchart illustrating steps of another embodiment of a picture-in-picture video playing method according to the present invention;
fig. 3 is a block diagram of a mobile terminal supporting picture-in-picture video playing according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flow chart illustrating steps of a picture-in-picture video playing method according to an embodiment of the present invention is shown.
The method of the embodiment comprises the following steps:
step 1: respectively extracting the audio data in background audio and video data and user audio and video data, and synthesizing the extracted audio data into target audio data;
step 2: respectively extracting the image data in the background audio and video data and the user audio and video data, and synthesizing the extracted image data into target image data;
step 3: synthesizing the target audio data and the target image data into target audio and video data and playing the target audio and video data.
In this embodiment of the method, steps 1 and 2 may be performed in either order.
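The extraction in steps 1 and 2 amounts to demultiplexing: separating the audio track and the image track of one body of audio/video data before resynthesis. A toy sketch (the dictionary-based packet representation of a container file is an assumption, not part of the disclosure):

```python
def demux(av_data):
    """Split audio/video data, modeled as a list of
    {'audio': [...], 'frame': ...} packets, into its audio
    track and its image track."""
    audio, frames = [], []
    for packet in av_data:
        audio.extend(packet["audio"])   # concatenate audio samples
        frames.append(packet["frame"])  # collect image frames in order
    return audio, frames
```

Applied once to the background audio/video data and once to the user audio/video data, this yields the four inputs that the synthesis steps operate on.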
Example two
Referring to fig. 2, a flow chart illustrating steps of a picture-in-picture video playing method according to a second embodiment of the present invention is shown.
The method of the embodiment comprises the following steps:
step 101: respectively extracting the audio data in background audio and video data and user audio and video data, and synthesizing the extracted audio data into target audio data through a calibration time point;
step 102: respectively extracting the image data in the background audio and video data and the user audio and video data, placing the background image data at the bottom layer of the playing picture and the user image data at one corner of the top layer of the playing picture, and synthesizing the background image data and the user image data into target image data;
step 103: synthesizing the target audio data and the target image data into target audio and video data through a calibration time point, and playing the target audio and video data.
In this embodiment of the method, steps 101 and 102 may be performed in either order.
The background audio and video data can be downloaded and stored locally, and the user audio and video data can be a video recorded by the user.
In this embodiment, we can see that:
the data of the two channels of the background audio data and the user audio data are synthesized into the target audio data, so that the sound texture obtained by simply playing and recording the background audio data and the user audio data at the same time is enhanced.
Image data are respectively extracted from the background audio and video data and the user audio and video data; the background image data is placed at the bottom layer of the playing picture and the user image data is placed at one corner (any corner) of the top layer of the playing picture; the background image data and the user image data are then combined into target image data. A picture-in-picture effect is achieved.
The target audio data and the target image data are synthesized into target audio and video data through a calibration time point and played; that is, the two tracks are calibrated and synchronized in time during synthesis and playback. This brings a novel experience to the user and further improves user stickiness.
EXAMPLE III
Referring to fig. 3, a block diagram of a mobile terminal supporting picture-in-picture video playing according to a third embodiment of the present invention is shown.
The mobile terminal of the embodiment includes:
the audio data processing device comprises a storage unit 201, a first audio data extraction unit 202, a second audio data extraction unit 203, a first image data extraction unit 204, a second image data extraction unit 205, an audio data synthesis unit 206, an image data synthesis unit 207 and an audio and video data synthesis unit 208.
Wherein,
a storage unit 201, configured to store background audio/video data and user audio/video data;
a first audio data extracting unit 202, configured to extract background audio data from the background audio/video data stored in the storage unit 201;
a second audio data extracting unit 203, configured to extract user audio data from the user audio and video data stored in the storage unit 201;
an audio data synthesizing unit 206 configured to synthesize the background audio data extracted by the first audio data extracting unit 202 and the user audio data extracted by the second audio data extracting unit 203 into target audio data; here, the background audio data and the user audio data may be synthesized into the target audio data by means of time point calibration, i.e., time calibration and synchronization.
A first image data extraction unit 204, configured to extract background image data from the background audio/video data stored in the storage unit 201;
a second image data extracting unit 205, configured to extract user image data from the user audio and video data stored in the storage unit 201;
an image data synthesizing unit 207 configured to synthesize the background image data extracted by the first image data extraction unit 204 and the user image data extracted by the second image data extraction unit 205 into target image data; here, the background image data extracted by the first image data extraction unit 204 is placed at the bottom layer of the video, the user image data extracted by the second image data extraction unit 205 is placed at one corner (any corner) of the top layer of the video, and the background image data and the user image data are synthesized into the target image data.
An audio/video data synthesizing unit 208, configured to synthesize the target audio data synthesized by the audio data synthesizing unit 206 and the target image data synthesized by the image data synthesizing unit 207 into target audio/video data. Here, the two may be synthesized into target audio/video data by time calibration, that is, with the audio and image tracks calibrated and synchronized in time.
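The unit structure of this embodiment can be sketched as a set of cooperating classes (an illustrative sketch only; the class names mirror the units above, while the data representations and method signatures are assumptions):

```python
class StorageUnit:
    """Unit 201: holds background A/V data and user A/V data."""
    def __init__(self, background_av, user_av):
        self.background_av = background_av
        self.user_av = user_av

class AudioExtractionUnit:
    """Units 202/203: pull the audio track out of stored A/V data."""
    def extract(self, av_data):
        return av_data["audio"]

class ImageExtractionUnit:
    """Units 204/205: pull the image track out of stored A/V data."""
    def extract(self, av_data):
        return av_data["frames"]

class AudioSynthesisUnit:
    """Unit 206: mix background and user audio sample-by-sample."""
    def synthesize(self, background, user):
        return [b + u for b, u in zip(background, user)]

class ImageSynthesisUnit:
    """Unit 207: pair each background (bottom-layer) frame with the
    user frame overlaid at a top-layer corner."""
    def synthesize(self, background, user):
        return [{"bottom": b, "corner": u} for b, u in zip(background, user)]

class AVSynthesisUnit:
    """Unit 208: pair synchronized audio chunks and composed frames
    into target audio/video data."""
    def synthesize(self, audio, frames):
        return list(zip(audio, frames))
```

Wiring the units together in the order the embodiment describes gives one end-to-end pass: extract, synthesize audio, synthesize images, then mux.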
The mobile terminal supporting picture-in-picture video playing of this embodiment is used to implement the corresponding picture-in-picture video playing method in the first embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A picture-in-picture video playback method, comprising:
respectively extracting audio data in background audio and video data and user audio and video data, and synthesizing target audio data;
respectively extracting image data in background audio and video data and user audio and video data, and synthesizing target image data;
and synthesizing the synthesized target audio data and the synthesized image data into target audio and video data and playing the target audio and video data.
2. The method of claim 1, wherein: the method for respectively extracting the audio data in the background audio and video data and the audio and video data of the user and synthesizing the target audio data comprises the following steps:
and respectively extracting audio data in the background audio and video data and the user audio and video data, and synthesizing target audio data through calibrating time points.
3. The method of claim 1, wherein: the method for respectively extracting the image data in the background audio and video data and the user audio and video data and synthesizing the target image data comprises the following steps:
the method comprises the steps of respectively extracting image data in background audio and video data and user audio and video data to respectively obtain background image data and user image data, placing the background image data at the bottom layer of a playing picture, placing the user image data at one corner of the top layer of the playing picture, and combining the background image data and the user image data into target image data.
4. A method according to any one of claims 1-3, characterized in that: the synthesizing the synthesized target audio data and the image data into target audio and video data and playing the target audio and video data comprises the following steps:
and synthesizing the synthesized target audio data and the synthesized image data into target audio and video data through a calibration time point and playing the target audio and video data.
5. A mobile terminal supporting picture-in-picture playing, comprising a storage unit, and further comprising: the device comprises a first audio data extraction unit, a second audio data extraction unit, a first image data extraction unit, a second image data extraction unit, an audio data synthesis unit, an image data synthesis unit and an audio and video data synthesis unit;
the storage unit is used for storing background audio and video data and user audio and video data;
the first audio data extraction unit is used for extracting background audio data from the background audio and video data stored in the storage unit;
the second audio data extraction unit is used for extracting user audio data from the user audio and video data stored in the storage unit;
the audio data synthesis unit is used for synthesizing the background audio data extracted by the first audio data extraction unit and the user audio data extracted by the second audio data extraction unit into target audio data;
the first image data extraction unit is used for extracting background image data from the background audio and video data stored in the storage unit;
the second image data extraction unit is used for extracting user image data from the user audio and video data stored in the storage unit;
the image data synthesis unit is used for synthesizing the background image data extracted by the first image data extraction unit and the user image data extracted by the second image data extraction unit into target image data;
the audio and video data synthesis unit is used for synthesizing the target audio data synthesized by the audio data synthesis unit and the target image data synthesized by the image data synthesis unit into target audio and video data.
6. The mobile terminal according to claim 5, wherein the audio data synthesizing unit is further configured to synthesize the background audio data extracted by the first audio data extracting unit and the user audio data extracted by the second audio data extracting unit into target audio data by calibrating a time point.
7. The mobile terminal according to claim 5, wherein the image data synthesizing unit is further configured to place the background image data extracted by the first image data extraction unit at the bottom layer of the video, place the user image data extracted by the second image data extraction unit at one corner of the top layer of the video, and synthesize the background image data and the user image data into target image data.
8. The mobile terminal according to any one of claims 5 to 7, wherein the audio/video synthesis unit is further configured to synthesize the target audio data synthesized by the audio data synthesis unit and the target image data synthesized by the image data synthesis unit into target audio/video data by means of time calibration.
CN201511027645.5A 2015-12-30 2015-12-30 Picture-in-picture video playing method and mobile terminal supporting picture-in-picture video playing Pending CN105635772A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511027645.5A CN105635772A (en) 2015-12-30 2015-12-30 Picture-in-picture video playing method and mobile terminal supporting picture-in-picture video playing

Publications (1)

Publication Number Publication Date
CN105635772A true CN105635772A (en) 2016-06-01

Family

ID=56050195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511027645.5A Pending CN105635772A (en) 2015-12-30 2015-12-30 Picture-in-picture video playing method and mobile terminal supporting picture-in-picture video playing

Country Status (1)

Country Link
CN (1) CN105635772A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1860785A (en) * 2003-10-15 2006-11-08 索尼株式会社 Reproducing device, reproducing method, reproducing program, and recording medium
CN101261864A (en) * 2008-04-21 2008-09-10 中兴通讯股份有限公司 A method and system for mixing recording voice at a mobile terminal
CN101808214A (en) * 2005-06-20 2010-08-18 夏普株式会社 Video data reproducing apparatus, video data generating apparatus and recording medium
CN103428555A (en) * 2013-08-06 2013-12-04 乐视网信息技术(北京)股份有限公司 Multi-media file synthesis method, system and application method
CN104883516A (en) * 2015-06-05 2015-09-02 福建星网视易信息系统有限公司 Method and system for producing real-time singing video

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1860785A (en) * 2003-10-15 2006-11-08 索尼株式会社 Reproducing device, reproducing method, reproducing program, and recording medium
CN101808214A (en) * 2005-06-20 2010-08-18 夏普株式会社 Video data reproducing apparatus, video data generating apparatus and recording medium
CN101261864A (en) * 2008-04-21 2008-09-10 中兴通讯股份有限公司 A method and system for mixing recording voice at a mobile terminal
CN103428555A (en) * 2013-08-06 2013-12-04 乐视网信息技术(北京)股份有限公司 Multi-media file synthesis method, system and application method
CN104883516A (en) * 2015-06-05 2015-09-02 福建星网视易信息系统有限公司 Method and system for producing real-time singing video

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686440A (en) * 2016-12-28 2017-05-17 杭州趣维科技有限公司 Fast and efficient picture-in-picture video production method for mobile phone platforms
CN112970041A (en) * 2018-11-05 2021-06-15 恩德尔声音有限公司 System and method for creating a personalized user environment
CN112970041B (en) * 2018-11-05 2023-03-24 恩德尔声音有限公司 System and method for creating a personalized user environment

Similar Documents

Publication Publication Date Title
US10939069B2 (en) Video recording method, electronic device and storage medium
EP3236345A1 (en) An apparatus and associated methods
JP6093289B2 (en) Video processing apparatus, video processing method, and program
JP2017503394A (en) VIDEO PROCESSING METHOD, VIDEO PROCESSING DEVICE, AND DISPLAY DEVICE
CN109495684A (en) Video shooting method and apparatus, electronic device, and readable medium
US8798437B2 (en) Moving image processing apparatus, computer-readable medium storing thumbnail image generation program, and thumbnail image generation method
CN102868862A (en) Method and equipment for dubbing video applied to mobile terminal
JP6471418B2 (en) Image / sound distribution system, image / sound distribution device, and image / sound distribution program
CN105635772A (en) Picture-in-picture video playing method and mobile terminal supporting picture-in-picture video playing
CN104991950A (en) Picture generating method, display method and corresponding devices
CN109862385B (en) Live broadcast method and device, computer readable storage medium and terminal equipment
JP6227456B2 (en) Music performance apparatus and program
WO2018196811A1 (en) Method and device for determining inter-cut time bucket in audio/video
CN106604144A (en) Video processing method and device
WO2015195390A1 (en) Multiple viewpoints of an event generated from mobile devices
JP5310682B2 (en) Karaoke equipment
EP3321795B1 (en) A method and associated apparatuses
JP6893117B2 (en) Audio / video playback device
WO2017026387A1 (en) Video-processing device, video-processing method, and recording medium
KR20120097785A (en) Interactive media mapping system and method thereof
CN112584225A (en) Video recording processing method, video playing control method and electronic equipment
Cremer et al. Machine-assisted editing of user-generated content
CN104506751A (en) Method and device for generating electronic postcard with voice
KR20130092692A (en) Method and computer readable recording medium for making electronic book which can be realized by user voice
CN109120977A (en) Methods of exhibiting, storage medium, electronic equipment and the system of live video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160601