WO2020093879A1 - Video synthesis method, apparatus, computer device and readable storage medium - Google Patents

Video synthesis method, apparatus, computer device and readable storage medium

Info

Publication number
WO2020093879A1
WO2020093879A1 (application PCT/CN2019/113117)
Authority
WO
WIPO (PCT)
Prior art keywords
video
encoding
video data
synthesized
time
Prior art date
Application number
PCT/CN2019/113117
Other languages
English (en)
French (fr)
Inventor
唐志新
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司 filed Critical 北京微播视界科技有限公司
Priority to EP19881284.4A priority Critical patent/EP3879815A4/en
Priority to BR112021008934-9A priority patent/BR112021008934A2/pt
Priority to KR1020217012651A priority patent/KR20210064353A/ko
Priority to US17/291,943 priority patent/US11706463B2/en
Priority to JP2021523609A priority patent/JP7213342B2/ja
Publication of WO2020093879A1 publication Critical patent/WO2020093879A1/zh


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/70 Media network packetisation
    • H04L 65/75 Media network packet handling
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/20 Coding using video object coding
    • H04N 19/27 Video object coding involving both synthetic and natural picture components, e.g. synthetic natural hybrid coding [SNHC]
    • H04N 19/40 Coding using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N 19/90 Coding using techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440236 Reformatting operations by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/77 Interface circuits between a recording apparatus and a television camera
    • H04N 9/00 Details of colour television systems
    • H04N 9/79 Processing of colour television signals in connection with recording
    • H04N 9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N 9/82 Transformation for recording, the individual colour picture signal components being recorded simultaneously only
    • H04N 9/8205 Simultaneous recording involving the multiplexing of an additional signal and the colour video signal
    • H04N 9/8227 Multiplexing wherein the additional signal is at least another television signal

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to video synthesis methods, devices, computer equipment, and readable storage media.
  • In the related art, a video encoder is used to synthesize the captured video with an existing video that is expected to be saved.
  • The compression algorithm used by the video encoder generally has a relatively high compression ratio.
  • Under the condition that the size of the target synthesized video is fixed, a higher compression ratio results in greater loss of the video picture, so that the definition of the video picture is lower.
  • an embodiment of the present disclosure provides a video synthesis method, including:
  • acquiring a first video;
  • collecting second video data captured in real time;
  • performing a first encoding on the second video data to obtain an encoded video;
  • synthesizing the first video and the encoded video to obtain synthesized video data; and
  • performing a second encoding on the synthesized video data to obtain a target video.
  • In some embodiments, the step of performing the first encoding on the second video data to obtain the encoded video includes: performing the first encoding on the second video data using a first encoding method to obtain the encoded video; and the step of performing the second encoding on the synthesized video data to obtain the target video includes: performing the second encoding on the synthesized video data using a second encoding method to obtain the target video, wherein the first encoding method is different from the second encoding method.
  • In some embodiments, in the step of performing the second encoding on the synthesized video data to obtain the target video, the bit rate of the second encoding is less than the bit rate of the first encoding.
  • the resolution of the second encoding is less than or equal to the resolution of the first encoding.
  • The step of performing the first encoding on the second video data using the first encoding method to obtain the encoded video includes: performing real-time encoding on the second video data using a first entropy encoding method to obtain the encoded video.
  • The step of performing the second encoding on the synthesized video data using the second encoding method to obtain the target video includes: performing non-real-time encoding on the synthesized video data using a second entropy encoding method to obtain the target synthesized video.
  • the duration of the second video is equal to the duration of the first video.
  • an embodiment of the present disclosure provides a video synthesis device, including:
  • an acquisition module configured to acquire a first video;
  • a collection module configured to collect second video data captured in real time;
  • a first encoding module configured to perform a first encoding on the second video data to obtain an encoded video;
  • a video synthesis module configured to synthesize the first video and the second video to obtain synthesized video data; and
  • a second encoding module configured to perform a second encoding on the synthesized video data to obtain a target video.
  • An embodiment of the present disclosure provides a computer device including a memory and a processor, wherein the memory stores a computer program that can run on the processor, and the processor, when executing the computer program, implements the steps of the method: acquiring a first video; collecting second video data captured in real time; performing a first encoding on the second video data to obtain an encoded video; synthesizing the first video and the encoded video to obtain synthesized video data; and performing a second encoding on the synthesized video data to obtain a target video.
  • A readable storage medium provided by an embodiment of the present disclosure has stored thereon a computer program that can run on a processor; when the processor executes the computer program, the steps of the method in the first aspect are implemented, including: acquiring a first video; collecting second video data captured in real time; performing a first encoding on the second video data to obtain an encoded video; synthesizing the first video and the encoded video to obtain synthesized video data; and performing a second encoding on the synthesized video data to obtain a target video.
  • The video data synthesis method, device, computer equipment, and readable storage medium provided in these embodiments yield a target video with less picture loss and higher definition.
  • FIG. 1 is an application scenario diagram of a video synthesis system provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of an application program interface provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a video synthesis method provided by another embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a video synthesis device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • the video synthesis method provided by the embodiments of the present disclosure may be applicable to the application scenario of the video synthesis system shown in FIG. 1, wherein the computer device 110 may be a smartphone, tablet computer, notebook computer, desktop computer, or personal digital assistant.
  • the embodiments of the present disclosure do not limit the specific form of the computer device 110.
  • the computer device 110 can support the operation of various application programs (for example, the application program 111).
  • The application program 111 may acquire the video material of the video to be synthesized through the image acquisition device of the computer device 110, synthesize the video material to obtain the corresponding video, and output the obtained video through the interface of the application program 111.
  • the application program 111 may be a material processing application such as video, image, etc., or a browser application.
  • the embodiment of the present disclosure does not limit the specific form of the application 111.
  • the execution subject may be a video synthesis device, and the device may be implemented as part or all of a computer device through software, hardware, or a combination of software and hardware.
  • FIG. 2 is a schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure.
  • the embodiments of the present disclosure relate to a specific process in which the computer device encodes the first synthesized video data to obtain the second synthesized video.
  • the video synthesis method includes:
  • The above-mentioned first video may be a video stored locally on the computer device, a local video previously shot and saved by the computer device, a local video previously downloaded and saved through the computer device, or a video stored in the cloud; the embodiments of the present disclosure are not limited in this respect.
  • the computer device may acquire the first video according to the received click instruction, where the acquired first video may be any local video, which is not limited in this embodiment of the present disclosure.
  • S102 Collect second video data captured in real time.
  • the computer device may shoot the second video data in real time according to the received shooting command.
  • the above-mentioned shooting command may be generated by the user pressing and holding a shooting button (for example, a hardware button, a virtual button, or an interactive interface, etc.) on the computer device.
  • The interface after the computer device opens the application program may be, for example, the interface shown in FIG. 3, and the application program may have a split-screen function. As shown in FIG. 3, the locations of the local video playback area and the shooting video playback area are preset by the application program.
  • the user opens the application and clicks the local video playback area on the right to select a locally saved first video.
  • Press and hold the capture button to capture real-time video in the shooting video playback area on the left. When the duration of the captured video equals the duration of the first video, shooting ends automatically. Alternatively, when the duration of the captured video is shorter than the duration of the first video, the user can release the capture button; at this point the first short video clip is finished. The user can then press and hold the capture button again to shoot a second short clip, and so on, until the sum of the duration of the last short clip and the durations of all previous clips equals the duration of the first video, at which point shooting ends automatically.
  • the duration of the second video is equal to the duration of the first video.
  • During the shooting of the second video, the user may press and hold the shooting button on the computer device once, until the duration of the collected second video equals the duration of the first video and shooting ends automatically.
  • Alternatively, the user can press and hold the shooting button on the computer device multiple times to obtain the second video data captured in real time, where the sum of the durations of the multiple button presses equals the duration of the first video.
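As a toy illustration of the multi-press duration rule above (the helper name and units are hypothetical, not from the patent), the time still available for recording can be tracked like this:

```python
def remaining_capture_time(first_video_duration_s, segment_durations_s):
    """Seconds of recording still allowed before shooting must auto-stop.

    first_video_duration_s: duration of the pre-selected first video.
    segment_durations_s: durations of the clips already recorded, one per
    press-and-hold of the shooting button.
    """
    elapsed = sum(segment_durations_s)
    if elapsed > first_video_duration_s:
        raise ValueError("recorded segments exceed the first video's duration")
    # Shooting ends automatically exactly when this reaches zero.
    return first_video_duration_s - elapsed
```

With a 6-minute first video and two presses of 1 and 2 minutes, `remaining_capture_time(360, [60, 120])` returns 180, so a third press would be cut off after 3 minutes, which matches the example given later in the description.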
  • The resolution and bit rate of the second video data may be determined according to the initial settings of the software on the computer device used to shoot the second video data; once that software is determined, the resolution and bit rate of the second video data can be determined.
  • the computer device encodes the second video data captured in real time for the first time to obtain the encoded video.
  • The first encoding can be characterized as a specific compression technique, that is, a way of converting a file in one video format into a file in another video format.
  • While the computer device automatically synthesizes the first video and the encoded video and generates the synthesized video data, it can display a "loading data" message on the interface of the application program, and may display the loading progress, which may be expressed as the percentage of video data already synthesized.
  • the computer device may encode the synthesized video data a second time to obtain the target video.
  • The duration of the target video may be equal to the duration of the synthesized video.
  • The above-mentioned second encoding can be characterized as a specific compression technique, that is, a way of converting a file in one video format into a file in another video format.
  • the resolution and bit rate of the target video can be customized and determined according to user needs.
  • The computer device can automatically encode the synthesized video data a second time; after the second encoding is finished, the computer device can play the encoded target video.
  • In this way, the target video obtained by the computer device has less picture loss, and the definition of the video picture is higher.
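The flow S101 to S105 can be sketched end to end as follows. This is a toy model under assumptions of my own, not the patent's implementation: `Clip`, the three functions, and the numeric settings are illustrative stand-ins, and a real implementation would wrap an actual codec.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """Toy stand-in for a video stream (illustrative only)."""
    frames: list          # one entry per frame
    bitrate_kbps: int
    resolution: tuple     # (width, height)

def first_encode(captured: Clip, bitrate_kbps: int) -> Clip:
    # S103: first (real-time) encode of the captured data at a high bit
    # rate, keeping the picture close to capture quality.
    return Clip(list(captured.frames), bitrate_kbps, captured.resolution)

def synthesize(first_video: Clip, encoded: Clip) -> Clip:
    # S104: split-screen synthesis of two equal-duration clips.
    assert len(first_video.frames) == len(encoded.frames), "durations must match"
    merged = list(zip(first_video.frames, encoded.frames))
    return Clip(merged,
                max(first_video.bitrate_kbps, encoded.bitrate_kbps),
                first_video.resolution)

def second_encode(synth: Clip, bitrate_kbps: int, resolution: tuple) -> Clip:
    # S105: second (non-real-time) encode; per the method, its bit rate is
    # lower than the first pass and its resolution is no higher.
    assert bitrate_kbps < synth.bitrate_kbps
    assert resolution[0] <= synth.resolution[0]
    assert resolution[1] <= synth.resolution[1]
    return Clip(list(synth.frames), bitrate_kbps, resolution)
```

A call such as `second_encode(synthesize(first, first_encode(captured, 8000)), 4000, (720, 1280))` mirrors the two-pass structure: a high-bit-rate first pass followed by a lower-bit-rate second pass over the synthesized data.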
  • FIG. 4 is a schematic flowchart of a video synthesis method provided by another embodiment of the present disclosure.
  • The video synthesis method specifically includes steps S101, S102, S1031, S104, and S1051, where steps S101, S102, and S104 are the same as described above and are not repeated here.
  • S1031 Perform the first encoding on the second video data by using the first encoding method to obtain encoded video.
  • the computer device may adopt the first encoding method to perform the first lossy encoding on the second video data captured in real time to obtain the encoded video.
  • The definition of the encoded video obtained by the first encoding can be consistent with the definition of the second video captured in real time.
  • S1051 Perform a second encoding on the synthesized video data by using the second encoding method to obtain the target video.
  • the first encoding method is different from the second encoding method.
  • The bit rate of the second encoding is less than the bit rate of the first encoding, and the resolution of the second encoding is less than or equal to the resolution of the first encoding.
  • The computer device may perform the second lossy encoding on the synthesized video data using the second encoding method to obtain the target video, where the bit rate of the second encoding is less than the bit rate of the first encoding, and the resolution of the second encoding is less than or equal to the resolution of the first encoding.
  • the resolution and bit rate of the second encoding may be equal to the resolution and bit rate of the target video, respectively.
  • In this embodiment, the computer device may adopt the first encoding method to perform the first lossy encoding on the second video data, using the first encoding to keep the resolution of the encoded video as close as possible to that of the second video captured in real time; the encoded video obtained after the first encoding is then encoded a second time using the second encoding method to obtain the target encoded video, where the bit rate of the second encoding is less than the bit rate of the first encoding, and the resolution of the second encoding is less than or equal to the resolution of the first encoding.
  • The step of performing the first encoding on the second video data using the first encoding method to obtain the encoded video includes: encoding the second video data in real time using the first entropy encoding method to obtain the encoded video.
  • the computer device may adopt the first entropy encoding method to encode the second video data captured in real time in real time to obtain the encoded video.
  • The above-mentioned entropy encoding can be characterized as lossless coding performed according to the principle of entropy, without losing any information.
  • The first entropy coding method may be Shannon coding, Huffman coding, arithmetic coding, or Golomb coding, among others; this embodiment is not limited in any way.
  • The above-mentioned real-time encoding can be characterized in that the total number of frames of the encoded video does not change and the encoding speed is fixed.
  • In this embodiment, the computer device may use the first entropy encoding method to encode the second video data in real time, so that when the encoding bit rate is high, loss of the video picture is avoided and the resulting encoded video has higher definition.
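Entropy coding is lossless by construction. As a minimal, self-contained illustration of one of the methods named above (Huffman coding; this is a generic textbook sketch, not the encoder used by the patent), frequent symbols receive shorter codewords and the original data is recovered exactly:

```python
import heapq
from collections import Counter

def huffman_table(data: str) -> dict:
    """Map each symbol to a prefix-free bit string; frequent symbols
    get shorter codewords."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (count, tie-breaker, partial code table).
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)      # two least-frequent subtrees
        n2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

def huffman_encode(data: str, table: dict) -> str:
    return "".join(table[s] for s in data)

def huffman_decode(bits: str, table: dict) -> str:
    # Prefix-free codes decode greedily, one codeword at a time.
    inv = {c: s for s, c in table.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)
```

For the input `"aaaabbc"`, the most frequent symbol `a` receives a one-bit codeword and decoding reproduces the input exactly, which is the "without losing any information" property the text refers to.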
  • the step of encoding the synthesized video data a second time using the second encoding method to obtain the target video includes: performing non-real-time encoding on the synthesized video data using the second entropy encoding method to obtain the target video .
  • the computer device may use the second entropy encoding method to perform non-real-time encoding on the synthesized video data obtained by the first encoding to obtain the target video.
  • The above-mentioned second entropy coding method may be Shannon coding, Huffman coding, arithmetic coding, or Golomb coding, among others, which is not limited in this embodiment.
  • The above non-real-time encoding can be characterized in that the total number of frames of the encoded video is unchanged and the encoding speed is not limited; only the total duration of the encoded video data needs to be considered.
  • the encoding speed of non-real-time encoding may be less than the encoding speed of real-time encoding.
  • In this embodiment, the computer device may use the second entropy encoding method to perform non-real-time encoding on the synthesized video data, so that even when the bit rate of the video encoding is low, picture loss in the resulting target video is reduced and higher definition is ensured.
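The distinction drawn above between the two passes can be caricatured as follows (a toy model under my own naming, not the patent's encoder): a real-time pass must finish each frame within the capture interval, while the non-real-time pass has no per-frame deadline, so it can always afford the expensive high-quality mode.

```python
def real_time_encode(num_frames: int, frame_interval_ms: float,
                     hq_cost_ms: float = 12.0, lq_cost_ms: float = 4.0):
    """Real-time pass: each frame must be encoded within the capture
    interval, so the encoder falls back to a cheaper mode when the
    high-quality mode would miss the deadline."""
    mode = "HQ" if hq_cost_ms <= frame_interval_ms else "LQ"
    return [mode] * num_frames

def offline_encode(num_frames: int):
    """Non-real-time pass: no per-frame deadline (only the total duration
    of the data matters), so every frame can use the expensive mode."""
    return ["HQ"] * num_frames
```

At 30 fps the capture interval is about 33 ms, so in this toy model the real-time pass can afford a 12 ms mode; at 120 fps (about 8 ms per frame) it must fall back, whereas the offline pass never does.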
  • Although the steps in the flowcharts of FIGS. 2-4 are displayed in order according to the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential: they may be executed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
  • Each module in the video synthesizing device of the above computer equipment may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded in the hardware or independent of the processor in the computer device, or may be stored in the memory in the computer device in the form of software so that the processor can call and execute the operations corresponding to the above modules.
  • FIG. 5 is a schematic structural diagram of a video synthesis device provided by an embodiment. As shown in FIG. 5, the device may include: an acquisition module 11, an acquisition module 12, a first encoding module 13, a video synthesis module 14 and a second encoding module 15.
  • the obtaining module 11 can obtain the first video
  • the collection module 12 can collect the second video data captured in real time
  • the first encoding module 13 can encode the second video data for the first time to obtain the encoded video
  • the video synthesis module 14 can synthesize the first video and the encoded video to obtain synthesized video data
  • the second encoding module 15 can encode the synthesized video data a second time to obtain the target video.
  • the duration of the second video is equal to the duration of the first video.
  • the video synthesis device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • Another embodiment of the present disclosure provides a video synthesis device.
  • The difference from the video synthesis device shown in FIG. 5 is that the first encoding module 13 can perform the first encoding on the second video data using the first encoding method to obtain the encoded video, and the second encoding module 15 can perform the second encoding on the synthesized video data using the second encoding method to obtain the target video.
  • The bit rate of the second encoding is smaller than that of the first encoding.
  • the resolution of the second encoding is less than or equal to the resolution of the first encoding.
  • the video synthesis device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • The first encoding module 13 may further encode the second video data in real time using the first entropy encoding method to obtain the encoded video.
  • the video synthesis device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the second encoding module 15 may further adopt the second entropy encoding method to perform non-real-time encoding on the synthesized video data to obtain the target video.
  • the video synthesis device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • a computer device is provided, and its internal structure diagram may be as shown in FIG. 6.
  • the computer equipment includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with external terminals through a network connection.
  • the computer program is executed by the processor to implement the video synthesis method of the present disclosure.
  • The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen; the input device of the computer device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the computer device housing, or an external keyboard, touchpad, or mouse.
  • FIG. 6 is only a block diagram of part of the structure related to the disclosed solution and does not constitute a limitation on the computer device to which the disclosed solution is applied. A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • In an embodiment, a computer device is provided, which includes a memory and a processor; a computer program is stored in the memory, and the processor implements the following steps when executing the computer program: acquiring a first video; collecting second video data captured in real time; performing a first encoding on the second video data to obtain an encoded video; synthesizing the first video and the encoded video to obtain synthesized video data; and performing a second encoding on the synthesized video data to obtain a target video.
  • In an embodiment, a readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented: acquiring a first video; collecting second video data captured in real time; performing a first encoding on the second video data to obtain an encoded video; synthesizing the first video and the encoded video to obtain synthesized video data; and performing a second encoding on the synthesized video data to obtain a target video.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)
  • Studio Circuits (AREA)

Abstract

The present disclosure provides a video synthesis method, apparatus, computer device, and readable storage medium. The method includes: acquiring a first video; collecting second video data captured in real time; performing a first encoding on the second video data to obtain an encoded video; synthesizing the first video and the encoded video to obtain synthesized video data; and performing a second encoding on the synthesized video data to obtain a target video. The target video obtained by the method has less picture loss, and the definition of the video picture is higher.

Description

Video synthesis method, apparatus, computer device and readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure claims the benefit of Chinese Patent Application No. 201811326170.3, filed with the China National Intellectual Property Administration on November 8, 2018, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates to the field of computer technology, and in particular to video synthesis methods, apparatuses, computer devices, and readable storage media.
BACKGROUND
With the development of smart terminal technology, people often use smart terminals such as mobile phones and tablet computers to shoot videos to record daily work and life. Generally, the storage space of smart terminals such as mobile phones and tablet computers is limited, and as the number of shot videos keeps increasing, the storage space of the smart terminal inevitably becomes insufficient to save the shot videos. It is therefore necessary to synthesize a shot video with an existing video that is expected to be saved, so as to save storage space on the smart terminal.
In the related art, a video encoder is used to synthesize the shot video with the existing video that is expected to be saved, and the compression algorithm used by the video encoder usually has a relatively high compression ratio. However, under the condition that the size of the target synthesized video data is fixed, a higher compression ratio causes greater loss of the video picture, so that the definition of the video picture is lower.
SUMMARY
Based on this, it is necessary to provide, in view of the above technical problem, a video synthesis method, apparatus, computer device, and readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a video synthesis method, including:
acquiring a first video;
collecting second video data captured in real time;
performing a first encoding on the second video data to obtain an encoded video;
synthesizing the first video and the encoded video to obtain synthesized video data; and
performing a second encoding on the synthesized video data to obtain a target video.
According to an embodiment of the present disclosure, the step of performing the first encoding on the second video data to obtain the encoded video includes: performing the first encoding on the second video data using a first encoding method to obtain the encoded video; and the step of performing the second encoding on the synthesized video data to obtain the target video includes: performing the second encoding on the synthesized video data using a second encoding method to obtain the target video, wherein the first encoding method is different from the second encoding method.
According to an embodiment of the present disclosure, in the step of performing the second encoding on the synthesized video data to obtain the target video, the bit rate of the second encoding is less than the bit rate of the first encoding.
According to an embodiment of the present disclosure, the resolution of the second encoding is less than or equal to the resolution of the first encoding.
According to an embodiment of the present disclosure, the step of performing the first encoding on the second video data using the first encoding method to obtain the encoded video includes: performing real-time encoding on the second video data using a first entropy encoding method to obtain the encoded video.
According to an embodiment of the present disclosure, the step of performing the second encoding on the synthesized video data using the second encoding method to obtain the target video includes: performing non-real-time encoding on the synthesized video data using a second entropy encoding method to obtain the target synthesized video.
According to an embodiment of the present disclosure, the duration of the second video is equal to the duration of the first video.
In a second aspect, an embodiment of the present disclosure provides a video synthesis apparatus, including:
an acquisition module configured to acquire a first video;
a collection module configured to collect second video data captured in real time;
a first encoding module configured to perform a first encoding on the second video data to obtain an encoded video;
a video synthesis module configured to synthesize the first video and the second video to obtain synthesized video data; and
a second encoding module configured to perform a second encoding on the synthesized video data to obtain a target video.
In a third aspect, an embodiment of the present disclosure provides a computer device including a memory and a processor, wherein the memory stores a computer program that can run on the processor, and the processor, when executing the computer program, implements the steps of the method in the first aspect: acquiring a first video;
collecting second video data captured in real time;
performing a first encoding on the second video data to obtain an encoded video;
synthesizing the first video and the encoded video to obtain synthesized video data; and
performing a second encoding on the synthesized video data to obtain a target video.
In a fourth aspect, an embodiment of the present disclosure provides a readable storage medium having stored thereon a computer program that can run on a processor; when the processor executes the computer program, the steps of the method in the first aspect are implemented, including the following steps:
acquiring a first video;
collecting second video data captured in real time;
performing a first encoding on the second video data to obtain an encoded video;
synthesizing the first video and the encoded video to obtain synthesized video data; and
performing a second encoding on the synthesized video data to obtain a target video.
The video data synthesis method, apparatus, computer device, and readable storage medium provided in these embodiments yield a target video with less picture loss and higher definition.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an application scenario diagram of a video synthesis system provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an application program interface provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a video synthesis method provided by another embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a video synthesis apparatus provided by an embodiment of the present disclosure; and
FIG. 6 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present disclosure.
The video synthesis method provided by the embodiments of the present disclosure may be applied to the application scenario of the video synthesis system shown in FIG. 1, where the computer device 110 may be an electronic device with video image acquisition and processing functions, such as a smartphone, tablet computer, notebook computer, desktop computer, or personal digital assistant; the embodiments of the present disclosure do not limit the specific form of the computer device 110. It should be noted that the computer device 110 can support the running of various application programs (for example, the application program 111). Optionally, the application program 111 may acquire the video material of the video to be synthesized through the image acquisition apparatus of the computer device 110, synthesize the video material to obtain a corresponding video, and output the obtained video through the interface of the application program 111. Optionally, the application program 111 may be a material processing application for video, images, and the like, or a browser application. The embodiments of the present disclosure do not limit the specific form of the application program 111.
It should be noted that the video synthesis method provided by the embodiments of the present disclosure may be executed by a video synthesis apparatus, and the apparatus may be implemented as part or all of a computer device through software, hardware, or a combination of software and hardware.
To make the objects, technical solutions, and advantages of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are further described in detail through the following embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are only intended to explain the present disclosure and are not intended to limit it.
FIG. 2 is a schematic flowchart of a video synthesis method provided by an embodiment of the present disclosure. This embodiment relates to a specific process in which a computer device encodes synthesized video data to obtain a target video. As shown in FIG. 2, the video synthesis method includes:
S101: Acquire a first video.
In an embodiment of the present disclosure, the first video may be a video stored locally on the computer device, a local video previously shot and saved by the computer device, a local video previously downloaded and saved through the computer device, or a video stored in the cloud; the embodiments of the present disclosure do not impose any limitation on this.
It should be noted that the computer device may acquire the first video according to a received click instruction, and the acquired first video may be any piece of local video; the embodiments of the present disclosure do not impose any limitation on this.
S102: Capture second video data shot in real time.
In an embodiment of the present disclosure, the computer device may shoot the second video data in real time according to a received shooting command. Optionally, the shooting command may be generated by the user pressing and holding a shooting button (for example, a hardware key, a virtual key, or an interactive interface) on the computer device. It should be noted that the interface displayed after the application is opened on the computer device may be, for example, the interface shown in FIG. 3, and the application may have a split-screen function. As shown in FIG. 3, the positions of the local-video playback area and the shot-video playback area are preset by the application.
Illustratively, referring to FIG. 3, the user opens the application, taps the local-video playback area on the right to select a locally saved first video, and presses and holds the shooting button to capture a video shot in real time in the shot-video playback area on the left. When the duration of the shot video equals the duration of the first video, shooting ends automatically. Alternatively, while the duration of the shot video is still less than the duration of the first video, the user may release the shooting button, at which point the first short video segment is finished; the user then presses and holds the shooting button again to start shooting a second short video segment, and so on, until the duration of the last short video segment plus the total duration of all preceding segments equals the duration of the first video, at which point shooting ends automatically.
Optionally, the duration of the second video equals the duration of the first video.
Optionally, in the process of shooting the second video, the user may press and hold the shooting button on the computer device once, until the duration of the captured second video equals the duration of the first video, at which point shooting ends automatically. Alternatively, the user may press and hold the shooting button on the computer device multiple times to obtain the second video data shot in real time, where the total duration of the multiple presses equals the duration of the first video. Illustratively, if the duration of the first video is 6 minutes and the user obtains the second video data shot in real time through three presses of the shooting button, with the first press lasting 1 minute and the second press, after the first shot ends, lasting 2 minutes, then during the third press shooting ends automatically once that shot reaches 3 minutes. Optionally, the resolution and bit rate of the second video data may be determined according to the initial values of the software used on the computer device to shoot the second video data; once the software for shooting the second video data is determined, the resolution and bit rate of the second video data can be determined.
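The segmented-shooting logic described above amounts to an accumulator that stops capture once the total recorded duration reaches the first video's duration, truncating the last segment if necessary. The sketch below is illustrative only; the function name and structure are assumptions, not code from the disclosure.

```python
def clamp_segments(first_video_duration, requested_segments):
    """Accumulate shooting segments; truncate the final segment so the
    total duration exactly equals the first video's duration."""
    total = 0
    recorded = []
    for seg in requested_segments:
        remaining = first_video_duration - total
        if remaining <= 0:
            break  # shooting already ended automatically
        take = min(seg, remaining)  # auto-stop mid-segment if needed
        recorded.append(take)
        total += take
    return recorded

# Example from the text: a 6-minute first video with presses of 1 and 2
# minutes, then an open-ended third press (modelled as a long request),
# yields segments of 1, 2 and 3 minutes.
```

This mirrors the behaviour in the example above: the third press is cut off automatically at the 3-minute mark, when the running total reaches the first video's 6-minute duration.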
S103: Perform a first encoding on the second video data to obtain an encoded video.
Specifically, during the real-time shooting of the second video data, the computer device performs the first encoding on the second video data shot in real time to obtain the encoded video.
It should be noted that the first encoding may be characterized as a specific compression technique, namely a manner of converting a file in one video format into a file in another video format.
S104: Synthesize the first video and the encoded video to obtain synthesized video data.
It should be noted that, after the encoded video is obtained, while the computer device automatically synthesizes the first video and the encoded video and generates the synthesized video data, the computer device may display a loading prompt on the interface of the application and may display a loading progress, which may be characterized as the percentage of video data already synthesized.
S105: Perform a second encoding on the synthesized video data to obtain a target video.
In an embodiment of the present disclosure, the computer device may perform the second encoding on the synthesized video data to obtain the target video. It should be noted that the duration of the target video may equal the duration of the synthesized video. Optionally, the second encoding may be characterized as a specific compression technique, namely a manner of converting a file in one video format into a file in another video format. Optionally, the resolution and bit rate of the target video may be customized according to user requirements.
It should be noted that, after the synthesized video data is obtained, the computer device may automatically perform the second encoding on the synthesized video data. After the second encoding ends, the computer device may play the encoded target video.
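Steps S101-S105 map naturally onto a two-pass pipeline of the kind a tool such as FFmpeg can run: the real-time capture is encoded first, the two videos are composited side by side (matching the split-screen layout of FIG. 3), and the composite is then re-encoded. The sketch below only builds the command lines rather than invoking FFmpeg; the tool choice, codec, bit rates, and file names are illustrative assumptions, not part of the disclosure.

```python
def build_pipeline(first_video, captured, composite, target,
                   first_bitrate="4M", second_bitrate="2M"):
    """Return the two FFmpeg command lines for the two-pass flow:
    synthesis (S104, using the already-encoded capture) and the
    second encoding (S105) at a lower bit rate."""
    # S104: place the encoded capture and the first video side by side.
    synthesize = [
        "ffmpeg", "-i", captured, "-i", first_video,
        "-filter_complex", "hstack",
        "-c:v", "libx264", "-b:v", first_bitrate,
        composite,
    ]
    # S105: second encoding at a lower bit rate to shrink the file.
    reencode = [
        "ffmpeg", "-i", composite,
        "-c:v", "libx264", "-b:v", second_bitrate,
        target,
    ]
    return synthesize, reencode

synth_cmd, reencode_cmd = build_pipeline(
    "first.mp4", "capture.mp4", "composite.mp4", "target.mp4")
```

In an actual implementation, each command list could be handed to `subprocess.run`; keeping the second-pass bit rate below the first pass reflects the size/clarity trade-off the disclosure targets.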
With the video synthesis method provided by this embodiment, the target video obtained by the computer device suffers less picture loss, and the picture clarity is higher.
FIG. 4 is a schematic flowchart of a video synthesis method provided by another embodiment of the present disclosure. The video synthesis method specifically includes steps S101, S102, S1031, S104, and S1051, where steps S101, S102, and S104 are the same as described above and are not repeated here.
S1031: Perform the first encoding on the second video data in a first encoding manner to obtain the encoded video.
Specifically, during the real-time shooting of the second video data, the computer device may perform a first lossy encoding on the second video data shot in real time in the first encoding manner to obtain the encoded video.
It should be noted that the clarity of the encoded video obtained by the first encoding can be kept consistent with the clarity of the second video shot in real time.
S1051: Perform the second encoding on the synthesized video data in a second encoding manner to obtain the target video.
The first encoding manner is different from the second encoding manner.
Optionally, the bit rate of the second encoding is lower than the bit rate of the first encoding, and the resolution of the second encoding is lower than or equal to the resolution of the first encoding.
In an embodiment of the present disclosure, the computer device may perform a second lossy encoding on the synthesized video data in the second encoding manner to obtain the target video, where the bit rate of the second encoding is lower than the bit rate of the first encoding, and the resolution of the second encoding is lower than or equal to the resolution of the first encoding.
According to an embodiment of the present disclosure, the resolution and bit rate of the second encoding may equal the resolution and bit rate of the target video, respectively.
With the video synthesis method provided by this embodiment, the computer device may perform a first lossy encoding on the second video data in the first encoding manner, making the resolution of the encoded video match that of the second video shot in real time as far as possible, and then perform a second encoding on the resulting encoded video in the second encoding manner to obtain the target video, where the bit rate of the second encoding is lower than that of the first encoding and the resolution of the second encoding is lower than or equal to that of the first encoding. This method can keep the resulting target video file small while preventing loss of the target video picture and ensuring that the picture clarity of the target video is high.
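The constraints on the two encoding passes stated above (second-pass bit rate strictly lower, second-pass resolution not higher) can be expressed as a simple validation helper. Names, tuple layout, and units below are illustrative assumptions.

```python
def valid_second_pass(first_pass, second_pass):
    """Check the two-pass constraints from the disclosure:
    bitrate2 < bitrate1, and resolution2 <= resolution1.
    Each pass is (width, height, bitrate_bps)."""
    w1, h1, br1 = first_pass
    w2, h2, br2 = second_pass
    return br2 < br1 and w2 <= w1 and h2 <= h1

# A 1080p/4 Mbps first pass followed by a 720p/2 Mbps second pass
# satisfies both constraints.
```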
In one embodiment, the step of performing the first encoding on the second video data in the first encoding manner to obtain the encoded video includes: encoding the second video data in real time in a first entropy encoding manner to obtain the encoded video.
In an embodiment of the present disclosure, the computer device may encode the second video data shot in real time in the first entropy encoding manner to obtain the encoded video. Optionally, entropy encoding may be characterized as encoding that, according to the entropy principle, loses no information. Optionally, the first entropy encoding manner may be Shannon coding, Huffman coding, arithmetic coding, Golomb coding, or the like; this embodiment does not impose any limitation on this.
It should be noted that real-time encoding may be characterized in that the total number of frames of the encoded video remains unchanged and the encoding speed is fixed.
With the video synthesis method provided by this embodiment, the computer device may encode the second video data in real time in the first entropy encoding manner, which prevents loss of the video picture when the encoding bit rate is high and ensures that the obtained encoded video has high clarity.
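Huffman coding, one of the entropy coding options listed above, illustrates why entropy coding loses no information: it produces a prefix code, so decoding recovers the input exactly. The following is a minimal textbook sketch for demonstration, not the codec actually used by any particular encoder.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code from symbol frequencies (Huffman's algorithm)."""
    # Each heap entry: (frequency, unique tiebreaker, {symbol: code-so-far}).
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees prepends a bit to every code beneath them.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def encode(data, code):
    return "".join(code[s] for s in data)

def decode(bits, code):
    rev = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:  # prefix property: no code is a prefix of another
            out.append(rev[cur])
            cur = ""
    return "".join(out)
```

A round trip through `encode` and `decode` reproduces the input bit for bit, which is the lossless property the disclosure relies on; frequent symbols get shorter codes, which is where the compression comes from.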
In an embodiment of the present disclosure, the step of performing the second encoding on the synthesized video data in the second encoding manner to obtain the target video includes: encoding the synthesized video data in a non-real-time manner in a second entropy encoding manner to obtain the target video.
In an embodiment of the present disclosure, the computer device may encode the synthesized video data obtained after the first encoding and synthesis in a non-real-time manner in the second entropy encoding manner to obtain the target video. Optionally, the second entropy encoding manner may be Shannon coding, Huffman coding, arithmetic coding, Golomb coding, or the like; this embodiment does not impose any limitation on this. Optionally, non-real-time encoding may be characterized in that the total number of frames of the encoded video remains unchanged and the encoding speed is not limited; only the total duration of the encoded video data needs to be considered. In addition, the encoding speed of non-real-time encoding may be lower than that of real-time encoding.
With the video synthesis method provided by this embodiment, the computer device may encode the synthesized video data in a non-real-time manner in the second entropy encoding manner, thereby ensuring that, when the encoding bit rate is low, the obtained target video suffers less picture loss and has higher clarity.
It should be understood that, although the steps in the flowcharts of FIGS. 2-4 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
For a specific description of the video synthesis apparatus, reference may be made to the description of the video synthesis method above, which is not repeated here. Each module in the video synthesis apparatus of the computer device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
FIG. 5 is a schematic structural diagram of a video synthesis apparatus provided by an embodiment. As shown in FIG. 5, the apparatus may include: an acquisition module 11, a capture module 12, a first encoding module 13, a video synthesis module 14, and a second encoding module 15.
In an embodiment of the present disclosure, the acquisition module 11 may acquire the first video;
the capture module 12 may capture the second video data shot in real time;
the first encoding module 13 may perform the first encoding on the second video data to obtain the encoded video;
the video synthesis module 14 may synthesize the first video with the encoded video to obtain the synthesized video data; and
the second encoding module 15 may perform the second encoding on the synthesized video data to obtain the target video.
Optionally, the duration of the second video equals the duration of the first video.
The video synthesis apparatus provided by this embodiment can execute the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
Another embodiment of the present disclosure provides a video synthesis apparatus. It differs from the video synthesis apparatus shown in FIG. 5 in that: the first encoding module 13 may perform the first encoding on the second video data in a first encoding manner to obtain the encoded video, and the second encoding module 15 may perform the second encoding on the synthesized video data in a second encoding manner to obtain the target video, where the bit rate of the second encoding is lower than that of the first encoding. Optionally, the resolution of the second encoding is lower than or equal to that of the first encoding.
The video synthesis apparatus provided by this embodiment can execute the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
Continuing to refer to FIG. 5, on the basis of the embodiment shown in FIG. 5, the first encoding module 13 may further encode the second video data in real time in a first entropy encoding manner to obtain the encoded video.
The video synthesis apparatus provided by this embodiment can execute the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
In an embodiment of the present disclosure, the second encoding module 15 may further encode the synthesized video data in a non-real-time manner in a second entropy encoding manner to obtain the target video.
The video synthesis apparatus provided by this embodiment can execute the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
In one embodiment, a computer device is provided, whose internal structure may be as shown in FIG. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input apparatus connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the running of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements the video synthesis method of the present disclosure. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input apparatus of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art can understand that the structure shown in FIG. 6 is only a block diagram of part of the structure related to the solution of the present disclosure and does not constitute a limitation on the computer device to which the solution of the present disclosure is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In an embodiment of the present disclosure, a computer device is provided, including a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implements the following steps:
acquiring a first video;
capturing second video data shot in real time;
performing a first encoding on the second video data to obtain an encoded video;
synthesizing the first video with the encoded video to obtain synthesized video data; and
performing a second encoding on the synthesized video data to obtain a target video.
In one embodiment, a readable storage medium is provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the following steps:
acquiring a first video;
capturing second video data shot in real time;
performing a first encoding on the second video data to obtain an encoded video;
synthesizing the first video with the encoded video to obtain synthesized video data; and
performing a second encoding on the synthesized video data to obtain a target video.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present disclosure may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), direct Rambus RAM (DRRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present disclosure, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of the present disclosure. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present disclosure, all of which fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the appended claims.

Claims (10)

  1. A video synthesis method, comprising:
    acquiring a first video;
    capturing second video data shot in real time;
    performing a first encoding on the second video data to obtain an encoded video;
    synthesizing the first video with the encoded video to obtain synthesized video data; and
    performing a second encoding on the synthesized video data to obtain a target video.
  2. The method according to claim 1, wherein the step of performing the first encoding on the second video data to obtain the encoded video comprises: performing the first encoding on the second video data in a first encoding manner to obtain the encoded video, and
    the step of performing the second encoding on the synthesized video data to obtain the target video comprises: performing the second encoding on the synthesized video data in a second encoding manner to obtain the target video,
    wherein the first encoding manner is different from the second encoding manner.
  3. The method according to claim 1, wherein a bit rate of the second encoding is lower than a bit rate of the first encoding.
  4. The method according to claim 3, wherein a resolution of the second encoding is lower than or equal to a resolution of the first encoding.
  5. The method according to claim 2, wherein the step of performing the first encoding on the second video data in the first encoding manner to obtain the encoded video comprises:
    encoding the second video data in real time in a first entropy encoding manner to obtain the encoded video.
  6. The method according to claim 2, wherein the step of performing the second encoding on the synthesized video data in the second encoding manner to obtain the target video comprises:
    encoding the synthesized video data in a non-real-time manner in a second entropy encoding manner to obtain the target video.
  7. The method according to claim 1, wherein a duration of the second video equals a duration of the first video.
  8. A video synthesis apparatus, comprising:
    an acquisition module configured to acquire a first video;
    a capture module configured to capture second video data shot in real time;
    a first encoding module configured to perform a first encoding on the second video data to obtain an encoded video;
    a video synthesis module configured to synthesize the first video with the encoded video to obtain synthesized video data; and
    a second encoding module configured to perform a second encoding on the synthesized video data to obtain a target video.
  9. A computer device, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
  10. A readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.