WO2020125009A1 - A Video Processing Method and Television - Google Patents

A Video Processing Method and Television

Info

Publication number
WO2020125009A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
composite video
series
action
user
Prior art date
Application number
PCT/CN2019/096882
Other languages
English (en)
French (fr)
Inventor
周杉
曲晓奎
Original Assignee
聚好看科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 聚好看科技股份有限公司
Publication of WO2020125009A1

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N 21/41 Structure of client; Structure of client peripherals
                • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
                  • H04N 21/4223 Cameras
              • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
                  • H04N 21/44008 Involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
                  • H04N 21/44016 Involving splicing one content stream with another content stream, e.g. for substituting a video clip
              • H04N 21/47 End-user applications
                • H04N 21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
                • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
                  • H04N 21/4788 Communicating with other users, e.g. chatting

Definitions

  • This application relates to the field of data processing, in particular to a video processing method and a television.
  • The camera's interactive capabilities bring new business extensions and richer experiences to TV content. How different users can interact in diverse ways through smart TVs, enhance the immersive experience, and bring people closer together is a current user demand. For example, when the weather outside is bad, a user exercising at home may need to interact naturally with other people through a smart TV.
  • The embodiments of the present application provide a video processing method and a TV, to implement video synthesis across multi-user terminals and, through video synthesis, to meet different users' needs for community interaction through different terminals.
  • a composite video of the first series of actions is sent to the other television.
  • the composite video of the first series of actions includes a first composite video and a second composite video, and the first composite video is different from the second composite video.
  • the outputting the composite video of the first series of actions to the current TV includes:
  • the sending the composite video of the first series of actions to the other television includes:
  • the users in the first composite video and the second composite video occupy different positions in the video image.
  • the first user image is located in the central area of the image in the first composite video.
  • the second user image is located adjacent to the central area of the image in the first composite video.
  • the background images of the first composite video and the second composite video are different.
  • the background image in the first composite video is determined according to the first user instruction.
  • in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video precedes the frame position of the second user image in the second action video.
  • a memory for storing program instructions and data associated with the display screen
  • the processor is configured to execute the program instructions to make the television execute the above method.
  • Embodiments of the present application also provide a machine-readable non-volatile storage medium storing computer-executable instructions which, when executed, implement any of the above methods.
  • FIG. 1 is a schematic diagram of a television set provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of a video processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a multi-person interaction scenario provided by an embodiment of this application.
  • FIG. 4 is a schematic diagram of user 1 in the position of a lead dancer provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a user 1 selecting a video background provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a TV provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a TV according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a television set provided by an embodiment of the present application
  • a square dance video is being played on a screen, and a camera on a television is used to photograph a user.
  • the TV terminal synthesizes the videos of different users who dance the same square dance, and outputs the synthesized square dance video to different user terminals.
  • a video processing method provided by an embodiment of the present application includes:
  • the user image is the user's contour image after the background in the video has been removed.
  • the movement may be a square dance movement, or a group dance movement such as a fitness dance.
  • the same series of actions can be the same square dance action, or the actions included in the same square dance track, etc.
  • With this method, the first action video captured by the current TV's camera is received and the first user image is extracted from it; the second action video captured by another TV's camera is received and the second user image is extracted from it, where the action associated with the second action video and the first action video are associated as a first series of actions. Based on the first user image and the second user image, a composite video of the first series of actions is generated and output to the current TV, where the composite video is synthesized in synchronization with the music corresponding to the first series of actions.
  • video synthesis can meet the needs of different users through different terminals for community interaction.
  • the user terminal is a TV
  • the synthesized square dance video is a multiplayer video where multiple users dance the same dance together.
  • a composite video of the first series of actions is sent to the other television.
  • the composite video of the first series of actions includes a first composite video and a second composite video, and the first composite video is different from the second composite video.
  • the dance video output to the TV of family A is the same as the dance video of the TV of family B, or the dance video output to the TV of family A is different from the dance video of the TV of family B.
  • the user 1 is located in the central area of the television screen of family A (that is, the position of the lead dancer), and user 2 is located in the central area of the television screen of family B.
  • the composite video of the first series of actions includes a first composite video and a second composite video, and the first composite video is different from the second composite video.
  • the outputting the composite video of the first series of actions to the current TV includes:
  • the sending the composite video of the first series of actions to the other television includes:
  • the users in the first composite video and the second composite video occupy different positions in the video image.
  • For example, user 1 in the video output to family A's TV is located in the central area of the screen, while user 1 in the video output to family B's TV is located in an area adjacent to the central area of the screen.
  • the first user image is located in the central area of the image in the first composite video.
  • For example, on user 1's TV, user 1 is located in the central area of the image in the TV's video; on user 2's TV, user 2 is located in the central area of the image in the TV's video.
  • With this method, the first user image is located in the central area of the image in the first composite video, so that each user is located in the central area on his or her own user terminal.
  • the second user image is located adjacent to the central area of the image in the first composite video.
  • the background images of the first composite video and the second composite video are different.
  • the background image in the video output to the TV of family A is the West Lake
  • the background image in the video output to the TV of family B is People's Square.
  • the background image in the first composite video is determined according to the first user instruction.
  • the background image in the first composite video is determined according to the first user instruction, so that the user can select the background image according to the dance or his own preference.
  • in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video precedes the frame position of the second user image in the second action video.
  • With this method, on each user terminal the lead dancer located in the central area is one beat ahead of the other dancers.
  • FIG. 3 is a schematic diagram of a multi-person interaction scenario provided by an embodiment of the present application.
  • the camera shoots a dancer in front of a TV.
  • User 1 is in family A, user 2 is in family B, and user 3 is in family C. All three users have chosen to dance the same dance, so the camera of family A's TV shoots user 1, the camera of family B's TV shoots user 2, and the camera of family C's TV shoots user 3.
  • Keying is a basic switching method for transitioning between two video signal input sources.
  • Using video signal parameters such as luminance and chrominance, a high/low two-valued keying signal divides one channel of video into a foreground image signal and a background image signal.
  • A solid-color background is removed by calculation to obtain a video with a transparent background.
  • The transparent-background video is then cropped to obtain the contour range of the dancer in the video.
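The keying steps above (key out a solid-color background to get a transparent-background picture, then crop to the dancer's contour) can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the key color, tolerance, and function names are assumptions, and a simple color-distance threshold stands in for the luminance/chrominance keying signal.

```python
import numpy as np

def chroma_key(frame: np.ndarray, key_color=(0, 255, 0), tol=60) -> np.ndarray:
    """Make pixels close to the key color transparent; return an RGBA image.
    Pixels whose color distance to `key_color` exceeds `tol` are kept opaque
    (the dancer); the rest become the transparent background."""
    diff = frame.astype(int) - np.array(key_color, dtype=int)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    alpha = np.where(dist > tol, 255, 0).astype(np.uint8)
    return np.dstack([frame, alpha])

def crop_to_contour(rgba: np.ndarray) -> np.ndarray:
    """Crop the transparent-background image to the bounding box of the
    opaque (dancer) pixels, yielding the dancer's contour range."""
    ys, xs = np.nonzero(rgba[..., 3])
    return rgba[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

A real pipeline would apply this per frame of the camera video; here a single still image stands in for one frame.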
  • The lead dancers of different families are different. That is, on family A's TV, user 1 is shown in the lead dancer's position and the other dancers are located elsewhere on the video screen; on family B's TV, user 2 is shown in the lead dancer's position and the other dancers are located elsewhere; on family C's TV, user 3 is shown in the lead dancer's position and the other dancers are located elsewhere.
  • Referring to FIG. 4, which is a schematic diagram of user 1 in the lead dancer's position provided by an embodiment of the present application, take family A's TV as an example: user 1 is in the lead dancer's position.
  • the specific implementation method is:
  • After acquiring each dancer's contour signal, record the width and height of each contour signal in sequence.
  • the width of the contour signal of user 1 is signalWidth1
  • the height is signalHeight1
  • the width of the contour signal of the second dancer is signalWidth2
  • the height is signalHeight2
  • the screen width of the TV is screenWidth
  • the screen height is screenHeight.
  • The x-axis coordinate x0 and the y-axis coordinate y0 of the contour signal of user 1 are computed so that the contour is centered on the screen (according to the screen coordinate axes, values are positive from the upper-left corner toward the upper-right corner and negative from the upper-left corner toward the lower-left corner).
  • The coordinates x0 and y0 are passed to the splicing control processor, and user 1's contour signal is spliced to the center of the video.
  • Take the second dancer as an example: suppose the second dancer should appear to the left of user 1 (from the perspective of the user watching the TV).
  • Let the left-right edge separation between the second dancer and user 1 be space (that is, the distance between the rightmost point of the second dancer and the leftmost point of user 1), and let the center-point separation be space1 (that is, the interval between the center point of the second dancer and the center point of user 1 on the y-axis). The x-axis coordinate x1 of the second dancer's contour signal is therefore x0-space-signalWidth2, and the y-axis coordinate y1 is y0+space1. The coordinates x1 and y1 are passed to the splicing control processor for splicing, so that the second dancer's contour signal is spliced to the left of user 1.
  • the x-axis and y-axis coordinate values of the third dancer, the fourth dancer and other dancers are calculated in sequence, and the coordinate values are sequentially passed to the splicing control processor for splicing.
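The coordinate bookkeeping above can be sketched as below. The centering formula for x0 and y0 is a reconstruction (the source does not spell it out, beyond the coordinate convention); the left-neighbor rule x1 = x0 - space - signalWidth2, y1 = y0 + space1 follows the text. All function names are illustrative.

```python
def center_coords(screen_w: int, screen_h: int, w: int, h: int):
    """Place the lead dancer's contour (width w, height h) at screen center.
    Assumed convention from the text: x grows positive toward the right,
    y grows negative toward the bottom, origin at the upper-left corner."""
    x0 = (screen_w - w) // 2
    y0 = -(screen_h - h) // 2
    return x0, y0

def left_neighbor_coords(x0: int, y0: int, w2: int, space: int, space1: int):
    """Second dancer to the lead's left: x1 = x0 - space - signalWidth2
    (edge gap `space`), y1 = y0 + space1 (center-point gap on the y-axis)."""
    return x0 - space - w2, y0 + space1
```

The third, fourth, and further dancers would be positioned the same way, each pair of coordinates being handed to the splicing control processor in turn.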
  • The lead dancer of each family is one beat ahead of the other dancers. That is, on family A's TV, user 1 is shown in the lead dancer's position, the other dancers are located elsewhere on the video screen, and user 1 is one beat ahead of the other dancers; on family B's TV, user 2 is shown in the lead dancer's position, the other dancers are located elsewhere, and user 2 is one beat ahead; on family C's TV, user 3 is shown in the lead dancer's position, the other dancers are located elsewhere, and user 3 is one beat ahead.
  • the first signal is the signal of user 1.
  • For the first signal, every frame of the video is passed to the splicing control processor for splicing.
  • The 2-second time range is not limited to 2 seconds; other time ranges may also be used.
  • For the second signal and the third signal, within the first 2 seconds only the first frame of the video image is passed to the splicing control processor and spliced with the first signal, and the spliced composite frame image is returned; after 2 seconds, every frame image of the second signal and the third signal is taken and spliced with the first signal.
  • the dancers of these three signals may not start dancing at the same time.
  • the second signal is pre-stored in the server.
  • The dancer of the second signal may have danced somewhat earlier than the dancer of the first signal, for example a few minutes, a few hours, or a few days in advance.
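A hedged sketch of the splicing schedule described above: the first signal contributes every frame, while the second and third signals hold their first frame for roughly 2 seconds and then run behind the lead by that offset. The text does not specify whether the delayed signals resume from their own start or from the current index; this sketch assumes the former, and the function name is an assumption.

```python
def composite_schedule(frame_idx: int, fps: int, hold_seconds: int = 2):
    """Return (lead_frame, other_frame): which source frame of the lead
    signal and of the other signals to splice for output frame `frame_idx`.
    The lead signal is spliced frame by frame; the other signals hold their
    first frame for `hold_seconds`, then run `hold_seconds * fps` frames
    behind the lead, so the lead dancer stays ahead of the others."""
    hold = hold_seconds * fps
    other_frame = 0 if frame_idx < hold else frame_idx - hold
    return frame_idx, other_frame
```

At 30 fps, for example, output frame 90 would splice frame 90 of the lead signal with frame 30 of the delayed signals.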
  • Referring to FIG. 5, which is a schematic diagram of user 1 selecting a video background provided by an embodiment of the present application.
  • A user may select a scene (that is, a background of the synthesized video) according to the dance or his or her preference.
  • With the keying method, first remove the default background image from the composite video to obtain a transparent-background video picture, and then use keying to combine the background image selected by the user with the transparent-background video picture.
  • the background is the background selected by the user.
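The background replacement step described above (remove the default background, then key the user-selected background behind the transparent picture) reduces to alpha compositing of an RGBA foreground over an RGB background. A minimal NumPy sketch, with hypothetical names:

```python
import numpy as np

def replace_background(rgba_fg: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Alpha-composite the keyed-out (transparent-background) dancer picture
    over a user-selected background image of the same size.
    `rgba_fg` is HxWx4 (alpha 0 = transparent), `background` is HxWx3."""
    alpha = rgba_fg[..., 3:4].astype(float) / 255.0
    out = rgba_fg[..., :3].astype(float) * alpha + background.astype(float) * (1.0 - alpha)
    return out.astype(np.uint8)
```

Applied per frame, this would yield a composite video whose background is whichever scene (e.g. the West Lake or People's Square in the examples above) the user selected.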
  • a television provided by an embodiment of the present application includes:
  • The receiving unit 11 is configured to receive the first action video captured by the current TV's camera and extract the first user image from the first action video, and to receive the second action video captured by another TV's camera and extract the second user image from the second action video, where the action associated with the second action video and the first action video are associated as a first series of actions;
  • The processing unit 12 is configured to generate a composite video of the first series of actions based on the first user image and the second user image, and to output the composite video of the first series of actions to the current TV, where the composite video is synthesized in synchronization with the music corresponding to the first series of actions.
  • a TV provided by an embodiment of the present application includes:
  • a display screen (the user interface in the figure can be understood as including a display screen) for displaying images
  • the memory 610 is used to store program instructions and data associated with the display screen
  • the processor 600 is used to read the program in the memory 610 and perform the following processes:
  • The TV set receives the first action video captured by the current TV's camera and extracts the first user image from it; receives the second action video captured by another TV's camera and extracts the second user image from it, where the action associated with the second action video and the first action video are associated as a first series of actions; and, based on the first user image and the second user image, generates a composite video of the first series of actions and outputs it to the current TV, where the composite video is synthesized in synchronization with the music corresponding to the first series of actions. Video synthesis across multi-user terminals is thereby realized, and through video synthesis different users' needs for community interaction through different terminals are met.
  • a composite video of the first series of actions is sent to the other television.
  • the composite video of the first series of actions includes a first composite video and a second composite video, and the first composite video is different from the second composite video.
  • the outputting the composite video of the first series of actions to the current TV includes:
  • the sending the composite video of the first series of actions to the other television includes:
  • the users in the first composite video and the second composite video occupy different positions in the video image.
  • the first user image is located in the central area of the image in the first composite video.
  • the second user image is located adjacent to the central area of the image in the first composite video.
  • the background images of the first composite video and the second composite video are different.
  • the background image in the first composite video is determined according to the first user instruction.
  • the background image in the first composite video is determined according to the first user instruction, so that the user can select the background image according to the dance or his own preference.
  • in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video precedes the frame position of the second user image in the second action video.
  • The bus architecture may include any number of interconnected buses and bridges; specifically, it links together one or more processors, represented by the processor, and various circuits of the memory, represented by the memory.
  • The bus architecture can also link various other circuits such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further herein.
  • the bus interface provides an interface.
  • the display terminal may specifically be a desktop computer, a portable computer, a smart phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), or the like.
  • the display terminal may include a central processing unit (CPU), memory, input/output devices, etc.
  • the input device may include a keyboard, a mouse, a touch screen, etc.
  • the output device may include a display device, such as a liquid crystal display (Liquid Crystal Display, LCD), cathode ray tube (Cathode Ray Tube, CRT), etc.
  • the user interface 620 may be an interface that can be externally connected to a required device.
  • the connected devices include but are not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like.
  • the processor is responsible for managing the bus architecture and normal processing, and the memory can store data used by the processor when performing operations.
  • The processor may be a CPU (central processing unit), an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array), or a CPLD (complex programmable logic device).
  • the memory may include a read-only memory (ROM) and a random access memory (RAM), and provide the processor with program instructions and data stored in the memory.
  • the memory may be used to store the program of any of the methods provided in the embodiments of the present application.
  • the processor calls the program instructions stored in the memory, and the processor is configured to execute any of the methods provided in the embodiments of the present application according to the obtained program instructions.
  • An embodiment of the present application provides a machine-readable non-volatile storage medium for storing the computer program instructions used by the apparatus provided in the foregoing embodiments, which include a program for performing any of the methods provided above.
  • The machine-readable non-volatile storage medium may be any available medium or data storage device accessible by a computer, including but not limited to magnetic memory (such as a floppy disk, hard disk, magnetic tape, or magneto-optical disk (MO)), optical memory (such as CD, DVD, BD, or HVD), and semiconductor memory (such as ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH), or solid state drive (SSD)).
  • the video processing method and the television provided by the embodiments of the present application can realize the video synthesis of multi-user terminals, and the video synthesis can meet the community interaction needs of different users through different terminals.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage and optical storage, etc.) containing computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

The present application discloses a video processing method and a television, used to realize video synthesis across multiple televisions and, through video synthesis, to meet different users' needs for community interaction through different terminals. A video processing method provided by an embodiment of the present application includes: receiving a first action video captured by the camera of the current television, and extracting a first user image from the first action video; receiving a second action video captured by the camera of another television, and extracting a second user image from the second action video, where the action associated with the second action video and the first action video are associated as a first series of actions; generating a composite video of the first series of actions according to the first user image and the second user image, and outputting the composite video of the first series of actions to the current television, where the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions.

Description

A Video Processing Method and Television
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 20, 2018, with application number 201811563673.2 and entitled "A Video Processing Method and Television Set", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of data processing, and in particular to a video processing method and a television.
BACKGROUND
The camera's interactive capability brings new business extensions and richer experiences to television content. How different users can interact in diverse ways through smart televisions, enhance the immersive experience, and bring people closer together is a need users currently care about. For example, when the weather outside is bad, a user exercising at home may need to interact naturally with other people through a smart television.
SUMMARY
Embodiments of the present application provide a video processing method and a television, used to realize video synthesis across multiple user terminals and, through video synthesis, to meet different users' needs for community interaction through different terminals.
A video processing method provided by an embodiment of the present application includes:
receiving a first action video captured by the camera of the current television, and extracting a first user image from the first action video;
receiving a second action video captured by the camera of another television, and extracting a second user image from the second action video, where the action associated with the second action video and the first action video are associated as a first series of actions; and
generating a composite video of the first series of actions according to the first user image and the second user image, and outputting the composite video of the first series of actions to the current television, where the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions.
In some implementations, the composite video of the first series of actions is sent to the other television.
In some implementations, the composite video of the first series of actions includes a first composite video and a second composite video, and the first composite video is different from the second composite video.
In some implementations, outputting the composite video of the first series of actions to the current television includes:
outputting the first composite video to the current television.
In some implementations, sending the composite video of the first series of actions to the other television includes:
sending the second composite video to the other television.
In some implementations, the users in the first composite video and the second composite video occupy different positions in the video image.
In some implementations, the first user image is located in the central area of the image in the first composite video.
In some implementations, the second user image is located in an area adjacent to the central area of the image in the first composite video.
In some implementations, the background images of the first composite video and the second composite video are different.
In some implementations, the background image in the first composite video is determined according to a first user instruction.
In some implementations, in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video precedes the frame position of the second user image in the second action video.
An embodiment of the present application further provides a television, including:
a display screen, configured to display images;
a memory, configured to store program instructions and data associated with the display screen; and
a processor, configured to execute the program instructions so that the television performs the above method.
An embodiment of the present application further provides a machine-readable non-volatile storage medium storing computer-executable instructions which, when executed, implement any of the above methods.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a complete television set provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a video processing method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a multi-person interaction scenario provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of user 1 in the lead dancer's position provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of user 1 selecting a video background provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a television provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of another television provided by an embodiment of the present application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,并不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
下面结合说明书附图对本申请各个实施例进行详细描述。需要说明的是,本申请实施例的展示顺序仅代表实施例的先后顺序,并不代表实施例所提供的技术方案的优劣。
Referring to Fig. 1, a schematic diagram of a complete television provided by an embodiment of the present application, a square-dance video is playing on the screen, and the camera on the television is used to film the user. From the videos captured of different users, the television side composites the videos of the different users dancing the same square dance, and outputs the composited square-dance video to the different user terminals.
Referring to Fig. 2, a video processing method provided by an embodiment of the present application includes:
S101: receiving a first action video captured by the camera of a current television, and extracting a first user image from the first action video; receiving a second action video captured by the camera of another television, and extracting a second user image from the second action video, where the action associated with the second action video and the action associated with the first action video belong to a first series of actions.
For example, a user image is the user's silhouette image obtained after removing the background from the video.
For example, the actions may be square-dance actions, or group dance actions such as fitness dances.
For example, the same series of actions may be the same square-dance action, or the actions contained in the same square-dance piece, and so on.
S102: generating, from the first user image and the second user image, a composite video of the first series of actions, and outputting the composite video of the first series of actions to the current television, where the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions.
With this method, a first action video captured by the camera of the current television is received and a first user image is extracted from it; a second action video captured by the camera of another television is received and a second user image is extracted from it, where the action associated with the second action video and the action associated with the first action video belong to a first series of actions; from the first user image and the second user image, a composite video of the first series of actions is generated and output to the current television, where the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions. Video compositing across multiple user terminals is thereby achieved, and through video compositing the composite video satisfies different users' needs for community-style interaction through different terminals.
For example, a user terminal is a television, and the composited square-dance video is a multi-person video of multiple users dancing the same dance together.
In some implementations, the composite video of the first series of actions is sent to the other television.
In some implementations, the composite video of the first series of actions includes a first composite video and a second composite video, and the first composite video is different from the second composite video.
For example, the dance video output to the television of household A may be the same as, or different from, the dance video output to the television of household B: user 1 is located in the central region of the screen of household A's television (i.e., in the lead-dancer position), and user 2 is located in the central region of the screen of household B's television.
In some implementations, outputting the composite video of the first series of actions to the current television includes:
outputting the first composite video to the current television.
In some implementations, sending the composite video of the first series of actions to the other television includes:
sending the second composite video to the other television.
In some implementations, the positions of the users in the video images differ between the first composite video and the second composite video.
For example, in the video output to household A's television, user 1 is located in the central region of the screen, while in the video output to household B's television, user 1 is located in a region adjacent to the central region of the screen.
In some implementations, the first user image is located in the central region of the images in the first composite video.
For example, on user 1's television, user 1 is located in the central region of the images in the video; on user 2's television, user 2 is located in the central region of the images in the video.
With this method, the first user image is located in the central region of the images in the first composite video, so that each user appears in the central region on his or her own terminal.
In some implementations, the second user image is located in a region adjacent to the central region of the images in the first composite video.
For example, on user 1's television, the other dancers are located in regions adjacent to user 1.
In some implementations, the background images of the first composite video and the second composite video are different.
For example, the background image in the video output to household A's television is the shore of West Lake, while the background image in the video output to household B's television is People's Square.
In some implementations, the background image in the first composite video is determined according to an instruction from the first user.
For example, on user 1's television, user 1 selects the shore of West Lake as the video background image, so the background image in the video output to user 1's television is the shore of West Lake; on user 2's television, user 2 selects People's Square as the video background image, so the background image in the video output to user 2's television is People's Square.
With this method, the background image in the first composite video is determined according to an instruction from the first user, so that users can choose a background image according to the dance or their own preferences.
In some implementations, in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video is ahead of the frame position of the second user image in the second action video.
With this method, in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video is ahead of the frame position of the second user image in the second action video, so that on each user's terminal the lead dancer in the central region is one beat ahead of the other dancers.
For example, on user 1's television, user 1's dance moves are one beat ahead of the other dancers'. A video is made up of individual images; since different users are dancing the same dance, their moves should be the same, and so each frame image in the captured videos should be the same. If the 60th image frame of the first square-dance video is composited with the 1st image frame of the second square-dance video, then in the composited square-dance video user 1's dance moves are one beat ahead of the other dancers'.
As shown in Fig. 3, a schematic diagram of a multi-person interaction scenario provided by an embodiment of the present application, the camera films the dancer in front of each television. For example, user 1 is in household A, user 2 is in household B, and user 3 is in household C, and all three users have chosen to dance the same dance; then the camera of the television in household A films user 1, the camera of the television in household B films user 2, and the camera of the television in household C films user 3.
The captured videos are stitched together so that the users feel they are dancing together interactively. A specific implementation includes:
Using a keying method (keying is a basic switching mode between the pictures of two video signal sources during a transition), the differences in parameters (for example, luminance and chrominance) at different parts of one video signal are processed to form a high/low two-valued keying signal (a high/low two-valued keying signal splits one video stream into a foreground image signal and a background image signal). One background color is removed by computation to obtain a video with a transparent background, and the transparent-background video is then cropped to obtain the contour range of the dancer in the video.
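The keying and cropping steps above can be sketched as follows. This is only an illustrative approximation under assumed conventions: an RGB NumPy frame, a single pure key color, and a fixed tolerance are all assumptions of this sketch, not the patent's implementation, which a real keyer would replace with luminance/chrominance processing and soft edges.

```python
import numpy as np

def key_out_background(frame: np.ndarray, key_rgb, tol: int = 40):
    """Split one picture into foreground and background via a two-valued
    keying signal: pixels within `tol` of the key color become background
    (alpha 0), the rest foreground (alpha 1). Also return the bounding box
    of the foreground, i.e. the dancer's contour range, or None if empty."""
    diff = np.abs(frame.astype(int) - np.array(key_rgb, dtype=int))
    # high/low two-valued keying signal: 1 = foreground, 0 = background
    alpha = (diff.max(axis=-1) > tol).astype(np.uint8)
    ys, xs = np.nonzero(alpha)
    if ys.size == 0:
        return alpha, None
    bbox = (int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1)
    return alpha, bbox
```

The two-valued alpha corresponds to the high/low keying signal in the description, and the bounding box to the cropped contour range passed on to stitching.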
The contour data in the videos are then stitched: the dance regions of the different dancers are computed and stitched into a single composite video picture.
In the composited video, the lead dancer differs between households. That is, on household A's television, user 1 is displayed in the lead-dancer position and the other dancers are at other positions in the video picture; on household B's television, user 2 is displayed in the lead-dancer position and the other dancers are at other positions in the video picture; on household C's television, user 3 is displayed in the lead-dancer position and the other dancers are at other positions in the video picture.
As shown in Fig. 4, a schematic diagram, provided by an embodiment of the present application, of user 1 in the lead-dancer position, taking user 1 being in the lead-dancer position on household A's television as an example, a specific implementation is as follows:
After the dancers' contour signals are obtained, the width and height of each dancer's contour signal are recorded in turn. For example, the width of user 1's contour signal is signalWidth1 and its height is signalHeight1; the width of the second dancer's contour signal is signalWidth2 and its height is signalHeight2; the width of the television screen is screenWidth and its height is screenHeight. By setting the x-axis and y-axis coordinates of user 1's contour signal, user 1's contour signal is placed at the center of the television screen;
the x-axis coordinate x0 of user 1's contour signal is

x0 = (screenWidth - signalWidth1)/2

and the y-axis coordinate y0 is

y0 = -(screenHeight - signalHeight1)/2

(according to the screen coordinate axes, values running from the top-left corner toward the top-right corner are positive, and values running from the top-left corner toward the bottom-left corner are negative). The coordinates x0 and y0 are passed to the stitching control processor for stitching, so that user 1's contour signal is stitched into the center of the video.
Then the second dancer is stitched. For example, to place the second dancer to the left of user 1 (from the viewpoint of the user watching the television), with a horizontal gap of space between the second dancer and user 1 (space is the edge gap, i.e., the gap between the rightmost edge of the second dancer and the leftmost edge of user 1) and a vertical gap of space1 (space1 is the center-point gap, i.e., the gap on the y-axis between the second dancer's center point and user 1's center point), the x-axis coordinate x1 of the second dancer's contour signal is x0 - space - signalWidth2 and the y-axis coordinate y1 is y0 + space1. The coordinates x1 and y1 are passed to the stitching control processor for stitching, so that the second dancer's contour signal is stitched to the left of user 1.
By analogy, the x-axis and y-axis coordinate values of the third dancer, the fourth dancer, and the other dancers are computed in turn and passed to the stitching control processor in turn for stitching.
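The coordinate arithmetic above can be sketched directly; the names mirror those in the description, while the stitching control processor itself is left out:

```python
def center_coords(screen_w: int, screen_h: int, sig_w: int, sig_h: int):
    """Center a contour signal on the screen: x runs positive from the
    top-left corner toward the right, y runs negative toward the bottom."""
    x0 = (screen_w - sig_w) / 2
    y0 = -(screen_h - sig_h) / 2
    return x0, y0

def left_neighbor_coords(x0: float, y0: float, sig_w2: int,
                         space: int, space1: int):
    """Place the next dancer to the left of the centered one: `space` is
    the edge gap, `space1` the vertical center-point gap."""
    return x0 - space - sig_w2, y0 + space1
```

Note the sign convention: the downward direction on the screen is negative, which is why the centered y-coordinate is negated while the vertical offset space1 is added.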
In the composited video, each household's lead dancer is one beat ahead of the other dancers. That is, on household A's television, user 1 is displayed in the lead-dancer position, the other dancers are at other positions in the video picture, and user 1 is one beat ahead of the other dancers; on household B's television, user 2 is displayed in the lead-dancer position, the other dancers are at other positions, and user 2 is one beat ahead of the other dancers; on household C's television, user 3 is displayed in the lead-dancer position, the other dancers are at other positions, and user 3 is one beat ahead of the other dancers.
Taking user 1 being in the lead-dancer position on household A's television as an example, a specific implementation is as follows:
For example, three signals are transmitted to the server, the first of which is user 1's signal. When the three signals are passed to the stitching control processor, every frame image of the first signal's video is taken for stitching, while within a 2-second window (not limited to 2 seconds; other time ranges are also possible) only the first frame image of the second and third signals is passed to the stitching control processor and stitched with the first signal, the completed composite frame image being returned after stitching; after the 2 seconds, every frame image of the second and third signals is then stitched with the first signal.
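A minimal sketch of this frame scheduling, with the frame rate and the 2-second window as assumed parameters:

```python
def follower_frame_index(lead_index: int, fps: int = 30,
                         hold_seconds: float = 2.0) -> int:
    """While the hold window lasts, follower signals contribute only
    their first frame; afterwards they advance frame by frame, lagging
    the lead by the length of the hold window."""
    hold_frames = int(fps * hold_seconds)
    return 0 if lead_index < hold_frames else lead_index - hold_frames

def stitch_schedule(n_lead_frames: int, fps: int = 30,
                    hold_seconds: float = 2.0):
    """Yield (lead_frame, follower_frame) index pairs handed to the
    stitching step for each composite output frame."""
    for i in range(n_lead_frames):
        yield i, follower_frame_index(i, fps, hold_seconds)
```

With fps=30 and a 2-second hold, the followers' first frame is reused for the first 60 composite frames; thereafter the followers advance frame by frame, staying 60 frames behind the lead.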
In addition, the dancers of these three signals may not have started dancing at the same time. The second signal is stored in the server in advance, and the dancer of the second signal may have danced some time earlier than the dancer of the first signal, for example several minutes, several hours, or several days earlier.
As shown in Fig. 5, a schematic diagram, provided by an embodiment of the present application, of user 1 selecting a video background, in the composited video the user can choose a scene (i.e., the background of the composite video) according to the dance or his or her own preferences, and the chosen scene replaces the real one:
For example, if user 1 chooses the shore of West Lake, the dance background on household A's television is the shore of West Lake; if user 2 chooses People's Square, the dance background on household B's television is People's Square. A specific implementation is as follows:
Using the keying method, the default background image in the composite video is first removed to obtain a video picture with a transparent background; then, through keying, the background image chosen by the user is combined with the transparent-background video picture, so that the background of the video on the television is the background chosen by the user.
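Assuming the keying step has already produced a per-pixel alpha mask (1 where the dancer is, 0 where the default background was removed), the background replacement can be sketched as a simple per-pixel selection; the NumPy representation is an assumption of this sketch:

```python
import numpy as np

def replace_background(frame: np.ndarray, alpha: np.ndarray,
                       background: np.ndarray) -> np.ndarray:
    """Composite the keyed foreground over the user-chosen background:
    keep the frame's pixels where alpha is 1, the background elsewhere."""
    return np.where(alpha[..., None].astype(bool), frame, background)
```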
Correspondingly, referring to Fig. 6, a television provided by an embodiment of the present application includes:
a receiving unit 11, configured to receive a first action video captured by the camera of the current television and extract a first user image from the first action video, and to receive a second action video captured by the camera of another television and extract a second user image from the second action video, where the action associated with the second action video and the action associated with the first action video belong to a first series of actions;
a processing unit 12, configured to generate, from the first user image and the second user image, a composite video of the first series of actions, and output the composite video of the first series of actions to the current television, where the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions.
Referring to Fig. 7, an embodiment of the present application further provides a television, including:
a display screen (the user interface in the figure may be understood as including the display screen), configured to display images;
a memory 610, configured to store program instructions and data associated with the display screen;
a processor 600, configured to read the program in the memory 610 and execute the following process:
receiving a first action video captured by the camera of the current television, and extracting a first user image from the first action video; receiving a second action video captured by the camera of another television, and extracting a second user image from the second action video, where the action associated with the second action video and the action associated with the first action video belong to a first series of actions;
generating, from the first user image and the second user image, a composite video of the first series of actions, and outputting the composite video of the first series of actions to the current television, where the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions.
With this television, a first action video captured by the camera of the current television is received and a first user image is extracted from it; a second action video captured by the camera of another television is received and a second user image is extracted from it, where the action associated with the second action video and the action associated with the first action video belong to a first series of actions; from the first user image and the second user image, a composite video of the first series of actions is generated and output to the current television, where the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions. Video compositing across multiple user terminals is thereby achieved, and through video compositing the composite video satisfies different users' needs for community-style interaction through different terminals.
In some implementations, the composite video of the first series of actions is sent to the other television.
In some implementations, the composite video of the first series of actions includes a first composite video and a second composite video, and the first composite video is different from the second composite video.
In some implementations, outputting the composite video of the first series of actions to the current television includes:
outputting the first composite video to the current television.
In some implementations, sending the composite video of the first series of actions to the other television includes:
sending the second composite video to the other television.
In some implementations, the positions of the users in the video images differ between the first composite video and the second composite video.
In some implementations, the first user image is located in the central region of the images in the first composite video.
In some implementations, the second user image is located in a region adjacent to the central region of the images in the first composite video.
In some implementations, the background images of the first composite video and the second composite video are different.
In some implementations, the background image in the first composite video is determined according to an instruction from the first user, so that users can choose a background image according to the dance or their own preferences.
In some implementations, in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video is ahead of the frame position of the second user image in the second action video.
In Fig. 7, the bus architecture may include any number of interconnected buses and bridges, specifically linking together various circuits of one or more processors, represented by the processor, and of memory, represented by the memory. The bus architecture may also link together various other circuits, such as peripheral devices, voltage regulators, and power management circuits; these are well known in the art and are therefore not described further herein. The bus interface provides the interface.
An embodiment of the present application provides a display terminal, which may specifically be a desktop computer, a portable computer, a smartphone, a tablet computer, a personal digital assistant (PDA), or the like. The display terminal may include a central processing unit (CPU), a memory, input/output devices, etc.; the input devices may include a keyboard, a mouse, a touch screen, etc., and the output devices may include a display device such as a liquid crystal display (LCD) or a cathode ray tube (CRT).
For different display terminals, in some implementations, the user interface 620 may be an interface for connecting external or internal devices as needed; the connected devices include, but are not limited to, a keypad, a display, a speaker, a microphone, a joystick, and the like.
The processor is responsible for managing the bus architecture and for general processing, and the memory may store the data used by the processor when performing operations.
In some implementations, the processor may be a CPU (central processing unit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a CPLD (Complex Programmable Logic Device).
The memory may include read-only memory (ROM) and random access memory (RAM), and provides the processor with the program instructions and data stored in the memory. In the embodiments of the present application, the memory may be used to store the program of any of the methods provided by the embodiments of the present application.
By invoking the program instructions stored in the memory, the processor is configured to execute any of the methods provided by the embodiments of the present application in accordance with the obtained program instructions.
An embodiment of the present application provides a machine-readable non-volatile storage medium for storing the computer program instructions used by the apparatus provided by the embodiments of the present application above, which contains a program for executing any of the methods provided by the embodiments of the present application above.
The machine-readable non-volatile storage medium may be any available medium or data storage device accessible to a computer, including but not limited to magnetic storage (e.g., floppy disks, hard disks, magnetic tape, magneto-optical (MO) disks, etc.), optical storage (e.g., CD, DVD, BD, HVD, etc.), and semiconductor storage (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH), solid-state drives (SSD), etc.).
In summary, the video processing method and television provided by the embodiments of the present application implement video compositing across multiple user terminals, and through video compositing satisfy different users' needs for community-style interaction through different terminals.
A person skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, a person skilled in the art can make various changes and variations to the present application without departing from the spirit and scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to include these changes and variations.

Claims (13)

  1. A video processing method, the method comprising:
    receiving a first action video captured by a camera of a current television, and extracting a first user image from the first action video;
    receiving a second action video captured by a camera of another television, and extracting a second user image from the second action video, wherein the action associated with the second action video and the action associated with the first action video belong to a first series of actions;
    generating, from the first user image and the second user image, a composite video of the first series of actions, and outputting the composite video of the first series of actions to the current television, wherein the composite video of the first series of actions is synthesized in synchronization with the music corresponding to the first series of actions.
  2. The method according to claim 1, further comprising:
    sending the composite video of the first series of actions to the other television.
  3. The method according to claim 2, wherein the composite video of the first series of actions comprises a first composite video and a second composite video, and the first composite video is different from the second composite video.
  4. The method according to claim 3, wherein outputting the composite video of the first series of actions to the current television comprises:
    outputting the first composite video to the current television.
  5. The method according to claim 3, wherein sending the composite video of the first series of actions to the other television comprises:
    sending the second composite video to the other television.
  6. The method according to claim 3, wherein the positions of the users in the video images differ between the first composite video and the second composite video.
  7. The method according to claim 3, wherein the first user image is located in the central region of the images in the first composite video.
  8. The method according to claim 3, wherein the second user image is located in a region adjacent to the central region of the images in the first composite video.
  9. The method according to any one of claims 3 to 8, wherein the background images of the first composite video and the second composite video are different.
  10. The method according to claim 9, wherein the background image in the first composite video is determined according to an instruction from the first user.
  11. The method according to claim 1, wherein in any frame image of the composite video of the first series of actions, the frame position of the first user image in the first action video is ahead of the frame position of the second user image in the second action video.
  12. A television, comprising:
    a display screen, configured to display images;
    a memory, configured to store program instructions and data associated with the display screen;
    a processor, configured to execute the program instructions so that the television implements the method according to any one of claims 1 to 11.
  13. A machine-readable non-volatile storage medium storing computer-executable instructions which, when executed, implement the method according to any one of claims 1 to 11.
PCT/CN2019/096882 2018-12-20 2019-07-19 A video processing method and television WO2020125009A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811563673.2 2018-12-20
CN201811563673.2A CN109743625A (zh) 2018-12-20 2018-12-20 A video processing method and television set

Publications (1)

Publication Number Publication Date
WO2020125009A1 true WO2020125009A1 (zh) 2020-06-25

Family

ID=66360697

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096882 WO2020125009A1 (zh) 2018-12-20 2019-07-19 A video processing method and television

Country Status (2)

Country Link
CN (2) CN113115108A (zh)
WO (1) WO2020125009A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115108A (zh) 2018-12-20 2021-07-13 聚好看科技股份有限公司 A video processing method and computing device
CN110266968B (zh) * 2019-05-17 2022-01-25 小糖互联(北京)网络科技有限公司 A method and apparatus for producing a co-dance video
CN112423015B (zh) * 2020-11-20 2023-03-03 广州欢网科技有限责任公司 A cloud dancing method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1288635A (zh) * 1998-01-16 2001-03-21 洛桑联邦综合工科学校 Method and system for combining video sequences with spatio-temporal alignment
US7843510B1 (en) * 1998-01-16 2010-11-30 Ecole Polytechnique Federale De Lausanne Method and system for combining video sequences with spatio-temporal alignment
CN102158755A (zh) * 2010-09-02 2011-08-17 青岛海信传媒网络技术有限公司 Method for supporting karaoke on a set-top box, set-top box, server and system
CN104469441A (zh) * 2014-11-21 2015-03-25 天津思博科科技发展有限公司 A group-dance apparatus implemented with smart terminals and Internet technology
CN106162221A (zh) * 2015-03-23 2016-11-23 阿里巴巴集团控股有限公司 Method, apparatus and system for compositing live video
CN109743625A (zh) * 2018-12-20 2019-05-10 聚好看科技股份有限公司 A video processing method and television set

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014192565A (ja) * 2013-03-26 2014-10-06 Sony Corp Video processing apparatus, video processing method and computer program
CN106210599B (zh) * 2015-04-30 2021-02-12 中兴通讯股份有限公司 A multi-picture adjustment method, apparatus and multipoint control unit
CN106231368B (zh) * 2015-12-30 2019-03-26 深圳超多维科技有限公司 Prop presentation method for streamer-type interactive platforms, apparatus and client
CN106534954B (zh) * 2016-12-19 2019-11-22 广州虎牙信息科技有限公司 Information interaction method, apparatus and terminal device based on a live video stream
CN107682656B (zh) * 2017-09-11 2020-07-24 Oppo广东移动通信有限公司 Background image processing method, electronic device and computer-readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1288635A (zh) * 1998-01-16 2001-03-21 洛桑联邦综合工科学校 Method and system for combining video sequences with spatio-temporal alignment
US7843510B1 (en) * 1998-01-16 2010-11-30 Ecole Polytechnique Federale De Lausanne Method and system for combining video sequences with spatio-temporal alignment
CN102158755A (zh) * 2010-09-02 2011-08-17 青岛海信传媒网络技术有限公司 Method for supporting karaoke on a set-top box, set-top box, server and system
CN104469441A (zh) * 2014-11-21 2015-03-25 天津思博科科技发展有限公司 A group-dance apparatus implemented with smart terminals and Internet technology
CN106162221A (zh) * 2015-03-23 2016-11-23 阿里巴巴集团控股有限公司 Method, apparatus and system for compositing live video
CN109743625A (zh) * 2018-12-20 2019-05-10 聚好看科技股份有限公司 A video processing method and television set

Also Published As

Publication number Publication date
CN109743625A (zh) 2019-05-10
CN113115108A (zh) 2021-07-13

Similar Documents

Publication Publication Date Title
WO2020248640A1 (zh) A display device
US9485493B2 (en) Method and system for displaying multi-viewpoint images and non-transitory computer readable storage medium thereof
WO2020125009A1 (zh) A video processing method and television
TWI556639B (zh) Techniques for adding interactive features to videos
US20120069143A1 (en) Object tracking and highlighting in stereoscopic images
GB2590545A (en) Video photographing method and apparatus, electronic device and computer readable storage medium
JP2014215828A (ja) Image data reproduction device and viewpoint information generation device
WO2021254502A1 (zh) Target object display method, apparatus and electronic device
US20230360184A1 (en) Image processing method and apparatus, and electronic device and computer-readable storage medium
WO2022062903A1 (zh) Bullet-screen comment playback method, related device and storage medium
CN113064684B (zh) A virtual reality device and VR scene screenshot method
US10115431B2 (en) Image processing device and image processing method
CN113473207B (zh) Live streaming method, apparatus, storage medium and electronic device
US20240144976A1 (en) Video processing method, device, storage medium, and program product
WO2023104102A1 (zh) A live-stream comment display method, apparatus, device, program product and medium
KR20230049691A (ko) Video processing method, terminal and storage medium
TWI669958B Method for previewing video, processing device and computer system thereof
CN115061617A Method and apparatus for processing live-stream pictures, computer device and storage medium
CN113923498A A processing method and apparatus
JP2021168147A (ja) Video distribution system, video distribution method and video distribution program
US11921971B2 (en) Live broadcasting recording equipment, live broadcasting recording system, and live broadcasting recording method
WO2020248682A1 (zh) A display device and virtual scene generation method
JP2015136069A (ja) Video distribution system, video distribution method and video distribution program
WO2024088322A1 (zh) Video display method, apparatus, video display device and storage medium
US20240129575A1 (en) Live content presentation method and apparatus, electronic device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19900947

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18.10.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19900947

Country of ref document: EP

Kind code of ref document: A1