CN116828216A - Live video processing method and device, electronic equipment, medium and live video processing system - Google Patents


Info

Publication number
CN116828216A
CN116828216A (application CN202211454008.6A)
Authority
CN
China
Prior art keywords
image
live video
portrait
target
shared
Prior art date
Legal status
Pending
Application number
CN202211454008.6A
Other languages
Chinese (zh)
Inventor
周达婵
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202211454008.6A
Publication of CN116828216A
Legal status: Pending

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention belongs to the technical field of video processing and relates to a live video processing method and device, electronic equipment, a medium, and a live video processing system. The method comprises the following steps: acquiring a live video to be shared, captured by a device side in a live scene; sampling each frame of image from the live video to be shared; performing portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte; compositing the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image; assembling the composited scene-portrait images to obtain a target live video; and sending the target live video to a user side for playback. The method avoids dropped video frames, keeps the whole video smooth and transmitted in real time, and enhances the live video experience.

Description

Live video processing method and device, electronic equipment, medium and live video processing system
Technical Field
The invention belongs to the technical field of video processing and relates to a live video processing method and device, electronic equipment, a medium, and a live video processing system.
Background
Online social networking has broken through the limits of traditional in-person socializing, and the networking of interpersonal relationships is reflected in the popularity of social networking software. Online interaction has evolved from simple text chat into many richer forms, and live streaming is now one of the most important of these: through it, users can watch live content that interests them.
At present, referring to Fig. 7, during a live broadcast the streaming user side captures live video, loads video or material sources from local storage into a cache region of the system application layer, moves the material through multiple application-layer cache regions, applies a matting algorithm that removes the image background while preserving the portrait, and then outputs the processed video. However, because this processing is performed entirely in the application layer, the data path for matting the live video is excessively long: data is copied too many times and consumes too many resources, causing delay, dropped frames, and stuttering video, which degrades the live video experience.
Disclosure of Invention
To address the defects of the prior art, the invention provides a live video processing method and device, electronic equipment, a medium, and a live video system that avoid dropped video frames, keep the whole video smooth and transmitted in real time, and enhance the live video experience.
To achieve the above object, a first aspect of the present invention provides a live video processing method comprising the following steps:
acquiring a live video to be shared, captured by a device side in a live scene;
sampling each frame of image from the live video to be shared;
performing portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte;
compositing the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image;
assembling the composited scene-portrait images to obtain a target live video;
and sending the target live video to a user side for playback.
Further, compositing the target portrait matte with the preset target scene image at the system abstraction layer includes:
acquiring a material scene file image;
selecting a target scene image from the material scene file image to generate layers, the layers including a foreground layer and a background layer;
taking the target portrait matte as a portrait layer;
and placing the portrait layer between the foreground layer and the background layer for layer compositing to obtain a composited scene-portrait image.
Further, performing portrait matting on each frame of the live video to be shared at the system abstraction layer to obtain the target portrait matte includes:
inputting each frame of the live video to be shared into a pre-trained portrait processing model to obtain the overall portrait features and edge detail features of each frame; the portrait processing model is used to extract the overall portrait features and the edge detail features of each frame of the live video to be shared;
and obtaining the target portrait matte from the overall portrait features and the edge detail features.
Further, after acquiring the material scene file image, the method includes: decoding the acquired material scene file image and storing the decoded material scene file in a cache region of the application layer.
A second aspect of the present invention provides a live video processing device, including:
a first acquisition module for acquiring a live video to be shared, captured by a device side in a live scene;
a sampling module for sampling each frame of image from the live video to be shared;
a matting module for performing portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte;
a first composition module for compositing the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image;
a second composition module for assembling the composited scene-portrait images to obtain a target live video;
and a sending module for sending the target live video to a user side for playback.
Further, the first composition module includes:
a second acquisition module for acquiring the material scene file image;
a selection module for selecting a target scene image from the material scene file image to generate layers, the layers including a foreground layer and a background layer;
a determination module for taking the target portrait matte as a portrait layer;
and a first composition submodule for placing the portrait layer between the foreground layer and the background layer for layer compositing to obtain a composited scene-portrait image.
Further, the matting module includes:
an input module for inputting each frame of the live video to be shared into a pre-trained portrait processing model to obtain the overall portrait features and edge detail features of each frame; the portrait processing model is used to extract the overall portrait features and the edge detail features of each frame of the live video to be shared;
and an obtaining module for obtaining the target portrait matte from the overall portrait features and the edge detail features.
A third aspect of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the live video processing method when executing the computer program.
A fourth aspect of the present invention provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the live video processing method.
A fifth aspect of the present invention provides a live broadcast system, including a device side, a server, and a user side. The device side acquires a live video to be shared, captured in a live scene; samples each frame of image from the live video to be shared; performs portrait matting on each frame at a system abstraction layer to obtain a target portrait matte; composites the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image; assembles the composited scene-portrait images to obtain a target live video; and uploads the target live video to the server.
The server is used to send the target live video to the user side.
The user side is used to receive the target live video and play it.
The invention has the following beneficial effects:
A live video to be shared, captured by the device side in a live scene, is acquired; each frame of image is sampled from the live video to be shared; portrait matting is performed on each frame at a system abstraction layer to obtain a target portrait matte; the target portrait matte is composited with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image; the composited scene-portrait images are assembled to obtain a target live video; and the target live video is sent to the user side for playback. By moving the video processing from the application layer into the system abstraction layer, the invention avoids the video delay and dropped frames caused by copying data too many times and occupying too many resources when matting the live video, ensures that the whole video is smooth and transmitted in real time, and enhances the live video experience.
Drawings
Fig. 1 is a schematic diagram of the overall flow of the live video processing method of the present invention;
Fig. 2 is a schematic sub-flowchart of step S400 of the live video processing method of the present invention;
Fig. 3 is a schematic structural diagram of the live video processing device of the present invention;
Fig. 4 is a schematic structural diagram of the first composition module of the present invention;
Fig. 5 is a schematic diagram of a computer device of the present invention;
Fig. 6 is a path diagram of the implementation of the live broadcast method of the present invention;
Fig. 7 is a path diagram of the implementation of a prior-art live broadcast method.
Detailed Description
Embodiments of the present invention are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements, or elements having identical or similar functions, throughout. The embodiments described below with reference to the drawings are illustrative; they are intended to explain the present invention and should not be construed as limiting it.
In the description of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the embodiments of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly: for example, a connection may be fixed, removable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The method and system are applied to live broadcast scenes such as conference live broadcast, teaching live broadcast, and live e-commerce (shopping) broadcast. A live broadcast scene generally involves a streaming user side, a server, and a viewing user side. The streaming user side is the terminal used by the anchor user (e.g., a teacher, a conference lecturer, or a shopping anchor); the viewing user side is the terminal used by the user watching the live broadcast (e.g., a student or a meeting attendee). In terms of hardware, both are generally devices such as smartphones or tablet computers; the server is the server carrying the live broadcast service, and may be an independent server, a cluster server, or the like.
Because the video processing is performed in the system application layer, the data path for matting the live video is excessively long, data is moved too many times, and too many resources are occupied, causing delay, dropped frames, and stuttering video.
In view of this, referring to Figs. 1 and 6, a first aspect of the present invention provides a live video processing method comprising the following steps:
S100: acquiring a live video to be shared, captured by a device side in a live scene;
S200: sampling each frame of image from the live video to be shared;
S300: performing portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte;
S400: compositing the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image;
S500: assembling the composited scene-portrait images to obtain a target live video;
S600: sending the target live video to a user side for playback.
A live video to be shared, captured by the device side in a live scene, is acquired; each frame of image is sampled from the live video to be shared; portrait matting is performed on each frame at a system abstraction layer to obtain a target portrait matte; the target portrait matte is composited with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image; the composited scene-portrait images are assembled to obtain a target live video; and the target live video is sent to the user side for playback. Replacing the existing application-layer video processing with processing in the system abstraction layer avoids the video delay and dropped frames caused by copying data too many times and occupying too many resources during matting, ensures that the whole video is smooth and transmitted in real time, and enhances the live video experience. Moreover, during the live broadcast the target portrait matte can be composited with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image, and the composited scene-portrait images can be assembled into the target live video, so that the user can switch scenes during the broadcast, realizing multi-scene live streaming.
The system abstraction layer is an interface layer between the operating system kernel and the hardware circuitry, and its purpose is to abstract the hardware. It hides the hardware interface details of a specific platform and provides a virtual hardware platform to the operating system, making the operating system hardware-independent.
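As a purely illustrative sketch of this idea (every name here, including FrameSource, naive_matte, and the green-screen stand-in for the matting model, is an assumption rather than the patent's implementation, which would run in native code at the abstraction layer), the following Python processes a frame inside the frame-delivery hook itself, so it is never copied out to application-layer buffers and back:

```python
import numpy as np

def naive_matte(frame: np.ndarray) -> np.ndarray:
    """Crude green-screen matte: a placeholder for the CNN matting step."""
    g = frame[..., 1].astype(np.int16)
    rb = frame[..., [0, 2]].astype(np.int16).max(axis=-1)
    return (g - rb < 40).astype(np.float32)  # 1.0 = portrait, 0.0 = background

class FrameSource:
    """Simulates a capture path that hands each frame to a processing hook,
    the way an abstraction-layer implementation would, so the frame is
    processed where it is produced instead of travelling through several
    application-layer cache regions."""

    def __init__(self, hook):
        self.hook = hook

    def deliver(self, frame: np.ndarray) -> np.ndarray:
        return self.hook(frame)  # processed in place of being copied onward

BACKGROUND = np.zeros((720, 1280, 3), dtype=np.float32)  # preset scene image

def process(frame: np.ndarray) -> np.ndarray:
    alpha = naive_matte(frame)[..., None]                                    # S300: matting
    blended = alpha * frame.astype(np.float32) + (1.0 - alpha) * BACKGROUND  # S400: compositing
    return blended.astype(np.uint8)

source = FrameSource(process)
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
composited = source.deliver(frame)  # one hop from capture to composited output
```

The point of the sketch is the shape of the data path, not the matte quality: the fewer times a frame crosses the application-layer boundary, the less copying occurs and the lower the risk of delay and dropped frames.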
Referring to Fig. 2, in one embodiment, compositing the target portrait matte with the preset target scene image at the system abstraction layer includes:
acquiring a material scene file image;
It should be understood that the material scene file image is obtained from the host's material library. The material data in the host's material library includes files such as pictures, animations, and documents, so that the user can adjust the relevant material during the live broadcast as required. In this embodiment, the host may be an electronic device such as a dedicated live streaming machine, a mobile phone, a tablet computer, or a computer.
selecting a target scene image from the material scene file image to generate layers, the layers including a foreground layer and a background layer;
taking the target portrait matte as a portrait layer;
and placing the portrait layer between the foreground layer and the background layer for layer compositing to obtain a composited scene-portrait image. Specifically, the foreground layer is the layer in front of the portrait layer, and the background layer is the layer behind it.
In one embodiment, step S300 includes:
inputting each frame of the live video to be shared into a pre-trained portrait processing model to obtain the overall portrait features and edge detail features of each frame; the portrait processing model is used to extract the overall portrait features and the edge detail features of each frame of the live video to be shared;
and obtaining the target portrait matte from the overall portrait features and the edge detail features.
It should be understood that the portrait processing model is trained in advance and can be optimized and updated later. Sample data is collected beforehand and used to train an initial model; once the training conditions are met, the final trained portrait processing model is obtained. What matters in training is that the model acquires the ability to extract both the overall portrait features and the edge features, and makes its decisions based on these two dimensions of features.
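The patent does not disclose the training procedure beyond this description, but an offline training step of the kind sketched below would fit it (the data loader, the binary cross-entropy loss, and the Adam optimizer are all assumptions):

```python
import torch
import torch.nn as nn

def train_matting_model(model: nn.Module,
                        loader,             # yields (frame, ground_truth_matte) pairs
                        epochs: int = 10,
                        lr: float = 1e-3) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCELoss()  # per-pixel loss between predicted and true mattes
    model.train()
    for _ in range(epochs):
        for frame, true_matte in loader:
            pred = model(frame)             # predicted alpha matte in (0, 1)
            loss = bce(pred, true_matte)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```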
In one embodiment, when the portrait matting is performed on each frame of the live video to be shared at the system abstraction layer to obtain the target portrait matte, a convolutional neural network is used to perform the matting on each frame.
A convolutional neural network is a feedforward neural network with a deep structure that includes convolutional computation; it is one of the representative algorithms of deep learning and is widely applied in visual recognition, image processing, and the like.
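The network architecture is likewise undisclosed; the following PyTorch sketch shows one plausible shape for the kind of CNN described, with a downsampled branch for the overall portrait features and a full-resolution branch for the edge detail features, fused into a single alpha matte (all layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PortraitMatting(nn.Module):
    def __init__(self):
        super().__init__()
        # Coarse branch: downsampled features capture the overall portrait shape.
        self.coarse = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )
        # Detail branch: full-resolution features refine edges such as hair.
        self.detail = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
        self.fuse = nn.Conv2d(2, 1, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shape = self.coarse(x)   # overall portrait features
        edges = self.detail(x)   # edge detail features
        alpha = self.fuse(torch.cat([shape, edges], dim=1))
        return torch.sigmoid(alpha)  # per-pixel matte in (0, 1)

matte = PortraitMatting()(torch.rand(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```

A model of this shape returns a per-pixel matte that can be thresholded or used directly as the portrait layer's alpha channel.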
In one embodiment, after acquiring the live video to be shared captured by the device side in the live scene, the method further includes: decoding the acquired live video to be shared at the system application layer and storing the decoded video in a cache region of the application layer.
In one embodiment, after acquiring the material scene file image, the method includes: decoding the acquired material scene file image and storing the decoded material scene file in a cache region of the application layer.
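A decode-then-cache step of this kind might look like the following sketch, which assumes OpenCV for decoding; the dict-based cache, its capacity, and the example file path are illustrative stand-ins for the application-layer cache region:

```python
from collections import OrderedDict
import cv2  # opencv-python

class MaterialCache:
    """Keeps decoded material images so a file is decoded at most once."""

    def __init__(self, capacity: int = 32):
        self.capacity = capacity
        self._items: "OrderedDict[str, object]" = OrderedDict()

    def get(self, path: str):
        if path not in self._items:
            image = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # decode once
            if image is None:
                raise FileNotFoundError(path)
            if len(self._items) >= self.capacity:
                self._items.popitem(last=False)             # evict the oldest entry
            self._items[path] = image
        self._items.move_to_end(path)                       # mark as most recently used
        return self._items[path]

cache = MaterialCache()
# scene = cache.get("materials/classroom_background.png")  # hypothetical path
```

Caching the decoded pixels means a scene switch during the broadcast reuses the decoded material instead of decoding the file again.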
Referring to Fig. 3, a second aspect of the present invention provides a live video processing device, including:
a first acquisition module for acquiring a live video to be shared, captured by a device side in a live scene;
a sampling module for sampling each frame of image from the live video to be shared;
a matting module for performing portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte;
a first composition module for compositing the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image;
a second composition module for assembling the composited scene-portrait images to obtain a target live video;
and a sending module for sending the target live video to a user side for playback.
Through the combination of the first acquisition module, the sampling module, the matting module, the first composition module, the second composition module, and the sending module, the device avoids the video delay and dropped frames caused by copying data too many times and occupying too many resources when matting the live video, ensures that the whole video is smooth and transmitted in real time, and enhances the live video experience. During the live broadcast, the target portrait matte can be composited with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image, and the composited scene-portrait images can be assembled into the target live video, so that the user can switch scenes during the broadcast, realizing multi-scene live streaming.
Referring to Fig. 4, in one embodiment, the first composition module includes:
a second acquisition module for acquiring the material scene file image;
a selection module for selecting a target scene image from the material scene file image to generate layers, the layers including a foreground layer and a background layer;
a determination module for taking the target portrait matte as a portrait layer;
and a first composition submodule for placing the portrait layer between the foreground layer and the background layer for layer compositing to obtain a composited scene-portrait image.
In one embodiment, the matting module includes:
an input module for inputting each frame of the live video to be shared into a pre-trained portrait processing model to obtain the overall portrait features and edge detail features of each frame; the portrait processing model is used to extract the overall portrait features and the edge detail features of each frame of the live video to be shared;
and an obtaining module for obtaining the target portrait matte from the overall portrait features and the edge detail features.
Referring to Fig. 5, a third aspect of the present invention provides a computer device comprising a memory storing a computer program and a processor that implements the live video processing method when executing the computer program.
The memory may be used to store software programs and modules; the processor performs various functional applications and data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required for functions, and the like; the data storage area may store data created according to the use of the device, and so on. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The internal configuration of the computer device may include, but is not limited to, a processor, a network interface, and a memory, which may be connected by a bus or in other ways; in Fig. 5 of this specification, connection by a bus is taken as an example.
The processor (or CPU, central processing unit) is the computing and control core of the computer device. The network interface may optionally include a standard wired interface or a wireless interface (e.g., Wi-Fi or a mobile communication interface). The memory is the storage device of the computer device, used for storing programs and data. The memory here may be a high-speed RAM device or a non-volatile memory device such as at least one magnetic disk storage device; optionally, it may be at least one storage device located remotely from the processor. The memory provides a storage space that stores the operating system of the computer device, which may include, but is not limited to, Windows, Linux, and the like; the invention is not limited in this regard. The storage space also holds one or more instructions, which may be one or more computer programs (including program code), adapted to be loaded and executed by the processor. In the embodiments of the present disclosure, the processor loads and executes the one or more instructions stored in the memory to implement the live video processing method provided in the above method embodiments.
Embodiments of the present invention also provide a computer-readable storage medium that may be disposed in a live video processing terminal to store at least one instruction, at least one program, a code set, or an instruction set related to implementing the live video processing method of the method embodiments; the at least one instruction, program, code set, or instruction set may be loaded and executed by the processor of an electronic device to implement the live video processing method provided by the method embodiments.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
A fifth aspect of the present invention provides a live broadcast system, including a device side, a server, and a user side. The device side acquires a live video to be shared, captured in a live scene; samples each frame of image from the live video to be shared; performs portrait matting on each frame at a system abstraction layer to obtain a target portrait matte; composites the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image; assembles the composited scene-portrait images to obtain a target live video; and uploads the target live video to the server.
The server is used to send the target live video to the user side.
The user side is used to receive the target live video and play it.
The foregoing description covers only the preferred embodiments of the present disclosure and explains the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of the features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments in which the above features are interchanged with technical features having similar functions disclosed herein (but not limited thereto).
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims. The specific manner in which the various modules of the apparatus in the above embodiments perform operations has been described in detail in the method embodiments and will not be repeated here.

Claims (10)

1. A live video processing method, characterized by comprising the following steps:
acquiring a live video to be shared, captured by a device side in a live scene;
sampling each frame of image from the live video to be shared;
performing portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte;
compositing the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image;
assembling the composited scene-portrait images to obtain a target live video;
and sending the target live video to a user side for playback.
2. The live video processing method of claim 1, wherein compositing the target portrait matte with the preset target scene image at the system abstraction layer comprises:
acquiring a material scene file image;
selecting a target scene image from the material scene file image to generate layers, the layers including a foreground layer and a background layer;
taking the target portrait matte as a portrait layer;
and placing the portrait layer between the foreground layer and the background layer for layer compositing to obtain a composited scene-portrait image.
3. The live video processing method of claim 1, wherein performing portrait matting on each frame of the live video to be shared at the system abstraction layer to obtain the target portrait matte comprises:
inputting each frame of the live video to be shared into a pre-trained portrait processing model to obtain the overall portrait features and edge detail features of each frame, the portrait processing model being used to extract the overall portrait features and the edge detail features of each frame of the live video to be shared;
and obtaining the target portrait matte from the overall portrait features and the edge detail features.
4. The live video processing method of claim 1, comprising, after acquiring the material scene file image: decoding the acquired material scene file image and storing the decoded material scene file in a cache region of the application layer.
5. A live video processing device, characterized by comprising:
a first acquisition module for acquiring a live video to be shared, captured by a device side in a live scene;
a sampling module for sampling each frame of image from the live video to be shared;
a matting module for performing portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte;
a first composition module for compositing the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image;
a second composition module for assembling the composited scene-portrait images to obtain a target live video;
and a sending module for sending the target live video to a user side for playback.
6. The live video processing device of claim 5, wherein the first composition module comprises:
a second acquisition module for acquiring the material scene file image;
a selection module for selecting a target scene image from the material scene file image to generate layers, the layers including a foreground layer and a background layer;
a determination module for taking the target portrait matte as a portrait layer;
and a first composition submodule for placing the portrait layer between the foreground layer and the background layer for layer compositing to obtain a composited scene-portrait image.
7. The live video processing device of claim 5, wherein the matting module comprises:
an input module for inputting each frame of the live video to be shared into a pre-trained portrait processing model to obtain the overall portrait features and edge detail features of each frame, the portrait processing model being used to extract the overall portrait features and the edge detail features of each frame of the live video to be shared;
and an obtaining module for obtaining the target portrait matte from the overall portrait features and the edge detail features.
8. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the live video processing method of any one of claims 1-4 when executing the computer program.
9. A storage medium storing a computer program which, when executed by a processor, implements the steps of the live video processing method of any one of claims 1-4.
10. A live broadcast system, characterized by comprising a device side, a server, and a user side; wherein the device side acquires a live video to be shared, captured in a live scene; samples each frame of image from the live video to be shared; performs portrait matting on each frame of the live video to be shared at a system abstraction layer to obtain a target portrait matte; composites the target portrait matte with a preset target scene image at the system abstraction layer to obtain a composited scene-portrait image; assembles the composited scene-portrait images to obtain a target live video; and uploads the target live video to the server;
the server is used to send the target live video to the user side;
and the user side is used to receive the target live video and play it.
CN202211454008.6A 2022-11-21 2022-11-21 Live video processing method and device, electronic equipment, medium and live video processing system Pending CN116828216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211454008.6A CN116828216A (en) 2022-11-21 2022-11-21 Live video processing method and device, electronic equipment, medium and live video processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211454008.6A CN116828216A (en) 2022-11-21 2022-11-21 Live video processing method and device, electronic equipment, medium and live video processing system

Publications (1)

Publication Number Publication Date
CN116828216A 2023-09-29

Family

ID=88139799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211454008.6A Pending CN116828216A (en) 2022-11-21 2022-11-21 Live video processing method and device, electronic equipment, medium and live video processing system

Country Status (1)

Country Link
CN (1) CN116828216A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination