CN117596420A - Fusion live broadcast method, system, medium and electronic equipment based on artificial intelligence - Google Patents

Info

Publication number
CN117596420A
Authority
CN
China
Prior art keywords
video, live, digital, target, segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410074104.0A
Other languages
Chinese (zh)
Other versions
CN117596420B (en)
Inventor
成传伟
胡文佳
Current Assignee
Tuoshe Technology Group Co ltd
Jiangxi Tuoshi Intelligent Technology Co ltd
Original Assignee
Tuoshe Technology Group Co ltd
Jiangxi Tuoshi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tuoshe Technology Group Co ltd, Jiangxi Tuoshi Intelligent Technology Co ltd filed Critical Tuoshe Technology Group Co ltd
Priority to CN202410074104.0A priority Critical patent/CN117596420B/en
Priority claimed from CN202410074104.0A external-priority patent/CN117596420B/en
Publication of CN117596420A publication Critical patent/CN117596420A/en
Application granted granted Critical
Publication of CN117596420B publication Critical patent/CN117596420B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/47815 Electronic shopping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses an artificial intelligence based fusion live broadcast method, system, medium and electronic device, applied to a server in a digital human live broadcast system. The digital human live broadcast system comprises the server and a digital human live broadcast machine, and the digital human live broadcast machine comprises a camera. The method comprises the following steps: acquiring the live scenario corresponding to the digital human live video being broadcast at the current moment for a target store; acquiring an on-site video of the target store through the digital human live broadcast machine; determining, according to the live scenario, a target playing period for the on-site video within the digital human live video; fusing the on-site video into the video segment corresponding to the target playing period of the digital human live video to obtain a target digital human live video; and broadcasting based on the target digital human live video. The embodiments of the invention improve the authenticity of digital human live broadcasts.

Description

Fusion live broadcast method, system, medium and electronic equipment based on artificial intelligence
Technical Field
The invention relates to the field of live video applications in image communication, and in particular to an artificial intelligence based fusion live broadcast method, system, medium and electronic device.
Background
Current digital human live broadcasts usually play a pre-recorded video, so the broadcast cannot reflect the actual situation at the store site. The broadcast therefore feels insufficiently real, viewers are not immersed, and their desire to purchase is not stimulated. How to improve the authenticity of digital human live broadcasts has thus become an urgent problem to be solved.
Disclosure of Invention
The embodiments of the invention provide an artificial intelligence based fusion live broadcast method, system, medium and electronic device.
In a first aspect, an embodiment of the present invention provides an artificial intelligence based fusion live broadcast method, applied to a server in a digital human live broadcast system. The digital human live broadcast system comprises the server and a digital human live broadcast machine, the digital human live broadcast machine comprises a camera, and the method comprises the following steps:
acquiring the digital human live video being broadcast at the current moment for a target store, and the live scenario corresponding to the digital human live video;
acquiring an on-site video of the target store through the digital human live broadcast machine;
determining, according to the live scenario, a target playing period for the on-site video within the digital human live video;
fusing the on-site video into the video segment corresponding to the target playing period of the digital human live video to obtain a target digital human live video;
and broadcasting based on the target digital human live video.
In a second aspect, an embodiment of the present invention provides an artificial intelligence based fusion live broadcast system, applied to a server in a digital human live broadcast system. The digital human live broadcast system comprises the server and a digital human live broadcast machine, the digital human live broadcast machine comprises a camera, and the fusion live broadcast system comprises an acquisition unit, a determination unit, a fusion unit and a broadcast unit; wherein,
the acquisition unit is configured to acquire the digital human live video being broadcast at the current moment for a target store and the live scenario corresponding to the digital human live video, and to acquire an on-site video of the target store through the digital human live broadcast machine;
the determination unit is configured to determine, according to the live scenario, a target playing period for the on-site video within the digital human live video;
the fusion unit is configured to fuse the on-site video into the video segment corresponding to the target playing period of the digital human live video to obtain a target digital human live video;
and the broadcast unit is configured to broadcast based on the target digital human live video.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing one or more programs, wherein the one or more programs are configured to be executed by the processor and include instructions for performing the steps of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, where the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
According to the embodiments of the present application, the digital human live video being broadcast at the current moment for a target store and its corresponding live scenario are obtained; an on-site video of the target store is acquired through the digital human live broadcast machine; a target playing period for the on-site video within the digital human live video is determined according to the live scenario; the on-site video is fused into the video segment corresponding to the target playing period to obtain a target digital human live video; and broadcasting proceeds based on the target digital human live video. By collecting on-site video of the store, fusing it into the digital human live video, and broadcasting the fused video, the authenticity of the digital human live broadcast is improved.
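The overall flow above can be sketched as a toy model in which videos are lists of frames. All names here (fuse_broadcast, classify_scene, the "scene"/"period" keys) are illustrative assumptions for the example, not definitions from the patent.

```python
def fuse_broadcast(digital_video, scenario, site_video, classify_scene):
    """Splice the store's on-site video into the digital-human video at the
    playing period selected by the live scenario (toy frame-list model)."""
    # S203: the scene type of the on-site video selects a scenario segment,
    # whose position in the script gives the target playing period.
    scene = classify_scene(site_video)
    segment = next(s for s in scenario if s["scene"] == scene)
    start, end = segment["period"]
    # S204: replace the digital video's frames in [start, end) with frames
    # from the on-site video (a stand-in for the real fusion step).
    return digital_video[:start] + site_video[:end - start] + digital_video[end:]
```

A call with an 8-frame digital video and a scenario segment over frames 2–5 would swap exactly that span for on-site frames, leaving the total length unchanged.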
Drawings
In order to describe the embodiments of the present invention or the technical solutions in the background art more clearly, the drawings required by the embodiments of the present invention or the background art are described below.
Fig. 1 is a schematic architecture diagram of a digital live system according to an embodiment of the present application;
fig. 2 is a flowchart of an artificial intelligence based fusion live broadcast method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an artificial intelligence-based fusion live broadcast system according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the present application.
The terms "first", "second" and the like in the description, the claims and the drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, and may include other steps or elements not listed or inherent to it.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The electronic device described in the embodiments of the present application may include a smart phone (such as an Android phone, an iOS phone or a Windows Phone), a tablet computer, a palmtop computer, a notebook computer, a video matrix, a monitoring platform, a mobile internet device (MID), a wearable device, and the like; these examples are not exhaustive. The electronic device may also be a server, for example a cloud server.
Referring to fig. 1, fig. 1 is a schematic architecture diagram of a digital human live broadcast system 100 according to an embodiment of the present application. As shown in fig. 1, the digital human live broadcast system 100 includes a server 101 and a digital human live broadcast machine 102.
The digital human live video can be broadcast through the server 101 on each major live broadcast platform. The server 101 can communicate with the digital human live broadcast machine 102 and receive the on-site video that the machine 102 collects at a target store. The server 101 fuses the on-site video into the digital human live video to obtain a target digital human live video and broadcasts it, which improves the authenticity of the digital human live broadcast.
Referring to fig. 2, fig. 2 is a flowchart of an artificial intelligence based fusion live broadcast method provided in an embodiment of the present application. The method shown in fig. 2 may be applied to the server in the digital human live broadcast system shown in fig. 1, where the system comprises the server and a digital human live broadcast machine, the machine comprises a camera, and the method comprises the following steps:
s201, acquiring a digital person live video which is being live at the current moment of a target store, and a live script corresponding to the digital person live video.
In an embodiment of the present application, the target store may be one store or several stores.
In a specific embodiment, the digital human live video being broadcast at the current moment for the target store, together with its corresponding live scenario, is obtained. For example, staff of the target store may import them manually, or a third-party tool may monitor the store's live broadcast in real time and report the video currently being broadcast and its corresponding scenario.
S202, acquiring an on-site video of the target store through the digital human live broadcast machine.
In this embodiment of the present application, the digital human live broadcast machine may include a camera, a microphone and the like, and has a video acquisition function. It may also broadcast on site and interact with the audience, for example answering audience questions and receiving audience comments, and may analyze the acquired data, for example counting the number of viewers or analyzing viewer behavior.
In a specific embodiment, the on-site video of the target store is collected through the digital human live broadcast machine, which can shoot its surroundings with the camera to capture the on-site video of the target store.
S203, determining, according to the live scenario, a target playing period for the on-site video within the digital human live video.
In the embodiment of the present application, the target playing period is the period within the digital human live video during which the on-site video is to be played.
In a specific embodiment, the target playing period is determined from the live scenario: the content of the scenario reveals its main plot and which parts should use the on-site video, from which the target playing period is obtained.
Optionally, in step S203, determining the target playing period of the on-site video within the digital human live video according to the live scenario may include the following steps:
A1, performing scene type identification on the on-site video to obtain a target scene type;
A2, determining the live scenario segment corresponding to the target scene type within the live scenario;
A3, determining a reference playing period corresponding to the live scenario segment;
A4, intercepting the relevant video clip of the digital human live video according to the reference playing period;
A5, performing integrity detection on the relevant video clip to obtain an integrity detection result;
A6, determining the target playing period according to the integrity detection result.
In the embodiment of the present application, the scene types may include at least one of the following, without limitation: a customer entering the store, a customer making enquiries, a customer browsing goods, and so on. The integrity detection result indicates whether the relevant video clip is complete or incomplete.
In a specific embodiment, scene type identification is performed on the on-site video to obtain a target scene type. Artificial intelligence may be used for this: for example, a trained residual neural network (ResNet) can classify the on-site video; the video is fed into the network, and the target scene type is determined from the recognition result. Next, the live scenario segment corresponding to the target scene type is determined within the live scenario. Specifically, the content of the scenario is analyzed to understand the scenes and episodes it describes, the scenario segments related to the target scene type are marked, each marked segment is matched against the target scene type, and the best-matching segment is selected as the live scenario segment. Finally, a reference playing period corresponding to the live scenario segment is determined: the scenario can be decomposed into segments, each with its own theme and goal; the position of the live scenario segment within the scenario is determined, and from that position its playing period within the digital human live video is predicted, yielding the reference playing period.
Further, the relevant video clip of the digital human live video is intercepted according to the reference playing period, integrity detection is performed on it, and the target playing period is determined from the result. Specifically, a video editing tool can capture the clip corresponding to the reference playing period in the digital human live video; the clip is then checked for completeness, either manually or with a video analysis tool, to obtain the integrity detection result, from which the target playing period is determined.
In this way, scene type identification on the on-site video yields the target scene type; the corresponding live scenario segment and its reference playing period are determined; the relevant video clip is intercepted and checked for integrity; and the target playing period is determined from the result. Automatic scene type identification and scenario segment matching greatly reduce the time spent manually screening and editing video, improving the efficiency of digital human live broadcasting. In addition, selecting the scenario segment according to the target scene type better attracts and holds the audience's attention, improving the viewing experience.
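Steps A1–A3 can be sketched as follows, assuming each scenario segment carries a set of scene-type tags and a cumulative frame range; the field names ("scenes", "frames") and the most-specific-tag heuristic are assumptions for illustration, not the patent's method.

```python
def reference_period(scenario_segments, target_scene, fps=25):
    """Return (start_s, end_s) for the scenario segment best matching the
    recognized target scene type, converting its frame range to seconds."""
    candidates = [s for s in scenario_segments if target_scene in s["scenes"]]
    if not candidates:
        return None
    # heuristic: prefer the segment whose tag set is most specific to the scene
    best = min(candidates, key=lambda s: len(s["scenes"]))
    start_frame, end_frame = best["frames"]
    return (start_frame / fps, end_frame / fps)
```

With two candidate segments tagged "enter", the one tagged only "enter" wins over one tagged both "enter" and "browse".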
Optionally, in step A6, determining the target playing period according to the integrity detection result may include the following steps:
B1, when the integrity detection result indicates that the relevant video clip is complete, taking the reference playing period as the target playing period;
B2, when the integrity detection result indicates that the relevant video clip is incomplete, intercepting the head video segment and the tail video segment of the relevant video clip, leaving an intermediate video segment;
B3, tracing according to the head video segment and the tail video segment respectively to obtain the complete head video segment corresponding to the head video segment and the complete tail video segment corresponding to the tail video segment;
B4, splicing the complete head video segment, the intermediate video segment and the complete tail video segment to obtain the complete video clip corresponding to the relevant video clip;
B5, taking the playing period corresponding to the complete video clip as the target playing period.
In the embodiment of the present application, when the integrity detection result indicates that the relevant video clip is complete, the reference playing period is used directly as the target playing period. When the clip is incomplete, a preset interception duration can be set in advance; the beginning and end of the relevant video clip are cut according to this duration to obtain the head video segment and the tail video segment, and what remains is the intermediate video segment. Tracing is then performed from the head and tail segments respectively. For example, a preset tracing duration can be set, which may be greater than half the length of a complete segment: the digital human live video is traced back before the head video segment by this duration to obtain the complete head video segment, and similarly traced forward after the tail video segment to obtain the complete tail video segment.
Further, the complete head video segment, the intermediate video segment and the complete tail video segment are spliced to obtain the complete video clip, whose playing period gives the target playing period. Specifically, the three segments are imported into a video editing tool in order and spliced according to their time sequence into the complete video clip; the playing period of this clip within the digital human live video is the target playing period.
In this way, when the relevant video clip is complete, the reference playing period serves as the target playing period; when it is incomplete, the head and tail segments are cut off, traced to their complete counterparts, and spliced with the intermediate segment into a complete video clip whose playing period serves as the target playing period. Integrity detection ensures the completeness of the relevant video clip and avoids the degraded viewing experience an incomplete clip would cause; tracing the complete head and tail segments yields a high-quality clip and improves the overall video quality.
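The cut-and-trace arithmetic of steps B2–B5 can be illustrated with second-based spans. The parameter names (cut_len for the preset interception duration, trace_len for the preset tracing duration) are assumptions for this sketch.

```python
def complete_clip_spans(ref_start, ref_end, cut_len, trace_len, video_len):
    """From an incomplete clip over [ref_start, ref_end], derive the target
    playing period: cut head/tail slices, trace back/forward, then splice."""
    head = (ref_start, ref_start + cut_len)              # B2: head slice
    tail = (ref_end - cut_len, ref_end)                  # B2: tail slice
    middle = (head[1], tail[0])                          # B2: what remains
    full_head = (max(0.0, head[0] - trace_len), head[1])          # B3: back
    full_tail = (tail[0], min(video_len, tail[1] + trace_len))    # B3: forward
    # B4/B5: splicing full_head + middle + full_tail in time order yields a
    # contiguous span whose playing period is the target playing period.
    return (full_head[0], full_tail[1]), middle
```

For a clip at 10–20 s in a 60 s video, with a 2 s cut and a 3 s trace, the spliced span runs 7–23 s.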
S204, fusing the on-site video into the video segment corresponding to the target playing period of the digital human live video to obtain a target digital human live video.
In the embodiment of the present application, a video editing tool can extract the video segment of the digital human live video corresponding to the target playing period, and the on-site video is then fused into this segment to obtain the target digital human live video.
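Extracting the segment for a playing period reduces to slicing frames by time; a minimal sketch, treating the video as a frame list and assuming a fixed frame rate:

```python
def clip_for_period(frames, period, fps=25):
    """Slice the digital-human video's frame list to the target playing
    period (start_s, end_s); a stand-in for a video editing tool's cut."""
    start_s, end_s = period
    return frames[int(start_s * fps):int(end_s * fps)]
```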
Optionally, in step S204, fusing the on-site video into the video segment corresponding to the target playing period of the digital human live video to obtain the target digital human live video may include the following steps:
C1, intercepting the video segment corresponding to the target playing period of the digital human live video;
C2, extracting the background video and the foreground video of the video segment;
C3, replacing the background video with the on-site video to obtain a target background video;
C4, fusing the target background video and the foreground video to obtain a fused video segment;
C5, replacing the video segment corresponding to the target playing period in the digital human live video with the fused video segment to obtain the target digital human live video.
In the embodiment of the present application, the background video is the part of a video that serves as the main scene or environment, and the foreground video is the part that serves as the main focus or subject.
In a specific embodiment, the video segment corresponding to the target playing period of the digital human live video is intercepted with a video editing tool. The segment is then imported into an image processing tool, the background and foreground parts are selected separately, and the tool separates the segment into its background video and foreground video. The on-site video is then imported to replace the background video: using the tool's move function, the on-site video is moved into the background position, and its size and position are adjusted until it fully matches the background, yielding the target background video. Finally, the image processing tool composites the target background video and the foreground video together to obtain the fused video segment.
Further, the video segment corresponding to the target playing period in the digital human live video is replaced with the fused video segment to obtain the target digital human live video. Specifically, video editing software can substitute the fused segment into the target playing period, covering the original content; the length and position of the fused segment can be adjusted to match the target playing period exactly, finally yielding the target digital human live video.
In this way, the video segment corresponding to the target playing period is intercepted; its background and foreground videos are extracted; the background is replaced with the on-site video to obtain the target background video; the target background and foreground are fused into a fused video segment; and the fused segment replaces the original segment to obtain the target digital human live video. Using the on-site video as the background lets viewers feel a stronger live atmosphere and strengthens interaction with the digital host. Moreover, the interception, extraction and replacement steps are completed automatically, reducing the time and cost of manual operation, which is a significant advantage for large-scale digital human live programs.
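The background-replacement idea of C2–C4 can be sketched per pixel with a hard foreground mask; a real system would use matting or segmentation rather than a binary mask, and the grid representation here is purely illustrative.

```python
def replace_background(foreground, mask, site_frame):
    """Keep foreground pixels (mask truthy, e.g. the digital human) and fill
    the rest from the store's on-site frame; frames are 2-D pixel grids."""
    return [
        [f if m else s for f, m, s in zip(f_row, m_row, s_row)]
        for f_row, m_row, s_row in zip(foreground, mask, site_frame)
    ]
```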
Optionally, in step C4, the fusing the target background video and the foreground video to obtain a fused video segment includes:
d1, fusing the target background video and the foreground video to obtain a reference video segment;
d2, acquiring a historical live broadcast record of the target store corresponding to the target playing period to obtain a plurality of historical live broadcast records;
d3, determining the average number of online viewers corresponding to each historical live record in the plurality of historical live records to obtain a plurality of average online viewer counts;

d4, selecting the top specified number of average online viewer counts among the plurality of average online viewer counts to obtain at least one average online viewer count;

d5, acquiring the historical live video corresponding to each of the at least one average online viewer count to obtain at least one historical live video;
d6, obtaining video playing parameters of each historical live video in the at least one historical live video to obtain at least one video playing parameter;
and D7, optimizing the reference video segment according to the at least one video playing parameter to obtain the fusion video segment.
In this embodiment of the present application, the video playing parameters may include at least one of the following: playing sharpness, playing filter, playing duration, playing volume, playing timbre, and the like, which is not limited herein; the specified number may be a system default or set by the user.
In a specific embodiment, a video editing tool is used to fuse the target background video and the foreground video to obtain a reference video segment; then all historical live broadcast records of the target store are acquired from the live broadcast database of the target store, and the plurality of historical live broadcast records corresponding to the target playing period are found among them.

Then, the average number of online viewers corresponding to each historical live record may be determined from the plurality of historical live records, giving a plurality of average online viewer counts, and the top specified number of average online viewer counts are selected among them to obtain at least one average online viewer count. For example, if the specified number is 3, the three largest average online viewer counts may be selected. The historical live video corresponding to each of the at least one average online viewer count is then acquired: specifically, the historical live record corresponding to each selected count is determined, and the corresponding historical live video is obtained from that record, yielding at least one historical live video.

Further, the video playing parameters of each historical live video in the at least one historical live video are obtained, giving at least one video playing parameter; for example, the video playing parameter may be a playing filter, and the video playing parameters of each historical live video may be looked up from its historical live record. The reference video segment is then optimized according to the at least one video playing parameter to obtain the fused video segment. For example, the best historical live video, that is, the one with the largest average number of online viewers among the at least one historical live video, may be determined first, and the reference video segment may be optimized according to the video playing parameters of that best historical live video; specifically, the video playing parameters of the reference video segment may be adjusted to those of the best historical live video to obtain the fused video segment.

In this way, a reference video segment is obtained by fusing the target background video and the foreground video; the historical live broadcast records of the target store corresponding to the target playing period are acquired, giving a plurality of historical live broadcast records and a plurality of corresponding average online viewer counts; the top specified number of average online viewer counts are selected to obtain at least one average online viewer count; at least one historical live video corresponding to the at least one average online viewer count is acquired; at least one video playing parameter corresponding to the at least one historical live video is obtained; and the reference video segment is optimized according to the at least one video playing parameter to obtain the fused video segment. Optimizing the reference video segment makes the fused video segment more attractive, improving the viewing experience of the audience.
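Steps d2 through d7 amount to: rank the historical live records for this playing period by average number of online viewers, keep the top N, and copy the playback parameters of the best one onto the reference segment. A hedged sketch follows; the record fields (`avg_online`, `play_params`) and the dict representation of a segment are illustrative assumptions, not from the original:

```python
def pick_best_play_params(history_records, top_n=3):
    """history_records: list of dicts like
       {"avg_online": 120, "play_params": {"filter": "warm", "volume": 0.8}}.
    Returns the play parameters of the record with the highest average
    number of online viewers among the top_n records, or None if empty."""
    if not history_records:
        return None
    ranked = sorted(history_records, key=lambda r: r["avg_online"], reverse=True)
    return ranked[:top_n][0]["play_params"]


def optimize_segment(reference_segment, play_params):
    """Apply the chosen playback parameters to the reference video segment.
    Here the segment is a dict of attributes; a real system would re-encode
    the video with the new filter, volume, and so on."""
    fused = dict(reference_segment)
    if play_params:
        fused.update(play_params)
    return fused
```

Note that since the best record is the global maximum, the top-N cut only matters when the optimization is later extended to blend parameters from several records.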
S205, live broadcasting is carried out based on the target digital person live broadcasting video.
In the embodiment of the application, the target store can broadcast the target digital person live video online on major live broadcast platforms while the digital human live broadcast machine broadcasts it offline in the target store; broadcasting online and offline simultaneously attracts the attention of customers.
Optionally, in step S205, when the target store is a chain store, the live broadcasting based on the target digital person live broadcasting video may include the following steps:
E1, determining all chain stores corresponding to the target store to obtain a plurality of stores, wherein each store corresponds to a digital human live broadcast machine;
and E2, controlling each store in the plurality of stores to live broadcast the target digital person live video with the corresponding digital human live broadcast machine.
In the embodiment of the application, all chain stores corresponding to the target store are determined to obtain a plurality of stores, each store corresponding to a digital human live broadcast machine, and each store in the plurality of stores is controlled to live broadcast the target digital person live video with its corresponding digital human live broadcast machine. Specifically, all chain stores corresponding to the target store may be determined through the database of the target store or through its store manager, thereby obtaining the plurality of stores, each provided with a digital human live broadcast machine; the digital human live broadcast machine in each of the plurality of stores may then be controlled to broadcast the target digital person live video offline.

Thus, a plurality of stores are obtained by determining all chain stores corresponding to the target store, each store corresponding to a digital human live broadcast machine, and each store is controlled to live broadcast the target digital person live video with its corresponding machine. By carrying out digital human live broadcast in a plurality of stores simultaneously, brand influence and awareness can be expanded and more potential customers attracted; in addition, the digital human live broadcast data of the plurality of stores can be analyzed to better understand the demands and preferences of consumers, providing a basis for subsequent product optimization and marketing strategies.
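Steps E1 and E2 are a fan-out: look up every chain store of the target store and push the same target video to each store's digital human live broadcast machine. A sketch with an illustrative, hypothetical machine interface (the real device API is not specified in the original):

```python
class LiveMachine:
    """Stand-in for a store's digital human live broadcast machine."""

    def __init__(self, store_id):
        self.store_id = store_id
        self.now_playing = None

    def play(self, video):
        # In practice this would stream the video to the in-store device.
        self.now_playing = video


def broadcast_to_chain(chain_stores, target_video):
    """chain_stores: mapping of store id -> LiveMachine (step E1's lookup result).
    Pushes the target digital person live video to every store's machine (E2)
    and returns the ids of the stores now broadcasting it."""
    for machine in chain_stores.values():
        machine.play(target_video)
    return [m.store_id for m in chain_stores.values() if m.now_playing == target_video]
```

A production version would add per-store error handling, since one offline machine should not stop the rest of the chain from broadcasting.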
Optionally, in step S205, the method may further include the following steps:
f1, acquiring the reference field video shot by the digital human live broadcast machine of each store in the plurality of stores to obtain a plurality of reference field videos;
f2, analyzing the people flow in each reference field video in the plurality of reference field videos to obtain a plurality of people flows;
f3, selecting the maximum value of the plurality of people flow rates;
and F4, taking the store corresponding to the maximum value as the target store.
In the embodiment of the application, a reference field video shot by a digital human live broadcast machine of each store in a plurality of stores is obtained, and a plurality of reference field videos are obtained; specifically, the digital live broadcasting machine of each store in the plurality of stores can be used for shooting and recording the store to obtain a plurality of reference live videos, and then the plurality of reference live videos can be uploaded to a server through the network for subsequent processing.
Then, the people flow in each reference field video in the plurality of reference field videos is analyzed to obtain a plurality of people flows; the maximum value among the plurality of people flows is selected; and the store corresponding to the maximum value is taken as the target store. Specifically, the people flow in a reference field video may be obtained through manual analysis, or pedestrians in the video may be detected and counted by a trained convolutional neural network (CNN) model, such as a faster region-based convolutional neural network (Faster R-CNN) or a region-based fully convolutional network (R-FCN). Analyzing the people flow in each reference field video in this way yields the plurality of people flows, from which the maximum value is selected, and the store corresponding to the maximum value is set as the target store.

In this way, a plurality of reference field videos are obtained by acquiring the reference field video shot by the digital human live broadcast machine of each store in the plurality of stores; the people flow in each reference field video is analyzed to obtain a plurality of people flows; the maximum value among the plurality of people flows is selected; and the store corresponding to the maximum value is taken as the target store. Selecting the store with the maximum people flow as the target store attracts more customers and improves exposure; in addition, by analyzing the people flow of different stores, the more popular areas can be found, so that the store layout can be optimized and customer satisfaction improved.
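Steps F1 through F4 reduce to counting people per store video and taking the argmax. In practice the per-frame person counts would come from a detector such as Faster R-CNN, as the passage above notes; here the counts are taken as given, and the peak count is used as the people-flow measure (one simple choice among several, assumed for illustration):

```python
def people_flow(frame_counts):
    """Aggregate per-frame person counts for one reference field video.
    Uses the peak count as the store's people flow; an average or a
    line-crossing count would also be reasonable."""
    return max(frame_counts, default=0)


def pick_target_store(store_videos):
    """store_videos: mapping of store id -> list of per-frame person counts
    from that store's reference field video (steps F1-F2).
    Returns the store with the maximum people flow (steps F3-F4)."""
    return max(store_videos, key=lambda sid: people_flow(store_videos[sid]))
```

The choice of aggregation matters: a peak count rewards brief crowds, while an average rewards sustained traffic, so the measure should match the marketing goal.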
According to the embodiment of the application, the digital person live video being broadcast at the current moment by the target store and the live scenario corresponding to the digital person live video are acquired; the field video of the target store is acquired through the digital human live broadcast machine; the target playing period of the field video in the digital person live video is determined according to the live scenario; the field video is fused into the video segment corresponding to the target playing period of the digital person live video to obtain the target digital person live video; and live broadcast is carried out based on the target digital person live video. By collecting the field video of the store, fusing it into the digital person live video, and broadcasting the fused video, the realism of the digital person live broadcast is improved.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an artificial intelligence-based fusion live broadcast system 300 according to an embodiment of the present application. The artificial intelligence-based fusion live broadcast system 300 shown in fig. 3 can be applied to the server in the digital live broadcast system shown in fig. 1, where the digital live broadcast system includes the server and a digital human live broadcast machine, and the digital human live broadcast machine includes a camera. The artificial intelligence-based fusion live broadcast system 300 includes: an acquisition unit 301, a determination unit 302, a fusion unit 303, and a live broadcast unit 304; wherein,
the acquiring unit 301 is configured to acquire the digital person live video being broadcast at the current moment by the target store and the live scenario corresponding to the digital person live video, and to acquire the field video of the target store through the digital human live broadcast machine;
the determining unit 302 is configured to determine, according to the live scenario, a target playing period of the live video in the digital live video;
the fusing unit 303 is configured to fuse the live video to a video segment corresponding to the target playing period of the digital live video, so as to obtain a target digital live video;
The live broadcast unit 304 is configured to perform live broadcast based on the target digital person live broadcast video.
Optionally, in the aspect of determining, according to the live scenario, a target playing period of the live video in the digital live video, the determining unit 302 is specifically configured to:
performing scene type identification on the field video to obtain a target scene type;
determining a live scenario segment corresponding to the target scene type in the live scenario;
determining a reference playing period corresponding to the live scenario segment;
intercepting relevant video clips of the digital human live video according to the reference playing period;
carrying out integrity detection on the related video segments to obtain an integrity detection result;
and determining the target playing period according to the integrity detection result.
Optionally, in the aspect of determining the target playing period according to the integrity detection result, the determining unit 302 is further specifically configured to:
when the integrity detection result is that the relevant video clip is determined to be complete, the reference playing period is taken as the target playing period;
when the integrity detection result is that the related video segment is determined to be incomplete, cutting off a head video segment and a tail video segment of the related video segment to obtain an intermediate video segment;
Tracing is carried out according to the head video segment and the tail video segment respectively, so that a complete head video segment corresponding to the head video segment and a complete tail video segment corresponding to the tail video segment are obtained;
splicing the complete head video segment, the middle video segment and the complete tail video segment to obtain a complete video segment of the relevant video segment;
and obtaining the playing time period corresponding to the complete video clip to obtain the target playing time period.
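The incomplete-segment repair described above can be sketched as widening the playing period: when the intercepted clip cuts into a sentence or action at either end, tracing finds the boundaries of the complete head and complete tail, and the complete clip is the video between those boundaries, whose playing period becomes the target playing period. A minimal sketch over a frame list; the traced boundaries are assumed to be supplied by a lookup the original does not detail:

```python
def repair_clip(full_video, start, end, head_start, tail_end):
    """full_video: frame list of the digital human live video.
    [start, end): the reference playing period whose clip was incomplete.
    head_start / tail_end: traced boundaries of the complete head and tail
    segments. Returns the complete clip and its target playing period."""
    if not (0 <= head_start <= start <= end <= tail_end <= len(full_video)):
        raise ValueError("traced boundaries are inconsistent with the period")
    complete_clip = full_video[head_start:tail_end]
    return complete_clip, (head_start, tail_end)
```

Splicing complete head, middle, and complete tail is equivalent to the single slice above whenever the three segments are contiguous in the source video.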
Optionally, when the target store is a chain store, the live broadcast unit 304 is specifically configured to:
determining all chain stores corresponding to the target store to obtain a plurality of stores, wherein each store corresponds to a digital human live broadcast machine;
and controlling each store in the plurality of stores to live broadcast the target digital person live video with the corresponding digital human live broadcast machine.
Optionally, the artificial intelligence based fusion live broadcast system 300 is specifically configured to:
acquiring the reference field video shot by the digital human live broadcast machine of each store in the plurality of stores to obtain a plurality of reference field videos;
Analyzing the people flow in each reference field video in the plurality of reference field videos to obtain a plurality of people flows;
selecting the maximum value of the plurality of people flow rates;
and taking the store corresponding to the maximum value as the target store.
Optionally, in the aspect of fusing the live video to a video segment corresponding to the target playing period of the digital live video to obtain a target digital live video, the fusing unit 303 is specifically configured to:
intercepting a video clip corresponding to the target playing period of the digital live video;
extracting background videos and foreground videos of the video clips;
replacing the background video with the field video to obtain a target background video;
fusing the target background video and the foreground video to obtain a fused video segment;
and replacing the video segment corresponding to the target playing period in the digital live video according to the fusion video segment to obtain the target digital live video.
Optionally, in the aspect of fusing the target background video and the foreground video to obtain a fused video segment, the fusing unit 303 is further specifically configured to:
Fusing the target background video and the foreground video to obtain a reference video segment;
acquiring a historical live broadcast record of the target store corresponding to the target playing period to obtain a plurality of historical live broadcast records;
determining the average number of online viewers corresponding to each historical live record in the plurality of historical live records to obtain a plurality of average online viewer counts;

selecting the top specified number of average online viewer counts among the plurality of average online viewer counts to obtain at least one average online viewer count;

acquiring the historical live video corresponding to each of the at least one average online viewer count to obtain at least one historical live video;
acquiring video playing parameters of each historical live video in the at least one historical live video to obtain at least one video playing parameter;
and optimizing the reference video segment according to the at least one video playing parameter to obtain the fusion video segment.
In a specific implementation, the artificial intelligence-based fusion live broadcast system 300 described in the embodiment of the present application may also perform the other embodiments described in the artificial intelligence-based fusion live broadcast method provided in the embodiments of the present application, which will not be repeated here.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device includes a processor, a memory, and one or more programs, and may further include a communication interface, where the processor, the memory, and the communication interface are connected to each other through a bus. The electronic device is applied to the server in a digital live broadcast system, where the digital live broadcast system includes the server and a digital human live broadcast machine, and the digital human live broadcast machine includes a camera. The one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the following steps:
acquiring the digital person live video being broadcast at the current moment by a target store, and the live scenario corresponding to the digital person live video;
acquiring the field video of the target store through the digital human live broadcast machine;
determining a target playing period of the live video in the digital live video according to the live scenario;
fusing the live video into a video segment corresponding to the target playing period of the digital live video to obtain a target digital live video;
And live broadcasting is carried out based on the target digital person live broadcasting video.
Optionally, in the determining, according to the live scenario, a target playing period of the live video in the digital live video, the program further includes instructions for:
performing scene type identification on the field video to obtain a target scene type;
determining a live scenario segment corresponding to the target scene type in the live scenario;
determining a reference playing period corresponding to the live scenario segment;
intercepting relevant video clips of the digital human live video according to the reference playing period;
carrying out integrity detection on the related video segments to obtain an integrity detection result;
and determining the target playing period according to the integrity detection result.
Optionally, in the aspect of determining the target playing period according to the integrity detection result, the program further includes instructions for:
when the integrity detection result is that the relevant video clip is determined to be complete, the reference playing period is taken as the target playing period;
when the integrity detection result is that the related video segment is determined to be incomplete, cutting off a head video segment and a tail video segment of the related video segment to obtain an intermediate video segment;
Tracing is carried out according to the head video segment and the tail video segment respectively, so that a complete head video segment corresponding to the head video segment and a complete tail video segment corresponding to the tail video segment are obtained;
splicing the complete head video segment, the middle video segment and the complete tail video segment to obtain a complete video segment of the relevant video segment;
and obtaining the playing time period corresponding to the complete video clip to obtain the target playing time period.
Optionally, when the target store is a chain store, in the aspect of performing live broadcast based on the target digital person live video, the program further includes instructions for performing the following steps:
determining all chain stores corresponding to the target store to obtain a plurality of stores, wherein each store corresponds to a digital human live broadcast machine;
and controlling each store in the plurality of stores to live broadcast the target digital person live video with the corresponding digital human live broadcast machine.
Optionally, the above program further comprises instructions for performing the steps of:
acquiring the reference field video shot by the digital human live broadcast machine of each store in the plurality of stores to obtain a plurality of reference field videos;
Analyzing the people flow in each reference field video in the plurality of reference field videos to obtain a plurality of people flows;
selecting the maximum value of the plurality of people flow rates;
and taking the store corresponding to the maximum value as the target store.
Optionally, in the aspect of fusing the live video to a video segment corresponding to the target playing period of the digital live video to obtain a target digital live video, the program further includes instructions for executing the following steps:
intercepting a video clip corresponding to the target playing period of the digital live video;
extracting background videos and foreground videos of the video clips;
replacing the background video with the field video to obtain a target background video;
fusing the target background video and the foreground video to obtain a fused video segment;
and replacing the video segment corresponding to the target playing period in the digital live video according to the fusion video segment to obtain the target digital live video.
Optionally, in the aspect of fusing the target background video and the foreground video to obtain a fused video segment, the program further includes instructions for executing the following steps:
Fusing the target background video and the foreground video to obtain a reference video segment;
acquiring a historical live broadcast record of the target store corresponding to the target playing period to obtain a plurality of historical live broadcast records;
determining the average number of online viewers corresponding to each historical live record in the plurality of historical live records to obtain a plurality of average online viewer counts;

selecting the top specified number of average online viewer counts among the plurality of average online viewer counts to obtain at least one average online viewer count;

acquiring the historical live video corresponding to each of the at least one average online viewer count to obtain at least one historical live video;
acquiring video playing parameters of each historical live video in the at least one historical live video to obtain at least one video playing parameter;
and optimizing the reference video segment according to the at least one video playing parameter to obtain the fusion video segment.
The embodiment of the application also provides a computer storage medium, where the computer storage medium stores a computer program, and the computer program causes a computer to execute part or all of the steps of any one of the methods described in the method embodiments above, where the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of combinations of actions, but those skilled in the art should understand that the present application is not limited by the order of the actions described, as some steps may be performed in another order or simultaneously according to the present application. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The embodiments of the present application have been described in detail above, and specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may make modifications to the specific implementations and the application scope according to the ideas of the present application; in view of the above, the content of this description should not be construed as limiting the present application.

Claims (10)

1. The fusion live broadcast method based on artificial intelligence is characterized by being applied to a server in a digital live broadcast system, wherein the digital live broadcast system comprises: the server and a digital human live broadcast machine, the digital human live broadcast machine comprising a camera, and the method comprises the following steps:

acquiring a digital person live video being broadcast at the current moment by a target store, and a live scenario corresponding to the digital person live video;

acquiring the field video of the target store through the digital human live broadcast machine;
determining a target playing period of the live video in the digital live video according to the live scenario;
fusing the live video into a video segment corresponding to the target playing period of the digital live video to obtain a target digital live video;
And live broadcasting is carried out based on the target digital person live broadcasting video.
2. The method of claim 1, wherein the determining a target playing period of the live video in the digital human live video from the live scenario comprises:
performing scene type identification on the field video to obtain a target scene type;
determining a live scenario segment corresponding to the target scene type in the live scenario;
determining a reference playing period corresponding to the live scenario segment;
intercepting relevant video clips of the digital human live video according to the reference playing period;
carrying out integrity detection on the related video segments to obtain an integrity detection result;
and determining the target playing period according to the integrity detection result.
3. The method of claim 2, wherein the determining the target playback period based on the integrity check result comprises:
when the integrity detection result is that the relevant video clip is determined to be complete, the reference playing period is taken as the target playing period;
when the integrity detection result is that the related video segment is determined to be incomplete, cutting off a head video segment and a tail video segment of the related video segment to obtain an intermediate video segment;
Tracing is carried out according to the head video segment and the tail video segment respectively, so that a complete head video segment corresponding to the head video segment and a complete tail video segment corresponding to the tail video segment are obtained;
splicing the complete head video segment, the middle video segment and the complete tail video segment to obtain a complete video segment of the relevant video segment;
and obtaining the playing time period corresponding to the complete video clip to obtain the target playing time period.
4. The method of any of claims 1-3, wherein, when the target store is a chain store, the performing live broadcast based on the target digital person live video comprises:
determining all chain stores corresponding to the target store to obtain a plurality of stores, wherein each store corresponds to one digital human live broadcast machine;
and controlling each store in the plurality of stores to live broadcast the target digital person live video with the corresponding digital human live broadcast machine.
5. The method of claim 4, wherein the method further comprises:
acquiring the reference field video shot by the digital human live broadcast machine of each store in the plurality of stores to obtain a plurality of reference field videos;
Analyzing the people flow in each reference field video in the plurality of reference field videos to obtain a plurality of people flows;
selecting the maximum value of the plurality of people flow rates;
and taking the store corresponding to the maximum value as the target store.
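Claim 5 reduces to an argmax over per-store people flow. A sketch under the assumption that the counts come from some person detector run on each reference field video; the `estimate_people_flow` stub is hypothetical:

```python
# Toy people-flow argmax: frames are lists of detection labels standing in for
# the output of a real pedestrian detector.

def estimate_people_flow(video_frames):
    # Stand-in for a detector: count "person" labels across all frames.
    return sum(frame.count("person") for frame in video_frames)

def pick_target_store(field_videos):
    """field_videos: dict mapping store id -> list of per-frame label lists."""
    flows = {store: estimate_people_flow(v) for store, v in field_videos.items()}
    # The store with the maximum people flow becomes the target store.
    return max(flows, key=flows.get)

videos = {
    "store-A": [["person", "person"], ["person"]],
    "store-B": [["person"], []],
}
print(pick_target_store(videos))  # -> store-A
```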
6. The method of any one of claims 1-3, wherein the fusing the field video into the video segment corresponding to the target playing period of the digital person live video to obtain the target digital person live video comprises:
intercepting the video segment corresponding to the target playing period of the digital person live video;
extracting a background video and a foreground video of the video segment;
replacing the background video with the field video to obtain a target background video;
fusing the target background video and the foreground video to obtain a fused video segment;
and replacing the video segment corresponding to the target playing period in the digital person live video with the fused video segment to obtain the target digital person live video.
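The background-replacement step can be illustrated on a single frame, assuming a binary foreground mask is already available (e.g. from a matting model, which the claim does not specify). Frames here are nested lists standing in for pixel arrays:

```python
# Toy per-pixel compositing for claim 6: keep foreground pixels where the mask
# is 1, take the store's field video everywhere else.

def replace_background(frame, mask, field_frame):
    """Composite one frame: foreground where mask==1, field video elsewhere."""
    return [
        [fg if m else bg for fg, m, bg in zip(frow, mrow, brow)]
        for frow, mrow, brow in zip(frame, mask, field_frame)
    ]

digital_frame = [["host", "host"], ["studio", "studio"]]
fg_mask       = [[1, 1], [0, 0]]
field_frame   = [["shop", "shop"], ["shop", "shop"]]
print(replace_background(digital_frame, fg_mask, field_frame))
# foreground row kept, background row replaced by the field video
```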
7. The method of claim 6, wherein the fusing the target background video and the foreground video to obtain the fused video segment comprises:
fusing the target background video and the foreground video to obtain a reference video segment;
acquiring historical live broadcast records of the target store corresponding to the target playing period to obtain a plurality of historical live broadcast records;
determining the average number of online viewers corresponding to each historical live broadcast record in the plurality of historical live broadcast records to obtain a plurality of average numbers of online viewers;
selecting the top designated number of the plurality of average numbers of online viewers to obtain at least one average number of online viewers;
acquiring the historical live video corresponding to the at least one average number of online viewers to obtain at least one historical live video;
acquiring the video playing parameters of each historical live video in the at least one historical live video to obtain at least one set of video playing parameters;
and optimizing the reference video segment according to the at least one set of video playing parameters to obtain the fused video segment.
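The selection step above can be sketched as a top-N sort by average online viewers followed by one plausible "optimization" policy, averaging each playing parameter over the selected records. The record fields and the averaging policy are assumptions, not the claimed procedure:

```python
# Hypothetical sketch of claim 7's parameter selection: rank historical records
# by average online viewers, keep the top n, and average their parameters.

def top_playing_parameters(records, n=3):
    """records: list of dicts with 'avg_online' and 'params' (e.g. brightness)."""
    best = sorted(records, key=lambda r: r["avg_online"], reverse=True)[:n]
    keys = best[0]["params"].keys()
    # One plausible policy: average each playing parameter over the top records.
    return {k: sum(r["params"][k] for r in best) / len(best) for k in keys}

history = [
    {"avg_online": 120, "params": {"brightness": 0.6, "saturation": 0.5}},
    {"avg_online": 300, "params": {"brightness": 0.8, "saturation": 0.7}},
    {"avg_online": 90,  "params": {"brightness": 0.4, "saturation": 0.6}},
    {"avg_online": 250, "params": {"brightness": 0.7, "saturation": 0.9}},
]
print(top_playing_parameters(history, n=2))
```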
8. An artificial-intelligence-based fusion live broadcast system, characterized in that the system is applied to a server in a digital person live broadcast system, the digital person live broadcast system comprising the server and a digital person live broadcast machine, and the digital person live broadcast machine comprising a camera; the artificial-intelligence-based fusion live broadcast system comprises an acquisition unit, a determination unit, a fusion unit, and a live broadcast unit; wherein,
the acquisition unit is configured to acquire the digital person live video being broadcast by the target store at the current moment and the live script corresponding to the digital person live video, and to acquire the field video of the target store through the digital person live broadcast machine;
the determination unit is configured to determine a target playing period of the field video in the digital person live video according to the live script;
the fusion unit is configured to fuse the field video into a video segment corresponding to the target playing period of the digital person live video to obtain a target digital person live video;
and the live broadcast unit is configured to perform live broadcast based on the target digital person live video.
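The four units above can be wired together as one pipeline. A compact sketch in which the unit interfaces are illustrative; the claim only names their responsibilities:

```python
# Illustrative composition of the acquisition, determination, fusion, and live
# broadcast units; the callables injected below are hypothetical stand-ins.

class FusionLiveSystem:
    def __init__(self, acquire, determine, fuse, broadcast):
        self.acquire = acquire      # acquisition unit
        self.determine = determine  # determination unit
        self.fuse = fuse            # fusion unit
        self.broadcast = broadcast  # live broadcast unit

    def run(self, store):
        digital_video, script, field_video = self.acquire(store)
        period = self.determine(digital_video, script)
        target_video = self.fuse(digital_video, field_video, period)
        return self.broadcast(target_video)

system = FusionLiveSystem(
    acquire=lambda store: ("digital-video", "script", "field-video"),
    determine=lambda video, script: (10, 20),
    fuse=lambda video, field, period: f"{video}+{field}@{period}",
    broadcast=lambda video: f"live:{video}",
)
print(system.run("store-001"))  # -> live:digital-video+field-video@(10, 20)
```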
9. An electronic device, characterized by comprising a processor and a memory, the memory storing one or more programs configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, wherein the computer program causes a computer to perform the method of any one of claims 1-7.
CN202410074104.0A 2024-01-18 Fusion live broadcast method, system, medium and electronic equipment based on artificial intelligence Active CN117596420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410074104.0A CN117596420B (en) 2024-01-18 Fusion live broadcast method, system, medium and electronic equipment based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN117596420A true CN117596420A (en) 2024-02-23
CN117596420B CN117596420B (en) 2024-05-31

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993195A (en) * 2017-03-24 2017-07-28 广州创幻数码科技有限公司 Virtual portrait role live broadcasting method and system
CN107170030A (en) * 2017-05-31 2017-09-15 珠海金山网络游戏科技有限公司 A kind of virtual newscaster's live broadcasting method and system
CN110069342A (en) * 2019-04-11 2019-07-30 西安交通大学 Net cast channel dispositions method is merged under a kind of mobile cloud computing environment
CN111683260A (en) * 2020-05-07 2020-09-18 广东康云科技有限公司 Program video generation method, system and storage medium based on virtual anchor
CN113822970A (en) * 2021-09-23 2021-12-21 广州博冠信息科技有限公司 Live broadcast control method and device, storage medium and electronic equipment
WO2022127747A1 (en) * 2020-12-14 2022-06-23 郑州大学综合设计研究院有限公司 Method and system for real social using virtual scene
CN114845136A (en) * 2022-06-28 2022-08-02 北京新唐思创教育科技有限公司 Video synthesis method, device, equipment and storage medium
CN116451139A (en) * 2023-06-16 2023-07-18 杭州新航互动科技有限公司 Live broadcast data rapid analysis method based on artificial intelligence
US20230230152A1 (en) * 2022-01-14 2023-07-20 Shopify Inc. Systems and methods for generating customized augmented reality video
US20230308693A1 (en) * 2020-09-25 2023-09-28 Mofa (Shanghai) Information Technology Co., Ltd. Virtual livestreaming method, apparatus, system, and storage medium
CN117319699A (en) * 2023-12-01 2023-12-29 江西拓世智能科技股份有限公司 Live video generation method and device based on intelligent digital human model
CN117336519A (en) * 2023-11-30 2024-01-02 江西拓世智能科技股份有限公司 Method and device for synchronous live broadcasting in multi-live broadcasting room based on AI digital person
CN117395449A (en) * 2023-12-08 2024-01-12 江西拓世智能科技股份有限公司 Tolerance dissimilarisation processing method and processing device for AI digital live broadcast
CN117409119A (en) * 2022-07-06 2024-01-16 北京达佳互联信息技术有限公司 Image display method and device based on virtual image and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAO HUILONG: "Research and Design of a Fusion Platform for 3D Virtual and Real-Scene Video", Computer Knowledge and Technology, no. 11, 15 April 2009 (2009-04-15) *

Similar Documents

Publication Publication Date Title
CN110166827B (en) Video clip determination method and device, storage medium and electronic device
US11605402B2 (en) Video-log production system
CN104410920B (en) The method of wonderful mark is carried out based on video segmentation playback volume
CN104735542B (en) A kind of video broadcasting method and device
CN111432235A (en) Live video generation method and device, computer readable medium and electronic equipment
CN107659831B (en) Media data processing method, client and storage medium
US20220172476A1 (en) Video similarity detection method, apparatus, and device
CN108124170A (en) A kind of video broadcasting method, device and terminal device
KR20090093904A (en) Apparatus and method for scene variation robust multimedia image analysis, and system for multimedia editing based on objects
CN107547922B (en) Information processing method, device, system and computer readable storage medium
US11849241B2 (en) Dynamically configured processing of a region of interest dependent upon published video data selected by a runtime configuration file
CN111093069A (en) Quality evaluation method and device for panoramic video stream
CN104320670A (en) Summary information extracting method and system for network video
CN111372116A (en) Video playing prompt information processing method and device, electronic equipment and storage medium
CN111277898A (en) Content pushing method and device
CN110062163B (en) Multimedia data processing method and device
Husa et al. HOST-ATS: automatic thumbnail selection with dashboard-controlled ML pipeline and dynamic user survey
CN105975494A (en) Service information pushing method and apparatus
CN114339451A (en) Video editing method and device, computing equipment and storage medium
CN117596420B (en) Fusion live broadcast method, system, medium and electronic equipment based on artificial intelligence
CN114845149A (en) Editing method of video clip, video recommendation method, device, equipment and medium
CN117596420A (en) Fusion live broadcast method, system, medium and electronic equipment based on artificial intelligence
CN113542909A (en) Video processing method and device, electronic equipment and computer storage medium
KR102534270B1 (en) Apparatus and method for providing meta-data
US11398091B1 (en) Repairing missing frames in recorded video with machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant