CN113923463A - Real-time keying and scene synthesis system for live broadcast scene and implementation method - Google Patents

Real-time keying and scene synthesis system for live broadcast scene and implementation method

Info

Publication number
CN113923463A
CN113923463A, CN202111088028.1A, CN202111088028A
Authority
CN
China
Prior art keywords
data
live broadcast
scene
live
video
Prior art date
Legal status
Granted
Application number
CN202111088028.1A
Other languages
Chinese (zh)
Other versions
CN113923463B (en)
Inventor
葛渊 (Ge Yuan)
Current Assignee
Nanjing Anhui Technology Development Co ltd
Original Assignee
Nanjing Anhui Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Anhui Technology Development Co ltd filed Critical Nanjing Anhui Technology Development Co ltd
Priority to CN202111088028.1A priority Critical patent/CN113923463B/en
Publication of CN113923463A publication Critical patent/CN113923463A/en
Application granted granted Critical
Publication of CN113923463B publication Critical patent/CN113923463B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention relates to a real-time keying and scene synthesis system for a live broadcast scene and an implementation method thereof. The implementation method comprises three steps: system training and prefabrication, live video acquisition, and video graphics processing. The invention can effectively meet the requirements of video data processing in a variety of scene environments; it can flexibly and accurately identify and collect video data according to the playing requirements, while allowing flexible adjustment and editing of the video content.

Description

Real-time keying and scene synthesis system for live broadcast scene and implementation method
Technical Field
The invention relates to a real-time keying and scene synthesis system for a live broadcast scene and an implementation method thereof, and belongs to the technical field of information management.
Background
At present, with the development of the live broadcast industry, live video data collected during a live broadcast by devices such as cameras is usually played out directly by network equipment after only ordinary compression, filtering and similar processing. Although this can meet usage requirements to a certain extent, the data volume involved in live video is large, the variety of video content, characters, props and the like is enormous, and the requirements placed on live video data differ widely. As a result, current live video broadcasting often cannot effectively meet the viewing and playing requirements of many users, which greatly reduces the efficiency of live video operation; moreover, the same or similar content has to be shot repeatedly to satisfy different users, which further harms efficiency and increases the cost of live video production. In addition, current live broadcast systems cannot, during operation, flexibly and accurately identify and store the corresponding data information in the live video, so the flexibility and convenience of live video operation are poor. To solve these problems, a completely new solution is urgently needed to meet actual use requirements.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a real-time keying and scene synthesis system for a live broadcast scene and an implementation method thereof, which effectively meet the requirements of different use occasions and live video playing operations and help to reduce the cost of live video operation.
A real-time keying and scene synthesis system for a live broadcast scene comprises an integrated server based on big data, an image synthesis server based on cloud computing, a data communication network platform, image synthesis terminals, a live broadcast scene acquisition terminal, a distributed data storage system and a live broadcast scene playing terminal. The image synthesis server based on cloud computing is in data connection, through the data communication network platform, with the integrated server based on big data, the image synthesis terminals, the live broadcast scene acquisition terminal, the distributed data storage system and the live broadcast scene playing terminal respectively; the integrated server based on big data is in data connection with the distributed data storage system and the live broadcast scene playing terminal respectively through the data communication network platform. There are a plurality of image synthesis terminals, which establish data connections with one another through the data communication network platform and form at least one local area network.
Further, an artificial-intelligence-based big data processing system is arranged in the integrated server based on big data, and an artificial-intelligence-based cloud computing system is arranged in the image synthesis server based on cloud computing. A CNN convolutional neural network system, a residual neural network system, a BP neural network system, a deep learning neural network system, a data stack subprogram, a keyword retrieval subprogram, a keyword counting subprogram and a priority calculation subprogram are arranged in both the integrated server based on big data and the image synthesis server based on cloud computing. The CNN convolutional neural network system, the data stack subprogram and the priority calculation subprogram are connected with the artificial-intelligence-based big data processing system and the artificial-intelligence-based cloud computing system respectively; the CNN convolutional neural network system is connected, through the BP neural network system, with the residual neural network system and the deep learning neural network system respectively; and the residual neural network system and the deep learning neural network system each establish data connections with the keyword retrieval subprogram, the keyword counting subprogram, the data stack subprogram and the priority calculation subprogram.
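The description names these neural network subsystems but does not specify their structure. Purely as an illustration of what a convolutional matting component with a residual connection might look like, the following hedged PyTorch sketch predicts a one-channel alpha matte from an RGB frame; every layer size, class name and the use of PyTorch itself are assumptions made for this example, not details taken from the patent.

# Hedged sketch only: a tiny CNN with one residual block that predicts a
# single-channel alpha matte from an RGB frame. Layer sizes are assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # skip connection is the "residual" part

class MattingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.res = ResidualBlock(32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # one-channel alpha matte

    def forward(self, frame):
        return torch.sigmoid(self.head(self.res(self.stem(frame))))

if __name__ == "__main__":
    net = MattingNet()
    alpha = net(torch.rand(1, 3, 256, 256))  # dummy 256x256 RGB frame
    print(alpha.shape)  # torch.Size([1, 1, 256, 256])

A system of the kind described would presumably train such a component on the separated background, prop and character data prepared in step S1 below.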
Furthermore, the BP neural network system is a nested-structure BP neural network system adopting both a C/S structure and a B/S structure; the deep learning neural network system is an LSTM-based intelligent prediction system.
Furthermore, a binarization image processing system, a connected domain image processing system, at least one third-party video stream processing system and at least one third-party video processing system are additionally arranged in the image synthesis server based on cloud computing.
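The binarization and connected-domain image processing systems are only named; one common way to realize that pair of operations is sketched below with OpenCV: threshold a soft matte into a binary mask, then keep the largest connected component to suppress speckle. The function choices, threshold value and helper name are assumptions for illustration, not the patented implementation.

# Hedged sketch: binarize a soft matte and keep the largest connected
# component, a common clean-up step; the threshold is illustrative.
import numpy as np
import cv2

def clean_matte(alpha: np.ndarray, thresh: int = 128) -> np.ndarray:
    """alpha: uint8 matte in [0, 255]; returns a binary mask of the main subject."""
    _, binary = cv2.threshold(alpha, thresh, 255, cv2.THRESH_BINARY)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num <= 1:                                   # nothing but background
        return binary
    # stats[:, cv2.CC_STAT_AREA] holds each label's pixel count; label 0 is background.
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)

if __name__ == "__main__":
    demo = (np.random.rand(120, 160) * 255).astype(np.uint8)
    mask = clean_matte(demo)
    print(mask.shape, mask.dtype)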
Further, the live broadcast scene acquisition terminal comprises arc guide rails, vertical guide rails, bearing seats, lifting bearing columns, three-dimensional turntables, three-axis gyroscopes, bearing support plates, cameras, light supplement lamps, distance measuring devices, brightness sensors and driving circuits. Each arc guide rail is an arc-shaped structure whose upper end surface is parallel to the ground plane and whose central angle is not less than 90 degrees; there are at least two arc guide rails, which are distributed coaxially and connected with the ground plane through a plurality of vertical guide rails. The axes of the vertical guide rails are distributed along the diameter direction of the arc guide rails and intersect, the intersection point being located at the circle center of the arc guide rails. The lower end surface of each arc guide rail is slidably connected with the vertical guide rails through sliders, and its upper end surface is slidably connected with a plurality of bearing seats through sliders. The axis of each lifting bearing column is perpendicular to the upper end surface of the arc guide rail, and its upper end is hinged with the lower end surface of a bearing support plate through a three-dimensional turntable; the upper end surface of the bearing support plate forms an included angle of 30 to 135 degrees with the axis of the lifting bearing column. The upper end surface of the bearing support plate carries a camera and at least one light supplement lamp, whose optical axes are parallel to each other and to the upper end surface of the bearing support plate. At least four distance measuring devices and at least four brightness sensors are evenly distributed around the axis of the bearing support plate and embedded in its side surface, with their axes parallel to the upper end surface of the bearing support plate, and a three-axis gyroscope is additionally arranged on the lower end surface of the bearing support plate. The lifting bearing column, the three-dimensional turntable, the three-axis gyroscope, the bearing support plate, the camera, the light supplement lamp, the distance measuring devices and the brightness sensors are all electrically connected with the driving circuits; the number of driving circuits is the same as the number of bearing seats, one driving circuit is arranged in each bearing seat, and all the driving circuits are connected in parallel.
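The rail-and-column geometry above can be pictured numerically: each camera seat sits at some angle on an arc whose center is the live broadcast operation position, its height is set by the lifting bearing column, and the support-plate tilt is confined to the stated 30 to 135 degree range. The short sketch below is an illustrative aid only; the radius, angles and function names are assumptions, not values from the patent.

# Hedged sketch: place a camera seat on a circular-arc rail and clamp the
# support-plate tilt to the 30-135 degree range stated in the description.
import math

def camera_pose(radius_m: float, arc_angle_deg: float, column_height_m: float,
                tilt_deg: float):
    """Return (x, y, z, tilt) for a camera seat at arc_angle_deg on the rail."""
    tilt = min(max(tilt_deg, 30.0), 135.0)   # plate/column included-angle limit
    a = math.radians(arc_angle_deg)
    x = radius_m * math.cos(a)               # arc center is the live-broadcast spot
    y = radius_m * math.sin(a)
    z = column_height_m                      # set by the lifting bearing column
    return x, y, z, tilt

if __name__ == "__main__":
    print(camera_pose(radius_m=3.0, arc_angle_deg=45.0, column_height_m=1.6, tilt_deg=20.0))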
Furthermore, the vertical guide rails are slidably connected with the sliders through traveling mechanisms, and the traveling mechanisms are electrically connected with the driving circuits; the lifting bearing column is an at least two-stage telescopic column of any one of the electric, hydraulic and pneumatic types; the camera is any one or more of a wide-angle camera, a telephoto camera and a 3D camera.
Furthermore, the driving circuit is a circuit system based on either an FPGA chip or a DSP chip.
Further, the image synthesis terminal is any one of a PC computer, an industrial computer and a graphic workstation; the live scene playing terminal is any one of a PC computer, an industrial computer, a graphic workstation, a video distributor, a multimedia terminal and a display screen.
A method for realizing a real-time keying and scene synthesis system of a live broadcast scene comprises the following steps:
s1, performing system training, namely summarizing current live broadcast video data, scene modulation human object data, auxiliary prop data, background image data and background video data, and storing the summarized data in a distributed data storage system; meanwhile, summarizing and counting the live broadcast operation demand data, storing the data in a distributed data storage system, and then simultaneously controlling an image synthesis server and the distributed data storage system based on cloud computing by using an image synthesis terminal through a data communication network platform, on one hand, respectively identifying and independently separating live broadcast video data, scene modulation human object data, auxiliary prop data, background image data and background video data from the distributed data storage system according to a background, auxiliary props and figures to obtain background, auxiliary props and figure data, and independently storing the background, auxiliary props and figure data in the distributed data storage system; on the other hand, live broadcast operation requirements stored in the distributed data storage system are identified, the operation requirements are classified and identified according to data requirements of backgrounds, auxiliary props and characters in live broadcast videos, a live broadcast requirement classification statistical list is obtained, and the live broadcast requirement classification statistical list is stored in the distributed data storage system; and finally synchronously operating with a CNN convolutional neural network system, a residual neural network system, a BP neural network system, a deep learning neural network system, a data stack subprogram, a keyword retrieval subprogram, a keyword counting subprogram and a priority calculation subprogram in a big data-based integrated server and a cloud computing-based image synthesis server, based on the requirements of the operation background, the auxiliary props and the character data in the classification statistical list of the live broadcast requirements, the background, the auxiliary props and the character data stored in the distributed data storage system are combined, combining live broadcast video files according to the requirements of operating backgrounds, auxiliary props and character data in a live broadcast scene classification statistical list according to the live broadcast requirements, so as to obtain live broadcast video data simulation training meeting the playing requirements, and obtaining live broadcast video data processing logic through the simulation training;
s2, collecting the live video, after S1, arranging the live scenes by the live scene collecting terminal, the arc guide rail of the live broadcast scene acquisition terminal is positioned at the center of live broadcast operation, and the distance between each camera, the light filling lamp and the live broadcast operation position is adjusted through the matching of the arc guide rail, the vertical guide rail and the bearing seat, meanwhile, the included angle between each camera and the live broadcast operation position is adjusted through the lifting bearing column and the bearing supporting plate, and the distance and the angle of each camera are accurately measured by the distance measuring device and the three-axis gyroscope in the adjusting process, the ambient brightness is detected by the brightness sensor, and the light supplement lamp is driven to operate according to the ambient brightness, therefore, video data acquisition is carried out on a live broadcast site, and the acquired data are synchronously sent to an image synthesis server and a distributed data storage system based on cloud computing through a data communication network platform; meanwhile, a live broadcast scene playing terminal is respectively in data connection with a distributed data storage system and an external third-party live broadcast playing platform through a data communication network platform;
s3, video graphics processing, namely after the live video data obtained in the step S2 are stored in a distributed data storage system, acquiring information required by a user for playing the live video through a live scene playing terminal according to any one or two of a large data-based integrated server and a cloud computing-based image synthesis server, and cooperatively operating the large data-based integrated server and the cloud computing-based image synthesis server according to live video data processing logic generated in the step S1, and firstly driving the live video data processing logic to generate picture data frame by frame for the live video data; then according to the live video data processing logic, calling S1 character data, auxiliary prop data, background image data, background video data and newly-collected video which are not stored in the distributed data storage system to generate picture data frame by frame for assembly, replacing the picture of the newly-collected video information with the corresponding frame, finally assembling the replaced file to obtain a target video file, and playing the target video file according to the live scene;
further, in the step S3, in the editing operation of the live video image data, the integrated server based on the big data generates picture data frame by frame for the live video data, compares the picture data, the auxiliary property data, the background image data and the background video data stored in the distributed data storage system in the step S1, backs up the data information that is not recorded in the step S1, and records the data information in the distributed data storage system, thereby expanding the system database.
The system has a high degree of integration, modularization and operational automation, together with good universality, and can effectively meet the requirements of video data processing in a variety of scene environments. Its data processing is accurate and efficient: while effectively supporting live video broadcasting, it can flexibly identify and collect video data according to the playing requirements and, at the same time, flexibly adjust and edit the video content, thereby meeting the needs of different use occasions and live video playing operations and helping to reduce the cost of live video operation.
Drawings
The invention is described in detail below with reference to the drawings and specific embodiments.
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a schematic diagram of the live broadcast scene acquisition terminal;
FIG. 3 is a schematic top view of the arc guide rails and vertical guide rails;
FIG. 4 is a schematic flow chart of the method of the present invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the invention is further described below with reference to specific embodiments.
As shown in FIGS. 1-3, a real-time keying and scene synthesis system for a live broadcast scene comprises an integrated server 1 based on big data, an image synthesis server 2 based on cloud computing, a data communication network platform 3, image synthesis terminals 4, a live broadcast scene acquisition terminal 5, a distributed data storage system 6 and a live broadcast scene playing terminal 7. The image synthesis server 2 based on cloud computing is in data connection, through the data communication network platform 3, with the integrated server 1 based on big data, the image synthesis terminals 4, the live broadcast scene acquisition terminal 5, the distributed data storage system 6 and the live broadcast scene playing terminal 7 respectively; the integrated server 1 based on big data is in data connection with the distributed data storage system and the live broadcast scene playing terminal respectively through the data communication network platform. There are a plurality of image synthesis terminals, which establish data connections with one another through the data communication network platform and form at least one local area network.
In this embodiment, an artificial-intelligence-based big data processing system is arranged in the integrated server 1 based on big data, and an artificial-intelligence-based cloud computing system is arranged in the image synthesis server 2 based on cloud computing. A CNN convolutional neural network system, a residual neural network system, a BP neural network system, a deep learning neural network system, a data stack subprogram, a keyword retrieval subprogram, a keyword counting subprogram and a priority calculation subprogram are arranged in both the integrated server 1 based on big data and the image synthesis server 2 based on cloud computing. The CNN convolutional neural network system, the data stack subprogram and the priority calculation subprogram are connected with the artificial-intelligence-based big data processing system and the artificial-intelligence-based cloud computing system respectively; the CNN convolutional neural network system is connected, through the BP neural network system, with the residual neural network system and the deep learning neural network system respectively; and the residual neural network system and the deep learning neural network system each establish data connections with the keyword retrieval subprogram, the keyword counting subprogram, the data stack subprogram and the priority calculation subprogram.
For further optimization, the BP neural network system is a nested-structure BP neural network system adopting both a C/S structure and a B/S structure; the deep learning neural network system is an LSTM-based intelligent prediction system.
Meanwhile, a binarization image processing system, a connected domain image processing system, at least one third-party video stream processing system and at least one third-party video processing system are additionally arranged in the image synthesis server 2 based on cloud computing.
It should be particularly noted that the live broadcast scene acquisition terminal 5 comprises arc guide rails 51, vertical guide rails 52, bearing seats 53, lifting bearing columns 54, three-dimensional turntables 55, three-axis gyroscopes 56, bearing support plates 57, cameras 58, light supplement lamps 59, distance measuring devices 501, brightness sensors 502 and driving circuits 503. Each arc guide rail 51 is an arc-shaped structure whose upper end surface is parallel to the ground plane and whose central angle is not less than 90 degrees; there are at least two arc guide rails 51, which are distributed coaxially and connected with the ground plane through a plurality of vertical guide rails 52. The axes of the vertical guide rails 52 are distributed along the diameter direction of the arc guide rails 51 and intersect, the intersection point being located at the circle center of the arc guide rails 51. The lower end surface of each arc guide rail 51 is slidably connected with the vertical guide rails 52 through sliders 504, and its upper end surface is slidably connected with the bearing seats 53 through sliders 504. The axis of each lifting bearing column 54 is perpendicular to the upper end surface of the arc guide rail 51, and its upper end is hinged with the lower end surface of a bearing support plate 57 through a three-dimensional turntable 55; the upper end surface of the bearing support plate 57 forms an included angle of 30 to 135 degrees with the axis of the lifting bearing column 54. The upper end surface of the bearing support plate 57 carries a camera 58 and at least one light supplement lamp 59, whose optical axes are parallel to each other and to the upper end surface of the bearing support plate 57. At least four distance measuring devices 501 and at least four brightness sensors 502 are evenly distributed around the axis of the bearing support plate 57 and embedded in its side surface, with their axes parallel to the upper end surface of the bearing support plate 57, and a three-axis gyroscope 56 is additionally arranged on the lower end surface of the bearing support plate 57. The lifting bearing column 54, the three-dimensional turntable 55, the three-axis gyroscope 56, the bearing support plate 57, the camera 58, the light supplement lamp 59, the distance measuring devices 501 and the brightness sensors 502 are all electrically connected with the driving circuits 503; the number of driving circuits 503 is the same as the number of bearing seats 53, one driving circuit 503 is arranged in each bearing seat 53, and the driving circuits 503 are connected in parallel.
Meanwhile, the vertical guide rails 52 are slidably connected with the sliders 504 through traveling mechanisms 505, and the traveling mechanisms 505 are electrically connected with the driving circuits 503; the lifting bearing column 54 is an at least two-stage telescopic column of any one of the electric, hydraulic and pneumatic types; the camera 58 is any one or more of a wide-angle camera, a telephoto camera and a 3D camera.
In this embodiment, the driving circuit 503 is a circuit system based on either an FPGA chip or a DSP chip.
In operation, cameras at a plurality of different positions and angles acquire the same video content from different angles and positions, so that live video data information can be collected comprehensively, providing complete and accurate video information for subsequent video editing and processing and improving the quality of the video data editing operation. Meanwhile, the image synthesis terminal 4 is any one of a PC, an industrial computer and a graphic workstation; the live broadcast scene playing terminal is any one of a PC, an industrial computer, a graphic workstation, a video distributor, a multimedia terminal and a display screen.
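Because several cameras capture the same content from different seats on the arc, later editing can choose the most suitable view for each moment. The sketch below shows one hypothetical way to tag frames with their seat angle and pick a view; the data fields and the nearest-angle selection rule are assumptions for illustration, not part of the patent.

# Hedged sketch: tag each captured frame with its camera's seat angle and
# distance so later editing can pick the most suitable view per moment.
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    camera_id: int
    arc_angle_deg: float
    distance_m: float
    timestamp_s: float

def pick_view(frames: list[TaggedFrame], wanted_angle_deg: float) -> TaggedFrame:
    # Illustrative rule: choose the camera whose seat angle is closest to the request.
    return min(frames, key=lambda f: abs(f.arc_angle_deg - wanted_angle_deg))

if __name__ == "__main__":
    shots = [TaggedFrame(0, 10.0, 3.1, 0.04), TaggedFrame(1, 60.0, 2.9, 0.04)]
    print(pick_view(shots, wanted_angle_deg=45.0).camera_id)  # 1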
As shown in fig. 4, an implementation method of a real-time keying and scene synthesis system for a live broadcast scene includes the following steps:
s1, performing system training, namely summarizing current live broadcast video data, scene modulation human object data, auxiliary prop data, background image data and background video data, and storing the summarized data in a distributed data storage system; meanwhile, summarizing and counting the live broadcast operation demand data, storing the data in a distributed data storage system, and then simultaneously controlling an image synthesis server and the distributed data storage system based on cloud computing by using an image synthesis terminal through a data communication network platform, on one hand, respectively identifying and independently separating live broadcast video data, scene modulation human object data, auxiliary prop data, background image data and background video data from the distributed data storage system according to a background, auxiliary props and figures to obtain background, auxiliary props and figure data, and independently storing the background, auxiliary props and figure data in the distributed data storage system; on the other hand, live broadcast operation requirements stored in the distributed data storage system are identified, the operation requirements are classified and identified according to data requirements of backgrounds, auxiliary props and characters in live broadcast videos, a live broadcast requirement classification statistical list is obtained, and the live broadcast requirement classification statistical list is stored in the distributed data storage system; and finally synchronously operating with a CNN convolutional neural network system, a residual neural network system, a BP neural network system, a deep learning neural network system, a data stack subprogram, a keyword retrieval subprogram, a keyword counting subprogram and a priority calculation subprogram in a big data-based integrated server and a cloud computing-based image synthesis server, based on the requirements of the operation background, the auxiliary props and the character data in the classification statistical list of the live broadcast requirements, the background, the auxiliary props and the character data stored in the distributed data storage system are combined, combining live broadcast video files according to the requirements of operating backgrounds, auxiliary props and character data in a live broadcast scene classification statistical list according to the live broadcast requirements, so as to obtain live broadcast video data simulation training meeting the playing requirements, and obtaining live broadcast video data processing logic through the simulation training;
s2, collecting the live video, after S1, arranging the live scenes by the live scene collecting terminal, the arc guide rail of the live broadcast scene acquisition terminal is positioned at the center of live broadcast operation, and the distance between each camera, the light filling lamp and the live broadcast operation position is adjusted through the matching of the arc guide rail, the vertical guide rail and the bearing seat, meanwhile, the included angle between each camera and the live broadcast operation position is adjusted through the lifting bearing column and the bearing supporting plate, and the distance and the angle of each camera are accurately measured by the distance measuring device and the three-axis gyroscope in the adjusting process, the ambient brightness is detected by the brightness sensor, and the light supplement lamp is driven to operate according to the ambient brightness, therefore, video data acquisition is carried out on a live broadcast site, and the acquired data are synchronously sent to an image synthesis server and a distributed data storage system based on cloud computing through a data communication network platform; meanwhile, a live broadcast scene playing terminal is respectively in data connection with a distributed data storage system and an external third-party live broadcast playing platform through a data communication network platform;
s3, video graphics processing, namely after the live video data obtained in the step S2 are stored in a distributed data storage system, acquiring information required by a user for playing the live video through a live scene playing terminal according to any one or two of a large data-based integrated server and a cloud computing-based image synthesis server, and cooperatively operating the large data-based integrated server and the cloud computing-based image synthesis server according to live video data processing logic generated in the step S1, and firstly driving the live video data processing logic to generate picture data frame by frame for the live video data; then according to the live video data processing logic, calling S1 character data, auxiliary prop data, background image data, background video data and newly-collected video which are not stored in the distributed data storage system to generate picture data frame by frame for assembly, replacing the picture of the newly-collected video information with the corresponding frame, finally assembling the replaced file to obtain a target video file, and playing the target video file according to the live scene;
in this embodiment, in the step S3, in the editing operation of live video image data, the integrated server based on big data generates picture data frame by frame for the live video data, compares the picture data, the auxiliary property data, the background image data, and the background video data stored in the distributed data storage system in the step S1, and backs up and additionally records data information that is not recorded in the step S1 into the distributed data storage system, thereby implementing system database expansion.
The system has a high degree of integration, modularization and operational automation, together with good universality, and can effectively meet the requirements of video data processing in a variety of scene environments. Its data processing is accurate and efficient: while effectively supporting live video broadcasting, it can flexibly identify and collect video data according to the playing requirements and, at the same time, flexibly adjust and edit the video content, so that the requirements of different use occasions and live video playing operations can be effectively met.
The foregoing shows and describes the basic principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate its principles, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. A real-time keying and scene synthesis system of a live broadcast scene, characterized in that: the real-time keying and scene synthesis system of the live broadcast scene comprises an integrated server based on big data, an image synthesis server based on cloud computing, a data communication network platform, image synthesis terminals, a live broadcast scene acquisition terminal, a distributed data storage system and a live broadcast scene playing terminal; the image synthesis server based on cloud computing is in data connection, through the data communication network platform, with the integrated server based on big data, the image synthesis terminals, the live broadcast scene acquisition terminal, the distributed data storage system and the live broadcast scene playing terminal respectively; the integrated server based on big data is in data connection with the distributed data storage system and the live broadcast scene playing terminal respectively through the data communication network platform; and there are a plurality of image synthesis terminals, which establish data connections with one another through the data communication network platform and form at least one local area network.
2. The system of claim 1, wherein: an artificial-intelligence-based big data processing system is arranged in the integrated server based on big data, and an artificial-intelligence-based cloud computing system is arranged in the image synthesis server based on cloud computing; a CNN convolutional neural network system, a residual neural network system, a BP neural network system, a deep learning neural network system, a data stack subprogram, a keyword retrieval subprogram, a keyword counting subprogram and a priority calculation subprogram are arranged in both the integrated server based on big data and the image synthesis server based on cloud computing; the CNN convolutional neural network system, the data stack subprogram and the priority calculation subprogram are connected with the artificial-intelligence-based big data processing system and the artificial-intelligence-based cloud computing system respectively; the CNN convolutional neural network system is connected, through the BP neural network system, with the residual neural network system and the deep learning neural network system respectively; and the residual neural network system and the deep learning neural network system each establish data connections with the keyword retrieval subprogram, the keyword counting subprogram, the data stack subprogram and the priority calculation subprogram.
3. The system of claim 1, wherein the system comprises: the BP neural network system is a nested structure BP neural network system adopting a C/S structure and a B/S structure; the deep learning neural network system is based on an LSTM intelligent prediction system.
4. The system of claim 1, wherein the system comprises: and a binarization image processing system, a connected domain image processing system, at least one third-party video stream processing system and at least one third-party video processing system are additionally arranged in the image synthesis server based on cloud computing.
5. The system of claim 1, wherein: the live broadcast scene acquisition terminal comprises arc guide rails, vertical guide rails, bearing seats, lifting bearing columns, three-dimensional turntables, three-axis gyroscopes, bearing support plates, cameras, light supplement lamps, distance measuring devices, brightness sensors and driving circuits; each arc guide rail is an arc-shaped structure whose upper end surface is parallel to the ground plane and whose central angle is not less than 90 degrees; there are at least two arc guide rails, which are distributed coaxially and connected with the ground plane through a plurality of vertical guide rails; the axes of the vertical guide rails are distributed along the diameter direction of the arc guide rails and intersect, the intersection point being located at the circle center of the arc guide rails; the lower end surface of each arc guide rail is slidably connected with the vertical guide rails through sliders, and its upper end surface is slidably connected with the bearing seats through sliders; the axis of each lifting bearing column is perpendicular to the upper end surface of the arc guide rail, and its upper end is hinged with the lower end surface of a bearing support plate through a three-dimensional turntable; the upper end surface of the bearing support plate forms an included angle of 30 to 135 degrees with the axis of the lifting bearing column; the upper end surface of the bearing support plate carries a camera and at least one light supplement lamp, whose optical axes are parallel to each other and to the upper end surface of the bearing support plate; at least four distance measuring devices and at least four brightness sensors are evenly distributed around the axis of the bearing support plate and embedded in its side surface, with their axes parallel to the upper end surface of the bearing support plate; a three-axis gyroscope is arranged on the lower end surface of the bearing support plate; the lifting bearing column, the three-dimensional turntable, the three-axis gyroscope, the bearing support plate, the camera, the light supplement lamp, the distance measuring devices and the brightness sensors are all electrically connected with the driving circuits; and the number of driving circuits is the same as the number of bearing seats, one driving circuit is arranged in each bearing seat, and the driving circuits are connected in parallel.
6. The system of claim 5, wherein: the vertical guide rails are slidably connected with the sliders through traveling mechanisms, and the traveling mechanisms are electrically connected with the driving circuits; the lifting bearing column is an at least two-stage telescopic column of any one of the electric, hydraulic and pneumatic types; and the camera is any one or more of a wide-angle camera, a telephoto camera and a 3D camera.
7. The system of claim 5 for real-time matting and scene composition of a live scene, wherein: the driving circuit is any one of circuit systems based on an FPGA chip and a DSP chip.
8. The system of claim 1, wherein the system comprises: the image synthesis terminal is any one of a PC computer, an industrial computer and a graphic workstation; the live scene playing terminal is any one of a PC computer, an industrial computer, a graphic workstation, a video distributor, a multimedia terminal and a display screen.
9. A method for realizing a real-time keying and scene synthesis system of a live broadcast scene is characterized by comprising the following steps:
s1, performing system training, namely summarizing current live broadcast video data, scene modulation human object data, auxiliary prop data, background image data and background video data, and storing the summarized data in a distributed data storage system; meanwhile, summarizing and counting the live broadcast operation demand data, storing the data in a distributed data storage system, and then simultaneously controlling an image synthesis server and the distributed data storage system based on cloud computing by using an image synthesis terminal through a data communication network platform, on one hand, respectively identifying and independently separating live broadcast video data, scene modulation human object data, auxiliary prop data, background image data and background video data from the distributed data storage system according to a background, auxiliary props and figures to obtain background, auxiliary props and figure data, and independently storing the background, auxiliary props and figure data in the distributed data storage system; on the other hand, live broadcast operation requirements stored in the distributed data storage system are identified, the operation requirements are classified and identified according to data requirements of backgrounds, auxiliary props and characters in live broadcast videos, a live broadcast requirement classification statistical list is obtained, and the live broadcast requirement classification statistical list is stored in the distributed data storage system; and finally synchronously operating with a CNN convolutional neural network system, a residual neural network system, a BP neural network system, a deep learning neural network system, a data stack subprogram, a keyword retrieval subprogram, a keyword counting subprogram and a priority calculation subprogram in a big data-based integrated server and a cloud computing-based image synthesis server, based on the requirements of the operation background, the auxiliary props and the character data in the classification statistical list of the live broadcast requirements, the background, the auxiliary props and the character data stored in the distributed data storage system are combined, combining live broadcast video files according to the requirements of operating backgrounds, auxiliary props and character data in a live broadcast scene classification statistical list according to the live broadcast requirements, so as to obtain live broadcast video data simulation training meeting the playing requirements, and obtaining live broadcast video data processing logic through the simulation training;
s2, collecting the live video, after S1, arranging the live scenes by the live scene collecting terminal, the arc guide rail of the live broadcast scene acquisition terminal is positioned at the center of live broadcast operation, and the distance between each camera, the light filling lamp and the live broadcast operation position is adjusted through the matching of the arc guide rail, the vertical guide rail and the bearing seat, meanwhile, the included angle between each camera and the live broadcast operation position is adjusted through the lifting bearing column and the bearing supporting plate, and the distance and the angle of each camera are accurately measured by the distance measuring device and the three-axis gyroscope in the adjusting process, the ambient brightness is detected by the brightness sensor, and the light supplement lamp is driven to operate according to the ambient brightness, therefore, video data acquisition is carried out on a live broadcast site, and the acquired data are synchronously sent to an image synthesis server and a distributed data storage system based on cloud computing through a data communication network platform; meanwhile, a live broadcast scene playing terminal is respectively in data connection with a distributed data storage system and an external third-party live broadcast playing platform through a data communication network platform;
s3, video graphics processing, namely after the live video data obtained in the step S2 are stored in a distributed data storage system, acquiring information required by a user for playing the live video through a live scene playing terminal according to any one or two of a large data-based integrated server and a cloud computing-based image synthesis server, and cooperatively operating the large data-based integrated server and the cloud computing-based image synthesis server according to live video data processing logic generated in the step S1, and firstly driving the live video data processing logic to generate picture data frame by frame for the live video data; and then according to the live video data processing logic, calling S1 character data, auxiliary prop data, background image data, background video data and newly acquired video which are not stored in the distributed data storage system to generate picture data frame by frame for assembly, replacing the picture of the newly acquired video information with the corresponding frame, finally assembling the replaced file to obtain a target video file, and playing the target video file according to the live scene.
10. The implementation method of claim 9, wherein: in step S3, during the editing of the live video image data, the integrated server based on big data generates picture data frame by frame from the live video data, compares them with the character data, auxiliary prop data, background image data and background video data stored in the distributed data storage system in step S1, and backs up and additionally records into the distributed data storage system any data information not yet recorded in step S1, thereby expanding the system database.
CN202111088028.1A 2021-09-16 2021-09-16 Real-time matting and scene synthesis system for live broadcast scene and implementation method Active CN113923463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111088028.1A CN113923463B (en) 2021-09-16 2021-09-16 Real-time matting and scene synthesis system for live broadcast scene and implementation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111088028.1A CN113923463B (en) 2021-09-16 2021-09-16 Real-time matting and scene synthesis system for live broadcast scene and implementation method

Publications (2)

Publication Number Publication Date
CN113923463A true CN113923463A (en) 2022-01-11
CN113923463B CN113923463B (en) 2022-07-29

Family

ID=79234968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111088028.1A Active CN113923463B (en) 2021-09-16 2021-09-16 Real-time matting and scene synthesis system for live broadcast scene and implementation method

Country Status (1)

Country Link
CN (1) CN113923463B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116506559A (en) * 2023-04-24 2023-07-28 江苏拓永科技有限公司 Virtual reality panoramic multimedia processing system and method thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245937A (en) * 2015-09-30 2016-01-13 武汉科优达科技有限公司 Video scene control system and method
CN106412558A (en) * 2016-09-08 2017-02-15 深圳超多维科技有限公司 Method, equipment and device for stereo virtual reality live broadcasting
CN110139030A (en) * 2019-04-24 2019-08-16 薄涛 Mixed reality processing system, method, server and its storage medium
CN111274910A (en) * 2020-01-16 2020-06-12 腾讯科技(深圳)有限公司 Scene interaction method and device and electronic equipment
CN112019771A (en) * 2020-08-20 2020-12-01 新华智云科技有限公司 Holographic cloud conference system based on real-time image matting
CN112235591A (en) * 2020-10-15 2021-01-15 深圳市歌华智能科技有限公司 Virtual reality live broadcast distribution platform

Also Published As

Publication number Publication date
CN113923463B (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN112053446B (en) Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
US20190238800A1 (en) Imaging systems and methods for immersive surveillance
CN104268939B (en) Transformer substation virtual-reality management system based on three-dimensional panoramic view and implementation method of transformer substation virtual-reality management system based on three-dimensional panoramic view
CN206850908U (en) The measuring system that a kind of spliced panorama camera merges with tracking head
CN104410834A (en) Intelligent switching method for teaching videos
CN106156199B (en) Video monitoring image storage and retrieval method
CN101379530A (en) System and method for capturing facial and body motion
CN107004271A (en) Display methods, device, electronic equipment, computer program product and non-transient computer readable storage medium storing program for executing
WO2020211427A1 (en) Segmentation and recognition method, system, and storage medium based on scanning point cloud data
US9087380B2 (en) Method and system for creating event data and making same available to be served
CN113923463B (en) Real-time matting and scene synthesis system for live broadcast scene and implementation method
CN111402414A (en) Point cloud map construction method, device, equipment and storage medium
US11580616B2 (en) Photogrammetric alignment for immersive content production
CN115442542B (en) Method and device for splitting mirror
CN101253538A (en) Mobile motion capture cameras
CN106982357A (en) A kind of intelligent camera system based on distribution clouds
GB2456802A (en) Image capture and motion picture generation using both motion camera and scene scanning imaging systems
CN104469303A (en) Intelligent switching method of teaching video
CN111475675A (en) Video processing system
CN108989739A (en) A kind of full view system for live broadcast of video conference and method
CN103777644A (en) Low-altitude orbit intelligent robot image pick-up system and photographing method thereof
CN203193774U (en) Automatic tracing machine based on space grid technology
CN112396831A (en) Three-dimensional information generation method and device for traffic identification
CN213126248U (en) Intelligent interaction system for metro vehicle section construction site and BIM scene
CN112102490B (en) Modeling method for three-dimensional model of transformer substation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant