CN113518158B - Video splicing method and device, electronic equipment and readable storage medium - Google Patents

Video splicing method and device, electronic equipment and readable storage medium

Info

Publication number
CN113518158B
Authority
CN
China
Prior art keywords
video
synchronization time
video stream
synchronization
time scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010273831.1A
Other languages
Chinese (zh)
Other versions
CN113518158A (en)
Inventor
马旭炳
乐振晓
余跃
冯禹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010273831.1A priority Critical patent/CN113518158B/en
Publication of CN113518158A publication Critical patent/CN113518158A/en
Application granted granted Critical
Publication of CN113518158B publication Critical patent/CN113518158B/en
Current legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/04 Synchronising
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Abstract

The application provides a video splicing method and apparatus, an electronic device, and a readable storage medium. The method comprises: determining a synchronization timestamp for an unsynchronized subsystem based on a synchronized subsystem of a video splicing system; when a video stream is acquired, adding the current synchronization timestamp to the acquired video stream to obtain a video stream including a first synchronization timestamp; and sending the video stream including the first synchronization timestamp to target outputs, so that each target output performs video output based on the first synchronization timestamp carried in the video stream and a local second synchronization timestamp, where the target outputs comprise a plurality of outputs respectively corresponding to the display units of a target video wall of the video splicing system. The method ensures that all display units in the video wall simultaneously display their respective portions of the same video frame.

Description

Video splicing method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to image processing technology, and in particular to a video splicing method and apparatus, an electronic device, and a readable storage medium.
Background
Video splicing means that a video signal is divided among a plurality of display units, which together are spliced into one complete image.
In a video splicing system, each display unit must display its portion of the same video frame at the same time as the others; otherwise the overall picture exhibits tearing, misalignment, and similar defects.
Disclosure of Invention
In view of the above, the present application provides a video splicing method and apparatus, an electronic device, and a readable storage medium.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided a video splicing method, including:
determining a synchronization time scale of an unsynchronized subsystem based on a synchronized subsystem of a video splicing system;
when a video stream is acquired, adding a current synchronization time mark in the acquired video stream to obtain a video stream comprising a first synchronization time mark;
and sending the video stream including the first synchronization time scale to a target output end so that the target output end carries out video output based on the first synchronization time scale and a local second synchronization time scale included in the video stream, wherein the target output end includes a plurality of output ends respectively corresponding to each display unit of a target video wall of the video splicing system.
According to a second aspect of the embodiments of the present application, there is provided a video splicing apparatus, including:
a determining unit, configured to determine a synchronization timestamp for an unsynchronized subsystem based on a synchronized subsystem of the video splicing system;
an adding unit, configured to, when a video stream is acquired, add the current synchronization timestamp to the acquired video stream to obtain a video stream including a first synchronization timestamp;
and a sending unit, configured to send the video stream including the first synchronization timestamp to target outputs, so that each target output performs video output based on the first synchronization timestamp and a local second synchronization timestamp, where the target outputs include a plurality of outputs respectively corresponding to the display units of a target video wall of the video splicing system.
According to a third aspect of the embodiments of the present application, there is provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the above video splicing method when executing the program stored in the memory.
According to a fourth aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above video splicing method.
With the video splicing method of the present application, a synchronization timestamp for the unsynchronized subsystem is determined based on the synchronized subsystem of the video splicing system; when a video stream is acquired, the current synchronization timestamp is added to it to obtain a video stream including a first synchronization timestamp, and that stream is sent to the target outputs; each target output then performs video output based on the first synchronization timestamp carried in the stream and its local second synchronization timestamp, ensuring that all display units in the video wall simultaneously display their portions of the same video frame.
Drawings
Fig. 1 is a schematic flowchart of a video splicing method according to an exemplary embodiment of the present application;
Fig. 2 is a schematic flowchart of another video splicing method according to yet another exemplary embodiment of the present application;
Fig. 3 is a diagram illustrating a specific application scenario according to an exemplary embodiment of the present application;
Fig. 4 is a schematic structural diagram of a video splicing apparatus according to an exemplary embodiment of the present application;
Fig. 5 is a schematic structural diagram of another video splicing apparatus according to another exemplary embodiment of the present application;
Fig. 6 is a schematic diagram of the hardware structure of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
To make the technical solutions provided by the embodiments of the present application better understood, and the above objects, features, and advantages more comprehensible, these solutions are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, a schematic flowchart of a video splicing method according to an embodiment of the present disclosure is shown; as illustrated in Fig. 1, the video splicing method may include the following steps.
It should be noted that, in the embodiments of the present application, a video wall includes a plurality of display units (e.g., display screens). The display units correspond to a plurality of outputs used for video output (referred to as target outputs, each of which outputs video data to one or more display units) and to a video acquisition device used for acquiring video data.
For example, a video acquisition device used for acquiring video data may also be used for video output; that is, the video acquisition device may itself serve as an output with a video acquisition function.
Steps S100 to S120 are performed by the video acquisition device.
Step S100, determining a synchronization timestamp for an unsynchronized subsystem based on a synchronized subsystem of the video splicing system.
In the embodiments of the present application, to ensure that each display unit in the video wall displays its portion of the same video frame, the synchronization timestamp of the unsynchronized subsystem may be determined based on the synchronized subsystem of the video splicing system.
In one example, determining the synchronization timestamp of the unsynchronized subsystem based on the synchronized subsystem of the video splicing system in step S100 may include:
when output synchronization of each target output in the video splicing system is complete, accumulating a count based on the synchronized video output interrupts to determine the synchronization timestamp.
For example, to achieve synchronized video output, the target outputs corresponding to the same video wall must first be output-synchronized so that their video output interrupts are synchronized; that is, the synchronized subsystem of the video splicing system comprises the target outputs corresponding to the video wall.
The specific implementation of output synchronization among the target outputs may follow existing related techniques and is not described in this application.
Since the display units of the same video wall share the same frame rate, and the target output generates a video output interrupt for every output video frame, once the video output interrupts of the target outputs are synchronized, a synchronization timestamp can be determined by counting those interrupts. This timestamp can then be used to synchronize the video acquisition device (the unsynchronized subsystem) with the synchronized subsystem.
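For illustration, the counting scheme described above can be sketched in C++ (the patent publishes no code, so the class and method names here are hypothetical): a shared counter is incremented once per synchronized video output interrupt, and its current value serves as the synchronization timestamp.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative sketch only. One video output interrupt is generated per
// output frame, so a counter incremented on each synchronized interrupt
// advances in lockstep on every synchronized rear-stage module and can act
// as a common synchronization timestamp.
class SyncTimestamp {
public:
    // Called from the video output (vsync) interrupt handler of a
    // synchronized rear-stage module.
    void on_video_output_interrupt() {
        count_.fetch_add(1, std::memory_order_relaxed);
    }

    // Read by the unsynchronized front-stage module when the forwarded
    // output interrupt arrives; the returned value becomes its reference.
    uint64_t read() const { return count_.load(std::memory_order_relaxed); }

private:
    std::atomic<uint64_t> count_{0};
};
```

Because every synchronized module observes the same interrupt, all copies of the counter advance together, which is what allows the counter value to act as a common clock.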
It should be noted that, in the embodiments of the present application, determining the synchronization timestamp of the unsynchronized subsystem based on the synchronized subsystem is not limited to the manner described above. For example, when both the synchronized subsystem and the unsynchronized subsystem are implemented on FPGAs, one pin of the FPGA chip may be designated as a synchronization signal pin, through which the two subsystems establish a synchronization signal channel. The synchronized subsystem sends its clock beat signal to the unsynchronized subsystem over this channel so that the clock beats of the two subsystems remain consistent, and the unsynchronized subsystem can then determine the synchronization timestamp from the received clock beat signal.
Step S110, when a video stream is acquired, adding the current synchronization timestamp to the acquired video stream to obtain a code stream including a first synchronization timestamp.
In this embodiment, to ensure that all target outputs can output their portions of the same video frame at the same time, a synchronization timestamp may be added to each acquired video stream, for example a stream acquired from a network video source such as an IPC (Internet Protocol Camera), before the stream is sent to the target outputs. Each target output can then output video images based on the synchronization timestamp carried in the video stream (referred to herein as the first synchronization timestamp), guaranteeing output synchronization.
In one example, when the acquired video stream is trans-encapsulated or transcoded, that is, decapsulated and then re-encapsulated into another encapsulation format, or decoded and then re-encoded into another encoding format, the synchronization timestamp may be added to the resulting code stream.
In one example, adding the current synchronization timestamp to the acquired video stream in step S110 may include:
decapsulating/decoding the acquired video stream;
and, for any video frame obtained by decapsulation/decoding, encapsulating/encoding the video frame into another encapsulation/encoding format based on the local synchronization timestamp current when the frame was acquired.
For example, the acquired video stream may be decapsulated/decoded to obtain the video data.
Any video frame obtained by decapsulation/decoding may then be encapsulated/encoded into another encapsulation/encoding format together with the local synchronization timestamp recorded when that frame was acquired, yielding a code stream carrying the first synchronization timestamp.
For example, if the local synchronization timestamp of the video acquisition device is T1 when it acquires video frame A, then T1 may be carried in the code stream corresponding to video frame A when the device encapsulates that frame.
For any video frame, the local synchronization timestamp at acquisition is the local synchronization timestamp of the video acquisition device at the moment it acquired the code stream corresponding to that frame.
If an update of the device's local synchronization timestamp coincides with the acquisition of the video stream, the freshly updated timestamp is taken as the device's local synchronization timestamp for that stream; if no update coincides with the acquisition, the timestamp from the update closest in time to the acquisition is used instead.
It should be noted that, in the embodiments of the present application, when the video input and output formats are identical, the video acquisition device need not decapsulate/decode and re-encapsulate/re-encode the acquired video stream; it may simply add the synchronization timestamp to the acquired stream and send the timestamped stream to the outputs.
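As a minimal sketch of this step, assuming hypothetical frame and packet types (the patent names no concrete formats), re-encapsulation amounts to attaching the local synchronization timestamp latched at acquisition, T1 in the example above, to each frame's payload:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical types; the patent specifies no concrete frame or packet layout.
struct VideoFrame { std::vector<uint8_t> data; };
struct TaggedPacket {
    uint64_t first_sync_timestamp;  // timestamp carried in the new code stream
    std::vector<uint8_t> payload;
};

// Sketch of step S110: each frame recovered by decapsulation/decoding is
// re-encapsulated together with the local synchronization timestamp that
// was current when the frame was acquired.
TaggedPacket encapsulate_with_timestamp(const VideoFrame& frame,
                                        uint64_t local_sync_timestamp) {
    TaggedPacket pkt;
    pkt.first_sync_timestamp = local_sync_timestamp;  // becomes the "first" timestamp
    pkt.payload = frame.data;  // a real implementation would re-encode here
    return pkt;
}
```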
Step S120, sending the video stream including the first synchronization timestamp to the target outputs so that each target output performs video output based on the first synchronization timestamp carried in the video stream and a local second synchronization timestamp; the target outputs comprise a plurality of outputs respectively corresponding to the display units of a target video wall of the video splicing system.
In the embodiments of the present application, once the video stream including the first synchronization timestamp has been obtained as described in step S110, it may be sent to the target outputs.
When a target output receives the video stream including the first synchronization timestamp, it may perform video output based on that first synchronization timestamp and its own local synchronization timestamp (referred to herein as the second synchronization timestamp).
It should be noted that, because the first synchronization timestamp carried in the video stream is the video acquisition device's local synchronization timestamp at the moment the stream was acquired, and that local timestamp is continually updated, the first synchronization timestamps carried in the code streams of different video frames from the same network video source differ from one another.
Similarly, the second synchronization timestamp local to each target output is continually updated.
As the flow of Fig. 1 shows, the synchronization timestamp of the unsynchronized subsystem is determined based on the synchronized subsystem of the video splicing system, and the current synchronization timestamp is added to each acquired video stream, so every target output can perform video output based on the timestamp carried in the stream and its local timestamp, ensuring that all display units in the video wall simultaneously display their portions of the same video frame.
In a possible embodiment, as shown in Fig. 2, the video splicing method may further include the following steps:
Step S200, when a video stream including a first synchronization timestamp is received, decoding the video stream to obtain a video image and the first synchronization timestamp.
Step S210, when the local second synchronization timestamp matches the first synchronization timestamp carried in the video stream, outputting the decoded video image.
For example, when the video acquisition device is also used for video output, it too receives the video stream including the first synchronization timestamp.
On receiving such a stream, the device may decode it to obtain the corresponding video image and the first synchronization timestamp carried in the stream.
Having obtained the video image and its first synchronization timestamp, the video acquisition device may compare that timestamp with its local second synchronization timestamp.
When the local second synchronization timestamp matches the first synchronization timestamp, the decoded video image may be output.
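A minimal sketch of this receiving-side behavior follows, with the added assumption (not stated explicitly in the patent) that an image whose first synchronization timestamp has not yet been reached locally is buffered rather than dropped:

```cpp
#include <cstdint>
#include <deque>

struct DecodedImage {
    uint64_t first_sync_timestamp;  // timestamp decoded from the video stream
    // pixel data omitted for brevity
};

// Each target output gates decoded images on the match between the "first"
// timestamp carried in the stream and the local "second" timestamp, so all
// outputs release their portions of the same frame together.
class OutputGate {
public:
    void enqueue(DecodedImage img) { pending_.push_back(img); }

    // Called once per local video output interrupt with the current local
    // (second) synchronization timestamp.
    bool try_output(uint64_t second_sync_timestamp, DecodedImage& out) {
        if (!pending_.empty() &&
            pending_.front().first_sync_timestamp == second_sync_timestamp) {
            out = pending_.front();
            pending_.pop_front();
            return true;   // timestamps match: display this image now
        }
        return false;      // no match yet: hold the image
    }

private:
    std::deque<DecodedImage> pending_;
};
```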
In one example, outputting the decoded video image in step S210 may include:
performing specific processing on the received video images of different network video sources based on the first synchronization timestamp, and determining splicing parameters;
and splicing the specifically processed video images with the video image of the local video source based on the splicing parameters, and outputting the spliced video image.
For example, when the video acquisition device decodes a video image and its corresponding first synchronization timestamp, it may perform specific processing on the received video images of the different network video sources based on that timestamp and determine the splicing parameters.
The specific processing may include, but is not limited to, one or more of cropping, scaling, and synchronized splicing.
For example, taking the case where one output corresponds to one display unit, the output may determine, from the position of its display unit in the video wall, which region of the full video frame that unit should display, crop that region from the video image, and then scale the cropped image according to the display unit's resolution and the resolution of the cropped image.
When the video acquisition device receives multiple video streams (each from a different network video source) carrying the same first synchronization timestamp, the output may scale and synchronously splice the video images of those streams according to a preconfigured policy.
For example, suppose the video acquisition device decodes video image A and video image B corresponding to the same first synchronization timestamp (from network video source A and network video source B respectively; the video streams may carry the source information of each image), and the preconfigured policy is that the image from source B is overlaid on the image from source A with their top-left corners aligned, at 3/4 of the latter's size. The device may then crop regions from image A and image B based on the position of the corresponding display unit in the video wall, and scale and synchronously splice the cropped images according to the display unit's resolution and the policy, so that the cropped region of image B ends up 3/4 the size of the cropped region of image A, overlaid on it with the top-left corners aligned.
When the video acquisition device has obtained the specifically processed video image and the splicing parameters, it may splice the processed image with the video image of the local video source based on those parameters; that is, the processed image and the local-source image are overlaid according to the preconfigured policy.
It should be noted that the specifically processed video image and the local-source video image being spliced must correspond to matching synchronization timestamps.
The synchronization timestamp of the specifically processed video image is the first synchronization timestamp carried in its video stream, while the synchronization timestamp of the local-source video image is the local second synchronization timestamp at the moment that image was acquired.
When the splicing of the specifically processed video image with the local-source video image is complete, the spliced video image may be output.
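The geometry of the cropping and overlay policy in the example above can be sketched as follows; the helper types and the even grid division are assumptions, since the patent describes the operations but no data structures:

```cpp
// Hypothetical geometry helpers for the cropping/overlay example above.
struct Rect { int x, y, w, h; };

// Crop region for one display unit: the sub-rectangle of the full frame
// that this unit's position in the video wall is responsible for,
// assuming an even grid of wall_cols x wall_rows units.
Rect crop_for_unit(int unit_col, int unit_row, int wall_cols, int wall_rows,
                   int frame_w, int frame_h) {
    Rect r;
    r.w = frame_w / wall_cols;
    r.h = frame_h / wall_rows;
    r.x = unit_col * r.w;
    r.y = unit_row * r.h;
    return r;
}

// Overlay rectangle for the preconfigured policy above: source B is scaled
// to 3/4 of source A's cropped size and aligned to A's top-left corner.
Rect overlay_b_on_a(const Rect& a_crop) {
    return Rect{a_crop.x, a_crop.y, a_crop.w * 3 / 4, a_crop.h * 3 / 4};
}
```

The cropped image from source B would then be scaled into this rectangle before the two images are composited.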
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to specific examples.
Referring to Fig. 3, a schematic structural diagram of a video acquisition device according to an embodiment of the present disclosure is shown, in which the video acquisition device is also used for video output. As shown in Fig. 3, the device may include a front-stage module, a rear-stage module, and a control module: the front-stage module processes video data from network video sources, the rear-stage module processes video data from the local video source and performs video output, and the control module acquires, sends, and splices video from network video sources.
The rear-stage modules of the outputs corresponding to the same video wall (the video acquisition device also serving as one output) are output-synchronized so that their video output interrupts are synchronized. A count accumulated on the video output interrupts of the synchronized rear-stage modules (i.e., the synchronized subsystem of the video splicing system) determines the synchronization timestamp, and the video output interrupt is also forwarded to the front-stage module (the unsynchronized subsystem). On receiving a video output interrupt, the front-stage module reads the synchronization timestamp from the rear-stage module and uses it as the reference for subsequent synchronization. When the control module trans-encapsulates a video stream acquired from a network video source, the synchronization timestamp is added to the code stream, which is then sent to each target output through the network module in the control module.
When a target output receives a code stream including a synchronization timestamp, it decodes the stream to obtain the video image and the synchronization timestamp, compares the timestamp carried in the stream (the first synchronization timestamp above) with its local one (the second synchronization timestamp above), and outputs the video image when the two match.
For video data from different network video sources, the front-stage module can synchronize on the synchronization timestamps carried in the code streams; crop, scale, and synchronously splice the video images whose timestamps match; determine the splicing parameters; and send the processed images and splicing parameters to the rear-stage module. Finally, the rear-stage module splices the processed images with the video image of the local video source and outputs the result. The specific flow is as follows:
1. The front-stage module synchronizes with the rear-stage module: after the rear-stage modules of the outputs have been synchronized, the front-stage module is driven by the video output interrupt to read the synchronization timestamp, which serves as its subsequent synchronization reference;
2. the trans-encapsulation module adds the synchronization timestamp read in step 1 to the code stream during encapsulation;
3. the code stream of step 2 is decoded to obtain the video image and the synchronization timestamp, and the video image is synchronized according to that timestamp;
4. the video images synchronized in step 3 are spliced and sent to the rear-stage module, and the splicing parameters are generated at the same time;
5. the rear-stage module receives the video images and splicing parameters of step 4, splices them with the video image of its local video source, and outputs the spliced image to the display units for display.
For example, in practical applications, the front-stage module may be implemented with an ASIC (Application-Specific Integrated Circuit) chip and the rear-stage module with an FPGA (Field-Programmable Gate Array) chip.
When the front-stage module acquires a video stream, trans-encapsulation can yield an RTP (Real-time Transport Protocol) code stream that includes the synchronization timestamp, and decoding the RTP code stream can yield YUV video data.
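The patent states only that the trans-encapsulated RTP code stream includes the synchronization timestamp, not where it is carried. Purely as an assumption, one plausible carrier is a general-purpose RTP header extension in the one-byte format of RFC 8285, sketched below:

```cpp
#include <cstdint>
#include <vector>

// Assumption: the patent does not specify how the timestamp is embedded in
// the RTP code stream. This sketch packs a 64-bit synchronization timestamp
// into an RFC 8285 one-byte-header RTP header extension (extension ID 1).
std::vector<uint8_t> build_sync_extension(uint64_t sync_timestamp) {
    std::vector<uint8_t> ext;
    ext.push_back(0xBE); ext.push_back(0xDE);  // one-byte-header profile magic
    ext.push_back(0x00); ext.push_back(0x03);  // data length: 3 32-bit words
    ext.push_back((1 << 4) | 7);               // element: ID=1, length-1=7 (8 bytes)
    for (int shift = 56; shift >= 0; shift -= 8)   // big-endian 64-bit timestamp
        ext.push_back(static_cast<uint8_t>(sync_timestamp >> shift));
    while (ext.size() % 4 != 0) ext.push_back(0);  // pad to a 32-bit boundary
    return ext;
}
```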
In the embodiments of the present application, the synchronization timestamp of the unsynchronized subsystem is determined based on the synchronized subsystem of the video splicing system; when a video stream is acquired, the current synchronization timestamp is added to it to obtain a video stream including a first synchronization timestamp, which is sent to the target outputs; each target output then performs video output based on the first synchronization timestamp carried in the stream and its local second synchronization timestamp, ensuring that all display units in the video wall simultaneously display their portions of the same video frame.
The methods provided by the present application have been described above; the corresponding apparatus is described below.
referring to fig. 4, a schematic structural diagram of a video splicing apparatus according to an embodiment of the present disclosure is shown in fig. 4, where the video splicing apparatus may include:
a determining unit 410, configured to determine a synchronization timestamp of an unsynchronized subsystem based on a synchronized subsystem of the video splicing system;
an adding unit 420, configured to, when a video stream is acquired, add a current synchronization time stamp to the acquired video stream to obtain a video stream including a first synchronization time stamp;
a sending unit 430, configured to send the video stream including the first synchronization time stamp to a target output end, so that the target output end performs video output based on the first synchronization time stamp included in the video stream and a local second synchronization time stamp, where the target output end includes a plurality of output ends that respectively correspond to display units of a target video wall of the video splicing system.
In an alternative embodiment, the determining unit 410 determines the synchronization timestamp of the unsynchronized subsystem based on the synchronized subsystem of the video splicing system by:
when output synchronization of each target output in the video splicing system is complete, accumulating a count based on the synchronized video output interrupts to determine the synchronization timestamp.
In an alternative embodiment, the adding unit 420 adds the current synchronization timestamp to the acquired video stream by:
decapsulating/decoding the acquired video stream;
and, for any video frame obtained by decapsulation/decoding, encapsulating/encoding the video frame into another encapsulation/encoding format based on the local synchronization timestamp current when the frame was acquired.
In an alternative embodiment, as shown in Fig. 5, the apparatus further comprises:
an output unit 440, configured to, when a video stream including a first synchronization timestamp is received, decode the video stream to obtain a video image and the first synchronization timestamp;
and output the decoded video image when the local second synchronization timestamp matches the first synchronization timestamp carried in the video stream.
In an alternative embodiment, the output unit 440 outputs the decoded video image by:
performing specific processing on the received video images of different network video sources based on the first synchronization timestamp, and determining splicing parameters;
splicing the specifically processed video images with the video image of the local video source based on the splicing parameters, and outputting the spliced video image;
where the specific processing includes one or more of:
cropping, scaling, and synchronized splicing.
Referring to Fig. 6, a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present disclosure is shown. The electronic device may include a processor 601, a communication interface 602, a memory 603, and a communication bus 604, with the processor 601, the communication interface 602, and the memory 603 communicating with one another via the communication bus 604. The memory 603 stores a computer program, and the processor 601 can perform the video splicing method described above by executing the program stored in the memory 603.
The memory 603 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the memory 603 may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disc (e.g., an optical disc or DVD), a similar storage medium, or a combination thereof.
Embodiments of the present application also provide a machine-readable storage medium, such as the memory 603 in Fig. 6, storing a computer program that can be executed by the processor 601 of the electronic device shown in Fig. 6 to implement the video splicing method described above.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises", "comprising", and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (12)

1. A video splicing method, applied to a video acquisition device, wherein the video acquisition device comprises a front-stage module, a rear-stage module, and a control module; the front-stage module is used for processing video data of network video sources, the rear-stage module is used for processing video data of a local video source and for video output, and the control module is used for acquiring and sending video of network video sources; the method comprising:
determining a synchronization timestamp for an unsynchronized subsystem based on a synchronized subsystem of a video splicing system, wherein the synchronized subsystem comprises the rear-stage module of each target output for which video output interrupt synchronization is complete and the rear-stage module of the video acquisition device; the unsynchronized subsystem comprises the front-stage module of the video acquisition device; and the synchronization timestamp is determined by count accumulation based on the synchronized video output interrupts;
when the control module of the video acquisition device acquires a network video stream, adding, by the front-stage module of the video acquisition device, the current synchronization timestamp to the acquired video stream to obtain a video stream comprising a first synchronization timestamp, wherein the synchronization timestamp is read from the rear-stage module of the video acquisition device when the front-stage module receives a video output interrupt;
and sending, by the control module of the video acquisition device, the video stream comprising the first synchronization timestamp to target outputs, so that each target output performs video output based on the first synchronization timestamp carried in the video stream and a local second synchronization timestamp, the target outputs comprising a plurality of outputs respectively corresponding to the display units of a target video wall of the video splicing system.
2. The method of claim 1, wherein determining the synchronization timestamp of the unsynchronized subsystem based on the synchronized subsystem of the video splicing system comprises:
when output synchronization of each target output in the video splicing system is complete, accumulating a count based on the synchronized video output interrupts to determine the synchronization timestamp.
3. The method of claim 1, wherein adding the current synchronization timestamp to the acquired video stream comprises:
decapsulating/decoding the acquired video stream;
and, for any video frame obtained by decapsulation/decoding, encapsulating/encoding the video frame into another encapsulation/encoding format based on the local synchronization timestamp current when the frame was acquired.
4. The method of claim 1, further comprising:
when a video stream comprising a first synchronization timestamp is received, decoding the video stream to obtain a video image and the first synchronization timestamp;
and outputting the decoded video image when the local second synchronization timestamp matches the first synchronization timestamp carried in the video stream.
5. The method of claim 4, wherein outputting the decoded video image comprises:
performing specific processing on the received video images of different network video sources based on the first synchronization timestamp, and determining splicing parameters;
splicing the specifically processed video images with the video image of the local video source based on the splicing parameters, and outputting the spliced video image;
wherein the specific processing comprises one or more of:
cropping, scaling, and synchronized splicing.
6. A video splicing apparatus, applied to a video acquisition device, wherein the video acquisition device comprises a front-stage module, a rear-stage module, and a control module; the front-stage module is used for processing video data of network video sources, the rear-stage module is used for processing video data of a local video source and for video output, and the control module is used for acquiring and sending video of network video sources; the apparatus comprising:
a determining unit, configured to determine a synchronization timestamp for an unsynchronized subsystem based on a synchronized subsystem of a video splicing system, wherein the synchronized subsystem comprises the rear-stage module of each target output for which video output interrupt synchronization is complete and the rear-stage module of the video acquisition device; the unsynchronized subsystem comprises the front-stage module of the video acquisition device; and the synchronization timestamp is determined by count accumulation based on the synchronized video output interrupts;
an adding unit, configured to, when the control module of the video acquisition device acquires a network video stream, add, through the front-stage module of the video acquisition device, the current synchronization timestamp to the acquired video stream to obtain a video stream comprising a first synchronization timestamp, wherein the synchronization timestamp is read from the rear-stage module of the video acquisition device when the front-stage module receives a video output interrupt;
and a sending unit, configured to send, through the control module of the video acquisition device, the video stream comprising the first synchronization timestamp to target outputs, so that each target output performs video output based on the first synchronization timestamp carried in the video stream and a local second synchronization timestamp, the target outputs comprising a plurality of outputs respectively corresponding to the display units of a target video wall of the video splicing system.
7. The apparatus of claim 6, wherein the determining unit determines the synchronization timestamp of the unsynchronized subsystem based on the synchronized subsystem of the video splicing system by:
when output synchronization of each target output in the video splicing system is complete, accumulating a count based on the synchronized video output interrupts to determine the synchronization timestamp.
8. The apparatus of claim 6, wherein the adding unit adds the current synchronization timestamp to the acquired video stream by:
decapsulating/decoding the acquired video stream;
and, for any video frame obtained by decapsulation/decoding, encapsulating/encoding the video frame into another encapsulation/encoding format based on the local synchronization timestamp current when the frame was acquired.
9. The apparatus of claim 6, further comprising:
an output unit, configured to, when a video stream comprising a first synchronization timestamp is received, decode the video stream to obtain a video image and the first synchronization timestamp;
and output the decoded video image when the local second synchronization timestamp matches the first synchronization timestamp carried in the video stream.
10. The apparatus of claim 9, wherein the output unit outputs the decoded video image by:
performing specific processing on the received video images of different network video sources based on the first synchronization timestamp, and determining splicing parameters;
splicing the specifically processed video images with the video image of the local video source based on the splicing parameters, and outputting the spliced video image;
wherein the specific processing comprises one or more of:
cropping, scaling, and synchronized splicing.
11. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method of any one of claims 1 to 5 when executing the program stored in the memory.
12. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 5.
CN202010273831.1A 2020-04-09 2020-04-09 Video splicing method and device, electronic equipment and readable storage medium Active CN113518158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010273831.1A CN113518158B (en) 2020-04-09 2020-04-09 Video splicing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010273831.1A CN113518158B (en) 2020-04-09 2020-04-09 Video splicing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113518158A CN113518158A (en) 2021-10-19
CN113518158B true CN113518158B (en) 2023-03-24

Family

ID=78060351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010273831.1A Active CN113518158B (en) 2020-04-09 2020-04-09 Video splicing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113518158B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101162575A (en) * 2006-10-12 2008-04-16 佳能株式会社 Display control equipment and method, display device and processing method, multi-display system
CN104375789A (en) * 2013-08-14 2015-02-25 杭州海康威视数字技术股份有限公司 Synchronous displaying method and system of tiled display screen
CN108234901A (en) * 2016-12-21 2018-06-29 杭州海康威视数字技术股份有限公司 A kind of video-splicing method and video control apparatus
CN108737689A (en) * 2018-04-27 2018-11-02 浙江大华技术股份有限公司 A kind of splicing display method and display control apparatus of video
CN110662094A (en) * 2018-06-29 2020-01-07 英特尔公司 Timing synchronization between content source and display panel

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2502578B (en) * 2012-05-31 2015-07-01 Canon Kk Method, device, computer program and information storage means for transmitting a source frame into a video display system


Also Published As

Publication number Publication date
CN113518158A (en) 2021-10-19

Similar Documents

Publication Publication Date Title
CN101500128B (en) Method and apparatus for loading additional information on display image of network camera device terminal
KR20120068024A (en) An apparatus
KR20110091378A (en) Method and apparatus for processing and producing camera video
CN109714623B (en) Image display method and device, electronic equipment and computer readable storage medium
CN108965819B (en) Synchronous signal processing method and device and video transmission system
CN111818295B (en) Image acquisition method and device
CN114125258B (en) Video processing method and electronic equipment
CN105306837A (en) Multi-image splicing method and device
EP3991443A1 (en) Method and apparatus for encapsulating panorama images in a file
KR20040016414A (en) Image processing device and image processing method, recording medium, and program
CN113518158B (en) Video splicing method and device, electronic equipment and readable storage medium
CN116708892A (en) Sound and picture synchronous detection method, device, equipment and storage medium
US11871137B2 (en) Method and apparatus for converting picture into video, and device and storage medium
CN112235600A (en) Method, device and system for processing video data and video service request
US20230217084A1 (en) Image capture apparatus, control method therefor, image processing apparatus, and image processing system
CN115695883A (en) Video data processing method, device, equipment and storage medium
CN107959769A (en) A kind of video camera
CN113938617A (en) Multi-channel video display method and equipment, network camera and storage medium
CN111083416B (en) Data processing method and device, electronic equipment and readable storage medium
CN115952315B (en) Campus monitoring video storage method, device, equipment, medium and program product
CN115529481B (en) Video synchronous display system and method based on fusion signal source and input equipment
GB2573096A (en) Method and apparatus for encapsulating images with proprietary information in a file
CN114461165B (en) Virtual-real camera picture synchronization method, device and storage medium
US20230054344A1 (en) Image processing apparatus, control method, and storage medium
CN115334322B (en) Video frame synchronization method, terminal, server, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant