CN114189730B - Video recording method, video playback method and device - Google Patents

Video recording method, video playback method and device

Info

Publication number
CN114189730B
Authority
CN
China
Prior art keywords
video
coding data
video coding
data
splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111297709.9A
Other languages
Chinese (zh)
Other versions
CN114189730A (en)
Inventor
杨继业
余文进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tricolor Technology Co ltd
Original Assignee
Beijing Tricolor Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tricolor Technology Co ltd filed Critical Beijing Tricolor Technology Co ltd
Priority to CN202111297709.9A priority Critical patent/CN114189730B/en
Publication of CN114189730A publication Critical patent/CN114189730A/en
Application granted granted Critical
Publication of CN114189730B publication Critical patent/CN114189730B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a video recording method, a video playback method and a device, wherein the video recording method comprises the following steps: receiving a video recording instruction sent by a client; responding to the video recording instruction, and sending a video acquisition request to first splicing equipment with N1 output units; receiving N1 video coding data and the first identifier corresponding to each video coding data, which are sent by the first splicing equipment in response to the video acquisition request, wherein the N1 output units are in one-to-one correspondence with the N1 video coding data, the N1 video coding data correspond to the same frame of display picture of a first spliced screen, and the first identifiers corresponding to the video coding data are the same; and writing the N1 video coding data and the first identifier corresponding to each video coding data into a video file. According to this scheme, the data output by each output unit of the splicing equipment is recorded separately, so the resolution of the video data is not compressed and the image quality remains clearer when the recording is played back synchronously on spliced screens of different sizes.

Description

Video recording method, video playback method and device
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a video recording method, a video playback method, and an apparatus.
Background
With the rapid development of science, technology and communications, spliced screens that deliver video can support meeting scenarios with large audiences. However, human memory is limited, so the video delivered needs to be recorded in preparation for possible future playback.
In the prior art, recording a spliced screen means scaling the whole picture of the spliced screen down to a 1080P or 4K resolution picture and then storing it, and a general-purpose player is used for playback. Because the resolution of the video data is compressed, the playback picture on the spliced screen is not clear enough when the video data is played back on the spliced screen.
Disclosure of Invention
The invention provides a video recording method, a video playback method and corresponding devices, which help to improve the definition of the playback picture on a spliced screen.
In a first aspect, an embodiment of the present invention provides a video recording method, applied to a video recording server, where the method includes:
receiving a video recording instruction sent by a client;
responding to the video recording instruction, sending a video acquisition request to first splicing equipment, wherein the first splicing equipment is provided with N1 output units, and N1 is larger than 1;
receiving N1 video coding data and first identifiers corresponding to the video coding data, which are sent by the first splicing equipment in response to the video acquisition request, wherein the N1 output units are in one-to-one correspondence with the N1 video coding data, the N1 video coding data correspond to the same frame of display picture of a first splicing screen, and the first identifiers corresponding to the video coding data are the same;
and writing the N1 video coding data and the first identification corresponding to each video coding data into a video file.
In one possible embodiment, the method further comprises:
responding to the video recording instruction, and sending an address acquisition request to the first splicing equipment, wherein the address acquisition request is used for requesting to acquire the coding address corresponding to each output unit;
receiving the coding address corresponding to each output unit sent by the first splicing equipment in response to the address acquisition request;
the sending a video acquisition request to the first splicing device includes:
and sending a video acquisition request to the first splicing equipment based on the coding address corresponding to each output unit.
In one possible implementation manner, the writing the N1 video coding data and the first identifier corresponding to each video coding data into the video file includes:
and writing the N1 video coding data and the first identifications corresponding to the video coding data into a video file sequentially based on the arrangement sequence of the output units.
In one possible embodiment, the method further comprises:
establishing N1 buffer areas, wherein N1 output units are in one-to-one correspondence with the N1 buffer areas;
storing the received N1 video coding data and the first identifications corresponding to the video coding data into corresponding buffer areas respectively;
the writing the N1 video coding data and the first identifiers corresponding to the video coding data into the video file sequentially based on the arrangement sequence of the output units includes:
after the N1 video coding data and the first identifiers corresponding to the video coding data are stored in the corresponding buffer areas, the video coding data and the first identifiers in the N1 buffer areas are written into the video file in sequence based on the arrangement sequence of the output units.
In a possible implementation manner, the output unit arrangement order is set by a user through a client.
In one possible embodiment, the first identifier is a timestamp.
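As an illustration of the data layout only, the following sketch (in Python) shows one way a single frame group, that is, the N1 video coding data sharing one first identifier, could be appended to the video file in the output unit arrangement order. The length-prefixed binary layout and the function name are assumptions for illustration; the embodiments do not mandate a particular container format.

```python
# Minimal sketch of one frame group in the video file: the shared first
# identifier (here a millisecond timestamp) followed by the N1 encoded payloads
# in output-unit arrangement order. The layout is an assumption, not a format
# required by the embodiments.
import struct
from typing import BinaryIO, List

def write_frame_group(f: BinaryIO, first_id: int, payloads: List[bytes]) -> None:
    f.write(struct.pack("<QI", first_id, len(payloads)))  # identifier, N1
    for payload in payloads:                              # output unit 1..N1 order
        f.write(struct.pack("<I", len(payload)))          # payload length
        f.write(payload)                                  # H.264/H.265 coded data
```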
In a second aspect, an embodiment of the present invention provides a video recording method, applied to a first splicing device, where the first splicing device includes N1 output units, and N1 is greater than 1, where the method includes:
receiving a video acquisition request sent by the video recording server;
responding to the video acquisition request, and encoding N1 video data output by the N1 output units to obtain N1 video coding data, wherein the N1 video coding data correspond to the same frame of display picture of a first spliced screen;
generating the same first identification for each video coding data;
and sending the N1 video coding data and the first identifications corresponding to the video coding data to the video recording server.
In one possible implementation manner, before the first splicing device receives the video acquisition request sent by the video server, the method further includes:
receiving an address acquisition request sent by the video recording server, wherein the address acquisition request is used for requesting to acquire a coding address corresponding to each output unit;
and responding to the address acquisition request, and sending the coding address corresponding to each output unit to the video recording server.
In one possible embodiment, the first identifier is a timestamp.
In a third aspect, an embodiment of the present invention provides a video playback method, applied to a video server, where the method includes:
receiving a video playback instruction sent by a client and used for playback on a second splicing screen, wherein the video playback instruction carries an identifier of a video file and a playback starting moment;
reading N1 video coding data and first identifiers corresponding to the video coding data from the video file after the playback starting time, wherein the N1 video coding data correspond to the same frame display picture of a first splicing screen, the N1 video coding data correspond to N1 output units of first splicing equipment corresponding to the first splicing screen one by one, and the first identifiers corresponding to the video coding data are the same;
and transmitting the N1 video coding data and the first identification corresponding to each video coding data to a second splicing device corresponding to the second splicing screen, wherein N1 playing windows of the second splicing screen are in one-to-one correspondence with the N1 video coding data, and the video coding data and the first identification corresponding to the playing windows are transmitted to an output unit of the second splicing device corresponding to the playing windows.
In one possible implementation manner, before the reading, from the video file, N1 pieces of video encoded data after the playback start time and the first identifier corresponding to each piece of video encoded data, the method further includes:
sending a first notification to the second splicing device, wherein the first notification is used for notifying the playback starting moment;
and receiving a data acquisition request sent by the second splicing equipment, wherein the data acquisition request is used for requesting to acquire video coding data and a first identifier corresponding to the video coding data after the playback starting moment in the video file.
In one possible implementation manner, the N1 pieces of video coding data and the first identifier corresponding to each piece of video coding data are written into the video file based on the arrangement order of the output units;
the reading, from the video file, the N1 pieces of video encoded data after the playback start time and the first identifier corresponding to each piece of video encoded data includes:
and sequentially reading N1 video coding data after the playback starting moment and first identifiers corresponding to the video coding data from the video file based on the arrangement sequence of the output units.
In a possible implementation manner, the output unit arrangement order is set by a user through the client.
In one possible embodiment, the first identifier is a timestamp.
In a fourth aspect, an embodiment of the present invention provides a video playback method applied to a second splicing device, where the second splicing device includes N2 output units, where N2 is greater than 1, and the method includes:
setting N1 play windows on the second spliced screen, wherein each play window corresponds to one or more output units of the second splicing device, and N1 is greater than 1;
receiving N1 video coding data and first identifications corresponding to the video coding data sent by a video server, wherein the N1 video coding data correspond to the same frame of display picture of a first splicing screen, the N1 video coding data correspond to N1 output units of first splicing equipment corresponding to the first splicing screen one by one, and the first identifications corresponding to the video coding data are the same; the N1 playing windows are in one-to-one correspondence with the N1 video coding data, and the video coding data and the first identification corresponding to the playing windows are sent to the output units corresponding to the playing windows;
decoding video coding data through the output units corresponding to the playing windows to obtain video decoding data corresponding to the playing windows;
and outputting the video decoding data with the same first identification to the second spliced screen through the output units corresponding to the playing windows at the same time for display.
In one possible embodiment, the method further comprises:
establishing N1 buffer areas, wherein N1 playing windows are in one-to-one correspondence with the N1 buffer areas;
storing the received N1 video coding data and the first identifications corresponding to the video coding data into corresponding buffer areas respectively;
the decoding of the video encoded data by the output unit corresponding to each play window to obtain video decoded data corresponding to each play window includes:
and after the N1 video coding data and the first identifications corresponding to the video coding data are stored in the corresponding buffer areas, decoding the video coding data in the corresponding buffer areas through the output units corresponding to the playing windows to obtain video decoding data corresponding to the playing windows.
In one possible implementation manner, before the receiving N1 video encoded data sent by the video server and the first identifier corresponding to each video encoded data, the method further includes:
receiving a first notification sent by the video server, wherein the first notification is used for notifying the playback starting moment;
and sending a data acquisition request to the video server, wherein the data acquisition request is used for requesting to acquire video coding data and a first identifier corresponding to the video coding data after the playback starting moment in the video file.
In one possible embodiment, the first identifier is a timestamp.
In a fifth aspect, an embodiment of the present invention provides a video recording apparatus, where the video recording apparatus may be a video recording server, an apparatus in the video recording server, or an apparatus that can be used in cooperation with the video recording server. The video recording apparatus may also be a chip system. The video recording apparatus may perform the method of the first aspect. The functions of the video recording apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units or modules corresponding to the functions described above. The unit or module may be software and/or hardware. For the operations performed by the video recording apparatus and the corresponding advantages, reference may be made to the method and advantages described in the first aspect.
In a sixth aspect, an embodiment of the present invention provides a video recording apparatus, where the video recording apparatus may be a splicing device, an apparatus in the splicing device, or an apparatus that can be used in cooperation with the splicing device. The video recording apparatus may also be a chip system. The video recording apparatus may perform the method of the second aspect. The functions of the video recording apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units or modules corresponding to the functions described above. The unit or module may be software and/or hardware. For the operations performed by the video recording apparatus and the corresponding advantages, reference may be made to the method and advantages described in the second aspect.
In a seventh aspect, an embodiment of the present invention provides a video playback apparatus, where the video playback apparatus may be a video server, an apparatus in the video server, or an apparatus that can be used in cooperation with the video server. The video playback apparatus may also be a chip system. The video playback apparatus may perform the method of the third aspect. The functions of the video playback apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units or modules corresponding to the functions described above. The unit or module may be software and/or hardware. For the operations performed by the video playback apparatus and the corresponding advantages, reference may be made to the method and advantages described in the third aspect.
In an eighth aspect, an embodiment of the present invention provides a video playback apparatus, where the video playback apparatus may be a splicing device, an apparatus in the splicing device, or an apparatus that can be used in cooperation with the splicing device. The video playback apparatus may also be a chip system. The video playback apparatus may perform the method of the fourth aspect. The functions of the video playback apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units or modules corresponding to the functions described above. The unit or module may be software and/or hardware. For the operations performed by the video playback apparatus and the corresponding advantages, reference may be made to the method and advantages described in the fourth aspect.
In a ninth aspect, an embodiment of the invention provides an electronic device comprising a processor and a memory, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions, to perform a method as in the first aspect, or to perform a method as in the second aspect, or to perform a method as in the third aspect, or to perform a method as in the fourth aspect.
In a tenth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform a method as in the first aspect, or to perform a method as in the second aspect, or to perform a method as in the third aspect, or to perform a method as in the fourth aspect.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described.
Fig. 1 is a schematic diagram of a communication system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a splicing device and a spliced screen according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a video recording method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a video playback method provided by an embodiment of the present invention;
fig. 5 is a schematic diagram of play windows opened by a second splicing device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a video playback apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The terms first and second and the like in the description, in the claims and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
In order to better understand the present application, the following describes a communication system provided in an embodiment of the present application:
Referring to fig. 1, fig. 1 is a schematic diagram of a communication system according to an embodiment of the invention. The communication system includes: a video server, a client, a network switch, a first splicing device, a first spliced screen, a second splicing device, and a second spliced screen. The network switch is connected with the video server, the client, the first splicing device and the second splicing device, and is used for information interaction among the four. The client is used for sending control instructions, such as a video recording instruction, and for starting the video recording or playback function. Clients include, but are not limited to, smart phones, tablet computers, notebook computers, desktop computers, and the like. The video server is used for recording and storing the spliced video displayed on the first spliced screen, and is also used for playing back the video file on the second spliced screen.
The first splicing device comprises N1 output units, namely output unit 1 to output unit N1. The first spliced screen comprises N1 sub-screens, namely sub-screen 1 to sub-screen N1. The N1 output units of the first splicing device are in one-to-one correspondence with the N1 sub-screens of the first spliced screen, where N1 is greater than 1. For example, when N1 is 9, the size of the first spliced screen is 3×3; when N1 is 16, the size of the first spliced screen is 4×4. Each output unit corresponds to one path of video data, the first splicing device can synchronously send multiple paths of video data to the first spliced screen to form a complete spliced video, and the spliced video can also be sent to the video recording server through the output units for video recording.
For example, N1 is 4 as shown in fig. 2. The first splicing device 201 in fig. 2 includes four output units: output unit 1, output unit 2, output unit 3, and output unit 4. The first spliced screen 202 in fig. 2 includes four sub-screens: sub-screen 1, sub-screen 2, sub-screen 3, and sub-screen 4. The four output units of the first splicing device are in one-to-one correspondence with the four sub-screens of the first spliced screen: the output unit 1 corresponds to the sub-screen 1, the output unit 2 corresponds to the sub-screen 2, the output unit 3 corresponds to the sub-screen 3, and the output unit 4 corresponds to the sub-screen 4. The data output by the output unit 1 may be displayed on the sub-screen 1, the data output by the output unit 2 may be displayed on the sub-screen 2, the data output by the output unit 3 may be displayed on the sub-screen 3, and the data output by the output unit 4 may be displayed on the sub-screen 4.
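As an illustration only, the one-to-one correspondence between output units and sub-screens described above can be modelled as follows; the class and field names are illustrative and are not part of the embodiments.

```python
# Illustrative model of the one-to-one mapping between output units and
# sub-screens of a splicing device / spliced screen; names are illustrative.
from dataclasses import dataclass
from typing import Dict

@dataclass
class SplicingDevice:
    n_outputs: int                     # N1 (or N2) output units
    unit_to_subscreen: Dict[int, int]  # output unit index -> sub-screen index

def identity_mapping(n: int) -> SplicingDevice:
    """Output unit i drives sub-screen i, as in the fig. 2 example with N1 = 4."""
    return SplicingDevice(n, {i: i for i in range(1, n + 1)})
```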
The second splicing device comprises N2 output units, namely output unit 1 to output unit N2. The second spliced screen comprises N2 sub-screens, namely sub-screen 1 to sub-screen N2. The N2 output units of the second splicing device are in one-to-one correspondence with the N2 sub-screens of the second spliced screen. The second splicing device can display the video data output by its output units on the corresponding sub-screens, where N2 is greater than 1. For example, when N2 is 4, the size of the second spliced screen is 2×2; when N2 is 9, the size of the second spliced screen is 3×3; and when N2 is 16, the size of the second spliced screen is 4×4. N2 and N1 may be the same or different.
The correspondence between the output unit in the second splicing device and the sub-screen in the second splicing screen is similar to the correspondence between the output unit in the first splicing device and the sub-screen in the first splicing screen, and the correspondence between the output unit in the second splicing device and the sub-screen in the second splicing screen is not exemplified here.
In one possible implementation, the first splicing device and the second splicing device may be the same device and the first splicing screen and the second splicing screen may be the same splicing screen. That is, after recording a video on a certain splicing device and splicing screen, the video server may play back video on the same splicing device and splicing screen.
In another possible implementation, the first and second splicing devices may be different devices and the first and second spliced screens may be different spliced screens. That is, after recording a video on one splicing device and splicing screen, the video server may play back video on another splicing device and splicing screen.
The video recording method, the video playback method and the related devices provided by the embodiment of the invention are respectively described in detail below.
Referring to fig. 3, fig. 3 is a schematic flowchart of a video recording method according to an embodiment of the present invention. The video recording method provided by the embodiment of the invention can comprise the following steps S301 to S306.
S301, the client sends a video recording instruction to the video recording server.
In the embodiment of the application, the user may click the video recording button on the client. After detecting that the user clicks the video recording button, the client sends a video recording instruction to the video recording server.
Alternatively, the client may automatically send the video recording instruction to the video recording server. For example, the user may set, on the client, a time at which the client automatically sends the video recording instruction to the video recording server, and when the time arrives, the client automatically sends the video recording instruction to the video recording server.
Or, the client may also automatically send a video recording instruction to the video recording server when detecting that the first splicing device starts video playing. Optionally, the user may set a splicing device that needs to automatically record video at the client, so that when the subsequent client detects that the splicing device starts video playing, a video recording instruction is automatically sent to the video recording server to record the video played by the splicing device.
S302, the video recording server responds to the video recording instruction and sends a video acquisition request to the first splicing equipment.
In this embodiment of the present application, after receiving a video recording instruction, a video recording server responds to the video recording instruction and sends a video acquisition request to a first splicing device.
Optionally, the video recording instruction may further indicate the first splicing device to be recorded. Optionally, the client may indicate, by carrying an identifier of the first splicing device in the video recording instruction, that the video recording server needs to record the video of the first splicing device. Alternatively, the identifier of the first splicing device may be a model number, a number, an address, or the like of the first splicing device.
Alternatively, the first splicing device may be a splicing device selected by the user at the client. For example, the client may display a list of splice devices. The user may select a splice device in a list of splice devices displayed by the client. The video recording instruction sent by the client to the video recording server can carry the identification of the splicing equipment selected by the user, so that the video recording server can record the video of the splicing equipment selected by the user.
Alternatively, the first splicing device to be recorded may not be indicated in the recording instruction, and the first splicing device may be a default splicing device.
S303, the first splicing equipment encodes N1 video data output by the N1 output units to obtain N1 video encoded data.
The N1 video coding data correspond to the same frame display picture of the first spliced screen. That is, the N1 video data that are encoded to obtain the N1 video coding data are displayed on the first spliced screen at the same time.
The encoding protocol may be H.264 or H.265, or may be another protocol.
For example, taking N1 as 4 as an example, the first splicing device includes output units 1 to 4. The output unit 1 encodes the video data 1 to obtain video encoded data 1; the output unit 2 encodes the video data 2 to obtain video encoded data 2; the output unit 3 encodes the video data 3 to obtain video encoded data 3; the output unit 4 encodes the video data 4 to obtain video encoded data 4. The video data 1 to the video data 4 correspond to the same frame display picture of the first spliced screen, namely, the video data 1 to the video data 4 are displayed on the first spliced screen at the same time.
S304, the first splicing equipment generates the same first identification for each video coding data.
For example, the first splicing device may generate a first identifier A for the video encoded data 1; the first splicing device may generate the first identifier A for the video encoded data 2; the first splicing device may generate the first identifier A for the video encoded data 3; and the first splicing device may generate the first identifier A for the video encoded data 4. It can be seen that the first identifiers corresponding to the video encoded data 1 to the video encoded data 4 are the same. The same first identifier is generated for the video coding data corresponding to the same frame of display picture of the first spliced screen, so that the video coding data corresponding to the same frame of display picture of the first spliced screen can be distinguished, which facilitates video playback.
Optionally, the first identifier is a timestamp. The first splicing equipment uses a synchronous clock signal to synchronize the N1 video coding data corresponding to the same frame of display picture of the spliced screen, and can then give those N1 video coding data the same timestamp. For example, suppose the time interval between two adjacent frames displayed on the spliced screen is 1 millisecond. For the N1 video coding data of the frame at 14 hours 12 minutes 37 seconds and 01 millisecond, the first splicing device may apply the same timestamp, which may be 14:12:37:01. For the N1 video coding data of the frame at 14 hours 12 minutes 37 seconds and 02 milliseconds, the first splicing device may apply the same timestamp, which may be 14:12:37:02, and so on.
Alternatively, the first identifier may be a number, a serial number, or a label. For example, suppose the time interval between two adjacent frames displayed on the spliced screen is 1 millisecond. The first splicing device may assign the same sequence number, for example sequence number 1, to the N1 video coding data of the frame at 14 hours 12 minutes 37 seconds and 01 millisecond, and assign the same sequence number, for example sequence number 2, to the N1 video coding data of the frame at 14 hours 12 minutes 37 seconds and 02 milliseconds.
It should be noted that the timestamp does not have to take the form of Beijing time; it may also take the form of a timer (e.g. 00:00.01, 00:00.02, 00:00.03) or another time format. Beijing time is used in this embodiment only as an example and does not mean that the timestamp must take that form. The particular form of the timestamp is not limited herein.
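The following sketch illustrates, under the timestamp example above, how the same first identifier could be generated for the N1 video coding data of one display frame. The encode() function is a placeholder for the output units' H.264/H.265 encoder, which the embodiments do not specify in code.

```python
# Sketch of generating the same first identifier (here a millisecond timestamp)
# for the N1 video coding data of one display frame.
import time
from typing import List, Tuple

def encode(frame: bytes) -> bytes:
    """Placeholder for an output unit's H.264/H.265 encoder."""
    return frame

def encode_frame_group(raw_frames: List[bytes]) -> List[Tuple[int, bytes]]:
    first_id = int(time.time() * 1000)                    # one identifier per frame
    return [(first_id, encode(f)) for f in raw_frames]    # same id for all N1 outputs
```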
S305, the first splicing equipment sends the N1 pieces of video coding data and the first identifier corresponding to each piece of video coding data to the video recording server.
In one possible implementation, after receiving the video recording instruction, the video server may further respond to the video recording instruction and send an address acquisition request to the first splicing device, where the address acquisition request is used to request to acquire the coding address corresponding to each output unit; after receiving the address acquisition request sent by the video server, the first splicing device responds to the address acquisition request and sends the coding addresses corresponding to the output units to the video server; after receiving the coded addresses corresponding to the output units sent by the first splicing device in response to the address acquisition request, the video server can specifically send a video acquisition request to the first splicing device based on the coded addresses corresponding to the output units.
Alternatively, the coded address may be a uniform resource locator (Uniform Resource Locator, URL) address, and each output unit corresponds to a separate URL address.
For example, assuming that the first splicing device includes output units 1 to 4, the four output units correspond to URL addresses 1 to 4, respectively. The four URL addresses are respectively:
rtsp://192.168.1.100:122/main,
rtsp://192.168.1.100:145/main,
rtsp://192.168.1.100:187/main,
rtsp://192.168.1.100:265/main,
and after receiving the video recording instruction, the video server sends an address acquisition request to the first splicing equipment. After receiving the address acquisition request, the first splicing device sends the four URL addresses to the video server.
After the video server receives the four URL addresses, it may specifically send a video acquisition request to the output unit 1 based on the URL address corresponding to the output unit 1, send a video acquisition request to the output unit 2 based on the URL address corresponding to the output unit 2, send a video acquisition request to the output unit 3 based on the URL address corresponding to the output unit 3, and send a video acquisition request to the output unit 4 based on the URL address corresponding to the output unit 4. Accordingly, after receiving the video acquisition request, the output unit 1 sends the video coding data 1 corresponding to the output unit 1 and the first identifier A corresponding to the video coding data 1 to the video recording server. After receiving the video acquisition request, the output unit 2 sends the video coding data 2 corresponding to the output unit 2 and the first identifier A corresponding to the video coding data 2 to the video recording server. After receiving the video acquisition request, the output unit 3 sends the video coding data 3 corresponding to the output unit 3 and the first identifier A corresponding to the video coding data 3 to the video recording server. After receiving the video acquisition request, the output unit 4 sends the video coding data 4 corresponding to the output unit 4 and the first identifier A corresponding to the video coding data 4 to the video recording server.
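The per-output-unit acquisition described above could look like the following sketch, which reuses the example URL addresses. The open_stream() helper is a placeholder for whatever RTSP client the implementation actually uses and is not an API defined by the embodiments.

```python
# Sketch of issuing one video acquisition request per coded address (URL).
from typing import Dict

def open_stream(url: str) -> str:
    """Placeholder for pulling an RTSP stream; returns a handle/token here."""
    return f"stream:{url}"

def request_all_streams(unit_urls: Dict[int, str]) -> Dict[int, str]:
    # One request per output unit, keyed by the output unit index.
    return {unit: open_stream(url) for unit, url in sorted(unit_urls.items())}

unit_urls = {
    1: "rtsp://192.168.1.100:122/main",
    2: "rtsp://192.168.1.100:145/main",
    3: "rtsp://192.168.1.100:187/main",
    4: "rtsp://192.168.1.100:265/main",
}
streams = request_all_streams(unit_urls)
```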
S306, the video server writes the N1 video coding data and the first identification corresponding to each video coding data into the video file.
In this embodiment of the present application, after receiving N1 video encoded data and first identifiers corresponding to each video encoded data sent by a first splicing device in response to a video acquisition request, a video server writes the N1 video encoded data and the first identifiers corresponding to each video encoded data into a video file.
Therefore, based on the method described in fig. 3, by encoding and recording the data of each sub-screen separately, the resolution of the video data is not compressed, so that the image quality is clearer when the video data are played back synchronously on spliced screens of different sizes.
In one possible implementation, the specific implementation manner of writing N1 video coding data and the first identifier corresponding to each video coding data into the video file by the video server is as follows: and writing the N1 video coding data and the first identifications corresponding to the video coding data into the video file sequentially based on the arrangement sequence of the output units. N1 video coding data with the same first identification (same moment) are written into the video file according to the arrangement sequence of the output units.
The output units may be arranged in any order, for example, from 1 to N1, or from N1 to 1. For example, if the N1 output units are ordered from 1 to N1, the video coding data corresponding to the output unit 1 is written into the video file first, then the video coding data corresponding to the output unit 2, then the video coding data corresponding to the output unit 3, and so on, until the video coding data corresponding to the output unit N1 is written into the video file. For another example, if the N1 output units are ordered from N1 to 1, the video coding data corresponding to the output unit N1 is written into the video file first, then the video coding data corresponding to the output unit N1-1, and so on, until the video coding data corresponding to the output unit 1 is written into the video file.
In one possible implementation, the output unit arrangement order is set by the user through the client. For example, it is assumed that the first splicing device includes 4 output units, respectively output unit 1 to output unit 4. The user can set the output unit arrangement sequence as follows through the client: output unit 1, output unit 2, output unit 3, output unit 4. Alternatively, the user may set the output unit arrangement order by the client as: output unit 4, output unit 3, output unit 2, output unit 1. Or, the user may set the output unit arrangement sequence as follows through the client: output unit 3, output unit 4, output unit 1, output unit 2.
N1 video coding data are written into the video file according to the arrangement sequence of the output units, and when the video file is played back, the video server can distinguish which video coding data come from which output unit according to the arrangement sequence of the output units, and then the video server can display pictures corresponding to the video coding data in a play window corresponding to the video coding data.
Optionally, the video server may establish N1 buffers, where N1 output units of the first splicing device are in one-to-one correspondence with the N1 buffers; the video server stores the received N1 video coding data and the first identification corresponding to each video coding data into the corresponding buffer area respectively; the specific implementation mode of the video recording server for sequentially writing the N1 video coding data and the first identification corresponding to each video coding data into the video recording file based on the arrangement sequence of the output units is as follows: after N1 video coding data and the first identifications corresponding to the video coding data are stored in the corresponding buffer areas, the video coding data and the first identifications in the N1 buffer areas are sequentially written into the video file based on the arrangement sequence of the output units.
That is, one output unit corresponds to one buffer, N1 output units correspond to N1 buffers, and the video server stores N1 video encoded data with the same first identifier (same time) received from the first splicing device into the corresponding buffers. When the video server determines that all N1 video coding data with the same first identifier are located in the buffer area, the N1 video coding data are sequentially written into the video file according to the arrangement sequence of the output units.
Illustratively, the first splicing device includes four output units: output unit 1, output unit 2, output unit 3, and output unit 4. The video server establishes 4 buffers according to the four output units: buffer 1, buffer 2, buffer 3, and buffer 4. The output unit 1 corresponds to the buffer 1, the output unit 2 corresponds to the buffer 2, the output unit 3 corresponds to the buffer 3, and the output unit 4 corresponds to the buffer 4. The video server stores the video coding data 1 and the first identifier A received from the output unit 1 in the buffer 1, stores the video coding data 2 and the first identifier A received from the output unit 2 in the buffer 2, stores the video coding data 3 and the first identifier A received from the output unit 3 in the buffer 3, and stores the video coding data 4 and the first identifier A received from the output unit 4 in the buffer 4. After all four video coding data with the first identifier A have been stored into the buffers, they are written into the video file according to the arrangement sequence of the output units.
Network delay may occur during network transmission, and when it does, the video coding data with the same first identifier from different output units may not arrive at the same time. By establishing a buffer area for each output unit in the video server, the received video coding data can be buffered, so that the video file is written according to the arrangement sequence of the output units only after all the video coding data with the same first identifier have been received.
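A minimal sketch of this buffering logic is given below: encoded data are held per output unit until all N1 payloads carrying the same first identifier have arrived, and only then is the complete group released in the output unit arrangement order. The class name and structure are assumptions for illustration.

```python
# Sketch of per-output-unit buffering on the recording server.
from collections import defaultdict
from typing import Dict, List, Optional

class FrameGroupBuffer:
    def __init__(self, n_units: int):
        self.n_units = n_units
        # first identifier -> {output unit index -> encoded payload}
        self.pending: Dict[int, Dict[int, bytes]] = defaultdict(dict)

    def add(self, first_id: int, unit_id: int, payload: bytes) -> Optional[List[bytes]]:
        self.pending[first_id][unit_id] = payload
        if len(self.pending[first_id]) == self.n_units:   # group complete
            group = self.pending.pop(first_id)
            return [group[u] for u in sorted(group)]      # arrangement order
        return None                                       # still waiting for data
```

A complete group returned by add() could then be appended to the video file, for example with a writer like the one sketched after the first aspect above.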
Referring to fig. 4, fig. 4 is a schematic flowchart of a video playback method according to an embodiment of the present invention. The video playback method provided by the embodiment of the invention may include the following steps S401 to S406.
S401, the second splicing device sets N1 play windows on the second spliced screen, wherein each play window corresponds to one or more output units of the second splicing device, and N1 is greater than 1.
The second splicing device is the splicing device used for playback, and the second spliced screen is the spliced screen used for displaying the playback video. The second splicing device used for playback and the first splicing device used for video recording may be the same device, may be different devices of the same size, or may be different devices of different sizes, which is not limited herein. For example, video coding data obtained by recording a 2×2 spliced screen and written into a video file on the video server can be played back on a 2×2 or 3×3 spliced screen by opening play windows.
Fig. 5 is a schematic diagram of play windows opened by the second splicing device according to an embodiment of the present invention. As shown in fig. 5, the second spliced screen 501 is a 3×3 spliced screen comprising 9 sub-screens in total. In fig. 5, 503 is the second splicing device, which comprises 9 output units. Suppose that a video file obtained by recording a first spliced screen with a size of 2×2 is to be played back on the second spliced screen. A 2×2 play window 502 needs to be opened on the second spliced screen 501 (the play window 502 includes windows 1 to 4). The window 1 in the play window 502 shown in fig. 5 is composed of four sub-screens of the second spliced screen 501; these four sub-screens correspond to four output units in the second splicing device 503, so the window 1 also corresponds to four output units in the second splicing device 503. Assume that the window 1 corresponds to four sub-screens of the second spliced screen 501: sub-screen 1, sub-screen 2, sub-screen 4, and sub-screen 5; then the window 1 corresponds to four output units of the second splicing device 503: output unit 1, output unit 2, output unit 4, and output unit 5. Assume that the window 2 corresponds to four sub-screens of the second spliced screen 501: sub-screen 2, sub-screen 3, sub-screen 5, and sub-screen 6; then the window 2 corresponds to four output units of the second splicing device 503: output unit 2, output unit 3, output unit 5, and output unit 6. That is, the output units corresponding to each window in the play window are the output units corresponding to the one or more sub-screens that make up the window. The correspondence between the other windows and the output units is similar and is not described in detail herein.
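As an illustration of the window-to-output-unit correspondence in the fig. 5 example, the sketch below derives which sub-screens, and hence which output units of the second splicing device, a play window overlaps. It assumes sub-screens are numbered row-major from 1, that sub-screen i is driven by output unit i, and that the window position is given as a fraction of the full screen; the function name and coordinate convention are illustrative assumptions.

```python
# Sketch: map a play window to the sub-screens (output units) it overlaps.
import math

def window_output_units(rows: int, cols: int,
                        x: float, y: float, w: float, h: float) -> list:
    r0, r1 = math.floor(y * rows), math.ceil((y + h) * rows)
    c0, c1 = math.floor(x * cols), math.ceil((x + w) * cols)
    return [r * cols + c + 1 for r in range(r0, r1) for c in range(c0, c1)]

# Window 1 of a 2x2 playback layout on a 3x3 screen covers sub-screens 1, 2, 4, 5.
assert window_output_units(3, 3, 0.0, 0.0, 0.5, 0.5) == [1, 2, 4, 5]
# Window 2 covers sub-screens 2, 3, 5, 6.
assert window_output_units(3, 3, 0.5, 0.0, 0.5, 0.5) == [2, 3, 5, 6]
```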
S402, the client sends a video playback instruction for playback on the second spliced screen to the video server. Accordingly, the video server may receive the video playback instruction sent by the client.
The video playback instruction carries an identifier of the video file and a playback starting time.
Illustratively, the user selects a video file to be played back from a video file list by using the client, and determines a playback start time in the video file. The video file to be played back can be selected by entering the name of the video file, or by entering the date and time at which the video file was stored. The playback start time may be entered by the user directly, or the user may drag the timeline of the video file to set the playback start time.
S403, the video server reads, from the video file, the N1 video coding data after the playback starting time and the first identifier corresponding to each video coding data.
For the description of the first identifier, reference may be made to the description in the video recording method embodiments S301 to S306, and details are not repeated herein.
The N1 video coding data correspond to the same frame of display picture of the first splicing screen, the N1 video coding data correspond to N1 output units of the first splicing device corresponding to the first splicing screen one by one, and the first identifications corresponding to the video coding data are the same.
In one possible implementation, before reading N1 pieces of video encoded data after the playback start time and the first identifier corresponding to each piece of video encoded data from the video file, the method further includes: the video server sends a first notification to the second splicing equipment, wherein the first notification is used for notifying the playback starting moment; after receiving the first notification, the second splicing equipment sends a data acquisition request to the video server, wherein the data acquisition request is used for requesting to acquire video coding data and a first identifier corresponding to the video coding data after the playback starting time in the video file; optionally, after the second splicing device receives the first notification, the second splicing device may send a data acquisition request to the video server through an output unit corresponding to each window. After receiving the data acquisition request sent by the second splicing device, the video server executes step S403.
In one possible implementation, the specific implementation manner in which the video server reads, from the video file, the N1 video coding data after the playback start time and the first identifier corresponding to each video coding data is: the N1 video coding data after the playback starting moment and the first identifier corresponding to each video coding data are sequentially read from the video file based on the arrangement sequence of the output units. The arrangement sequence of the output units used when reading the N1 video coding data after the playback starting moment is the same as the arrangement sequence of the output units used when writing them into the video file during video recording.
The first splicing device comprises, for example, 4 output units, respectively output unit 1 to output unit 4. In the recording process, the video recording server receives 4 video coding data {Q4, Q3, Q2, Q1} from the first splicing device: Q1 from output unit 1, Q2 from output unit 2, Q3 from output unit 3, and Q4 from output unit 4, where Q1, Q2, Q3 and Q4 correspond to the same frame display picture of the first spliced screen. The output unit arrangement sequence is: output unit 1 first, output unit 2 second, output unit 3 third, and output unit 4 fourth. The 4 video coding data are therefore written into the video file in the order {Q1, Q2, Q3, Q4}, i.e., Q1 is written first, then Q2, then Q3, and Q4 is written last. During playback, the video server reads the 4 video coding data in the same {Q1, Q2, Q3, Q4} order, i.e., it reads Q1 first, then Q2, then Q3, and finally Q4. It then reads the video coding data corresponding to the display picture of the next frame.
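The read-back step could look like the following sketch, which assumes the same length-prefixed record layout as the writer sketched earlier (an assumption, since the embodiments do not fix a file format) and skips frame groups whose first identifier precedes the playback start time.

```python
# Sketch of reading frame groups back from the video file in output-unit order.
import struct
from typing import BinaryIO, Iterator, List, Tuple

def read_frame_groups(f: BinaryIO, start_id: int) -> Iterator[Tuple[int, List[bytes]]]:
    while True:
        header = f.read(12)
        if len(header) < 12:                            # end of file
            return
        first_id, n_units = struct.unpack("<QI", header)
        payloads = []
        for _ in range(n_units):                        # output unit 1..N1 order
            (size,) = struct.unpack("<I", f.read(4))
            payloads.append(f.read(size))
        if first_id >= start_id:                        # skip data before start time
            yield first_id, payloads
```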
S404, the video server sends N1 video coding data and a first identifier corresponding to each video coding data to a second splicing device corresponding to the second splicing screen.
The N1 playing windows of the second splicing screen are in one-to-one correspondence with the N1 video coding data, and the video coding data corresponding to the playing windows and the first identification are sent to an output unit of the second splicing device corresponding to the playing windows. The number of windows arranged on the second spliced screen for playback is the same as the number of sub-screens of the first spliced screen for video recording, and the windows are in one-to-one correspondence.
For example, the first spliced screen is a 2×2 spliced screen, and the first spliced screen includes four sub-screens: sub-screen 1, sub-screen 2, sub-screen 3, and sub-screen 4. If the video file obtained by recording the first spliced screen is played back on a 3×3 second spliced screen, then four windows need to be opened on the 3×3 second spliced screen, and the layout of the four windows is 2×2, the same as the layout of the first spliced screen. For example, as shown in fig. 5, a 2×2 play window 502 is opened on the second spliced screen 501 (the play window 502 includes windows 1 to 4). Referring to step S401, the output units corresponding to each window in the play window are the output units corresponding to the one or more sub-screens that make up the window. It is assumed that the window 1 shown in fig. 5 corresponds to the output unit 1, the output unit 2, the output unit 4, and the output unit 5. The video server transmits the video coding data corresponding to the window 1 to the output unit 1, the output unit 2, the output unit 4, and the output unit 5 in the second splicing device 503. Which video coding data corresponds to the window 1 depends on the picture displayed on the sub-screen of the first spliced screen corresponding to the window 1. The other windows are handled in the same way and are not described in detail herein.
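A minimal sketch of this dispatch step is shown below: each play window's video coding data and shared first identifier are forwarded to every output unit of the second splicing device that the window covers. The send_to_unit() helper is a placeholder for the actual transport, which the embodiments do not specify in code.

```python
# Sketch of forwarding each window's encoded data to its covered output units.
from typing import Dict, List

def send_to_unit(unit: int, first_id: int, payload: bytes) -> None:
    """Placeholder network send to one output unit of the second splicing device."""
    pass

def dispatch_group(first_id: int,
                   window_payloads: Dict[int, bytes],           # play window -> encoded data
                   window_units: Dict[int, List[int]]) -> None:  # play window -> output units
    for window, payload in window_payloads.items():
        for unit in window_units[window]:
            send_to_unit(unit, first_id, payload)               # same data to every covered unit
```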
S405, the second splicing equipment decodes the video coding data through the output units corresponding to the playing windows to obtain video decoding data corresponding to the playing windows.
For example, for the output unit 1 and the output unit 2 corresponding to the play window 1, the video server transmits the video coding data corresponding to the play window 1 and the first identifier corresponding to the video coding data to the output unit 1 and the output unit 2, which decode the video coding data to obtain the video decoding data corresponding to the play window 1.
S406, the second splicing equipment outputs the video decoding data with the same first identification to the second splicing screen through the corresponding output units of the playing windows at the same time so as to display the video decoding data.
In one possible implementation, the second splicing device establishes N1 buffers, where the N1 playing windows are in one-to-one correspondence with the N1 buffers; storing the received N1 video coding data and the first identifications corresponding to the video coding data into corresponding buffer areas respectively; the second splicing device decodes the video coding data through the output unit corresponding to each playing window, and the specific implementation mode of the video decoding data corresponding to each playing window is as follows: after N1 video coding data and the first identification corresponding to each video coding data are stored in the corresponding buffer area, decoding the video coding data in the corresponding buffer area through the output unit corresponding to each playing window to obtain video decoding data corresponding to each playing window.
Illustratively, the 4 sets of video coding data for the same frame (with the same first identifier) are {Q1, Q2, Q3, Q4}. As shown in fig. 5, the second spliced screen is a 3×3 spliced screen, and the second splicing device includes 9 output units. Four play windows in a 2×2 layout are opened on the second spliced screen. The second splicing device establishes 4 buffer areas, where the buffer 1 corresponds to the play window 1, the buffer 2 corresponds to the play window 2, the buffer 3 corresponds to the play window 3, and the buffer 4 corresponds to the play window 4. After the data Q1 to Q4 are stored in the corresponding buffer areas, the video coding data Q1 to Q4 are decoded by the 9 output units corresponding to the four play windows, so that the video decoding data corresponding to the four play windows are obtained. The second splicing device then sends the four decoded data to the sub-screens of the second spliced screen corresponding to the play windows for display.
Network delay may occur during network transmission, and when it does, the video coding data with the same first identifier for different windows sent by the video server may not arrive at the same time. By establishing a buffer area for each play window in the second splicing device, the received video coding data can be buffered, so that the second splicing device decodes the video coding data only after all the video coding data with the same first identifier have been received and then sends the video decoding data to the second spliced screen simultaneously. In this way the play windows on the second spliced screen are displayed simultaneously.
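The playback-side synchronisation described above could be sketched as follows: encoded data are buffered per play window and decoded and displayed only once every window has received data carrying the same first identifier. The decode() and display() helpers are placeholders for the output units' decoder and the screen output path, which are not specified in code by the embodiments.

```python
# Sketch of per-play-window buffering and synchronized output on the second
# splicing device.
from typing import Dict

def decode(data: bytes) -> bytes:
    """Placeholder for an output unit's decoder."""
    return data

def display(first_id: int, frames: Dict[int, bytes]) -> None:
    """Placeholder for outputting all windows' frames to the spliced screen at once."""
    pass

class PlaybackSync:
    def __init__(self, n_windows: int):
        self.n_windows = n_windows
        self.buffers: Dict[int, Dict[int, bytes]] = {}   # first_id -> window -> data

    def on_data(self, first_id: int, window: int, payload: bytes) -> None:
        self.buffers.setdefault(first_id, {})[window] = payload
        group = self.buffers[first_id]
        if len(group) == self.n_windows:                 # every window has this frame
            decoded = {w: decode(d) for w, d in group.items()}
            display(first_id, decoded)                   # output simultaneously
            del self.buffers[first_id]
```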
Referring to fig. 6, the video recording apparatus shown in fig. 6 may be used to perform some or all of the functions of the recording server in the embodiment of the method described above with reference to fig. 3. The device can be a video server, a device in the video server or a device which can be matched with the video server for use. The video recording device can also be a chip system. The video recording apparatus shown in fig. 6 may include a communication unit 601 and a processing unit 602. The processing unit 602 is configured to perform data processing. The communication unit 601 integrates a receiving unit and a transmitting unit. The communication unit 601 may also be referred to as a transceiving unit. Alternatively, the communication unit 601 may be split into a receiving unit and a transmitting unit. Wherein:
a communication unit 601, configured to receive a video recording instruction sent by a client;
the communication unit 601 is further configured to: send, in response to the video recording instruction, a video acquisition request to a first splicing device, where the first splicing device has N1 output units and N1 is greater than 1; and receive N1 video coding data and the first identifier corresponding to each video coding data, which are sent by the first splicing device in response to the video acquisition request, where the N1 output units are in one-to-one correspondence with the N1 video coding data, the N1 video coding data correspond to the same frame of display picture of a first splicing screen, and the first identifiers corresponding to the respective video coding data are the same;
the processing unit 602 is configured to write the N1 video coding data and the first identifier corresponding to each video coding data into a video file.
In a possible implementation, the communication unit 601 is further configured to: send, in response to the video recording instruction, an address acquisition request to the first splicing device, where the address acquisition request is used to request the coding address corresponding to each output unit; and receive the coding address corresponding to each output unit sent by the first splicing device in response to the address acquisition request. In this case, the manner in which the communication unit 601 sends the video acquisition request to the first splicing device is specifically: sending the video acquisition request to the first splicing device based on the coding address corresponding to each output unit.
In one possible implementation, the manner in which the processing unit 602 writes the N1 video coding data and the first identifier corresponding to each video coding data into the video file is specifically: writing the N1 video coding data and the first identifiers corresponding to the respective video coding data into the video file in sequence based on the arrangement order of the output units.
In one possible implementation, the processing unit 602 is further configured to establish N1 buffer areas, where the N1 output units are in one-to-one correspondence with the N1 buffer areas, and to store the received N1 video coding data and the first identifier corresponding to each video coding data into the corresponding buffer areas. In this case, the manner in which the processing unit 602 sequentially writes the N1 video coding data and the first identifiers into the video file based on the arrangement order of the output units is specifically: after the N1 video coding data and the first identifiers corresponding to the respective video coding data have been stored in the corresponding buffer areas, writing the video coding data and the first identifiers in the N1 buffer areas into the video file in sequence based on the arrangement order of the output units.
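For the recording side, the sketch below illustrates the per-output-unit buffering and the in-order write into the video file described above. The on-disk record layout (identifier, unit index, payload length, payload) and the class name RecordingWriter are assumptions made only for illustration; the patent does not define a file format.

```python
import struct

class RecordingWriter:
    """Sketch of the recording server's write path (illustrative only).

    Coded data is buffered per output unit; once all n1 output units have
    delivered data carrying the same first identifier, the group is appended
    to the recording file in output-unit arrangement order.
    """

    def __init__(self, path, n1):
        self.file = open(path, "wb")
        self.n1 = n1
        # first_identifier -> {output_unit_index: coded_bytes}
        self.pending = {}

    def on_coded_data(self, unit_index, first_identifier, coded_bytes):
        group = self.pending.setdefault(first_identifier, {})
        group[unit_index] = coded_bytes
        if len(group) == self.n1:
            self._write_group(first_identifier, self.pending.pop(first_identifier))

    def _write_group(self, first_identifier, group):
        # Assumed record layout: identifier, unit index, payload length, payload.
        for unit_index in range(self.n1):  # output-unit arrangement order
            payload = group[unit_index]
            self.file.write(struct.pack("<QII", first_identifier,
                                        unit_index, len(payload)))
            self.file.write(payload)
        self.file.flush()

    def close(self):
        self.file.close()
```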
Referring to fig. 6, the video recording apparatus shown in fig. 6 may also be used to perform some or all of the functions of the first splicing device in the method embodiment described above with reference to fig. 3. The apparatus may be the first splicing device, a device in the first splicing device, or a device that can be used in cooperation with the first splicing device. The video recording apparatus may also be a chip system. The video recording apparatus shown in fig. 6 may include a communication unit 601 and a processing unit 602. The processing unit 602 is configured to perform data processing. The communication unit 601 integrates a receiving unit and a transmitting unit, and may also be referred to as a transceiving unit. Alternatively, the communication unit 601 may be split into a receiving unit and a transmitting unit. Wherein:
a communication unit 601, configured to receive a video acquisition request sent by a video server;
the processing unit 602 is configured to: encode, in response to the video acquisition request, the N1 video data output by the N1 output units to obtain N1 video coding data, where the N1 video coding data correspond to the same frame of display picture of the first splicing screen; and generate the same first identifier for each video coding data;
the communication unit 601 is further configured to send the N1 video coding data and the first identifier corresponding to each video coding data to the video server.
In a possible implementation, the communication unit 601 is further configured to receive an address acquisition request sent by the video server, where the address acquisition request is used to request the coding address corresponding to each output unit;
the communication unit 601 is further configured to send, in response to the address acquisition request, the coding address corresponding to each output unit to the video server.
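The per-frame work of the first splicing device described above can be summarized by the following sketch, in which capture, encoding, and transport are represented by assumed callbacks. Only the tagging of all N1 coded outputs of one display frame with the same first identifier (here a timestamp, one of the options named in claims 4 and 7) reflects the text above; everything else is an assumption for illustration.

```python
import time

def record_one_frame(output_units, encode, send_to_server):
    """Sketch of the first splicing device's per-frame work (illustrative only).

    'output_units' yields the raw video data of one display frame per unit;
    'encode' and 'send_to_server' are assumed callbacks, since the patent
    leaves the codec and the transport unspecified.
    """
    # One first identifier per display frame; claims 4 and 7 allow a timestamp.
    first_identifier = int(time.time() * 1000)

    for unit_index, raw_frame in enumerate(output_units):
        coded = encode(unit_index, raw_frame)          # N1 coded streams
        # Every piece of coded data for this frame carries the same identifier,
        # so the recording server can group them back into one wall picture.
        send_to_server(unit_index, first_identifier, coded)
```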
Referring to fig. 7, the video playback apparatus shown in fig. 7 may be used to perform some or all of the functions of the video server in the method embodiment described above with reference to fig. 4. The apparatus may be the video server, a device in the video server, or a device that can be used in cooperation with the video server. The video playback apparatus may also be a chip system. The video playback apparatus shown in fig. 7 may include a communication unit 701 and a processing unit 702. The processing unit 702 is configured to perform data processing. The communication unit 701 integrates a receiving unit and a transmitting unit, and may also be referred to as a transceiving unit. Alternatively, the communication unit 701 may be split into a receiving unit and a transmitting unit. The processing unit 702 and the communication unit 701 are similar to the processing unit 602 and the communication unit 601 described above, and are not described in detail again. Wherein:
the communication unit 701 is configured to receive a video playback instruction sent by the client and used for playback on the second splicing screen, where the video playback instruction carries an identifier of a video file and a playback start time;
the processing unit 702 is configured to read, from the video file, the N1 video coding data after the playback start time and the first identifier corresponding to each video coding data, where the N1 video coding data correspond to the same frame of display picture of a first splicing screen, the N1 video coding data are in one-to-one correspondence with N1 output units of a first splicing device corresponding to the first splicing screen, and the first identifiers corresponding to the respective video coding data are the same;
the communication unit 701 is further configured to send the N1 video coding data and the first identifier corresponding to each video coding data to a second splicing device corresponding to the second splicing screen, where the N1 playing windows of the second splicing screen are in one-to-one correspondence with the N1 video coding data, and the video coding data and the first identifier corresponding to each playing window are sent to the output unit of the second splicing device corresponding to that playing window.
In a possible implementation, the communication unit 701 is further configured to send a first notification to the second splicing device, where the first notification is used to notify the playback start time;
the communication unit 701 is further configured to receive a data acquisition request sent by the second splicing device, where the data acquisition request is used to request to acquire the video coding data after the playback start time in the video file and the first identifier corresponding to the video coding data.
In one possible implementation, the processing unit 702 is specifically configured to sequentially read, from the video file, the N1 video coding data after the playback start time and the first identifier corresponding to each video coding data, based on the arrangement order of the output units.
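Reading the recorded data back in output-unit order can then mirror the write path. The sketch below reuses the record layout assumed in the recording sketch above (again an assumption, not a format defined by the patent), skips records whose first identifier precedes the playback start time, and yields complete frame groups in output-unit arrangement order.

```python
import struct

HEADER = struct.Struct("<QII")  # first identifier, unit index, payload length

def read_groups(path, n1, playback_start):
    """Yield (first_identifier, [coded_bytes per output unit]) frame groups
    whose identifier is not earlier than playback_start (illustrative sketch,
    assuming the record layout used in the recording sketch above)."""
    with open(path, "rb") as f:
        group, group_id = [None] * n1, None
        while True:
            header = f.read(HEADER.size)
            if len(header) < HEADER.size:
                break
            first_identifier, unit_index, length = HEADER.unpack(header)
            payload = f.read(length)
            if first_identifier < playback_start:
                continue  # skip data recorded before the playback start time
            if group_id != first_identifier:
                group, group_id = [None] * n1, first_identifier
            group[unit_index] = payload
            if all(piece is not None for piece in group):
                yield group_id, list(group)  # output-unit arrangement order
```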
Referring to fig. 7, the video playback apparatus shown in fig. 7 may also be used to perform some or all of the functions of the second splicing device in the method embodiment described above with reference to fig. 4. The apparatus may be the second splicing device, a device in the second splicing device, or a device that can be used in cooperation with the second splicing device. The video playback apparatus may also be a chip system. The video playback apparatus shown in fig. 7 may include a communication unit 701 and a processing unit 702. The processing unit 702 is configured to perform data processing. The communication unit 701 integrates a receiving unit and a transmitting unit, and may also be referred to as a transceiving unit. Alternatively, the communication unit 701 may be split into a receiving unit and a transmitting unit. The processing unit 702 and the communication unit 701 are similar to the processing unit 602 and the communication unit 601 described above, and are not described in detail again. Wherein:
the processing unit 702 is configured to set N1 playing windows on the second splicing screen, where each playing window corresponds to one or more output units of the second splicing device, and N1 is greater than 1;
the communication unit 701 is configured to receive the N1 video coding data and the first identifier corresponding to each video coding data sent by the video server, where the N1 video coding data correspond to the same frame of display picture of a first splicing screen, the N1 video coding data are in one-to-one correspondence with N1 output units of a first splicing device corresponding to the first splicing screen, and the first identifiers corresponding to the respective video coding data are the same; the N1 playing windows are in one-to-one correspondence with the N1 video coding data, and the video coding data and the first identifier corresponding to each playing window are sent to the output unit corresponding to that playing window;
the processing unit 702 is further configured to decode the video coding data through the output unit corresponding to each playing window, so as to obtain the video decoding data corresponding to each playing window;
the communication unit 701 is further configured to simultaneously output, through the output unit corresponding to each playing window, the video decoding data with the same first identifier to the second splicing screen for display.
In one possible implementation, the processing unit 702 is further configured to establish N1 buffer areas, where the N1 playing windows are in one-to-one correspondence with the N1 buffer areas;
the processing unit 702 is further configured to store the received N1 video coding data and the first identifier corresponding to each video coding data into the corresponding buffer areas;
the manner in which the processing unit 702 decodes the video coding data through the output unit corresponding to each playing window to obtain the video decoding data corresponding to each playing window is specifically: after the N1 video coding data and the first identifier corresponding to each video coding data have been stored in the corresponding buffer areas, decoding the video coding data in the corresponding buffer area through the output unit corresponding to each playing window, so as to obtain the video decoding data corresponding to each playing window.
In a possible implementation, the communication unit 701 is further configured to receive a first notification sent by the video server, where the first notification is used to notify a playback start time;
the communication unit 701 is further configured to send a data acquisition request to the video server, where the data acquisition request is used to request to acquire the video coding data after the playback start time in the video file and the first identifier corresponding to the video coding data.
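The first notification and the data acquisition request above form a short handshake before any playback data flows. The sketch below models the two messages as plain dictionaries purely for illustration; the patent does not specify any message format or transport, so every field name here is an assumption.

```python
def make_first_notification(playback_start):
    """Video server -> second splicing device: announce the playback start time."""
    return {"type": "first_notification", "playback_start": playback_start}

def make_data_acquisition_request(video_file_id, playback_start):
    """Second splicing device -> video server: ask for the coded data and first
    identifiers recorded after the playback start time in the given video file."""
    return {"type": "data_acquisition_request",
            "video_file": video_file_id,
            "playback_start": playback_start}

def on_first_notification(notification, video_file_id, send):
    # The second splicing device answers the notification with its request,
    # after which the server starts streaming the matching frame groups.
    send(make_data_acquisition_request(video_file_id,
                                       notification["playback_start"]))
```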
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device may be the video server, the first splicing device, or the second splicing device. As shown in fig. 8, the electronic device in this embodiment may include one or more processors 801, one or more memories 802, and one or more transceivers 803. The processor 801, the memory 802, and the transceiver 803 are connected through a bus 804. The memory 802 is configured to store a computer program including program instructions, the processor 801 is configured to execute the program instructions stored in the memory 802 to perform the data processing operations of the electronic device, and the transceiver 803 is configured to perform the data transceiving operations of the electronic device.
For example, when the electronic device is a video server, the transceiver 803 may perform the transceiving operation of the video server in the video recording embodiment. The processor 801 may perform step S306 in the video recording embodiment. And/or the transceiver 803 may perform the transceiving operation of the video server in the video playback embodiment. The processor 801 may perform step S403 in the video playback embodiment.
For example, when the electronic device is the first splicing device, the transceiver 803 may perform the transceiving operation of the first splicing device in the video recording embodiment. The processor 801 may perform steps S303 and S304 in the video recording embodiment.
For example, when the electronic device is the second splicing device, the transceiver 803 may perform the transceiving operations of the second splicing device in the video playback embodiment, and the processor 801 may perform steps S405 and S406 in the video playback embodiment.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, because some steps may be performed in other orders or concurrently in accordance with the present application. In addition, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
The descriptions of the embodiments provided in the present application may refer to each other, and each embodiment focuses on its differences from the others; for a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments. For convenience and brevity of description, for the functions and operations performed by the devices and apparatuses provided in the embodiments of the present application, reference may be made to the related descriptions of the method embodiments of the present application; the method embodiments and the device embodiments may also reference, combine, or cite one another.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and such modifications and replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (19)

1. A video recording method, applied to a recording server, comprising:
receiving a video recording instruction sent by a client;
responding to the video recording instruction, and sending a video acquisition request to a first splicing device, wherein the first splicing device is provided with N1 output units, and N1 is greater than 1;
receiving N1 video coding data and first identifiers corresponding to the video coding data, which are sent by the first splicing device in response to the video acquisition request, wherein the N1 output units are in one-to-one correspondence with the N1 video coding data, the N1 video coding data correspond to the same frame of display picture of a first splicing screen, and the first identifiers corresponding to the video coding data are the same;
establishing N1 buffer areas, wherein the N1 output units are in one-to-one correspondence with the N1 buffer areas;
storing the received N1 video coding data and the first identifiers corresponding to the video coding data into the corresponding buffer areas respectively;
and after the N1 video coding data and the first identifiers corresponding to the video coding data are stored in the corresponding buffer areas, writing the video coding data and the first identifiers in the N1 buffer areas into a video file in sequence based on the arrangement order of the output units.
2. The method according to claim 1, wherein the method further comprises:
responding to the video recording instruction, and sending an address acquisition request to the first splicing device, wherein the address acquisition request is used for requesting to acquire the coding address corresponding to each output unit;
receiving the coding address corresponding to each output unit sent by the first splicing device in response to the address acquisition request;
the sending a video acquisition request to the first splicing device includes:
and sending the video acquisition request to the first splicing device based on the coding address corresponding to each output unit.
3. The method of claim 1, wherein the arrangement order of the output units is set by a user through the client.
4. The method according to any one of claims 1 to 3, wherein the first identifier is a time stamp.
5. A video recording method, applied to a first splicing device, the first splicing device comprising N1 output units, the N1 being greater than 1, the method comprising:
receiving a video acquisition request sent by a video server;
responding to the video acquisition request, and encoding the N1 video data output by the N1 output units to obtain N1 video coding data, wherein the N1 video coding data correspond to the same frame of display picture of a first splicing screen;
generating the same first identifier for each video coding data;
and sending the N1 video coding data and the first identifier corresponding to each video coding data to the video server, so that the video server establishes N1 buffer areas, wherein the N1 output units are in one-to-one correspondence with the N1 buffer areas, the video server stores the received N1 video coding data and the first identifiers corresponding to the video coding data into the corresponding buffer areas respectively, and after the N1 video coding data and the first identifiers corresponding to the video coding data are stored in the corresponding buffer areas, the video server sequentially writes the video coding data and the first identifiers in the N1 buffer areas into a video file based on the arrangement order of the output units.
6. The method of claim 5, wherein before the first splicing device receives the video acquisition request sent by the video server, the method further comprises:
receiving an address acquisition request sent by the video server, wherein the address acquisition request is used for requesting to acquire a coding address corresponding to each output unit;
and responding to the address acquisition request, and sending the coding address corresponding to each output unit to the video server.
7. The method of claim 5 or 6, wherein the first identifier is a timestamp.
8. A video playback method, applied to a video server, the method comprising:
receiving a video playback instruction sent by a client for playback on a second splicing screen, wherein the video playback instruction carries an identifier of a video file and a playback start time;
sequentially reading, from the video file based on the arrangement order of the output units, N1 video coding data after the playback start time and first identifiers corresponding to the video coding data, wherein the N1 video coding data correspond to the same frame of display picture of a first splicing screen, the N1 video coding data are in one-to-one correspondence with N1 output units of a first splicing device corresponding to the first splicing screen, the first identifiers corresponding to the video coding data are the same, and the N1 video coding data and the first identifiers corresponding to the video coding data are written into the video file based on the arrangement order of the output units;
and sending the N1 video coding data and the first identifier corresponding to each video coding data to a second splicing device corresponding to the second splicing screen, wherein N1 playing windows of the second splicing screen are in one-to-one correspondence with the N1 video coding data, and the video coding data and the first identifier corresponding to each playing window are sent to an output unit of the second splicing device corresponding to that playing window.
9. The method of claim 8, wherein before the N1 video coding data after the playback start time and the first identifier corresponding to each video coding data are read from the video file, the method further comprises:
sending a first notification to the second splicing device, wherein the first notification is used for notifying the playback start time;
and receiving a data acquisition request sent by the second splicing device, wherein the data acquisition request is used for requesting to acquire the video coding data after the playback start time in the video file and the first identifier corresponding to the video coding data.
10. The method of claim 8, wherein the arrangement order of the output units is set by a user through the client.
11. The method according to any one of claims 8 to 10, wherein the first identifier is a time stamp.
12. A video playback method applied to a second splicing device comprising N2 output units, the N2 being greater than 1, the method comprising:
setting N1 playing windows on the second splicing screen, wherein each playing window corresponds to one or more output units of the second splicing device, and N1 is greater than 1;
receiving N1 video coding data and first identifiers corresponding to the video coding data sent by a video server, wherein the N1 video coding data correspond to the same frame of display picture of a first splicing screen, the N1 video coding data are in one-to-one correspondence with N1 output units of a first splicing device corresponding to the first splicing screen, and the first identifiers corresponding to the video coding data are the same; the N1 playing windows are in one-to-one correspondence with the N1 video coding data, and the video coding data and the first identifier corresponding to each playing window are sent to the output unit corresponding to that playing window;
decoding the video coding data through the output unit corresponding to each playing window to obtain video decoding data corresponding to each playing window;
and simultaneously outputting, through the output unit corresponding to each playing window, the video decoding data with the same first identifier to the second splicing screen for display.
13. The method according to claim 12, wherein the method further comprises:
establishing N1 buffer areas, wherein N1 playing windows are in one-to-one correspondence with the N1 buffer areas;
storing the received N1 video coding data and the first identifications corresponding to the video coding data into corresponding buffer areas respectively;
the decoding of the video coding data through the output unit corresponding to each playing window to obtain the video decoding data corresponding to each playing window comprises:
and after the N1 video coding data and the first identifier corresponding to each video coding data are stored in the corresponding buffer areas, decoding the video coding data in the corresponding buffer area through the output unit corresponding to each playing window to obtain the video decoding data corresponding to each playing window.
14. The method of claim 12, wherein before the N1 video coding data and the first identifiers corresponding to the video coding data sent by the video server are received, the method further comprises:
receiving a first notification sent by the video server, wherein the first notification is used for notifying a playback start time;
and sending a data acquisition request to the video server, wherein the data acquisition request is used for requesting to acquire the video coding data after the playback start time in a video file and the first identifier corresponding to the video coding data.
15. The method according to any one of claims 12 to 14, wherein the first identifier is a time stamp.
16. A video recording apparatus comprising means for performing the method of any one of claims 1 to 4 or comprising means for performing the method of any one of claims 5 to 7.
17. A video playback device comprising means for performing the method of any one of claims 8 to 11, or comprising means for performing the method of any one of claims 12 to 15.
18. An electronic device comprising a processor and a memory, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions, to perform the method of any of claims 1-4, or to perform the method of any of claims 5-7, or to perform the method of any of claims 8-11, or to perform the method of any of claims 12-15.
19. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-4, or to perform the method of any one of claims 5-7, or to perform the method of any one of claims 8-11, or to perform the method of any one of claims 12-15.
CN202111297709.9A 2021-11-04 2021-11-04 Video recording method, video playback method and device Active CN114189730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111297709.9A CN114189730B (en) 2021-11-04 2021-11-04 Video recording method, video playback method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111297709.9A CN114189730B (en) 2021-11-04 2021-11-04 Video recording method, video playback method and device

Publications (2)

Publication Number Publication Date
CN114189730A CN114189730A (en) 2022-03-15
CN114189730B true CN114189730B (en) 2023-12-19

Family

ID=80540672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111297709.9A Active CN114189730B (en) 2021-11-04 2021-11-04 Video recording method, video playback method and device

Country Status (1)

Country Link
CN (1) CN114189730B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103260011A (en) * 2013-05-16 2013-08-21 杭州巨峰科技有限公司 System and method for achieving monitoring of more paths by utilization of net-harddisk video recorder
CN104601863A (en) * 2013-09-12 2015-05-06 深圳锐取信息技术股份有限公司 IP matrix system for recording and playing
KR20180098850A (en) * 2017-02-27 2018-09-05 (주)진명아이앤씨 Method and apparatus for stitching uhd videos
US10079963B1 (en) * 2017-04-14 2018-09-18 Via Technologies, Inc. Display method and display system for video wall
CN108924582A (en) * 2018-09-03 2018-11-30 深圳市东微智能科技股份有限公司 Video recording method, computer readable storage medium and recording and broadcasting system
CN110007885A (en) * 2019-04-04 2019-07-12 广州驷骏精密设备股份有限公司 A kind of display control method and device based on mosaic screen
CN111182235A (en) * 2019-12-05 2020-05-19 浙江大华技术股份有限公司 Method, device, computer device and storage medium for recording spliced screen pictures
CN111327873A (en) * 2019-07-29 2020-06-23 杭州海康威视系统技术有限公司 Scene layout adjusting method and device, electronic equipment and storage medium
CN112188136A (en) * 2020-09-24 2021-01-05 高新兴科技集团股份有限公司 Method, system, storage medium and equipment for splicing and recording videos in real time in all-in-one mode
CN112422888A (en) * 2019-08-23 2021-02-26 浙江宇视科技有限公司 Video splicing method and device, electronic equipment and computer readable storage medium
CN112929583A (en) * 2021-05-10 2021-06-08 北京小鸟科技股份有限公司 Method and system for synchronously recording and replaying original signal source based on large-screen window
CN113157232A (en) * 2021-04-26 2021-07-23 青岛海信医疗设备股份有限公司 Multi-screen splicing display system and method
CN113438536A (en) * 2021-06-22 2021-09-24 北京飞讯数码科技有限公司 Video display method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2573238B (en) * 2017-02-03 2022-12-14 Tv One Ltd Method of video transmission and display
US11683861B2 (en) * 2020-01-06 2023-06-20 Koji Yoden Edge-based communication and internet communication for media distribution, data analysis, media download/upload, and other services

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research and implementation of a large-screen splicing display system based on Android; Guo Xiaoqin; China Master's Theses Full-text Database (Electronic Journal); full text *
Application of centralized storage solutions in digital video surveillance systems; Wang Ying; Ma Fugui; China Security & Protection (12); full text *

Also Published As

Publication number Publication date
CN114189730A (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN103190092B (en) System and method for the synchronized playback of streaming digital content
CN109168021B (en) Plug flow method and device
WO2019134235A1 (en) Live broadcast interaction method and apparatus, and terminal device and storage medium
CN105430512A (en) Method and device for displaying information on video image
CN106131591A (en) Live broadcasting method, device and terminal
WO2018024231A1 (en) Method and apparatus for interconnecting spliced wall and mobile intelligent terminal
CN103327361A (en) Method, device and system for obtaining real-time video communication playback data flow
TWM366948U (en) Wireless digital photo frame with video streaming function
CN111050025A (en) Audio and video display control method, device and system and computer readable storage medium
CN104837046A (en) Multi-media file processing method and device
CN114710702A (en) Video playing method and device
CN114189730B (en) Video recording method, video playback method and device
CN102473088B (en) Media processing comparison system and techniques
CN102917260A (en) Information processing apparatus, information processing system, and program
CN114630101B (en) Display device, VR device and display control method of virtual reality application content
WO2001045411A1 (en) System and method for delivering interactive audio/visual product by server/client
CN113691815B (en) Video data processing method, device and computer readable storage medium
CN115361579A (en) Video transmitting and displaying method and device, electronic equipment and storage medium
CN113596583A (en) Video stream bullet time data processing method and device
JP2002262190A (en) Image processing unit and method, recording medium and program
CN112954483B (en) Data transmission method, system and non-volatile storage medium
JP4151962B2 (en) Telop image transmission device, telop image reception device, telop image transmission / reception system, encoding device, and decoding device
WO2019007161A1 (en) Video data display method and device
JP2005203948A5 (en)
CN117857872A (en) Memory processing method and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant