CN116506643A - Video processing method, apparatus, device, storage medium, and computer program product


Info

Publication number
CN116506643A
CN116506643A (application number CN202210053938.4A)
Authority
CN
China
Prior art keywords
video
video stream
target
stream
type
Prior art date
Legal status
Pending
Application number
CN202210053938.4A
Other languages
Chinese (zh)
Inventor
余昊
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210053938.4A
Publication of CN116506643A
Legal status: Pending

Classifications

    • H04N 19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04L 67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04N 19/65: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses a video processing method, a device, equipment, a storage medium and a computer program product, which can be applied to various fields or scenes such as cloud technology, intelligent traffic systems, video monitoring systems, vehicle-mounted monitoring and the like. The method comprises the following steps: acquiring a target video stream of target video shooting equipment, and determining a forwarding video stream based on the target video stream; based on web page real-time communication WebRTC protocol, transmitting the forwarded video stream to a client, so that the client plays the video based on the forwarded video stream; when the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard. According to the embodiment of the application, the transmission efficiency of the video stream can be ensured, and the time delay of video playing can be reduced.

Description

Video processing method, apparatus, device, storage medium, and computer program product
Technical Field
The present application relates to the field of computer technology, and in particular, to a video processing method, a video processing apparatus, a computer device, a computer readable storage medium, and a computer program product.
Background
In video monitoring applications, after a video capturing device acquires a monitoring video stream, the stream must be processed and transmitted to a server, which then forwards it to a client so that the client can view the monitoring video. How to process and transmit the monitoring video stream so that it is transmitted in real time (or with low delay) and played smoothly is a problem that currently needs to be solved.
Disclosure of Invention
The embodiment of the application provides a video processing method, a device, equipment, a storage medium and a computer program product, which can ensure the transmission efficiency of a video stream and reduce the time delay of video playing.
In one aspect, an embodiment of the present application provides a video processing method, including:
acquiring a target video stream of target video shooting equipment, and determining a forwarding video stream based on the target video stream;
based on web page real-time communication WebRTC protocol, transmitting the forwarded video stream to a client, so that the client plays the video based on the forwarded video stream;
When the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard.
In one aspect, an embodiment of the present application provides a video processing apparatus, including:
the processing unit is used for acquiring a target video stream of the target video shooting device and determining a forwarding video stream based on the target video stream;
the sending unit is used for sending the forwarding video stream to the client based on web page real-time communication WebRTC protocol so that the client plays the video based on the forwarding video stream;
when the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard.
In one aspect, an embodiment of the present application provides a computer device, where the computer device includes a memory and a processor, and the memory stores a computer program, where the computer program when executed by the processor causes the processor to execute the video processing method described above.
In one aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program that, when read and executed by a processor of a computer device, causes the computer device to perform the video processing method described above.
In one aspect, embodiments of the present application provide a computer program product, or computer program, comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the video processing method described above.
In the embodiment of the application, a target video stream of target video shooting equipment is acquired first, and a forwarding video stream is determined based on the target video stream; then, based on web page real-time communication WebRTC protocol, the forwarding video stream is sent to the client so that the client plays the video based on the forwarding video stream; when the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard. According to the embodiment of the application, on one hand, the H.265 video coding standard has the advantages of error correction, video coding quality improvement, video transmission reliability improvement and the like, so that after the video stream is coded by adopting the H.265 video coding standard, real-time (or low-delay) transmission of the video stream can be realized even in a region with weak signals, and the transmission efficiency of the video stream can be ensured; on the other hand, the server and the client communicate by adopting the web real-time communication WebRTC protocol, and the video stream is transmitted by adopting the WebRTC protocol without JavaScript decoding, so that the method has the characteristics of low time delay, high reliability and the like, and the time delay of video playing can be reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic architecture diagram of a video processing system according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a video processing method according to an embodiment of the present application;
Fig. 3 is a flowchart of another video processing method according to an embodiment of the present application;
Fig. 4 is a flowchart of yet another video processing method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of video stream processing corresponding to a first type of video capturing device according to an embodiment of the present application;
Fig. 6 is a schematic diagram of video stream processing corresponding to a second type of video capturing device according to an embodiment of the present application;
Fig. 7 is a flowchart of yet another video processing method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of video stream splicing according to an embodiment of the present application;
Fig. 9 is a flowchart of yet another video processing method according to an embodiment of the present application;
Fig. 10 is a large-screen presentation interface provided by an embodiment of the present application;
Fig. 11 is a flowchart of yet another video processing method according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a video transmission device according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
It should be noted that the descriptions of "first," "second," and the like in the embodiments of the present application are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a technical feature defining "first", "second" may include at least one such feature, either explicitly or implicitly.
First, some terms related to embodiments of the present application are explained for easy understanding by those skilled in the art.
Real-time messaging protocol (Real Time Messaging Protocol, RTMP): the RTMP protocol is built on the Transmission Control Protocol (TCP) and is a family of protocols that includes the RTMP base protocol and variants such as RTMPT, RTMPS, and RTMPE. RTMP is a network protocol designed for real-time data communication and is mainly used for audio, video, and data communication between the Flash/AIR platform and streaming media or interactive servers that support the RTMP protocol.
WebSocket protocol: a Web communication protocol; it is a full-duplex, long-lived connection protocol based on data frames and built on top of TCP. The WebSocket protocol can transmit text or binary data bidirectionally in real time between the server and the client, which saves server resources and bandwidth and enables real-time communication.
WebRTC protocol: a Web communication protocol mainly used for peer-to-peer (browser-to-browser) audio and video chat and data sharing; it can also send text or binary data. WebRTC makes real-time communication a standard browser capability through a simple JavaScript API, without installing any plug-in.
FFmpeg video processing tool: FFmpeg is a set of open-source computer programs that can be used to record and convert digital audio and video and to turn them into streams. It provides a complete solution for recording, converting, and streaming audio and video, includes a very advanced audio/video codec library, and can also ingest video in many formats (such as RTMP, RTSP, FLV, and MPEG), control the video bitrate, and perform processing such as cutting and splicing.
Video push service program (Simple RTMP Server, SRS): a video streaming service program. SRS is positioned as a carrier-grade Internet live-streaming server cluster, pursuing better conceptual integrity and the simplest possible implementation code. The SRS service program provides rich access schemes for ingesting RTMP streams into the SRS module and also supports various transformations of the ingested RTMP video stream. The SRS service program can also serve video push streams over RTMP, FLV, MPEG, WebSocket, WebRTC, and other protocols.
H.265 video coding standard: the H.265 standard is a new video coding standard established by ITU-T VCEG as the successor to the H.264 standard. The H.265 standard builds on the existing H.264 video coding standard, retaining some of the original techniques while improving related ones. It uses advanced techniques to improve the relationship among bitstream size, coding quality, delay, and algorithm complexity so as to reach an optimal operating point. Specific research topics include improving compression efficiency, robustness, and error recovery capability, reducing real-time delay, reducing channel acquisition time and random access delay, and reducing complexity.
At present, in order to solve the problem of how to process and transmit a video stream, a video processing method is provided in the embodiments of the present application in which a Transmission Control Protocol (TCP) connection is used to transmit the video stream to a server; the server encodes the video stream using the Real Time Messaging Protocol (RTMP) or the WebSocket protocol and transmits the encoded video stream to a client, so that the client can view the video.
However, because a TCP connection is a reliable connection, it disconnects or suffers large delays in regions with poor signal, and network jitter causes delay to accumulate even during normal operation.
In addition, when an RTMP-based video coding scheme is used between the server and the client, the RTMP protocol requires the client to install the Flash browser plug-in, and for performance reasons this plug-in stutters noticeably when playing multiple video streams. Moreover, the plug-in is a black box to project engineers: errors that occur inside the Flash plug-in cannot be captured and handled, and once the plug-in crashes the monitoring system becomes unavailable and the page must be refreshed, so this scheme is unsuitable for a monitoring system that needs to run stably for a long time. With the WebSocket-based coding scheme, decoding must be performed at the client using JavaScript; JavaScript is a scripting language whose performance does not meet the requirements of video stream decoding, so obvious stuttering occurs when playing multiple video streams.
Based on the above description, another video processing method is provided in the embodiments of the present application, specifically, a target video stream sent by a target video capturing device is obtained, a forwarding video stream is determined based on the target video stream, and then the forwarding video stream is sent to a client based on web real-time communication WebRTC protocol, so that the client plays a video based on the forwarding video stream.
The video processing method provided by this application can be applied to an Intelligent Traffic System (ITS), also called an Intelligent Transportation System, which is an integrated transportation system that effectively and comprehensively applies advanced technologies (information technology, computer technology, data communication technology, sensor technology, electronic control technology, automatic control theory, operations research, artificial intelligence, and the like) to transportation, video monitoring, service control, and vehicle manufacturing, and strengthens the connection among vehicles, roads, and users, thereby ensuring safety, improving efficiency, improving the environment, and saving energy.
In a possible embodiment, the video processing method provided in the embodiment of the present application may also be implemented based on Cloud technology (Cloud technology). In particular, one or more of Cloud storage (Cloud storage) and Cloud computing (Cloud computing) in Cloud technology may be involved. Wherein cloud computing (cloud computing) refers to a delivery and usage mode of an IT infrastructure, which refers to obtaining required resources in an on-demand, easily-extensible manner through a network; generalized cloud computing refers to the delivery and usage patterns of services, meaning that the required services are obtained in an on-demand, easily scalable manner over a network. Such services may be IT, software, internet related, or other services. Cloud Computing is a product of fusion of traditional computer and network technology developments such as Grid Computing (Grid Computing), distributed Computing (distributed Computing), parallel Computing (Parallel Computing), utility Computing (Utility Computing), network storage (Network Storage Technologies), virtualization (Virtualization), load balancing (Load balancing), and the like.
Cloud storage (cloud storage) is a new concept that extends and develops in the concept of cloud computing, and a distributed cloud storage system (hereinafter referred to as a storage system for short) refers to a storage system that integrates a large number of storage devices (storage devices are also referred to as storage nodes) of various types in a network to work cooperatively through application software or application interfaces through functions such as cluster application, grid technology, and a distributed storage file system, so as to provide data storage and service access functions for the outside. At present, the storage method of the storage system is as follows: when creating logical volumes, each logical volume is allocated a physical storage space, which may be a disk composition of a certain storage device or of several storage devices. The client stores data on a certain logical volume, that is, the data is stored on a file system, the file system divides the data into a plurality of parts, each part is an object, the object not only contains the data but also contains additional information such as a data Identification (ID) and the like, the file system writes each object into a physical storage space of the logical volume, and the file system records storage position information of each object, so that when the client requests to access the data, the file system can enable the client to access the data according to the storage position information of each object.
The video processing method provided by this application can be executed by a computer device, in particular a server, or can be executed jointly by a target video capturing device, a server, and a client. The target video capturing device may be a camera arranged on a movable platform, such as a vehicle-mounted terminal, or a camera provided on a fixed platform, such as a field terminal, but is not limited thereto. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN) services, big data, and artificial intelligence platforms. The client may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, etc.
For example, assuming that the video processing method is performed by the server, the target video capturing device may be a first type of video capturing device (such as a camera of an in-vehicle terminal) or may be a second type of video capturing device (such as a camera of a venue terminal), and specifically, the video processing method proposed in the present application may be implemented by adopting the architecture of the video processing system described below. Referring to fig. 1, fig. 1 is a schematic architecture diagram of a video processing system according to an embodiment of the present application, and as shown in fig. 1, the video processing system 100 may include a target video capturing device (one or more video capturing devices 101 of a first type, one or more video capturing devices 102 of a second type), a server 103, and one or more clients 104. Of course, the video processing system 100 may also include a target video capture device (one or more video capture devices 101 of the first type), a server 103, and one or more clients 104; the video processing system 100 may also include a target video capture device (one or more video capture devices 102 of a second type), a server 103, and one or more clients 104, embodiments of which are not limited in this disclosure. Wherein the first type of video capturing device 101 or the second type of video capturing device 102 is mainly used for transmitting the target video stream to the server 103; the server 103 is mainly configured to perform relevant steps of the video processing method, and send the obtained forwarding video stream to the client 104; the client 104 plays the video based primarily on the received forwarded video stream. The target video capturing device (the first type of video capturing device 101 or the second type of video capturing device 102), the server 103, and the client 104 may implement communication connection, and the connection manner may include wired connection and wireless connection, which is not limited herein.
In combination with the video processing system, the video processing method according to the embodiment of the application may generally include:
the server 103 acquires a target video stream sent by a target video capturing device (such as the first type video capturing device 101 and the second type video capturing device 102), and determines a forwarding video stream based on the target video stream; the forwarded video stream is further sent to the client 104 based on WebRTC protocol to enable the client 104 to play video based on the forwarded video stream. The method processes and transmits the video stream acquired by the target video shooting equipment, so that the transmission efficiency of the video stream can be ensured, and the time delay of video playing can be reduced. The target video shooting device may be a video monitoring device, and the video stream corresponding to the target video shooting device may be regarded as a monitoring video stream.
In one embodiment, the target video capturing device may include a first type of video capturing device, a second type of video capturing device, or both, which is not limited herein. When the target video capturing device includes a first type of video capturing device, the video stream corresponding to the first type of video capturing device included in the target video stream is obtained by video encoding, based on the H.265 video coding standard, the initial video stream acquired by the first type of video capturing device; when the target video capturing device includes a second type of video capturing device, the video stream corresponding to the second type of video capturing device included in the target video stream is obtained by video encoding, based on the target streaming media protocol, the initial video stream acquired by the second type of video capturing device. The target streaming media protocol may be the RTMP protocol, the Real Time Streaming Protocol (RTSP), the FLV (Flash Video) protocol, etc., which is not limited herein.
It may be understood that the schematic diagram of the system architecture described in the embodiments of the present application is for more clearly describing the technical solution of the embodiments of the present application, and does not constitute a limitation on the technical solution provided in the embodiments of the present application, and those skilled in the art can know that, with the evolution of the system architecture and the appearance of a new service scenario, the technical solution provided in the embodiments of the present application is equally applicable to similar technical problems.
Based on the above description of the architecture of the video processing system, the embodiment of the present application discloses a video processing method, please refer to fig. 2, which is a schematic flow chart of a video processing method disclosed in the embodiment of the present application, where the video processing method may be executed by a computer device, and in particular, may be executed by the server 103 in the video processing system. The video processing method specifically includes steps S201 to S202:
s201, acquiring a target video stream sent by target video shooting equipment, and determining a forwarding video stream based on the target video stream.
In the embodiment of the application, the server receives the target video stream sent by the target video shooting device through the wireless signal receiver or the network cable, processes the target video stream, and determines the forwarding video stream. When the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard. The target video shooting device may be a video monitoring device, and the video stream corresponding to the target video shooting device may be regarded as a monitoring video stream.
It should be noted that the first type of video capturing device may be a terminal device that cannot connect to a specific network (such as Wi-Fi). For the first type of video capturing device, the initial video stream acquired by the device is first video-encoded using the H.265 video coding standard to obtain the target video stream, and the target video stream of the first type of video capturing device is then transmitted to the server by a specific wireless signal transmitter. The specific wireless signal transmitter uses a communication protocol different from Wi-Fi; for example, it may be a frequency modulation (FM) transmitter that transmits data on a frequency band agreed upon by the sender and the receiver. Because the H.265 video coding standard has advantages such as error correction, improved video coding quality, and improved video transmission reliability, real-time (or low-delay) transmission of the video stream can be achieved even in regions with weak signal after the video stream is encoded with the H.265 video coding standard, so the transmission efficiency of the video stream can be ensured.
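As a minimal sketch of this encoding step, assuming FFmpeg with the libx265 encoder is available on the device, the following Node.js/TypeScript snippet encodes a camera capture as H.265 at a constrained bitrate; the device path, resolution, bitrate, and output file are illustrative assumptions, and a real vehicle-mounted terminal would typically use a hardware H.265 encoder and hand the stream to its wireless transmitter rather than write a local file.

```typescript
import { spawn } from "node:child_process";

// Hypothetical capture source and output; a production vehicle-mounted terminal
// would usually replace libx265 with a hardware encoder and feed the FM
// transmitter directly instead of writing a file.
const ffmpeg = spawn("ffmpeg", [
  "-f", "v4l2",                 // Linux camera capture (assumed V4L2 device)
  "-framerate", "25",
  "-video_size", "1280x720",
  "-i", "/dev/video0",
  "-c:v", "libx265",            // H.265/HEVC encoding
  "-preset", "ultrafast",       // favor low latency over compression ratio
  "-tune", "zerolatency",
  "-b:v", "1000k",              // constrain bitrate for the weak-signal radio link
  "-f", "mpegts",
  "vehicle-cam.ts",
], { stdio: "inherit" });

ffmpeg.on("exit", (code) => console.log(`ffmpeg exited with code ${code}`));
```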
In one possible implementation manner, when the target video capturing device includes a second type of video capturing device, the video stream corresponding to the second type of video capturing device included in the target video stream is obtained by video encoding the initial video stream acquired by the second type of video capturing device based on the target streaming media protocol.
It should be noted that the second type of video capturing device may be a terminal device that can connect to a network and has a streaming media function integrated in it. For the second type of video capturing device, the streaming media function can be used to video-encode the initial video stream acquired by the device with a target streaming media protocol (RTMP, RTSP, FLV, etc.) to obtain the target video stream, and the target video stream of the second type of video capturing device is then sent to the server over a network connected via Wi-Fi or a network cable. Since the second type of video capturing device itself has a streaming function, the subsequent processing of the target video stream can be made more efficient.
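The sketch below simulates such a device by pushing an H.264 stream over RTMP with FFmpeg; the ingest URL, stream name, and input source are placeholders rather than values from the patent.

```typescript
import { spawn } from "node:child_process";

// Placeholder ingest address; a real field camera would push to the relay
// server's RTMP listener configured for the project.
const INGEST_URL = "rtmp://relay.example.com/live/field-cam-01";

const ffmpeg = spawn("ffmpeg", [
  "-re",                      // read input at its native frame rate
  "-i", "field-sample.mp4",   // stand-in for the live camera feed
  "-c:v", "libx264",
  "-preset", "veryfast",
  "-tune", "zerolatency",
  "-c:a", "aac",
  "-f", "flv",                // RTMP carries FLV-muxed payloads
  INGEST_URL,
], { stdio: "inherit" });

ffmpeg.on("exit", (code) => console.log(`push ended with code ${code}`));
```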
In one possible implementation, the first type of video capture device is disposed on a movable platform and the second type of video capture device is disposed on a fixed platform. For example, the first type of video photographing apparatus may be a camera provided at an in-vehicle terminal, and the second type of video photographing apparatus may be a camera provided at a venue terminal.
For example, the first type of video capturing device is a camera disposed on the vehicle-mounted terminal, when the target video capturing device includes the camera of the vehicle-mounted terminal, the vehicle-mounted terminal encodes an initial video stream acquired by the camera of the vehicle-mounted terminal by using an h.265 video encoding standard to obtain a target video stream, and then sends the target video stream to the server by using a wireless signal transmitter (such as an FM antenna) on the vehicle.
For another example, the second type of video capturing device is a camera disposed at a venue terminal, where the venue terminal may be a terminal device capable of connecting to a network and integrating a streaming media function, and when the target video capturing device includes the camera of the venue terminal, the venue terminal uses a target streaming media protocol (such as an RTMP protocol) to perform video encoding on an initial video stream acquired by the camera of the venue terminal, so as to obtain a target video stream, and then connects to the network through WIFI or a network cable, and sends the target video stream to the server.
S202, the forwarding video stream is sent to the client based on web page real-time communication WebRTC protocol, so that the client plays the video based on the forwarding video stream.
In the embodiment of the application, the server and the client adopt the WebRTC protocol to transmit and forward the video stream, and the WebRTC protocol is a Web standard audio/video transmission protocol. Therefore, the forwarding video stream is sent to the client based on the WebRTC protocol, so that the client plays the video based on the forwarding video stream, and the time delay of video playing can be reduced.
It should be noted that the server may receive a video viewing request sent by the client at any time before the forwarding video stream is sent to the client based on the WebRTC protocol. The video viewing request may include an identification of the target video capturing device. For example, the server receives the video viewing request sent by the client and then, in response to the request, acquires the target video stream of the target video capturing device. As another example, the server obtains the identification of the target video capturing device, determines the forwarding video stream based on the target video stream, and then, in response to the video viewing request, sends the forwarding video stream to the client based on the web real-time communication WebRTC protocol.
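As an illustration only, a request handler of this kind could look like the following sketch; the /view endpoint, the device identifiers, and the returned play addresses are hypothetical and not part of the patent.

```typescript
import { createServer } from "node:http";

// Hypothetical mapping from capture-device identifiers to the WebRTC play
// addresses exposed by the push service; names and URLs are illustrative only.
const streams: Record<string, string> = {
  "vehicle-cam-01": "webrtc://relay.example.com/live/vehicle-cam-01",
  "field-cam-01": "webrtc://relay.example.com/live/field-cam-01",
};

// GET /view?device=<id>  ->  { device, playUrl } for the requested device
createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname !== "/view") {
    res.writeHead(404).end();
    return;
  }
  const device = url.searchParams.get("device") ?? "";
  const playUrl = streams[device];
  if (!playUrl) {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: `unknown device: ${device}` }));
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ device, playUrl }));
}).listen(8080);
```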
In one possible implementation, sending the forwarding video stream to the client based on the web real-time communication WebRTC protocol includes: sending the forwarding video stream to the video push service SRS module, and calling the video push service SRS module to send the forwarding video stream to the client based on the web real-time communication WebRTC protocol. It should be noted that the video push service SRS module can serve the client's video viewing request over the WebRTC protocol, so the server needs to transfer the determined forwarding video stream to the video push service SRS module, so that the client can subsequently obtain the forwarding video stream through the video push service SRS module and play the video based on the forwarding video stream, thereby allowing the client to view the video.
In general, the target video capturing device sends the target video stream to the server, the server determines a forwarding video stream based on the target video stream and then sends the forwarding video stream to the client based on the WebRTC protocol, and the client can play video based on the forwarding video stream. Referring to fig. 3, fig. 3 is a flow chart of another video processing method provided in the embodiment of the present application. As shown in fig. 3, the field test involves vehicle-mounted sensors and field-end monitoring cameras, where the vehicle-mounted sensors include a camera, GPS, an IMU, and so on. The target video capturing devices can be regarded as the camera of the vehicle-mounted terminal and the field-end monitoring camera. The camera of the vehicle-mounted terminal and the field-end monitoring camera send the target video stream through wireless transceivers to a data relay, which can be regarded as the server; the data relay includes video recording, video transcoding, and a Web background, and it can ensure data persistence by storing logs and recordings. After determining the forwarding video stream based on the target video stream, the data relay sends the forwarding video stream to the client, and the client can present video playback on a large-screen device, for example video data visualization, real-time vehicle state monitoring, test case management, vehicle-mounted and field-end camera video monitoring, real-time vehicle position following, and history playback. The large-screen display includes test data statistics, an average problem-mileage trend graph, vehicle-end and field-end monitoring pictures, current test case data, current vehicle preparation data, and the current vehicle state.
In addition, in the specific embodiment of the present application, related data such as an initial video stream, a target video stream, a forwarding video stream, and the like are referred to, and all the referred data are acquired after authorization of related objects. When the above embodiments of the present application are applied to a specific product or technology, the data involved in the use needs to be licensed or agreed upon by the relevant subject, and the collection, use and processing of the relevant data needs to comply with relevant laws and regulations and standards of the relevant country and region.
In summary, in the embodiment of the present application, a target video stream sent by a target video capturing device is received, and a forwarding video stream is determined based on the target video stream; responding to a video viewing request of a client, and transmitting the forwarded video stream to the client based on a WebRTC protocol so that the client plays the video based on the forwarded video stream; when the target video shooting equipment comprises first type video shooting equipment, the video stream corresponding to the first type video shooting equipment, which is included in the target video stream, is obtained by video coding the initial video stream acquired by the first type video shooting equipment based on an H.265 video coding standard; when the target video shooting device comprises a second type of video shooting device, the video stream corresponding to the second type of video shooting device included in the target video stream is obtained by video encoding the initial video stream acquired by the second type of video shooting device based on a target streaming media protocol. It should be understood that after the video stream is encoded by using the h.265 video encoding standard, real-time (or low-delay) transmission of the video stream can be realized even in a region with weak signal, so that the transmission efficiency of the video stream can be ensured; the server and the client adopt the WebRTC protocol for communication, so that the time delay of video playing can be reduced.
Based on the above description of the architecture of the video processing system, the embodiment of the present application discloses a flowchart of yet another video processing method, please refer to fig. 4, which is a flowchart of another video processing method disclosed in the embodiment of the present application, where the video processing method may be executed by a computer device, and in particular, may be executed by the server 103 in the video processing system. The video processing method may specifically include steps S401 to S403, where step S401 and step S402 are a specific implementation manner of step S201, and step S403 is a specific implementation manner of step S202. Wherein:
s401, acquiring a target video stream of a target video shooting device, when the target video shooting device comprises a first type of video shooting device, performing video coding on a video stream corresponding to the first type of video shooting device included in the target video stream based on a target streaming media protocol, transcoding the coded video stream to obtain a first transcoded video stream, and determining a forwarding video stream based on the first transcoded video stream.
In the embodiment of the present application, the video stream corresponding to the first type of video capturing device is obtained by video encoding, based on the H.265 video coding standard, the initial video stream acquired by the first type of video capturing device, and in this form it is not directly suitable for transmission between the server and the client. Therefore, after the server obtains the target video stream of the target video capturing device, it needs to encode the video stream corresponding to the first type of video capturing device included in the target video stream using a target streaming media protocol (RTMP, RTSP, FLV, etc.), then transcode the encoded video stream with the FFmpeg video processing tool to obtain a first transcoded video stream, and determine the forwarding video stream from the first transcoded video stream. Specifically, the first transcoded video stream may be used directly as the forwarding video stream, or the video stream obtained by performing rate control on the first transcoded video stream again may be used as the forwarding video stream; this is not limited herein. The main purpose of transcoding the encoded video stream with the FFmpeg video processing tool is to control video quality and traffic and thereby reduce client performance and bandwidth pressure.
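A minimal sketch of this transcoding and rate-control step, assuming FFmpeg is installed on the server, is shown below; the input and output addresses and the bitrate values are illustrative placeholders, not values specified by the patent.

```typescript
import { spawn } from "node:child_process";

// Re-encode the incoming stream at a capped bitrate before handing it to the
// push service, so that client bandwidth and decoding load stay bounded.
const INPUT = "rtmp://127.0.0.1/ingest/vehicle-cam-01";   // placeholder ingest URL
const OUTPUT = "rtmp://127.0.0.1/live/vehicle-cam-01";    // placeholder output URL

const ffmpeg = spawn("ffmpeg", [
  "-i", INPUT,
  "-c:v", "libx264",          // transcode to H.264 for broad client decoder support
  "-preset", "veryfast",
  "-tune", "zerolatency",
  "-b:v", "1500k",            // target bitrate
  "-maxrate", "1500k",        // cap the instantaneous rate
  "-bufsize", "3000k",        // rate-control buffer
  "-g", "50",                 // keyframe interval (2 s at 25 fps) for faster join
  "-an",                      // audio is dropped in this sketch
  "-f", "flv",
  OUTPUT,
], { stdio: "inherit" });

ffmpeg.on("exit", (code) => console.log(`transcode exited with code ${code}`));
```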
For example, if the first type of video capturing device is the camera of the vehicle-mounted terminal, then after the server obtains the video stream corresponding to the camera of the vehicle-mounted terminal, it video-encodes that stream using the RTMP protocol to obtain a video stream in RTMP format, transcodes the RTMP-format video stream with the FFmpeg video processing tool to obtain a first transcoded video stream, and uses the first transcoded video stream to determine the forwarding video stream.
S402, when the target video shooting device comprises a second type of video shooting device, transcoding a video stream corresponding to the second type of video shooting device, which is included in the target video stream, to obtain a second transcoded video stream, and determining a forwarding video stream based on the second transcoded video stream.
In the embodiment of the present application, the video stream corresponding to the second type of video capturing device is obtained by video encoding, based on the target streaming media protocol, the initial video stream acquired by the second type of video capturing device. Because the second type of video capturing device may be a video capturing device with an integrated streaming media function, its video stream has already been encoded with the target streaming media protocol. Therefore, after obtaining the target video stream of the target video capturing device, the server directly transcodes the video stream corresponding to the second type of video capturing device included in the target video stream with the FFmpeg video processing tool to obtain a second transcoded video stream, and determines the forwarding video stream based on the second transcoded video stream.
For example, if the second type of video capturing device is the camera of the field terminal, the video stream corresponding to that camera has already been encoded by the field terminal using the RTMP protocol; the server directly transcodes the video stream corresponding to the camera of the field terminal with the FFmpeg video processing tool to obtain a second transcoded video stream, and determines the forwarding video stream based on the second transcoded video stream.
S403, the forwarding video stream is sent to a video push service SRS module, the video push service SRS module is called, and the forwarding video stream is sent to a client based on web page real-time communication WebRTC protocol, so that the client plays the video based on the forwarding video stream.
In the embodiment of the application, the server and the client can communicate by adopting a WebRTC protocol, and the video push service SRS module can provide WebRTC services for the client, such as video and audio acquisition, data transmission, audio and video presentation, and the like. Therefore, the server needs to transfer the determined forwarding video stream to the video push service SRS module, so that a subsequent client can acquire the forwarding video stream through the video push service SRS module, video playing is performed based on the forwarding video stream, and viewing of the video by the client is realized.
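On the client side, a browser-based playback path against an SRS-style WebRTC signaling API might look like the sketch below; the /rtc/v1/play/ endpoint, the request fields, and the stream URL reflect our understanding of the SRS HTTP API and should be treated as assumptions to verify against the deployed SRS version.

```typescript
// Browser-side sketch: play a stream forwarded through the push service over WebRTC.
// The signaling endpoint and payload shape mirror SRS's HTTP API as we understand it
// (an SDP offer/answer exchanged over HTTP); verify both against the SRS version in use.
async function playViaSrs(video: HTMLVideoElement, apiBase: string, streamUrl: string) {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.ontrack = (ev) => {
    video.srcObject = ev.streams[0];  // attach the remote stream to the <video> element
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const resp = await fetch(`${apiBase}/rtc/v1/play/`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ streamurl: streamUrl, sdp: offer.sdp }),
  });
  const answer = await resp.json();  // expected to contain the answer SDP
  await pc.setRemoteDescription({ type: "answer", sdp: answer.sdp });
  return pc;
}

// Example usage (identifiers are placeholders):
// playViaSrs(document.querySelector("video")!, "http://relay.example.com:1985",
//            "webrtc://relay.example.com/live/vehicle-cam-01");
```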
It should be noted that the video push service SRS module runs in a Docker container on a Linux system. Docker is an open-source application container engine that allows developers to package their application and its dependencies into a portable container and then run it on any popular operating system. In the embodiment of the present application, the container is started with the restart=always parameter, so that the service process is automatically restarted after it exits unexpectedly, and a disconnection-reconnection mechanism is added in the client script, which together ensure the stability of the service.
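On the server side, this corresponds to the standard Docker restart policy (for example, docker run --restart always ...). The client-side disconnection-reconnection mechanism can be as simple as a retry wrapper around the playback call; the sketch below is a generic illustration, with the delay value and the playViaSrs call from the previous sketch being assumptions rather than the patent's concrete script.

```typescript
// Generic reconnect helper for the client script: retries an async connect
// function with a fixed delay whenever setup fails or the connection drops.
async function keepConnected(
  connect: () => Promise<RTCPeerConnection>,
  retryDelayMs = 3000,
): Promise<void> {
  for (;;) {
    try {
      const pc = await connect();
      // Wait until the connection leaves a healthy state, then reconnect.
      await new Promise<void>((resolve) => {
        pc.onconnectionstatechange = () => {
          const s = pc.connectionState;
          if (s === "failed" || s === "disconnected" || s === "closed") resolve();
        };
      });
    } catch {
      // Signaling or network error: fall through to the retry delay.
    }
    await new Promise((r) => setTimeout(r, retryDelayMs));
  }
}

// Example usage with the playback sketch above (placeholders as before):
// keepConnected(() => playViaSrs(videoEl, apiBase, streamUrl));
```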
The video processing method is described below with a specific example:
the first type of video photographing apparatus is assumed to be a camera of a vehicle-mounted terminal, and the second type of video photographing apparatus is assumed to be a camera of a venue terminal. The method comprises the steps that a server obtains a target video stream of target video shooting equipment, wherein the target video stream comprises a video stream corresponding to a camera of a vehicle-mounted terminal, or the target video stream comprises a video stream corresponding to a camera of a field terminal:
(1) Referring to fig. 5, fig. 5 is a schematic diagram of video stream processing corresponding to a first type of video capturing apparatus according to an embodiment of the present application. When the target video shooting equipment comprises a video stream corresponding to a camera of the vehicle-mounted terminal, firstly, a server performs video coding on the video stream corresponding to the camera of the vehicle-mounted terminal by using an RTMP protocol to obtain a video stream in an RTMP format; and then transcoding the video stream in the RTMP format through the FFmpeg video processing tool to obtain a first transcoded video stream, and determining the forwarded video stream by using the first transcoded video stream. The server further sends the forwarded video stream to a video push service SRS module, and calls the video push service SRS module to send the forwarded video stream to the client based on the WebRTC protocol so that the client plays the video based on the forwarded video stream.
(2) Referring to fig. 6, fig. 6 is a schematic diagram of video stream processing corresponding to a second type of video capturing apparatus according to an embodiment of the present application. When the target video shooting device comprises a video stream corresponding to a camera of the field terminal, the video stream corresponding to the camera of the field terminal is a video stream coded by the field terminal by adopting an RTMP protocol, the server directly transcodes the video stream corresponding to the camera of the field terminal through an FFmpeg video processing tool to obtain a second transcoded video stream, and the forwarding video stream is determined based on the second transcoded video stream. The server further sends the forwarded video stream to a video push service SRS module, and calls the video push service SRS module to send the forwarded video stream to the client based on the WebRTC protocol so that the client plays the video based on the forwarded video stream.
In addition, in the specific embodiments of the present application, related data such as an initial video stream, a target video stream, a first transcoded video stream, a second transcoded video stream, a forwarded video stream, etc. are referred to, and all the referred data are acquired after authorization of related objects. When the above embodiments of the present application are applied to a specific product or technology, the data involved in the use needs to be licensed or agreed upon by the relevant subject, and the collection, use and processing of the relevant data needs to comply with relevant laws and regulations and standards of the relevant country and region.
In summary, in the embodiment of the present application, a target video stream of a target video capturing device is obtained; when the target video capturing device includes a first type of video capturing device, the video stream corresponding to the first type of video capturing device included in the target video stream is video-encoded based on a target streaming media protocol, the encoded video stream is transcoded to obtain a first transcoded video stream, and the forwarding video stream is determined based on the first transcoded video stream; when the target video capturing device includes a second type of video capturing device, the video stream corresponding to the second type of video capturing device included in the target video stream is transcoded to obtain a second transcoded video stream, and the forwarding video stream is determined based on the second transcoded video stream; the forwarding video stream is then sent to the video push service SRS module, and the video push service SRS module is called to send the forwarding video stream to the client based on the web real-time communication WebRTC protocol, so that the client plays the video based on the forwarding video stream. It should be understood that after the video stream is encoded with the H.265 video coding standard, real-time (or low-delay) transmission of the video stream can be achieved even in regions with weak signal, so the transmission efficiency of the video stream can be ensured; the server and the client communicate using the WebRTC protocol, so the delay of video playback can be reduced; and video streams acquired by different types of video capturing devices are processed and transmitted in different ways, which further improves the reliability of video playback.
Based on the above description of the architecture of the video processing system, the embodiment of the present application discloses a flowchart of yet another video processing method, please refer to fig. 7, which is a flowchart of another video processing method disclosed in the embodiment of the present application, where the video processing method may be executed by a computer device, and in particular, may be executed by the server 103 in the video processing system. The video processing method may specifically include steps S701 to S704, where steps S701 to S703 are a specific implementation manner of step S201, and step S704 is a specific implementation manner of step S202. Wherein:
s701, acquiring a target video stream of a target video shooting device, and respectively processing a first video stream and a second video stream to obtain a first standard video stream corresponding to the first video stream and a second standard video stream corresponding to the second video stream, wherein the target video shooting device comprises the first video shooting device and the second video shooting device, and the target video stream comprises the first video stream corresponding to the first video shooting device and the second video stream corresponding to the second video shooting device.
In the embodiment of the application, the server may acquire video streams corresponding to a plurality of video shooting devices, and the video streams corresponding to different types of video shooting devices are processed by adopting different processing modes. Taking the example of obtaining a first video stream corresponding to a first video shooting device and a second video stream corresponding to a second video shooting device, the server processes the video streams by adopting different processing modes for the first video stream corresponding to the first video shooting device and the second video stream corresponding to the second video shooting device respectively to obtain a first standard video stream corresponding to the first video stream and a second standard video stream corresponding to the second video stream. For the processing manner of the first video stream corresponding to the first video capturing device, reference may be made to the above description in step S401 of the processing manner of the video stream corresponding to the first type of video capturing device: the method comprises the steps that a server firstly utilizes a target streaming media protocol to carry out video coding on a first video stream corresponding to first video shooting equipment, and then a FFmpeg video processing tool is used for transcoding the coded video stream to obtain a first standard video stream corresponding to the first video stream; for the processing manner of the second video stream corresponding to the second video capturing apparatus, the processing manner of the video stream corresponding to the second type of video capturing apparatus may be described with reference to the above step S402: and the server directly transcodes a second video stream corresponding to the second video shooting device through the FFmpeg video processing tool to obtain a second standard video stream corresponding to the second video stream.
It should be noted that, the code rate of the first standard video stream corresponding to the first video stream is the same as the code rate of the second standard video stream corresponding to the second video stream, which is beneficial to ensuring that the first standard video stream and the second standard video stream are spliced later.
S702, splicing the first standard video stream and the second standard video stream to obtain a spliced video stream.
In this embodiment of the present application, the server may perform a splicing process on the first standard video stream and the second standard video stream by using an FFmpeg video processing tool, so as to obtain a spliced video stream, so that the server may send the spliced video stream to the client as a forwarding video stream, so that the client performs video playing based on the forwarding video stream.
In one possible implementation manner, splicing the first standard video stream and the second standard video stream to obtain a spliced video stream includes: determining a video frame to be spliced from the first standard video stream, and determining a matched video frame from the second standard video stream, where the video frame to be spliced is any video frame in the first standard video stream and the playing time corresponding to the matched video frame matches the playing time corresponding to the video frame to be spliced; splicing the video frame to be spliced and the matched video frame to obtain a spliced video frame; and arranging the spliced video frames in order of playing time to obtain the spliced video stream.
It should be noted that the playing time corresponding to the matched video frame matching the playing time corresponding to the video frame to be spliced may mean that the two playing times are the same, or that the difference between them is within a preset error range, which is not limited herein. By ensuring that the playing time of the matched video frame matches the playing time of the video frame to be spliced, the delay between the multiple videos played by the client is kept within the error range, so that the multiple videos being played correspond to the same time period.
For example, if the playing time corresponding to the matched video frame is 1 ms to 5 ms and the playing time corresponding to the video frame to be spliced is also 1 ms to 5 ms, the two playing times can be considered to match. As another example, if the preset error range is 2 ms, the playing time corresponding to the matched video frame is 1 ms to 5 ms, and the playing time corresponding to the video frame to be spliced is 2 ms to 6 ms, the error between the two playing times is 1 ms, which is within the preset error range, so the two playing times can also be considered to match.
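The play-time matching rule can be expressed as a small sketch. The fragment below is an assumption-laden illustration: the Frame type, the pts_ms field and the 2 ms tolerance are hypothetical and not taken from this application. It only shows the logic of pairing each frame of the first standard video stream with the frame of the second standard video stream whose playing time falls within the preset error range.

```python
# Hedged sketch of the play-time matching rule; the Frame type, the pts_ms field
# and the 2 ms tolerance are hypothetical, not taken from this application.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Frame:
    pts_ms: float   # playing time of the frame, in milliseconds
    data: bytes     # encoded or decoded picture data

ERROR_RANGE_MS = 2.0  # assumed preset error range

def find_matching_frame(frame_to_splice: Frame, second_stream: List[Frame]) -> Optional[Frame]:
    """Return the frame of the second stream whose playing time matches within the error range."""
    best = min(second_stream, key=lambda f: abs(f.pts_ms - frame_to_splice.pts_ms), default=None)
    if best is not None and abs(best.pts_ms - frame_to_splice.pts_ms) <= ERROR_RANGE_MS:
        return best
    return None

def pair_streams(first_stream: List[Frame], second_stream: List[Frame]) -> List[Tuple[Frame, Frame]]:
    """Pair each frame of the first stream with its matched frame, ordered by playing time."""
    pairs = []
    for frame in sorted(first_stream, key=lambda f: f.pts_ms):
        match = find_matching_frame(frame, second_stream)
        if match is not None:
            pairs.append((frame, match))
    return pairs
```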
As shown in fig. 8, fig. 8 is a schematic diagram of video stream splicing according to an embodiment of the present application. First, any video frame in the first standard video stream is determined as the video frame to be spliced, and the matched video frame whose playing time matches the playing time corresponding to the video frame to be spliced is determined in the second standard video stream. The video frame to be spliced and the matched video frame are then spliced through the FFmpeg video processing tool to obtain a spliced video frame; that is, the spliced video frame contains both the video frame to be spliced in the first standard video stream and the matched video frame in the second standard video stream at the same playing time. Further, the spliced video frames are arranged in the order of playing time to obtain the spliced video stream.
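As an illustration of the splicing step, the sketch below uses FFmpeg's hstack filter to place the time-aligned frames of the two standard streams side by side in one output stream. The application only states that the splicing is performed with the FFmpeg video processing tool, so the hstack filter, the codec and all addresses here are assumptions for illustration.

```python
# Hedged sketch: splicing two standard RTMP streams into one stream with FFmpeg.
# The hstack filter, codec choice and all URLs are assumptions; the application
# itself only states that the splicing is done with the FFmpeg tool.
import subprocess

def splice_streams(first_url: str, second_url: str, output_url: str) -> subprocess.Popen:
    cmd = [
        "ffmpeg",
        "-i", first_url,
        "-i", second_url,
        # Put each frame of the first stream and its matched frame of the
        # second stream into one spliced frame, side by side.
        "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
        "-map", "[v]",
        "-c:v", "libx264",
        "-f", "flv",
        output_url,   # spliced video stream in RTMP format
    ]
    return subprocess.Popen(cmd)

proc = splice_streams(
    "rtmp://example-server/live/cam1",
    "rtmp://example-server/live/cam2",
    "rtmp://example-server/live/spliced",
)
```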
S703, taking the spliced video stream as a forwarding video stream.
S704, sending the forwarding video stream to a video push service SRS module, and calling the video push service SRS module to send the forwarding video stream to a client based on the web page real-time communication WebRTC protocol, so that the client plays the video based on the forwarding video stream.
In the embodiment of the application, the first standard video stream corresponding to the first video stream and the second standard video stream corresponding to the second video stream are spliced to obtain the spliced video stream, and the spliced video stream is then sent to the client as the forwarding video stream, so that only one line is needed to send the forwarding video stream to the client; this is similar to using one large truck to pull N (N is a positive integer) tons of goods. If the first standard video stream and the second standard video stream were not spliced, multiple (for example, 2) lines would be needed to send the first standard video stream and the second standard video stream to the client, which is similar to using N trucks to each pull 1 ton of goods. In this application, after the video streams of the plurality of video shooting devices are spliced, only one line is used for transmission. Compared with transmitting the video streams of the plurality of video shooting devices over a plurality of lines respectively, the number of lines can be greatly reduced, so that system performance loss can be effectively reduced and communication, software and hardware resources are saved; in addition, reducing the number of transmission lines can also reduce the error probability to some extent.
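To connect this step to the SRS module, the sketch below simply republishes the forwarding (spliced) stream to an SRS RTMP ingest address and records the address a browser-side player would use. The host name, URL schemes and the assumption that the SRS instance converts the RTMP stream to WebRTC internally reflect common SRS deployments, not statements of this application.

```python
# Hedged sketch: hand the forwarding video stream to an SRS instance over RTMP.
# The host, URL schemes and the RTMP-to-WebRTC conversion behaviour are
# assumptions based on common SRS deployments, not on this application.
import subprocess

SRS_HOST = "srs.example.com"                        # hypothetical SRS deployment
RTMP_INGEST = f"rtmp://{SRS_HOST}/live/forward"     # where the server pushes the forwarding stream
WEBRTC_PLAY = f"webrtc://{SRS_HOST}/live/forward"   # where the client would pull it

def forward_to_srs(spliced_stream_url: str) -> subprocess.Popen:
    """Push the spliced (forwarding) video stream to the SRS RTMP ingest address."""
    cmd = [
        "ffmpeg",
        "-i", spliced_stream_url,
        "-c", "copy",   # the stream is already in the standard format, so just remux
        "-f", "flv",
        RTMP_INGEST,
    ]
    return subprocess.Popen(cmd)

proc = forward_to_srs("rtmp://example-server/live/spliced")
print("client plays (via SRS WebRTC):", WEBRTC_PLAY)
```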
Other implementation manners of step S704 are the same as the specific implementation manner of step S403, and are not described herein.
It should be noted that the foregoing description takes the case where the video streams of two video shooting devices are spliced and then transmitted as one forwarding video stream as an example. It is to be understood that, based on the idea, provided in the embodiment of the present application, of splicing the video streams of a plurality of video shooting devices and then transmitting them as one forwarding video stream, the processing manner for application scenarios with more than two video shooting devices is similar; reference may be made to the foregoing description, which is not repeated here.
The video processing method enables the client to continuously play multi-channel video monitoring from the vehicle end and the field end 24 hours a day, with a video delay of less than 2 seconds. The video processing method is described below with two specific examples:
example 1: the target video shooting device is assumed to comprise a first video shooting device and a second video shooting device, wherein the first video shooting device is a camera of the vehicle-mounted terminal, and the second video shooting device is a camera of the field terminal. The target video stream comprises a first video stream corresponding to the first video shooting device and a second video stream corresponding to the second video shooting device.
Referring to fig. 9, fig. 9 is a flowchart of another video processing method according to an embodiment of the present application. Steps 1 to 3 are executed by the first video shooting device; step 6 is executed by the second video shooting device; steps 4, 5, 7 and 8 are executed by the server; and step 9 is executed by the client.
The first video shooting device performs video encoding on the acquired video signal through an encoder based on the H.265 video encoding standard to obtain the first video stream, and then sends the first video stream to the server through the wireless signal transmitter of the vehicle-mounted terminal. Because the field terminal integrates the streaming media function, the second video stream corresponding to the second video shooting device is a video stream that has been video-encoded by the field terminal based on the RTMP protocol, and can therefore be sent to the server directly.
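The device-side encoding step can be sketched as follows. The libx265 encoder, the capture source, the bitrate and the MPEG-TS container are purely illustrative assumptions, since the application does not name the encoder implementation used on the vehicle-mounted terminal.

```python
# Hedged sketch of the vehicle-end step: encode the captured signal with the
# H.265 standard before handing it to the wireless transmitter. The libx265
# encoder, the capture source and the MPEG-TS container are assumptions.
import subprocess

def encode_h265(capture_source: str, output_path: str) -> int:
    cmd = [
        "ffmpeg",
        "-i", capture_source,   # e.g. a hypothetical on-board camera feed
        "-c:v", "libx265",      # H.265 / HEVC encoding
        "-preset", "fast",
        "-b:v", "2M",           # assumed bitrate suited to a weak wireless link
        "-f", "mpegts",
        output_path,            # handed to the wireless signal transmitter afterwards
    ]
    return subprocess.run(cmd).returncode

encode_h265("rtsp://vehicle-camera.local/stream", "first_video_stream.ts")
```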
After receiving the first video stream through a wireless signal receiver, the server performs video encoding on the first video stream corresponding to the first video shooting device based on the RTMP protocol, and then transcodes the encoded video stream through the FFmpeg video processing tool to obtain the first standard video stream corresponding to the first video stream. After receiving the second video stream corresponding to the second video shooting device, the server directly transcodes the second video stream through the FFmpeg video processing tool to obtain the second standard video stream corresponding to the second video stream.
Further, the server performs splicing processing on the first standard video stream and the second standard video stream through the FFmpeg video processing tool to obtain a spliced video stream, and uses the spliced video stream as the forwarding video stream. The spliced video stream at this time is a video stream in the RTMP format.
Finally, the server sends the forwarding video stream to the video push service SRS module, and calls the video push service SRS module to send the forwarding video stream to the client based on the web page real-time communication WebRTC protocol, so that the client plays the video based on the forwarding video stream; that is, the video stream acquired by each video shooting device can be viewed through a browser supported by the client. As shown in fig. 10, fig. 10 is a large-screen display interface provided in an embodiment of the present application. The large-screen display interface includes test data statistics, an average mileage trend chart, vehicle-end monitoring screens, and field-end monitoring screens. The test data statistics include the total test mileage, the number of covered cases, the average problem mileage, the total test duration, the total number of test vehicles, and the total number of test problems; the vehicle-end monitoring screens include monitoring pictures acquired by 4 vehicle-end cameras; and the field-end monitoring screens include monitoring pictures acquired by 6 field-end cameras.
Example 2: the target video shooting device is assumed to comprise a first video shooting device, a second video shooting device and a third video shooting device, wherein the first video shooting device is a camera A of the vehicle-mounted terminal, the first video shooting device is a camera B of the vehicle-mounted terminal, and the third video shooting device is a camera of the field terminal. The target video stream comprises a first video stream corresponding to the first video shooting device, a second video stream corresponding to the second video shooting device and a third video stream corresponding to the third video shooting device.
Referring to fig. 11, fig. 11 is a flowchart of another video processing method according to an embodiment of the present application. Steps 1 to 3 are executed by the first video shooting device; steps 4 to 6 are executed by the second video shooting device; step 9 is executed by the third video shooting device; steps 7, 8, 10 and 11 are executed by the server; and step 12 is executed by the client.
The first video shooting device performs video encoding on the acquired video signal through an encoder based on the H.265 video encoding standard to obtain the first video stream, and then sends the first video stream to the server through the wireless signal transmitter of the vehicle-mounted terminal. Similarly, the second video shooting device performs video encoding on the acquired video signal through an encoder based on the H.265 video encoding standard to obtain the second video stream, and then sends the second video stream to the server through the wireless signal transmitter of the vehicle-mounted terminal. Because the field terminal integrates the streaming media function, the third video stream corresponding to the third video shooting device is a video stream that has been video-encoded by the field terminal based on the RTMP protocol, and can therefore be sent to the server directly.
After receiving the first video stream and the second video stream through a wireless signal receiver, the server performs video encoding on the first video stream corresponding to the first video shooting device and the second video stream corresponding to the second video shooting device based on the RTMP protocol, and then transcodes the encoded first video stream and the encoded second video stream through the FFmpeg video processing tool to obtain the first standard video stream corresponding to the first video stream and the second standard video stream corresponding to the second video stream. After receiving the third video stream corresponding to the third video shooting device, the server directly transcodes the third video stream through the FFmpeg video processing tool to obtain the third standard video stream corresponding to the third video stream.
Further, the server performs splicing processing on the first standard video stream, the second standard video stream and the third standard video stream through the FFmpeg video processing tool to obtain a spliced video stream, and uses the spliced video stream as the forwarding video stream. The spliced video stream at this time is a video stream in the RTMP format.
Finally, the server sends the forwarding video stream to the video push service SRS module, and calls the video push service SRS module to send the forwarding video stream to the client based on the web page real-time communication WebRTC protocol, so that the client plays the video based on the forwarding video stream.
In addition, the specific embodiments of the present application involve related data such as the initial video stream, the target video stream, the first standard video stream, the second standard video stream, the spliced video stream, and the forwarding video stream, all of which are acquired after authorization from the related objects. When the above embodiments of the present application are applied to a specific product or technology, the data involved must be licensed or agreed to by the relevant subjects, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
In summary, in this embodiment of the present application, a target video stream of the target video shooting device is acquired, where the target video shooting device includes the first video shooting device and the second video shooting device, and the target video stream includes the first video stream corresponding to the first video shooting device and the second video stream corresponding to the second video shooting device; the first video stream and the second video stream are processed respectively to obtain the first standard video stream corresponding to the first video stream and the second standard video stream corresponding to the second video stream; the first standard video stream and the second standard video stream are spliced to obtain the spliced video stream; the spliced video stream is used as the forwarding video stream; and the forwarding video stream is sent to the video push service SRS module, which is called to send the forwarding video stream to the client based on the web page real-time communication WebRTC protocol, so that the client plays the video based on the forwarding video stream. It should be understood that, after a video stream is encoded based on the H.265 video encoding standard, real-time (or low-delay) transmission of the video stream can be achieved even in an area with a weak signal, so that the transmission efficiency of the video stream can be ensured; communication between the server and the client based on the WebRTC protocol can reduce the delay of video playing; and by processing and transmitting the video streams acquired by different types of video shooting devices in different ways and splicing the processed video streams, system performance loss and the error probability are reduced, and simultaneous playing of multiple channels of video is achieved.
Based on the video processing method, an embodiment of the application provides a video processing apparatus. Referring to fig. 12, which is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application, the video processing apparatus 1200 may include:
a processing unit 1201, configured to obtain a target video stream of a target video capturing device, and determine a forwarding video stream based on the target video stream;
a sending unit 1202, configured to send the forwarded video stream to a client based on a WebRTC protocol, so that the client plays a video based on the forwarded video stream;
when the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard.
In one embodiment, when the target video capturing device includes a second type of video capturing device, the video stream corresponding to the second type of video capturing device included in the target video stream is obtained by video encoding an initial video stream acquired by the second type of video capturing device based on a target streaming media protocol.
In one embodiment, the processing unit 1201, when determining a forwarding video stream based on the target video stream, is specifically configured to: when the target video shooting equipment comprises the first type of video shooting equipment, video encoding is carried out on a video stream corresponding to the first type of video shooting equipment, which is included in the target video stream, based on a target streaming media protocol, the encoded video stream is transcoded to obtain a first transcoded video stream, and a forwarding video stream is determined based on the first transcoded video stream; and when the target video shooting equipment comprises the second type of video shooting equipment, transcoding the video stream corresponding to the second type of video shooting equipment, which is included in the target video stream, to obtain a second transcoded video stream, and determining a forwarding video stream based on the second transcoded video stream.
In one embodiment, the target video capturing device includes a first video capturing device and a second video capturing device, and the target video stream includes a first video stream corresponding to the first video capturing device and a second video stream corresponding to the second video capturing device; the processing unit 1201, when determining the forwarding video stream based on the target video stream, is specifically configured to: processing the first video stream and the second video stream respectively to obtain a first standard video stream corresponding to the first video stream and a second standard video stream corresponding to the second video stream; splicing the first standard video stream and the second standard video stream to obtain a spliced video stream; and taking the spliced video stream as a forwarding video stream.
In one embodiment, the processing unit 1201 is specifically configured to, when performing the splicing processing on the first standard video stream and the second standard video stream to obtain the spliced video stream: determine a video frame to be spliced from the first standard video stream, and determine a matched video frame from the second standard video stream, where the video frame to be spliced is any video frame in the first standard video stream, and the playing time corresponding to the matched video frame matches the playing time corresponding to the video frame to be spliced; splice the video frame to be spliced and the matched video frame to obtain a spliced video frame; and arrange the spliced video frames in the order of playing time to obtain the spliced video stream.
In one embodiment, the sending unit 1202 is specifically configured to, when sending the forwarding video stream to the client based on the WebRTC protocol: send the forwarding video stream to a video push service SRS module, and call the video push service SRS module to send the forwarding video stream to the client based on the web page real-time communication WebRTC protocol.
In one embodiment, the first type of video capture device is disposed on a movable platform and the second type of video capture device is disposed on a fixed platform.
In summary, the apparatus receives a target video stream sent by the target video shooting device and determines a forwarding video stream based on the target video stream; in response to a video viewing request of the client, the forwarding video stream is sent to the client based on the WebRTC protocol, so that the client plays the video based on the forwarding video stream. When the target video shooting device includes a first type of video shooting device, the video stream corresponding to the first type of video shooting device included in the target video stream is obtained by video encoding, based on the H.265 video encoding standard, the initial video stream acquired by the first type of video shooting device; when the target video shooting device includes a second type of video shooting device, the video stream corresponding to the second type of video shooting device included in the target video stream is obtained by video encoding, based on the target streaming media protocol, the initial video stream acquired by the second type of video shooting device. It should be appreciated that, by processing and transmitting the video streams acquired by the target video shooting device in this manner, the transmission efficiency of the video streams can be ensured, and the delay of video playing can be reduced.
Based on the video processing method and the embodiments of the video processing apparatus, the embodiments of the present application provide a computer device, where the computer device corresponds to the foregoing server. Referring to fig. 13, a schematic structural diagram of a computer device according to an embodiment of the present application is provided, where the computer device 1300 may at least include: processor 1301, communication interface 1302, and computer storage medium 1303. Wherein the processor 1301, the communication interface 1302, and the computer storage medium 1303 may be connected by a bus or other means.
The computer storage medium 1303 may be stored in a memory 1304 of the computer device 1300. The computer storage medium 1303 stores a computer program, the computer program includes program instructions, and the processor 1301 is configured to execute the program instructions stored in the computer storage medium 1303. The processor 1301 (or central processing unit, CPU) is the computing core and control core of the computer device 1300; it is adapted to implement one or more instructions, and in particular to load and execute the following:
acquiring a target video stream of target video shooting equipment, and determining a forwarding video stream based on the target video stream; based on web page real-time communication WebRTC protocol, transmitting the forwarded video stream to a client, so that the client plays the video based on the forwarded video stream; when the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard.
In one embodiment, when the target video capturing device includes a second type of video capturing device, the video stream corresponding to the second type of video capturing device included in the target video stream is obtained by video encoding an initial video stream acquired by the second type of video capturing device based on a target streaming media protocol.
In one embodiment, processor 1301, when determining a forwarding video stream based on the target video stream, is specifically configured to: when the target video shooting equipment comprises the first type of video shooting equipment, video encoding is carried out on a video stream corresponding to the first type of video shooting equipment, which is included in the target video stream, based on a target streaming media protocol, the encoded video stream is transcoded to obtain a first transcoded video stream, and a forwarding video stream is determined based on the first transcoded video stream; and when the target video shooting equipment comprises the second type of video shooting equipment, transcoding the video stream corresponding to the second type of video shooting equipment, which is included in the target video stream, to obtain a second transcoded video stream, and determining a forwarding video stream based on the second transcoded video stream.
In one embodiment, the target video capturing device includes a first video capturing device and a second video capturing device, and the target video stream includes a first video stream corresponding to the first video capturing device and a second video stream corresponding to the second video capturing device; wherein, the processor 1301 is specifically configured to, when determining the forwarding video stream based on the target video stream: processing the first video stream and the second video stream respectively to obtain a first standard video stream corresponding to the first video stream and a second standard video stream corresponding to the second video stream; splicing the first standard video stream and the second standard video stream to obtain a spliced video stream; and taking the spliced video stream as a forwarding video stream.
In one embodiment, the processor 1301 is specifically configured to, when performing the splicing processing on the first standard video stream and the second standard video stream to obtain the spliced video stream: determine a video frame to be spliced from the first standard video stream, and determine a matched video frame from the second standard video stream, where the video frame to be spliced is any video frame in the first standard video stream, and the playing time corresponding to the matched video frame matches the playing time corresponding to the video frame to be spliced; splice the video frame to be spliced and the matched video frame to obtain a spliced video frame; and arrange the spliced video frames in the order of playing time to obtain the spliced video stream.
In one embodiment, the processor 1301, when sending the forwarded video stream to the client based on web page real-time communication WebRTC protocol, is specifically configured to: and sending the forwarded video stream to a video push service SRS module, calling the video push service SRS module to send the forwarded video stream to a client based on web page real-time communication WebRTC protocol.
In one embodiment, the first type of video capture device is disposed on a movable platform and the second type of video capture device is disposed on a fixed platform.
In summary, the computer device receives a target video stream sent by the target video shooting device and determines a forwarding video stream based on the target video stream; in response to a video viewing request of the client, the forwarding video stream is sent to the client based on the WebRTC protocol, so that the client plays the video based on the forwarding video stream. When the target video shooting device includes a first type of video shooting device, the video stream corresponding to the first type of video shooting device included in the target video stream is obtained by video encoding, based on the H.265 video encoding standard, the initial video stream acquired by the first type of video shooting device; when the target video shooting device includes a second type of video shooting device, the video stream corresponding to the second type of video shooting device included in the target video stream is obtained by video encoding, based on the target streaming media protocol, the initial video stream acquired by the second type of video shooting device. It should be appreciated that, by processing and transmitting the video streams acquired by the target video shooting device in this manner, the transmission efficiency of the video streams can be ensured, and the delay of video playing can be reduced.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. The technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, the software product including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute all or part of the steps of the method described in the embodiments of the present application. The aforementioned storage medium may include: a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Those of ordinary skill in the art will appreciate that the elements and steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber) or wireless (e.g., infrared, radio, microwave) manner. The computer storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method of video processing, the method comprising:
acquiring a target video stream of target video shooting equipment, and determining a forwarding video stream based on the target video stream;
based on web page real-time communication WebRTC protocol, sending the forwarding video stream to a client, so that the client plays a video based on the forwarding video stream;
when the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard.
2. The method of claim 1, wherein when the target video capturing device includes a second type of video capturing device, the video stream corresponding to the second type of video capturing device included in the target video stream is obtained by video encoding an initial video stream acquired by the second type of video capturing device based on a target streaming media protocol.
3. The method of claim 2, wherein the determining a forwarding video stream based on the target video stream comprises:
when the target video shooting equipment comprises the first type of video shooting equipment, video coding is carried out on a video stream corresponding to the first type of video shooting equipment, which is included in the target video stream, based on a target streaming media protocol, the coded video stream is transcoded to obtain a first transcoded video stream, and a forwarding video stream is determined based on the first transcoded video stream;
and when the target video shooting equipment comprises the second type of video shooting equipment, transcoding the video stream corresponding to the second type of video shooting equipment, which is included in the target video stream, to obtain a second transcoded video stream, and determining a forwarding video stream based on the second transcoded video stream.
4. The method according to claim 1 or 2, wherein the target video capturing device comprises a first video capturing device and a second video capturing device, and the target video stream comprises a first video stream corresponding to the first video capturing device and a second video stream corresponding to the second video capturing device;
wherein the determining a forwarding video stream based on the target video stream includes:
processing the first video stream and the second video stream respectively to obtain a first standard video stream corresponding to the first video stream and a second standard video stream corresponding to the second video stream;
splicing the first standard video stream and the second standard video stream to obtain a spliced video stream;
and taking the spliced video stream as a forwarding video stream.
5. The method of claim 4, wherein the splicing the first standard video stream and the second standard video stream to obtain a spliced video stream comprises:
determining a video frame to be spliced from the first standard video stream, and determining a matched video frame from the second standard video stream, wherein the video frame to be spliced is any video frame in the first standard video stream, and the playing time corresponding to the matched video frame matches the playing time corresponding to the video frame to be spliced;
splicing the video frames to be spliced and the matched video frames to obtain spliced video frames;
and arranging the spliced video frames according to the sequence of the playing time to obtain the spliced video stream.
6. A method according to any of claims 1-3, wherein the sending the forwarded video stream to a client based on web page real time communication WebRTC protocol comprises:
and sending the forwarding video stream to a video push service SRS module, calling the video push service SRS module to send the forwarding video stream to a client based on web page real-time communication WebRTC protocol.
7. The method of claim 2, wherein the first type of video capture device is disposed on a movable platform and the second type of video capture device is disposed on a fixed platform.
8. A video processing apparatus, the apparatus comprising:
the processing unit is used for acquiring a target video stream of the target video shooting equipment and determining a forwarding video stream based on the target video stream;
the sending unit is used for sending the forwarding video stream to a client based on web page real-time communication WebRTC protocol so that the client plays the video based on the forwarding video stream;
when the target video shooting device comprises a first type of video shooting device, the video stream corresponding to the first type of video shooting device, which is included in the target video stream, is obtained by video encoding an initial video stream acquired by the first type of video shooting device based on an H.265 video encoding standard.
9. A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the video processing method of any of claims 1-7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more computer programs adapted to be loaded by a processor and to implement the video processing method according to any of claims 1-7.
11. A computer program product, characterized in that the computer program product comprises a computer program adapted to be loaded by a processor and to implement the video processing method according to any of claims 1-7.
CN202210053938.4A 2022-01-18 2022-01-18 Video processing method, apparatus, device, storage medium, and computer program product Pending CN116506643A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210053938.4A CN116506643A (en) 2022-01-18 2022-01-18 Video processing method, apparatus, device, storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210053938.4A CN116506643A (en) 2022-01-18 2022-01-18 Video processing method, apparatus, device, storage medium, and computer program product

Publications (1)

Publication Number Publication Date
CN116506643A true CN116506643A (en) 2023-07-28

Family

ID=87325385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210053938.4A Pending CN116506643A (en) 2022-01-18 2022-01-18 Video processing method, apparatus, device, storage medium, and computer program product

Country Status (1)

Country Link
CN (1) CN116506643A (en)

Similar Documents

Publication Publication Date Title
US20140139735A1 (en) Online Media Data Conversion Method, Online Video Playing Method and Corresponding Device
CN112752115B (en) Live broadcast data transmission method, device, equipment and medium
CN108347436A (en) A kind of unmanned plane long-distance video method for pushing based on high in the clouds
CN111093094A (en) Video transcoding method, device and system, electronic equipment and readable storage medium
CN112104918A (en) Image transmission method and device based on satellite network
WO2006077591A2 (en) A system circuit application and method for wireless transmission of multimedia content from a computing platform
CN107276990B (en) Streaming media live broadcasting method and device
US9866921B2 (en) Method and apparatus for transmitting video content compressed by codec
US9794317B2 (en) Network system and network method
CN115665474A (en) Live broadcast method and device, electronic equipment and storage medium
CN113301388B (en) Video stream processing system, equipment and method
CN115134632B (en) Video code rate control method, device, medium and content delivery network CDN system
CN113747191A (en) Video live broadcast method, system, equipment and storage medium based on unmanned aerial vehicle
CN106550493B (en) Media resource sharing method and mobile terminal
US10225586B2 (en) Method for transmitting video surveillance images
CN111918074A (en) Live video fault early warning method and related equipment
CN116506643A (en) Video processing method, apparatus, device, storage medium, and computer program product
CN115865884A (en) Network camera data access device and method, network camera and medium
KR101819193B1 (en) Streaming service method using real-time transformation file format
CN112055174B (en) Video transmission method and device and computer readable storage medium
US10904305B2 (en) Media streaming using a headless browser
CN113794761A (en) Vehicle-mounted terminal remote video downloading method and server
CN106851134B (en) Method, device and system for transmitting image data
KR101251312B1 (en) Method for handling video seek request in video transcoding server
US10171545B2 (en) System for transferring real-time audio/video stream

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination