CN114268773A - Video playing method, system, storage medium and electronic device - Google Patents


Publication number
CN114268773A
Authority
CN
China
Prior art keywords
target
video
layer
video stream
access layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210200523.5A
Other languages
Chinese (zh)
Inventor
林亦宁
张炳健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Original Assignee
Hangzhou Shanma Zhiqing Technology Co Ltd
Shanghai Supremind Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shanma Zhiqing Technology Co Ltd, Shanghai Supremind Intelligent Technology Co Ltd filed Critical Hangzhou Shanma Zhiqing Technology Co Ltd
Priority: CN202210200523.5A
Publication: CN114268773A
Legal status: Pending

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the invention provide a video playing method, a video playing system, a storage medium and an electronic device, relating to the technical field of data processing. The method comprises the following steps: a web page layer initiates a two-way communication request to a service layer based on a preset connection protocol; the service layer responds to the two-way communication request and pushes a target video stream to a video access layer; and the video access layer performs first processing on the target video stream. The method solves the problem of a poor playing experience caused by video playing delay and achieves the effect of improving video playing efficiency.

Description

Video playing method, system, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of communication, in particular to a video playing method, a video playing system, a storage medium and an electronic device.
Background
With the arrival of the Internet of Things era, cameras are everywhere in daily life. In many camera use scenarios, a video player is needed to play the pictures shot by a camera; in addition, it is often necessary to control functions such as the camera's shooting position and focal length.
Existing video players are limited by network equipment, and video playing often suffers large delays, so the playing effect cannot satisfy monitoring requirements and the camera is not controlled in a timely manner.
No effective solution to the above problems has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a video playing method, a video playing system, a storage medium and an electronic device, so as to solve at least the problem in the related art of a poor monitoring effect caused by delay.
According to an embodiment of the present invention, there is provided a video playing method including:
the webpage layer initiates a two-way communication request to the service layer based on a preset connection protocol;
the service layer responds to the bidirectional communication request and pushes a target video stream to a video access layer;
the video access layer performs a first process on the target video stream, wherein the first process comprises:
under the condition that the target video stream is determined to be in the first format, the video access layer carries out analysis operation on the target video stream and constructs a first element based on an analysis result; the video access layer converts the target video stream into a target format under the condition of receiving the target video stream; the video access layer draws the video stream of the target format to the first element to form a first video; the webpage layer sends first information to the target player based on the first element to instruct the target player to play the first video.
In one exemplary embodiment, the first process further includes:
under the condition that the target video stream is determined to be in the second format, the video access layer constructs a first data warehouse; the video access layer assigns the address of the first data warehouse to a first attribute of a target player; the first data warehouse receives the target video stream based on the first attribute and adds the target video stream to a buffer to instruct the target player to play the target video stream.
In an exemplary embodiment, after the video access layer performs the first processing on the target video stream, the method further includes:
the target player receives point location information of the service layer;
the target player draws point locations in the first element based on the point location information;
and the target player determines the position information of the framed target object in the target video stream based on the point location.
In one exemplary embodiment, the video access layer rendering the video stream in the target format to the first element to form a first video comprises:
the video access layer determines a frame image of the target format based on the video stream of the target format;
and the video access layer carries out replacement processing on the frame images through the first element so as to combine the frame images into the first video.
In one exemplary embodiment, the method further comprises:
the algorithm processing layer determines a target thread;
the target thread acquires continuous frame images of the image acquisition device;
the target thread compares the similarity of the continuous frame images;
and the target thread sends boundary prompt information to the service layer under the condition that the image acquisition device reaches the boundary according to the similarity comparison.
In one exemplary embodiment, the method further comprises:
the service layer pushes a pan-tilt control request received from the web page layer to the video access layer, and sends a pre-action instruction to the algorithm processing layer, wherein the pre-action instruction is used to instruct the algorithm processing layer to control the motion of the pan-tilt;
the video access layer performs information conversion processing on the pan-tilt control request to obtain a target control request;
and the video access layer sends the target control request to the algorithm processing layer to instruct the algorithm processing layer to control the motion of the pan-tilt.
According to another embodiment of the present invention, there is provided a video playback system including:
the webpage layer is used for initiating a two-way communication request to the service layer based on a preset connection protocol and sending first information to the target player;
the service layer is used for responding to the bidirectional communication request and pushing the target video stream to the video access layer;
a video access layer configured to perform a first process on the target video stream, wherein the first process includes:
under the condition that the target video stream is determined to be in the first format, the video access layer carries out analysis operation on the target video stream and constructs a first element based on an analysis result; the video access layer converts the target video stream into a target format under the condition of receiving the target video stream; the video access layer draws the video stream of the target format to the first element to form a first video;
and the target player is used for receiving the first information from the webpage layer and playing the first video based on the first information.
In one exemplary embodiment, further comprising:
and the video access layer acquires a target video stream through the algorithm processing layer, and analyzes and processes the target video stream through the algorithm processing layer.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, different playing processing is performed on different formats via the preset protocol, so that the poor playing effect caused by delay can be avoided, the problem of a poor video playing effect can be solved, and the effect of improving playing quality is achieved.
Drawings
Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a video playing method according to an embodiment of the present invention;
fig. 2 is a flowchart of a video playing method according to an embodiment of the present invention;
FIG. 3 is a first diagram illustrating the effect of practical use according to an embodiment of the present invention;
fig. 4 is a block diagram of a video playing system according to an embodiment of the present invention;
FIG. 5 is a block diagram of the architecture of the first embodiment according to the present invention;
FIG. 6 is a block diagram of a second configuration in accordance with an embodiment of the present invention;
FIG. 7 is a block diagram of a third configuration in accordance with a specific embodiment of the present invention;
fig. 8 is a second practical effect diagram according to the embodiment of the invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking an example of the video playing method running on a mobile terminal, fig. 1 is a block diagram of a hardware structure of the mobile terminal of the video playing method according to the embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to a video playing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a video playing method is provided, and fig. 2 is a flowchart according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, the webpage layer initiates a two-way communication request to the service layer based on a preset connection protocol;
in this embodiment, the request initiated by the web page layer according to the preset connection protocol is to reduce the time delay during live broadcasting, so as to improve the video playing effect.
The preset connection protocol can be (but is not limited to) a websocket protocol, the currently commonly used connection protocols are usually an RTSP/RTP/RTCP protocol family, an HTTP video protocol and the like, the time delay of the protocols is large, the video monitoring requirements cannot be met, and the time delay can be effectively reduced by performing video connection through the websocket protocol, so that the video monitoring requirements are met; it should be noted that the web page layer of the present embodiment is directly connected to an external input device (such as a keyboard, a mouse, and a touch screen), so as to quickly and directly receive a control request from the outside.
The business layer is responsible for being connected with the webpage layer through a websocket protocol so as to send the integrated data to the webpage layer; when the two-way communication request of the webpage layer is a pan-tilt control request, the layer can also forward the request to the video access layer, and meanwhile, sends notification information to the algorithm processing layer to inform the algorithm processing layer that the camera is about to start rotating.
The bidirectional communication request may be (but is not limited to) a pan/tilt control request for controlling pan/tilt rotation, a play request for controlling video playing, a request for video import or export, and the like.
It should be noted that when the web page layer initiates a WebSocket connection request to the service layer, the request can also carry a camera number, making it convenient for a worker to control a specific camera or play its video.
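The web page layer's request described above can be sketched as follows. The message shape and server URL are illustrative assumptions; the patent only states that the two-way communication request is made over a protocol such as WebSocket and may carry a camera number.

```javascript
// Build the payload for the web page layer's two-way communication request.
// The JSON shape ({ type, cameraId }) is a hypothetical format, not the
// patent's wire format.
function buildPlayRequest(cameraId, action = "play") {
  if (!cameraId) throw new Error("a camera number is required");
  return JSON.stringify({ type: action, cameraId });
}

// In a browser the request would then be sent over a WebSocket connection:
//   const ws = new WebSocket("ws://service-layer.example/stream"); // hypothetical URL
//   ws.onopen = () => ws.send(buildPlayRequest("cam-01"));
//   ws.onmessage = (e) => { /* handle pushed video data or point information */ };
```

Carrying the camera number in every request is what lets the service layer route control and playback to a specific camera.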
Step S204, the service layer responds to the two-way communication request and pushes the target video stream to the video access layer;
step S206, the video access layer performs a first process on the target video stream, where the first process includes:
under the condition that the target video stream is determined to be in the first format, the video access layer carries out analysis operation on the target video stream and constructs a first element based on an analysis result; the video access layer converts the target video stream into a target format under the condition of receiving the target video stream; drawing a video stream in a target format to the first element by the video access layer to form a first video; the webpage layer sends first information to the target player based on the first element to instruct the target player to play the first video.
In this embodiment, the first format may be an H.265 stream format, or another format such as H.264. Parsing the target video stream can include conventional stream parsing as well as assignment of video attributes and other processing, so that the stream can be played by a corresponding player and the delay the player would incur when parsing the stream is reduced. The first element may be (but is not limited to) a canvas element, which is equivalent to the bottom canvas of the video; the first element is constructed from the parsing result so that the pixel pictures in the video can be drawn into the player picture, realizing normal playback. Drawing the video stream in the target format to the first element is the process of drawing the format-converted video stream onto the first element. The target video stream is converted into the target format to meet the player's playing-format requirements; the target format may be the ArrayBuffer format or another format, and can be adjusted according to actual requirements.
For example, when the stream format is H.265, a new rendering thread is started to parse the video stream, a canvas element is created in the player, and the stream is converted into an ArrayBuffer-format stream (ArrayBuffer is a format for storing binary data; because the web browser does not support directly parsing a camera's live stream, the live stream must be uniformly converted into an ArrayBuffer stream so that the player can play the camera video). Finally, the video in the ArrayBuffer stream is drawn into the canvas, and frames are replaced rapidly one after another to form a video the player can play.
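A minimal sketch of the H.265 path's conversion step: received stream chunks are merged into a single ArrayBuffer-backed buffer before decoding and drawing. The merge function below is real, runnable logic; the decode and canvas-draw calls are indicated only as comments, since they depend on a browser canvas and on a decoder the patent does not specify.

```javascript
// Merge received Uint8Array chunks into one contiguous ArrayBuffer,
// the uniform binary format handed to the decoder.
function mergeChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.byteLength, 0);
  const merged = new Uint8Array(total); // backed by a single ArrayBuffer
  let offset = 0;
  for (const c of chunks) {
    merged.set(c, offset);
    offset += c.byteLength;
  }
  return merged.buffer;
}

// In the player, each decoded frame would then be drawn to the canvas element,
// with rapid frame replacement forming the playable video:
//   ctx.putImageData(decodedFrame, 0, 0);
```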
Through the steps, different system modules are connected through a specific connection protocol such as a websocket protocol, so that time delay in the video transmission and playing process is reduced, the problem of poor video playing effect caused by overlarge video playing time delay is solved, and the video playing effect and the accuracy of video monitoring are improved.
The steps above may be executed by a base station or a terminal, but are not limited thereto.
In an optional embodiment, the first processing further comprises:
under the condition that the target video stream is determined to be in the second format, the video access layer constructs a first data warehouse; the video access layer assigns the address of the first data warehouse to the first attribute of the target player; the first data warehouse receives the target video stream based on the first attribute and adds the target video stream to the buffer to instruct the target player to play the target video stream.
In this embodiment, when the format of the target video stream is the second format, the stream is stored directly in the first data warehouse and finally loaded into the player's buffer area, so that the player plays the target video stream.
The first data warehouse may be a MediaSource instance object, constructed by an instantiation operation. It should be noted that MediaSource belongs to the Web Media Source Extensions, whose purpose is to let native web audio and video support streaming playback; once instantiated, the object can be regarded as a data warehouse. The first attribute may be the src attribute, which is used to set the source address of the video data.
For example, when the stream format is H.264, a MediaSource instance object is constructed and its address is assigned to the src attribute of the video tag. Each time a video stream is received, the MediaSource instance receives the stream data and adds it to the buffer for the player to use; the src attribute here links the address of the MediaSource instance object so that the stream data in the warehouse can be used for presentation.
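The H.264 path's buffering discipline can be sketched as follows. A SourceBuffer rejects appendBuffer() while a previous append is still in flight, so incoming chunks must be queued and flushed one at a time. The queue logic below is runnable against any object exposing `updating` and `appendBuffer()`; the MediaSource wiring, including the codec string, is shown only in comments as an illustrative assumption.

```javascript
// Queue stream chunks and append them to a SourceBuffer-like object
// one at a time, never while a previous append is still updating.
class AppendQueue {
  constructor(sourceBuffer) {
    this.sb = sourceBuffer; // needs .updating and .appendBuffer()
    this.pending = [];
  }
  push(chunk) {
    this.pending.push(chunk);
    this.flush();
  }
  flush() {
    if (this.sb.updating || this.pending.length === 0) return;
    this.sb.appendBuffer(this.pending.shift());
  }
}

// Browser wiring (illustrative):
//   const ms = new MediaSource();
//   video.src = URL.createObjectURL(ms); // assign the instance address to src
//   ms.addEventListener("sourceopen", () => {
//     const sb = ms.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
//     const q = new AppendQueue(sb);
//     sb.addEventListener("updateend", () => q.flush());
//     ws.onmessage = (e) => q.push(new Uint8Array(e.data));
//   });
```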
In an optional embodiment, after the video access layer performs the first processing on the target video stream, the method further includes:
step S2010, the target player receives point location information of the service layer;
step S2012, the target player draws point locations in the first element based on the point location information;
Step S2014, the target player determines the position information of the framed target object in the target video stream based on the point locations.
In this embodiment, the target object is framed via the point locations, which improves framing precision, yields more accurate position information, and makes it convenient for workers to judge the target object's behavior intuitively.
The point location information includes (but is not limited to) all pixel locations of the target object, the pixel location information of a target frame around the target object determined from the target object's maximum pixel positions, and so on; the target object includes (but is not limited to) vehicles, pedestrians, street lamps, traffic lights, buildings and the like.
For example, as shown in fig. 3, for video streams in H.264 or H.265 format, the player simultaneously receives the point location information pushed by the service layer, draws the point locations in the player's canvas element, then frames the target and updates the display in real time.
The point location information may be obtained by analyzing the video stream, may be obtained by manual input, or may be obtained by other methods.
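Deriving a target frame from the pushed point locations can be sketched as below. The point-record shape (`{x, y}` pixel coordinates) is an illustrative assumption; the patent only requires that a box be determined from the target object's pixel positions.

```javascript
// Compute an axis-aligned bounding box from a list of pixel points,
// using the extreme (minimum/maximum) coordinates.
function boundingBox(points) { // points: [{x, y}, ...]
  const xs = points.map((p) => p.x);
  const ys = points.map((p) => p.y);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  return {
    x: minX,
    y: minY,
    width: Math.max(...xs) - minX,
    height: Math.max(...ys) - minY,
  };
}

// The player would then draw the box on its canvas element:
//   ctx.strokeRect(box.x, box.y, box.width, box.height);
```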
In an alternative embodiment, the video access layer rendering the video stream in the target format to the first element to form the first video comprises:
step S2062, the video access layer determines a frame image of a target format based on the video stream of the target format;
in step S2064, the video access layer performs an alternate processing on the frame images through the first element to combine the frame images into the first video.
In this embodiment, replacement processing of the frame images means drawing the frame images onto the first element in sequence along the time axis to form the video; this enables the frame images to be accurately identified by the player, reducing delay during playback.
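The time-axis ordering above can be sketched as a frame-selection rule: at any playback instant, the player shows the latest frame whose timestamp has been reached, so frames replace one another in order. The frame record shape (`{ts, …}` with millisecond timestamps) is an illustrative assumption.

```javascript
// Given decoded frames sorted by ascending timestamp, return the frame
// that should be displayed at playback time tMs (or null before the first).
function frameAt(frames, tMs) {
  let current = null;
  for (const f of frames) {
    if (f.ts <= tMs) current = f; // this frame replaces the previous one
    else break;
  }
  return current;
}
```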
In an optional embodiment, the method further comprises:
step S2016, the algorithm processing layer determines a target thread;
step S2018, the target thread acquires continuous frame images of the image acquisition device;
step S2020, the target thread compares the similarity of the continuous frame images;
and step S2022, the target thread sends boundary prompt information to the service layer under the condition that the image acquisition device reaches the boundary according to the similarity comparison.
In this embodiment, during actual use a user usually needs to adjust the position of an image acquisition device such as a camera in order to accurately capture images of a target area. When the camera's acquisition direction is being adjusted and the camera reaches its adjustable limit, continuing the adjustment would damage the adjusting mechanism. Signalling the boundary condition through prompt information makes it easy for a worker to stop the adjustment in time, ensuring normal use of image acquisition devices such as cameras. Furthermore, handling the boundary check in a dedicated thread improves data processing efficiency.
As shown in fig. 8, the prompt information may be a flashing region displayed in the edge region of the image, a prompt box with characters displayed on the image interface, or a prompt by other means.
It should be noted that the image similarity comparison could also be performed directly by the processor without an additional thread, but that approach would occupy extra processor resources and so affect the implementation of other functions.
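The thread's boundary check can be sketched as follows: if the camera keeps receiving a rotate command while consecutive extracted frames stop changing, it has likely hit its mechanical limit. The similarity measure (mean absolute pixel difference on grayscale frames) and the threshold are illustrative assumptions; the patent only requires a similarity comparison of consecutive frames.

```javascript
// Similarity of two equal-length grayscale pixel arrays:
// 1 means identical, 0 means maximally different.
function frameSimilarity(a, b) {
  if (a.length !== b.length) throw new Error("frame sizes differ");
  let diff = 0;
  for (let i = 0; i < a.length; i++) diff += Math.abs(a[i] - b[i]);
  return 1 - diff / (a.length * 255);
}

// Boundary assumed reached when every consecutive pair of extracted
// frames is near-identical despite the rotation command.
function reachedBoundary(frames, threshold = 0.99) {
  for (let i = 1; i < frames.length; i++) {
    if (frameSimilarity(frames[i - 1], frames[i]) < threshold) return false;
  }
  return true;
}
```

When `reachedBoundary` fires, the thread would send the boundary prompt information to the service layer as described above.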
In an optional embodiment, the method further comprises:
Step S2024, the service layer pushes a pan-tilt control request received from the web page layer to the video access layer, and sends a pre-action instruction to the algorithm processing layer, wherein the pre-action instruction is used to instruct the algorithm processing layer to control the motion of the pan-tilt;
Step S2026, the video access layer performs information conversion processing on the pan-tilt control request to obtain a target control request;
Step S2028, the video access layer sends the target control request to the algorithm processing layer to instruct the algorithm processing layer to control the action of the pan-tilt.
In this embodiment, in actual use the language in which the web page layer expresses a pan-tilt control request may not match what the video access layer recognizes, so the video access layer could fail to identify the request accurately; the information conversion processing resolves this mismatch.
The information conversion processing may (but is not limited to) convert the pan-tilt control request into a camera control signal capable of performing operations such as camera rotation, focusing and adding preset positions; it may also be a transcoding of the pan-tilt control request, or a conversion in some other form.
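The conversion step can be sketched as a mapping from the web page layer's request to a camera control signal. Both message shapes and the command names are illustrative assumptions; the patent names only the rotation, focusing and preset-position operations.

```javascript
// Hypothetical mapping from web-layer pan-tilt actions to camera commands.
const ACTION_MAP = {
  rotate: "PTZ_MOVE",
  focus: "PTZ_FOCUS",
  preset: "PTZ_SET_PRESET",
};

// Convert a pan-tilt control request into a target control request
// (camera control signal) the algorithm processing layer can act on.
function toCameraControlSignal(request) {
  // e.g. request = { action: "rotate", direction: "left", cameraId: "cam-01" }
  const command = ACTION_MAP[request.action];
  if (!command) throw new Error(`unsupported pan-tilt action: ${request.action}`);
  return { command, cameraId: request.cameraId, params: { direction: request.direction } };
}
```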
It should be noted that the algorithm processing layer (analyzer) is responsible for pulling the video stream from the stream layer in the video access layer, processing each frame image, and outputting in real time the point location information of target objects (people, cars). In order to prompt the user, when the pan-tilt controls the camera's rotation the algorithm processing layer adds a thread so that a message is sent to the service layer when the camera reaches the boundary, prompting the user that the camera has reached it (because some camera hardware does not support boundary prompting, when the camera does not support it the thread performs video frame extraction and similarity comparison of consecutive images to determine that the camera has reached the boundary, and then prompts the user).
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a video playing system is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 4 is a block diagram of a video playing system according to an embodiment of the present invention, and as shown in fig. 4, the apparatus includes:
the webpage layer 41 is used for initiating a two-way communication request to a service layer based on a preset connection protocol and sending first information to the target player;
a service layer 42, configured to respond to the bidirectional communication request, and push the target video stream to a video access layer;
a video access layer 43, configured to perform a first process on the target video stream, where the first process includes:
under the condition that the target video stream is determined to be in the first format, the video access layer carries out analysis operation on the target video stream and constructs a first element based on an analysis result; the video access layer converts the target video stream into a target format under the condition of receiving the target video stream; the video access layer draws the video stream of the target format to the first element to form a first video;
and the target player 44 is configured to receive the first information from the web page layer, and play the first video based on the first information.
In an optional embodiment, the first processing further comprises:
under the condition that the target video stream is determined to be in the second format, the video access layer constructs a first data warehouse; the video access layer assigns the address of the first data warehouse to a first attribute of a target player; the first data warehouse receives the target video stream based on the first attribute and adds the target video stream to a buffer to instruct the target player to play the target video stream.
In an optional embodiment, the system further comprises:
and the image acquisition device 45 is used for acquiring continuous frame images.
The image capturing device 45 may be a camera or an image capturing device such as a radar.
In an optional embodiment, the system further comprises:
and an algorithm processing layer 46, through which the video access layer obtains a target video stream, and through which the video access layer analyzes and processes the target video stream.
In an alternative embodiment, the algorithmic processing layer 46 comprises:
a thread determining unit 461, configured to determine a target thread;
wherein, the target thread includes:
an image frame acquisition unit 4612 configured to acquire successive frame images of the image acquisition apparatus;
a similarity comparison unit 4614 configured to perform similarity comparison on the consecutive frame images;
a prompt information sending unit 4616, configured to send boundary prompt information to the service layer when it is determined that the image capturing apparatus reaches the boundary according to the similarity comparison.
In an alternative embodiment, target player 44 further includes:
a point location information receiving unit 442, configured to receive point location information of the service layer after the video access layer performs the first processing on the target video stream;
a point location drawing unit 444 configured to draw a point location in the first element based on the point location information;
an object framing unit 446, configured to determine, based on the point location, location information of a framed target object in the target video stream.
In an alternative embodiment, the video access layer 43 comprises:
a frame image determining unit 432, configured to determine a frame image in the target format based on the video stream in the target format;
an image replacing unit 434, configured to perform replacement processing on the frame images through the first element to combine the frame images into the first video.
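The replacement step performed by the image replacing unit can be sketched as follows. The canvas 2D context is injected as a parameter so the wiring is visible on its own; `makeFrameReplacer` is an assumed name, and in a real page the context would come from the first element via `getContext("2d")`.

```javascript
// Minimal sketch: each decoded frame replaces the previous one on the first
// element (a canvas in the description), so the rapid succession of frames is
// perceived as a video.

function makeFrameReplacer(ctx) {
  let framesDrawn = 0;
  return function renderFrame(image, width, height) {
    ctx.clearRect(0, 0, width, height);        // drop the previous frame
    ctx.drawImage(image, 0, 0, width, height); // draw the replacement frame
    framesDrawn++;
    return framesDrawn;
  };
}
```

Calling the returned function once per received frame combines the frame images into the first video.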
In an optional embodiment, the system further comprises:
the service layer pushes a pan-tilt control request received from the webpage layer to the video access layer, and sends a pre-action instruction to the algorithm processing layer, wherein the pre-action instruction is used for instructing the algorithm processing layer to control the motion of the pan-tilt;
the video access layer performs information conversion processing on the pan-tilt control request to obtain a target control request;
and the video access layer sends the target control request to the algorithm processing layer so as to instruct the algorithm processing layer to control the motion of the pan-tilt.
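The information conversion step could look like the sketch below. This is an illustrative assumption, not the patent's protocol: the action names, `ACTION_MAP`, and `toCameraCommand` are invented for the example, and real cameras typically use ONVIF- or PELCO-style control commands instead.

```javascript
// Hedged sketch: a web-layer pan-tilt control request is converted into a
// target control request (a camera control command) by the video access layer.

const ACTION_MAP = {
  left:  { pan: -1, tilt: 0 },
  right: { pan: 1,  tilt: 0 },
  up:    { pan: 0,  tilt: 1 },
  down:  { pan: 0,  tilt: -1 },
};

function toCameraCommand(request) {
  // request: { cameraId, action, speed } as initiated on the webpage layer
  const move = ACTION_MAP[request.action];
  if (!move) throw new Error("unsupported pan-tilt action: " + request.action);
  return {
    cameraId: request.cameraId,
    pan: move.pan,
    tilt: move.tilt,
    speed: request.speed ?? 1, // default speed when the web layer omits it
  };
}
```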
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
The present invention will be described with reference to specific examples.
As shown in fig. 5-8, the overall architecture includes:
The video access layer 51 mainly comprises a device layer 511 and a stream layer 512. The device layer is mainly responsible for storing configured camera information; the stream layer is mainly responsible for reading the stored information, connecting to the camera to acquire a video stream, and continuously transcoding the video stream for transmission to the video storage layer. This layer also processes pan-tilt control requests initiated by a user on the web operation layer (the webpage operation layer): it converts a pan-tilt control request into a camera control signal, thereby controlling camera rotation and focusing, adding and setting preset positions, and the like. The pan-tilt control buttons on the page (rotation, preset position, focal length adjustment, etc.) support both single-click and long-press operation (a user clicks a pan-tilt control button in the player to initiate pan-tilt control); after an operation, the control message is pushed to the service layer. While the camera rotates, once a camera boundary notification is received, a prompt is displayed in the corresponding direction of the page (as shown in fig. 4).
An algorithm processing layer (analyzer) 52, which is responsible for pulling the video stream from the stream layer, processing each frame image, and outputting in real time the point location information of the positions of target objects (persons, vehicles). When the pan-tilt controls camera rotation, the algorithm processing layer starts an additional thread that sends a message to the service layer when the camera reaches its boundary, thereby prompting the user that the camera has reached the boundary. (Because some camera hardware does not support boundary notification, when the camera does not support it, the thread samples video frames and continuously compares their image similarity, thereby detecting that the camera has reached the boundary and prompting the user.)
And the service layer 53, which is responsible for combining the video stream transmitted from the video access layer with the point location information data of the algorithm processing layer, connecting to the web layer (namely the webpage layer) through a websocket, and transmitting the integrated data to it. This layer also processes the pan-tilt control request initiated by the web layer, forwarding it to the video access layer while notifying the algorithm processing layer that the camera is about to start rotating.
The web layer 54 is responsible for connecting to the service layer 53: it initiates a websocket connection request to the service layer 53 with a camera number attached. After responding to the request, the service layer 53 pushes the video stream (h.264 or h.265) forwarded by the video access layer 51, and the video component determines the format of the video stream from the response data of the first websocket frame and executes the corresponding operation, which specifically covers the following cases:
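The single-click versus long-press distinction on the pan-tilt buttons could be decided as sketched below. The `classifyPress` helper and the 500 ms threshold are illustrative assumptions; the patent does not specify how the two gestures are separated.

```javascript
// Hypothetical sketch: below an assumed hold-time threshold the press counts
// as a single step command; at or above it, as continuous movement that lasts
// until the button is released.

function classifyPress(pressDownMs, releaseMs, longPressThresholdMs = 500) {
  const held = releaseMs - pressDownMs;
  if (held < 0) throw new Error("release before press");
  return held >= longPressThresholdMs ? "continuous" : "single-step";
}
```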
When the stream format is h.264, a MediaSource instance object is created (MediaSource is a web Media Source Extensions feature whose purpose is to let native web audio and video support streaming playback; once instantiated, the object can be regarded as a data warehouse), and the object's address is assigned to the src attribute of a video tag (the src attribute sets the video data source address; linking it to the address of the MediaSource instance makes the player present the stream data in that warehouse). Each time a piece of the video stream is received, the MediaSource instance adds the stream data to a buffer for the player to consume.
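The h.264 path above can be sketched with the standard Media Source Extensions API. The browser objects (`MediaSource`, `URL.createObjectURL`) are injected as parameters so the wiring logic is visible on its own; `attachStream` is an assumed name, and the MIME/codec string is only an example, since the patent does not specify one.

```javascript
// Minimal MSE wiring sketch: create a MediaSource instance, link its address
// to the video tag's src attribute, and append each received chunk to the
// source buffer for the player to consume.

function attachStream(videoEl, MediaSourceImpl, createObjectURL, mime) {
  const mediaSource = new MediaSourceImpl();
  videoEl.src = createObjectURL(mediaSource); // link instance address to src
  let sourceBuffer = null;
  mediaSource.addEventListener("sourceopen", () => {
    // mime example: 'video/mp4; codecs="avc1.42E01E"'
    sourceBuffer = mediaSource.addSourceBuffer(mime);
  });
  return {
    mediaSource,
    push(chunk) {
      // each received piece of the stream is added to the buffer
      if (sourceBuffer) sourceBuffer.appendBuffer(chunk);
    },
  };
}
```

In a real page one would pass `window.MediaSource` and `URL.createObjectURL`; the test below substitutes stubs.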
When the stream format is h.265, a new rendering thread is started to parse the video stream, and a canvas element is created in the player. Each received chunk of the video stream is converted into an ArrayBuffer-format stream (because the web browser cannot directly parse the camera's live stream, the live stream must be uniformly converted into ArrayBuffer format to play it; ArrayBuffer is a format for storing binary data), and the stream is finally drawn into the canvas, rapidly replacing one frame image after another to form a video.
It should be noted that, in both the h.264 and h.265 cases, the player simultaneously receives the point location information pushed by the service layer, draws the point locations in the player's canvas element, frame-selects the target, and updates the display in real time.
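The data-normalization step of the h.265 path can be sketched as below. `toArrayBuffer` is an assumed helper name; the idea is simply that whatever binary shape a chunk arrives in, it is unified into a standalone ArrayBuffer before being handed to the decoding thread and drawn into the canvas.

```javascript
// Sketch: normalize every received chunk to an ArrayBuffer, the binary format
// the h.265 decoding thread is assumed to consume.

function toArrayBuffer(chunk) {
  if (chunk instanceof ArrayBuffer) return chunk;
  if (ArrayBuffer.isView(chunk)) {
    // copy only the viewed range so the result is a standalone ArrayBuffer
    return chunk.buffer.slice(chunk.byteOffset, chunk.byteOffset + chunk.byteLength);
  }
  if (Array.isArray(chunk)) return Uint8Array.from(chunk).buffer;
  throw new Error("unsupported chunk type");
}
```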
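The format branch that selects between the two cases above can be sketched as a small dispatcher. It assumes, purely for illustration, that the first websocket response frame carries a `codec` field; `chooseStrategy` and the return labels are invented names.

```javascript
// Hypothetical sketch of the web layer's format dispatch on the first
// websocket response frame.

function chooseStrategy(firstFrame) {
  switch (firstFrame.codec) {
    case "h264": return "media-source"; // native <video> playback via MSE
    case "h265": return "canvas";       // thread decode + canvas frame drawing
    default: throw new Error("unknown stream format: " + firstFrame.codec);
  }
}
```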
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by the computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A video playback method, comprising:
the webpage layer initiates a two-way communication request to the service layer based on a preset connection protocol;
the service layer responds to the bidirectional communication request and pushes a target video stream to a video access layer;
the video access layer performs a first process on the target video stream, wherein the first process comprises:
under the condition that the target video stream is determined to be in the first format, the video access layer carries out analysis operation on the target video stream and constructs a first element based on an analysis result; the video access layer converts the target video stream into a target format under the condition of receiving the target video stream; the video access layer draws the video stream of the target format to the first element to form a first video; the webpage layer sends first information to the target player based on the first element to instruct the target player to play the first video.
2. The method of claim 1, wherein the first processing further comprises:
under the condition that the target video stream is determined to be in the second format, the video access layer constructs a first data warehouse; the video access layer assigns the address of the first data warehouse to a first attribute of a target player; the first data warehouse receives the target video stream based on the first attribute and adds the target video stream to a buffer to instruct the target player to play the target video stream.
3. The method of claim 1 or 2, wherein after the video access layer performs the first processing on the target video stream, the method further comprises:
the target player receives point location information from the service layer;
the target player draws point locations in the first element based on the point location information;
and the target player determines the position information of the framed target object in the target video stream based on the point location.
4. The method of claim 1, wherein the video access layer rendering the video stream in the target format to the first element to form a first video comprises:
the video access layer determines a frame image of the target format based on the video stream of the target format;
and the video access layer carries out replacement processing on the frame images through the first element so as to combine the frame images into the first video.
5. The method of claim 1, further comprising:
the algorithm processing layer determines a target thread;
the target thread acquires continuous frame images of the image acquisition device;
the target thread compares the similarity of the continuous frame images;
and the target thread sends boundary prompt information to the service layer under the condition that the image acquisition device reaches the boundary according to the similarity comparison.
6. The method of claim 1, further comprising:
the service layer pushes a pan-tilt control request received from the webpage layer to the video access layer, and sends a pre-action instruction to the algorithm processing layer, wherein the pre-action instruction is used for instructing the algorithm processing layer to control the motion of the pan-tilt;
the video access layer performs information conversion processing on the pan-tilt control request to obtain a target control request;
and the video access layer sends the target control request to the algorithm processing layer so as to instruct the algorithm processing layer to control the motion of the pan-tilt.
7. A video playback system, comprising:
the webpage layer is used for initiating a two-way communication request to the service layer based on a preset connection protocol and sending first information to the target player;
the service layer is used for responding to the bidirectional communication request and pushing the target video stream to the video access layer;
a video access layer configured to perform a first process on the target video stream, wherein the first process includes:
under the condition that the target video stream is determined to be in the first format, the video access layer carries out analysis operation on the target video stream and constructs a first element based on an analysis result; the video access layer converts the target video stream into a target format under the condition of receiving the target video stream; the video access layer draws the video stream of the target format to the first element to form a first video;
and the target player is used for receiving the first information from the webpage layer and playing the first video based on the first information.
8. The system of claim 7, further comprising:
and the video access layer acquires a target video stream through the algorithm processing layer, and analyzes and processes the target video stream through the algorithm processing layer.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 6 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 6.
CN202210200523.5A 2022-03-03 2022-03-03 Video playing method, system, storage medium and electronic device Pending CN114268773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210200523.5A CN114268773A (en) 2022-03-03 2022-03-03 Video playing method, system, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN114268773A true CN114268773A (en) 2022-04-01

Family

ID=80833768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210200523.5A Pending CN114268773A (en) 2022-03-03 2022-03-03 Video playing method, system, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114268773A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100309185A1 (en) * 2009-06-05 2010-12-09 Koester Robert D Low-power and lightweight high-resolution display
CN106302488A (en) * 2016-08-22 2017-01-04 国家电网公司 Visualization system based on RTSP/ONVIF agreement and method
US20170289214A1 (en) * 2016-04-04 2017-10-05 Hanwha Techwin Co., Ltd. Method and apparatus for playing media stream on web browser
CN107249011A (en) * 2017-04-10 2017-10-13 江苏东方金钰智能机器人有限公司 Tele-robotic system based on WebRTC
US20180349283A1 (en) * 2017-06-03 2018-12-06 Vmware, Inc. Video redirection in virtual desktop environments
CN109104590A (en) * 2018-09-05 2018-12-28 北京许继电气有限公司 lightweight visualization system
CN112437341A (en) * 2019-08-10 2021-03-02 华为技术有限公司 Video stream processing method and electronic equipment
CN112954431A (en) * 2021-01-29 2021-06-11 北京奇艺世纪科技有限公司 Video playing method and device, video playing equipment and readable storage medium
CN113596112A (en) * 2021-07-09 2021-11-02 南京纳源通信技术有限公司 Transmission method for video monitoring
CN114024941A (en) * 2021-11-11 2022-02-08 南京国电南自轨道交通工程有限公司 Multi-terminal multi-channel real-time video monitoring method based on WebRTC


Similar Documents

Publication Publication Date Title
CN111327865B (en) Video transmission method, device and equipment
CN112637614B (en) Network direct broadcast video processing method, processor, device and readable storage medium
US9813613B2 (en) Method and apparatus for capturing image in portable terminal
CN109194866B (en) Image acquisition method, device, system, terminal equipment and storage medium
EP3316582B1 (en) Multimedia information processing method and system, standardized server and live broadcast terminal
CN108737884B (en) Content recording method and equipment, storage medium and electronic equipment
CN107566891B (en) Method and system for real-time screen capture of smart television
EP2866434A1 (en) Imaging apparatus
US20170070699A1 (en) Information processing apparatus, image capturing apparatus, and control methods for the same
CN107580234B (en) Photographing method, display end, camera head end and system in wireless live broadcast
CN111913683A (en) Multi-channel sound control method, equipment, electronic equipment and storage medium
CN109842524B (en) Automatic upgrading method and device, electronic equipment and computer readable storage medium
CN109815766B (en) Bar code scanning method and device, mobile terminal and readable storage medium
CN112822435A (en) Security method, device and system allowing user to easily access
CN111131883A (en) Video progress adjusting method, television and storage medium
US11405434B2 (en) Data sharing method providing reception status of shared data among receiving terminals, and communication system and recording medium therefor
CN111263061B (en) Camera control method and communication terminal
CN114268773A (en) Video playing method, system, storage medium and electronic device
CN111897506A (en) Screen projection method, control device, terminal and storage medium
CN116723353A (en) Video monitoring area configuration method, system, device and readable storage medium
WO2021018223A1 (en) Video caching method and apparatus
CN111210819B (en) Information processing method and device and electronic equipment
WO2024099353A1 (en) Video processing method and apparatus, electronic device, and storage medium
CN115174990B (en) Voice screen projection method, device, equipment and computer readable storage medium
US20240114230A1 (en) Method, electronic device, and storage medium for capturing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220401)