CN117177002A - Video playing method, device, medium and equipment - Google Patents

Video playing method, device, medium and equipment

Info

Publication number
CN117177002A
Authority
CN
China
Prior art keywords
video stream
scene
acquiring
sequence
mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311066256.8A
Other languages
Chinese (zh)
Inventor
宋建江
潘兰兰
郑成军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuntu Vision Hangzhou Technology Co ltd
Original Assignee
Yuntu Vision Hangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuntu Vision Hangzhou Technology Co ltd filed Critical Yuntu Vision Hangzhou Technology Co ltd
Priority to CN202311066256.8A
Publication of CN117177002A
Legal status: Pending (current)

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a video playing method, apparatus, medium and device. The method comprises: after receiving a video stream playing request for a certain mobile scene triggered by a user from an interface, obtaining the video stream of the certain mobile scene, displaying the video stream, and obtaining a next video stream, wherein obtaining the next video stream comprises reading the next video stream after a preset action has been executed; if it is determined that display of the video stream has finished, decoding the next video stream so as to display it, taking the next video stream as the current video stream, and obtaining the video stream that follows the current video stream; and if it is determined that display of the current video stream has finished and there is no next video stream, determining that display of the monitoring video has ended.

Description

Video playing method, device, medium and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a medium, and a device for video playing.
Background
In the related art, the playing of the surveillance video is usually performed in stages based on a predetermined scheduling policy, which is not efficient.
Disclosure of Invention
The application provides a video playing method, device, medium and equipment, which aim to at least partially solve the technical problem of how to improve video playing efficiency.
In a first aspect, the present application provides a method for playing video, the method comprising:
when a video stream playing request of a certain mobile scene triggered by a user from an interface is received, acquiring the video stream of the certain mobile scene, displaying the video stream, and acquiring a next video stream, wherein the acquiring of the next video stream comprises reading the next video stream after the execution of a preset action is completed;
if the video stream display is determined to be finished, decoding the next video stream to display the next video stream, taking the next video stream as a current video stream, and acquiring the next video stream of the current video stream;
and if it is determined that display of the current video stream has finished and there is no next video stream, determining that display of the monitoring video has ended.
Optionally, before responding to a video stream playing request of a mobile scene triggered by a user from an interface, the method includes:
acquiring a moving track of a target object, and determining a moving scene related to the target object according to the moving track;
after receiving, through the visual interface, a configuration request from the user for the order of all the mobile scenes, determining a scene sequence formed by all the mobile scenes according to that order.
Optionally, the acquiring the next video stream includes:
determining a next target moving scene of the certain moving scene according to the sequence from the scene sequence;
and acquiring the video stream of the next target mobile scene.
Optionally, after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, acquiring the video stream of the certain mobile scene includes:
after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, parsing the request to determine whether the request relates to a single mobile scene or to a batch of mobile scenes.
Optionally, after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, acquiring the video stream of the certain mobile scene, and displaying the video stream includes:
and acquiring a plurality of monitoring videos related to the certain mobile scene, determining, among all the monitoring videos, the videos related to the certain mobile scene at the same moment, and taking all the monitoring videos covering that same moment as the video stream of the current mobile scene.
In a second aspect, the present application provides an apparatus for video playing, including:
the video playing unit is configured to acquire a video stream of a certain mobile scene after receiving a video stream playing request of the certain mobile scene triggered by a user from an interface, display the video stream and acquire a next video stream, wherein the acquiring the next video stream comprises reading the next video stream after the preset action is executed;
if the video stream display is determined to be finished, decoding the next video stream to display the next video stream, taking the next video stream as a current video stream, and acquiring the next video stream of the current video stream;
and the judging unit is configured to determine that the monitoring video presentation is ended if the current video stream presentation is ended and no next video stream exists.
Optionally, the apparatus further comprises: a moving scene determining unit configured to acquire a moving track of a target object and determine the moving scenes related to the target object according to the moving track;
and a scene sequence determining unit configured to determine a scene sequence formed by all the mobile scenes according to the order, after receiving, through the visual interface, a configuration request from the user for the order of all the mobile scenes.
Optionally, the acquiring the next video stream includes: determining a next target moving scene of the certain moving scene according to the sequence from the scene sequence; and acquiring the video stream of the next target mobile scene.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method of video playing provided in the first aspect when executing the program.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of video playback as provided in the first aspect.
The application discloses a video playing method, apparatus, medium and device. The method comprises: after receiving a video stream playing request for a certain mobile scene triggered by a user from an interface, obtaining the video stream of the certain mobile scene, displaying the video stream, and obtaining the next video stream; if it is determined that display of the video stream has finished, decoding the next video stream so as to display it, taking the next video stream as the current video stream, and obtaining the video stream that follows the current video stream; and if it is determined that display of the current video stream has finished and there is no next video stream, determining that display of the monitoring video has ended.
According to the method, the next video stream can be acquired while the current video stream is being displayed, so that when the next video stream is to be played, it only needs to be decoded rather than first acquired and then decoded, which improves the efficiency of playing the video.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a flow chart of a video playing method provided in the present application;
fig. 2 is a schematic diagram of an apparatus for video scheduling according to the present application;
fig. 3 is a schematic diagram of an electronic device corresponding to fig. 1 according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flow chart of a video playing method provided in the present application, where an execution body for executing the method of the present embodiment may be located at a server side or a client side, and the method includes the following steps:
s101: when a video stream playing request of a certain mobile scene triggered by a user from an interface is received, the video stream of the certain mobile scene is obtained, the video stream is displayed, and a next video stream is obtained, wherein the obtaining of the next video comprises reading the next video stream after the execution of a preset action is completed.
In this embodiment, a user may select different mobile scenes through the visual configuration interface, so as to play the video of a mobile scene. A moving scene may be a scene involving a target object, and the target object may be, for example, a pedestrian or a vehicle. The video stream corresponding to a mobile scene may be a surveillance video, which may be video collected by high-point cameras deployed along each road. Further, a moving scene is usually determined according to the target object; therefore, when the target object is monitored, the moving track of the target object can be acquired first, and the moving scenes related to the target object can be determined according to the moving track. When determining the moving scenes, the actual scenes involved along the moving process can be used, such as an intersection scene, a market scene, or other scenes associated with location points. Each mobile scene may correspond to a plurality of monitoring devices, such as cameras, which generate the monitoring video of that scene. It should be appreciated that the movement track may also be a track pre-planned for the target object.
After the video stream of the mobile scene is acquired, the video stream is displayed, and the next video stream is acquired while the current one is being displayed, that is, before its display finishes. The next video stream may be the video stream of the next moving scene or the video stream of the next time period.
Further, acquiring the next video stream means that the next video stream can be read only after the preset action has been performed. For example, if obtaining the next video stream requires an operation on the monitoring device of the next mobile scene, such as account login or verification, that action is performed automatically while the current stream is being displayed. It should be understood that the preset action can be designed as needed according to the actual business scenario, as illustrated in the sketch below.
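By way of illustration only, the following Python sketch shows one way such a prefetch with a preset action could be organized. The class name StreamPrefetcher and the callables preset_action and read_stream are assumptions made for this example; they are not prescribed by the application.

```python
import threading

class StreamPrefetcher:
    """Minimal sketch: read the next scene's stream in the background,
    running a preset action (e.g. device login or verification) first."""

    def __init__(self, preset_action, read_stream):
        # preset_action and read_stream are hypothetical callables supplied
        # by the integrator; the application does not prescribe their form.
        self.preset_action = preset_action
        self.read_stream = read_stream
        self._result = None
        self._thread = None

    def start(self, next_scene):
        def _work():
            self.preset_action(next_scene)               # e.g. account login, verification
            self._result = self.read_stream(next_scene)  # read, but do not decode, the stream
        self._thread = threading.Thread(target=_work, daemon=True)
        self._thread.start()

    def result(self):
        # Block until the prefetched (still encoded) stream is available.
        if self._thread is not None:
            self._thread.join()
        return self._result
```

Because the preset action runs while the current stream is on screen, its latency is hidden from the viewer; only the decode step remains when the transition happens.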
S102: and if the video stream display is determined to be ended, decoding the next video stream to display the next video stream, taking the next video stream as a current video stream, and acquiring the next video stream of the current video stream.
In this embodiment, after it is determined that display of the current video stream has finished, the next video stream is decoded, so that the images of the next video stream can be displayed. In the present specification, the decoding method of the video stream is not particularly limited and may be set according to practical applications.
When the next video stream is displayed, it is taken as the new current video stream, the video stream following this current video stream is acquired, and the process is repeated.
S103: and if it is determined that display of the current video stream has finished and there is no next video stream, determining that display of the monitoring video has ended.
In this embodiment, if the client determines that there is no next video stream after display of the current video stream has finished, it may determine that display of the monitoring video has ended. A minimal sketch of the overall playback flow is given below.
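The following Python sketch ties steps S101 to S103 together under assumed helper names (acquire_stream, prefetch, decode, display, and a prefetch object with a result() method like the StreamPrefetcher above). It is an illustration of the described flow under those assumptions, not the application's implementation.

```python
def play_surveillance(scenes, acquire_stream, prefetch, decode, display):
    """Sketch of S101-S103 under assumed helper names.

    scenes         -- ordered list of mobile scenes (the scene sequence)
    acquire_stream -- read the (encoded) stream of a scene
    prefetch       -- start reading a scene's stream in the background and
                      return an object exposing result() (see StreamPrefetcher)
    decode/display -- codec and rendering hooks
    """
    if not scenes:
        return
    current = acquire_stream(scenes[0])        # S101: get the requested scene's stream
    for next_scene in scenes[1:]:
        pending = prefetch(next_scene)         # fetched while the current stream plays
        display(decode(current))               # show current; prefetch runs concurrently
        current = pending.result()             # S102: already read, only decoding remains
    display(decode(current))                   # last stream in the sequence
    # S103: current finished and no next stream, so the monitoring video
    # presentation is considered ended.
```

The design point is simply that acquisition and display overlap: by the time a stream finishes, its successor has already been read, so the only work left at the transition is decoding.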
As an optional implementation of this embodiment, before responding to a video stream playing request of a mobile scene triggered by a user from an interface, the method includes: acquiring the moving track of a target object, and determining the moving scenes related to the target object according to the moving track; and after receiving, through the visual interface, a configuration request from the user for the order of all the mobile scenes, determining a scene sequence formed by all the mobile scenes according to that order.
In this optional implementation, a moving scene is usually determined according to the target object; therefore, when the target object is monitored, the moving track of the target object can be acquired first, and the moving scenes related to the target object can be determined according to the moving track. When determining the moving scenes, the actual scenes involved along the moving process can be used, for example an intersection scene, a market scene, or other scenes associated with location points can each serve as a moving scene.
When determining the scene sequence, the order of the target scenes can be set by a default rule, namely according to the moving track of the target object. Alternatively, an order configured visually by the user can be received, and the playing of the video can then be controlled flexibly through this configuration, as sketched below.
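As a hedged illustration of how a scene sequence might be assembled, the sketch below assumes a locate_scene lookup from track points to scene identifiers and an optional user-configured order; neither name comes from the application.

```python
def build_scene_sequence(track_points, locate_scene, user_order=None):
    """Sketch: derive the moving scenes from a movement track and order them.
    locate_scene is a hypothetical lookup from a track point to a scene
    identifier (intersection, market, ...); user_order, if given, is the
    order configured through the visual interface."""
    scenes = []
    for point in track_points:                  # default order follows the track
        scene = locate_scene(point)
        if scene is not None and scene not in scenes:
            scenes.append(scene)
    if user_order:                              # user configuration overrides the default
        scenes = [s for s in user_order if s in scenes]
    return scenes
```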
As an optional implementation manner of this embodiment, the acquiring the next video stream includes: determining a next target moving scene of the certain moving scene according to the sequence from the scene sequence; and acquiring the video stream of the next target mobile scene.
In this optional implementation, before the currently played video stream ends, the next moving scene of the current moving scene can be determined based on the scene sequence; after that determination, the monitoring device under the next moving scene can be called, and the next video stream to be played is obtained from that monitoring device. After the currently played video stream ends, the next video stream only needs to be decoded: it was already acquired while the current stream was playing, so no acquisition step is needed at that point, which increases the speed of video playing.
Further, after the video stream of a mobile scene has been displayed, a judgment can be made based on the sequence as to whether that scene is the last element of the sequence; if so, playing ends. A sketch of this lookup is given below.
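A possible way to look up the next target scene and its stream is sketched here; next_target_scene, acquire_next_stream and call_device are hypothetical names used only for this example.

```python
def next_target_scene(scene_sequence, current_scene):
    """Sketch: find the scene that follows current_scene in the configured
    sequence; None means current_scene is the last element and playing ends."""
    if current_scene not in scene_sequence:
        return None
    idx = scene_sequence.index(current_scene)
    if idx + 1 >= len(scene_sequence):          # last element of the sequence
        return None
    return scene_sequence[idx + 1]

def acquire_next_stream(scene_sequence, current_scene, call_device):
    """call_device is a hypothetical hook that invokes the monitoring device
    of a scene and returns its (encoded) video stream."""
    nxt = next_target_scene(scene_sequence, current_scene)
    return call_device(nxt) if nxt is not None else None
```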
As an optional implementation of this embodiment, the step of acquiring the video stream of a certain mobile scene after receiving the video stream playing request of the certain mobile scene triggered by the user from the interface includes: parsing the request after it is received, to determine whether the request relates to a single mobile scene or to a batch of mobile scenes.
In this alternative implementation, a scene sequence includes a plurality of mobile scenes, where each mobile scene may be represented by a unique identifier, and the unique identifiers may be mapped to the corresponding monitoring devices. After the sequence is determined, the user can select the scenes to be played, either a single scene or a batch of scenes. A batch may consist of several scenes along a continuous movement, or of moving scenes separated by intervals; in the interval case, the next moving scene is, among the selected moving scenes, the scene adjacent to the last played moving scene.
Through this optional implementation, flexible video playing is achieved; a minimal request-parsing sketch is given below.
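The request-parsing step could, for instance, look like the following sketch. The request format (a dict with a "scenes" list) and the helper parse_play_request are assumptions made purely for illustration.

```python
def parse_play_request(request, scene_sequence):
    """Sketch: decide whether a play request names a single mobile scene or a
    batch of scenes. Scene identifiers are assumed to be the same ones used
    in the configured scene_sequence and mapped to monitoring devices."""
    requested = [s for s in request.get("scenes", []) if s in scene_sequence]
    if len(requested) <= 1:
        return "single", requested
    # Keep the configured order so that, for scenes selected with gaps,
    # "next" means the adjacent selected scene rather than the adjacent
    # scene of the full sequence.
    ordered = [s for s in scene_sequence if s in requested]
    return "batch", ordered
```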
As an optional implementation of this embodiment, when receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, acquiring the video stream of the certain mobile scene and displaying the video stream includes: acquiring a plurality of monitoring videos related to the certain mobile scene, determining, among all the monitoring videos, the videos related to the certain mobile scene at the same moment, and taking all the monitoring videos covering that same moment as the video stream of the current mobile scene.
In this optional implementation manner, as described above, one mobile scene may correspond to one or more monitoring devices, and each monitoring device may generate a monitoring video, so when a video stream of a certain mobile scene is acquired, multiple monitoring videos may be acquired simultaneously.
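For illustration, the sketch below selects, as the scene's video stream, all monitoring videos that cover the same moment; the dict-based video representation (with 'scene', 'start' and 'end' keys) is an assumption for this example, not something specified by the application.

```python
def scene_stream_at(monitoring_videos, scene, moment):
    """Sketch: a scene can have several cameras, each producing a monitoring
    video; take as the scene's video stream all videos of that scene whose
    time range covers the given moment."""
    return [
        v for v in monitoring_videos
        if v["scene"] == scene and v["start"] <= moment <= v["end"]
    ]
```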
According to the method, the next video stream can be acquired while the current video stream is being displayed, so that when the next video stream is to be played, only decoding is required; there is no need to first acquire the stream and then decode it, which improves the efficiency of playing the video.
Based on the same concept as the video playing method provided by one or more embodiments of the present application above, the present application further provides a corresponding apparatus for video playing, as shown in fig. 2.
Fig. 2 is a schematic diagram of an apparatus for video scheduling according to the present application, including:
the video playing unit is configured to acquire a video stream of a certain mobile scene, display the video stream and acquire a next video stream after receiving a video stream playing request of the certain mobile scene triggered by a user from an interface; wherein, the obtaining the next video stream includes reading the next video stream after the preset action is executed;
if the video stream display is determined to be finished, decoding the next video stream to display the next video stream, taking the next video stream as a current video stream, and acquiring the next video stream of the current video stream;
and the judging unit is configured to determine that the monitoring video presentation is ended if the current video stream presentation is ended and no next video stream exists.
Optionally, the apparatus further comprises:
a moving scene determining unit configured to acquire a moving track of a target object and determine a moving scene related to the target object according to the moving track;
and a scene sequence determining unit configured to determine a scene sequence formed by all the mobile scenes according to the order, after receiving, through the visual interface, a configuration request from the user for the order of all the mobile scenes.
Optionally, the acquiring the next video stream includes:
determining a next target moving scene of the certain moving scene according to the sequence from the scene sequence;
and acquiring the video stream of the next target mobile scene.
Optionally, after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, acquiring the video stream of the certain mobile scene includes:
after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, parsing the request to determine whether the request relates to a single mobile scene or to a batch of mobile scenes.
Optionally, after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, acquiring the video stream of the certain mobile scene, and displaying the video stream includes:
and acquiring a plurality of monitoring videos related to the certain mobile scene, determining, among all the monitoring videos, the videos related to the certain mobile scene at the same moment, and taking all the monitoring videos covering that same moment as the video stream of the current mobile scene.
The present application also provides a computer readable medium storing a computer program operable to perform the method provided in fig. 1 above.
The application also provides a schematic block diagram of the electronic device shown in fig. 3, which corresponds to fig. 1. At the hardware level, as shown in fig. 3, the electronic device includes a processor, an internal bus, a network interface, a memory, and non-volatile memory, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, so as to implement the video playing method described with reference to fig. 1. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded by the present application; that is, the execution subject of the following processing flows is not limited to logic units, and may also be hardware or logic devices.
In the 1990s, an improvement of a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development and writing; the original code before compiling also has to be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many kinds, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained merely by slightly logically programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even the means for performing the various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape and magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer media including memory storage devices.
The embodiments of the present application are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and for relevant parts reference may be made to the description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. A method of video playback, the method comprising:
when a video stream playing request of a certain mobile scene triggered by a user from an interface is received, acquiring the video stream of the certain mobile scene, displaying the video stream, and acquiring a next video stream, wherein the acquiring of the next video stream comprises reading the next video stream after the execution of a preset action is completed;
if the video stream display is determined to be finished, decoding the next video stream to display the next video stream, taking the next video stream as a current video stream, and acquiring the next video stream of the current video stream;
and if it is determined that display of the current video stream has finished and there is no next video stream, determining that display of the monitoring video has ended.
2. The method of claim 1, wherein prior to responding to a video streaming request for a mobile scene triggered by a user from an interface, the method comprises:
acquiring a moving track of a target object, and determining a moving scene related to the target object according to the moving track;
after receiving, through the visual interface, a configuration request from the user for the order of all the mobile scenes, determining a scene sequence formed by all the mobile scenes according to that order.
3. The method of claim 2, wherein the acquiring the next video stream comprises:
determining a next target moving scene of the certain moving scene according to the sequence from the scene sequence;
and acquiring the video stream of the next target mobile scene.
4. The method according to claim 2, wherein, after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, acquiring the video stream of the certain mobile scene includes:
after receiving a video stream playing request of a certain mobile scene triggered by a user from an interface, parsing the request to determine whether the request relates to a single mobile scene or to a batch of mobile scenes.
5. The method of claim 1, wherein upon receiving a video stream play request of a mobile scene triggered by a user from an interface, obtaining the video stream of the mobile scene, and displaying the video stream comprises:
and acquiring a plurality of monitoring videos related to the certain mobile scene, determining, among all the monitoring videos, the videos related to the certain mobile scene at the same moment, and taking all the monitoring videos covering that same moment as the video stream of the current mobile scene.
6. An apparatus for video playback, comprising:
the video playing unit is configured to acquire a video stream of a certain mobile scene after receiving a video stream playing request of the certain mobile scene triggered by a user from an interface, display the video stream and acquire a next video stream, wherein the acquiring the next video stream comprises reading the next video stream after the preset action is executed;
if the video stream display is determined to be finished, decoding the next video stream to display the next video stream, taking the next video stream as a current video stream, and acquiring the next video stream of the current video stream;
and the judging unit is configured to determine that the monitoring video presentation is ended if the current video stream presentation is ended and no next video stream exists.
7. The apparatus of claim 6, wherein the apparatus further comprises:
a moving scene determining unit configured to acquire a moving track of a target object and determine a moving scene related to the target object according to the moving track;
and a scene sequence determining unit configured to determine a scene sequence formed by all the mobile scenes according to the order, after receiving, through the visual interface, a configuration request from the user for the order of all the mobile scenes.
8. The apparatus of claim 7, wherein the obtaining the next video stream comprises:
determining a next target moving scene of the certain moving scene according to the sequence from the scene sequence;
and acquiring the video stream of the next target mobile scene.
9. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-6.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-6 when executing the program.
CN202311066256.8A 2023-08-23 2023-08-23 Video playing method, device, medium and equipment Pending CN117177002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311066256.8A CN117177002A (en) 2023-08-23 2023-08-23 Video playing method, device, medium and equipment

Publications (1)

Publication Number Publication Date
CN117177002A 2023-12-05

Family

ID=88932879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311066256.8A Pending CN117177002A (en) 2023-08-23 2023-08-23 Video playing method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN117177002A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination