CN114079821B - Video playing method and device, electronic equipment and readable storage medium - Google Patents

Video playing method and device, electronic equipment and readable storage medium

Info

Publication number
CN114079821B
CN114079821B (application CN202111365787.8A)
Authority
CN
China
Prior art keywords
frame
original image
image
images
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111365787.8A
Other languages
Chinese (zh)
Other versions
CN114079821A (en)
Inventor
郑文
林恒
张翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Huichuan Internet Of Things Technology Science And Technology Co ltd
Original Assignee
Fujian Huichuan Internet Of Things Technology Science And Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Huichuan Internet Of Things Technology Science And Technology Co ltd
Priority to CN202111365787.8A
Publication of CN114079821A
Application granted
Publication of CN114079821B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/907Television signal recording using static stores, e.g. storage tubes or semiconductor memories

Abstract

The embodiments of the present application provide a video playing method and device, an electronic device, and a readable storage medium. The method includes the following steps: acquiring multiple frames of original images and the serial number corresponding to each single-frame original image among them; acquiring the annotation data corresponding to the multiple frames of original images; obtaining the annotation data of each single-frame original image according to its serial number and the annotation data corresponding to the multiple frames of original images; generating multiple frames of annotated images from the single-frame original images and their annotation data; and playing the multiple frames of original images and/or the multiple frames of annotated images. Implementing the embodiments of the present application saves storage space and reduces workload.

Description

Video playing method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of monitoring technologies, and in particular, to a video playing method, a video playing device, an electronic device, and a computer readable storage medium.
Background
A surveillance camera is used to monitor video automatically, and alarm information is sent out when a target event is detected. To help a worker quickly locate the objects associated with an alarm event in a video frame, the objects are typically marked with visible annotations in a video file.
Existing methods need to save both the original video file recorded by the network video recorder (NVR) of the surveillance camera and an additional annotated alarm video file, which increases the storage space occupied.
In addition, when the alarm video file is reviewed, the annotation boxes may occlude the objects, so a worker has to view the alarm video file and the NVR video file at the same time for comparison. Existing methods require opening two video players and manually controlling the playback progress bars, which is cumbersome and increases the workload.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video playing method, apparatus, electronic device, and computer readable storage medium that can save storage space and reduce workload.
In a first aspect, an embodiment of the present application provides a video playing method, including:
acquiring a plurality of frames of original images and serial numbers corresponding to single frames of original images in the plurality of frames of original images;
acquiring annotation data corresponding to the multi-frame original image;
acquiring the labeling data of the single-frame original image in the multi-frame original image according to the serial number corresponding to the single-frame original image in the multi-frame original image and the labeling data corresponding to the multi-frame original image;
generating a plurality of frames of marked images according to the single frame of original images in the plurality of frames of original images and marked data of the single frame of original images in the plurality of frames of original images;
and playing the multi-frame original image and/or the multi-frame marked image.
In this implementation, the annotation data of a single-frame original image can be quickly obtained from the annotation data corresponding to the multiple frames of original images by means of the serial number corresponding to that single frame, and an annotated image is generated from it. Because the annotated image already contains the annotation data, the original image can be watched alongside the annotated image; this saves storage space, removes the need to open two players at the same time, and reduces the workload.
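As an illustration of the serial-number lookup described above, the following is a minimal Python sketch; it assumes each annotation record carries the serial number, serial-number list, or serial-number interval of the frames it applies to, and the record layout is an assumption for illustration rather than the encoding defined by this application.

```python
# Minimal sketch of the serial-number lookup, assuming each annotation record
# stores the serial number(s) of the frames it applies to (a single number,
# a list, or an inclusive (start, end) interval). The record layout is assumed.

def annotations_for_frame(serial, records):
    """Return the annotation records that apply to the frame with this serial number."""
    matched = []
    for rec in records:
        ref = rec["serial"]
        if isinstance(ref, tuple):          # serial-number interval
            hit = ref[0] <= serial <= ref[1]
        elif isinstance(ref, list):         # explicit serial-number list
            hit = serial in ref
        else:                               # single serial number
            hit = serial == ref
        if hit:
            matched.append(rec)
    return matched

# Example: a rectangle annotation attached to frames 120-180.
records = [{"serial": (120, 180), "type": "rect", "position": (40, 60), "size": (200, 120)}]
print(annotations_for_frame(150, records))  # -> [the rectangle record]
```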
Further, the step of obtaining the serial numbers corresponding to the multiple frames of original images and the single frame of original images in the multiple frames of original images includes:
acquiring first coded data;
and decoding the first encoded data to obtain the multi-frame original image and serial numbers corresponding to single-frame original images in the multi-frame original image.
In this implementation, the video contains multiple frames of original images, each frame having a corresponding serial number, and the video is encoded before being transmitted and stored; decoding the first encoded data quickly yields the multiple frames of original images and the serial numbers corresponding to the single-frame original images.
Further, the step of obtaining the annotation data corresponding to the multi-frame original image includes:
acquiring second coded data;
and decoding the second encoded data to obtain the annotation data corresponding to the multi-frame original image.
In this implementation, the annotation data corresponding to the multiple frames of original images is encoded before being stored and transmitted, which reduces storage space and improves transmission efficiency; when the video is played, the encoded data is decoded to obtain the annotation data corresponding to the multiple frames of original images.
Further, the step of playing the multi-frame original image and/or the multi-frame annotation image includes:
acquiring a playing mode;
and playing the multi-frame marked image and/or the multi-frame original image according to the playing mode.
In this implementation, the user can select a playing mode as needed to play the multiple frames of annotated images and/or the multiple frames of original images, so that the required information can be obtained quickly, which improves the user experience.
Further, the step of playing the multi-frame labeling image and/or the multi-frame original image according to the playing mode includes:
when the playing mode is that only the multi-frame original image is played, playing the multi-frame original image according to a serial number corresponding to a single-frame original image in the multi-frame original image;
and when the playing mode is that only the multi-frame marked image is played, playing the multi-frame marked image according to the serial number corresponding to the single-frame original image in the multi-frame original image.
In this implementation, the user can select the playing mode as needed; the multiple frames of original images or the multiple frames of annotated images are played according to the serial numbers corresponding to the single-frame original images and the selected playing mode, so the user obtains the required information and the user experience is improved.
Further, the step of playing the multi-frame labeling image and/or the multi-frame original image according to the playing mode further includes:
when the playing mode is a left-right layout mode, a single-frame original image in the multi-frame original image and the single-frame annotated image corresponding to it in the multi-frame annotated image are spliced left and right at a preset scaling ratio to generate multiple frames of left-right spliced images, and the multi-frame left-right spliced images are displayed according to the serial numbers corresponding to the single-frame original images and the video frame rate;
when the playing mode is an up-down layout mode, a single-frame original image in the multi-frame original image and the single-frame annotated image corresponding to it in the multi-frame annotated image are spliced up and down at a preset scaling ratio to generate multiple frames of up-down spliced images, and the multi-frame up-down spliced images are displayed according to the serial numbers corresponding to the single-frame original images and the video frame rate.
In this implementation, each single-frame original image in the multi-frame original image and its corresponding single-frame annotated image are spliced left-right or up-down at the preset scaling ratio to form multiple frames of left-right or up-down spliced images, so the user can view the original image and the annotated image at the same time, compare them intuitively, and obtain the annotation data corresponding to the original image while also seeing the more complete information in the original image, which improves the user experience.
Further, the step of playing the multi-frame labeling image and/or the multi-frame original image further includes:
receiving an intercepting instruction;
intercepting the played multi-frame marked image and/or the multi-frame original image according to the intercepting instruction to obtain an intercepted video;
and saving the intercepted video.
In this implementation, an interception instruction is received, and a video is captured from the multiple frames of annotated images and/or original images being played according to the instruction; the video clip required by the user can then be saved, which improves the user experience.
In a second aspect, an embodiment of the present application further provides a video playing device, where the device includes:
the original image acquisition module is used for acquiring a plurality of frames of original images and serial numbers corresponding to single frames of original images in the plurality of frames of original images;
the marking data acquisition module is used for acquiring marking data corresponding to the multi-frame original image; acquiring the annotation data of the single-frame original image in the multi-frame original image according to the serial number corresponding to the single-frame original image and the annotation data corresponding to the multi-frame original image;
the marked image generation module is used for generating a plurality of marked images according to the single-frame original image in the plurality of original images and marked data of the single-frame original image in the plurality of original images;
and the playing module is used for playing the multi-frame original image and/or the multi-frame marked image.
In a third aspect, an electronic device provided in an embodiment of the present application includes: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspects when the computer program is executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having instructions stored thereon, which when executed on a computer, cause the computer to perform the method according to any of the first aspects.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a video playing method provided in an embodiment of the present application;
FIG. 2 is a flowchart for acquiring an original image and a serial number according to an embodiment of the present application;
FIG. 3 is a flowchart for obtaining annotation data according to an embodiment of the present application;
fig. 4 is a flowchart of playing an image according to an embodiment of the present application;
fig. 5 is a schematic diagram of an internal structure of a video playing device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1
Referring to fig. 1, an embodiment of the present application provides a video playing method, including:
S1: acquiring multiple frames of original images and the serial numbers corresponding to the single-frame original images in the multiple frames of original images;
S2: acquiring the annotation data corresponding to the multiple frames of original images;
S3: acquiring the annotation data of a single-frame original image in the multiple frames of original images according to the serial number corresponding to that single-frame original image and the annotation data corresponding to the multiple frames of original images;
S4: generating multiple frames of annotated images according to the single-frame original images in the multiple frames of original images and the annotation data of those single-frame original images;
S5: playing the multiple frames of original images and/or the multiple frames of annotated images.
In the above embodiment, the annotation data may include multiple pieces of annotation information, where each piece of annotation information includes: an annotation type (e.g., rectangle, arrow, circle, polygon, straight line, wavy line, text, etc.), an annotation size (e.g., the width and height of a rectangle, the radius of a circle, the vertex coordinates of a polygon, the font size of text, etc.), an annotation position (the top-left pixel coordinates or the center-point pixel coordinates at which the annotation is drawn), an annotation style (outline color, thickness, fill color, transparency, etc.), and an image serial number (which may be a single serial number, a list of serial numbers, or a serial-number interval). Each frame of image may correspond to multiple pieces of annotation information. The serial number may be the frame number corresponding to the original image, or the timestamp corresponding to each frame of the original image.
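For illustration only, one way to model a single piece of annotation information with the fields listed above is sketched below in Python; the field names and types are assumptions, not the data format defined by this application.

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class Annotation:
    # Field names are illustrative assumptions mirroring the fields listed above.
    kind: str                                  # annotation type: "rect", "arrow", "circle", "polygon", "text", ...
    position: Tuple[int, int]                  # top-left or center-point pixel coordinates
    size: Tuple[int, ...] = ()                 # e.g. (width, height) for a rectangle, (radius,) for a circle
    style: dict = field(default_factory=dict)  # outline color, thickness, fill color, transparency
    serials: Union[int, List[int], Tuple[int, int]] = 0  # frame number(s) or a serial-number interval
    text: str = ""                             # label text for text annotations
```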
In S4, a single-frame annotated image corresponding to a single-frame original image is generated from that single-frame original image and its annotation data; since multiple consecutive images are acquired in S1, multiple frames of annotated images are generated.
The annotation data of a single-frame original image can be quickly obtained from the annotation data corresponding to the multiple frames of original images by means of the serial number corresponding to that single frame, and an annotated image is generated. Because the annotated image contains the annotation data, the original image can be watched alongside the annotated image, which saves storage space, removes the need to open two players at the same time, and reduces the workload.
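A minimal sketch of this generation step for two of the annotation types is given below, using OpenCV drawing calls and the hypothetical `Annotation` class from the sketch above; it is one possible rendering, not the application's own implementation.

```python
import cv2

def draw_annotations(image, annotations):
    """Return an annotated copy of the frame; the original image stays untouched."""
    out = image.copy()
    for ann in annotations:
        color = ann.style.get("color", (0, 0, 255))       # BGR, red by default
        thickness = ann.style.get("thickness", 2)
        x, y = ann.position
        if ann.kind == "rect":
            w, h = ann.size
            cv2.rectangle(out, (x, y), (x + w, y + h), color, thickness)
        elif ann.kind == "text":
            cv2.putText(out, ann.text, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                        ann.style.get("font_scale", 0.6), color, thickness)
        # Other annotation types (circle, polygon, arrow, ...) would be handled similarly.
    return out
```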
Referring to fig. 2, in one possible embodiment, S1 includes:
S11: acquiring first coded data;
S12: decoding the first encoded data to obtain multiple frames of original images and the serial numbers corresponding to the single-frame original images in the multiple frames of original images.
In the above embodiment, the first encoded data is video streaming media data obtained from an NVR video file.
In the embodiment of the application, the NVR is a network video recorder and is a store-and-forward part of a network video monitoring system, and the NVR and the video encoder or the network camera work cooperatively to complete video recording, storing and forwarding functions.
The video contains multiple frames of original images, each frame having a corresponding serial number, and the video is encoded before being transmitted and stored; decoding the first encoded data quickly yields the multiple frames of original images and the serial numbers corresponding to the single-frame original images.
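As a rough illustration of S11/S12, the sketch below decodes a video source with OpenCV and uses the decode-order frame index as the serial number, with the frame timestamp as the alternative mentioned above; treating the NVR file or stream as a source that OpenCV can open is an assumption.

```python
import cv2

def decode_original_frames(source):
    """Yield (serial number, timestamp in ms, original image) for each decoded frame."""
    cap = cv2.VideoCapture(source)                       # e.g. a path to an NVR video file or a stream URL
    serial = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        timestamp_ms = cap.get(cv2.CAP_PROP_POS_MSEC)    # timestamp-style serial number
        yield serial, timestamp_ms, frame
        serial += 1
    cap.release()
```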
Referring to fig. 3, in one possible embodiment, S2 includes:
S21: acquiring second coded data;
S22: decoding the second encoded data to obtain the annotation data corresponding to the multiple frames of original images.
In one possible embodiment, the second encoded data is a compressed data file or a network data stream.
In one possible implementation, the second encoded data is compressed and stored in a custom sub-container, and the sub-container is contained within the user data container (the user data box, 'udta') of an MP4-format video.
In this embodiment, the annotation data corresponding to the multiple frames of original images is encoded before being stored and transmitted, which reduces storage space and improves transmission efficiency; when the video is played, the encoded data is decoded to obtain the annotation data corresponding to the multiple frames of original images.
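The sketch below illustrates S21/S22 under the MP4 layout described above. It assumes the compressed annotation data is zlib-compressed JSON stored in a custom sub-box (given the hypothetical type 'anot') of the 'udta' user data box under 'moov'; only the size/type box framing is standard ISO BMFF, everything else is an assumption.

```python
import json
import struct
import zlib

def iter_boxes(data):
    """Iterate (type, payload) over the ISO-BMFF boxes laid out back to back in `data`.
    Extended (64-bit) box sizes are not handled in this sketch."""
    offset, end = 0, len(data)
    while offset + 8 <= end:
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size < 8:
            break
        yield box_type, data[offset + 8:offset + size]
        offset += size

def read_annotation_data(mp4_bytes):
    """Find the assumed 'anot' sub-container inside moov/udta and decode it."""
    for t1, moov in iter_boxes(mp4_bytes):
        if t1 != b"moov":
            continue
        for t2, udta in iter_boxes(moov):
            if t2 != b"udta":
                continue
            for t3, payload in iter_boxes(udta):
                if t3 == b"anot":                        # hypothetical custom sub-container type
                    return json.loads(zlib.decompress(payload))
    return []
```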
Referring to fig. 4, in one possible embodiment, S4 includes:
S41: acquiring a playing mode;
S42: playing the multi-frame annotated image and/or the multi-frame original image according to the playing mode.
In this embodiment, the user can select a playing mode as needed to play the multiple frames of annotated images and/or the multiple frames of original images, so that the required information can be obtained quickly, which improves the user experience.
In one possible implementation, S42 specifically includes:
when the playing mode is that only the multi-frame original image is played, the multi-frame original image is played according to the serial number corresponding to the single-frame original image in the multi-frame original image;
when the playing mode is to only play the multi-frame marked images, the multi-frame marked images are played according to the serial numbers corresponding to the single-frame original images in the multi-frame original images.
In this embodiment, the user can select the playing mode as needed; by playing the multiple frames of original images or the multiple frames of annotated images according to the serial numbers corresponding to the single-frame original images and the selected playing mode, the user obtains the required information and the user experience is improved.
In a possible implementation, when the playing mode is a left-right layout mode, a single-frame original image in the multiple frames of original images and the single-frame annotated image corresponding to it are spliced left and right at a preset scaling ratio to generate multiple frames of left-right spliced images, and the multiple frames of left-right spliced images are displayed according to the serial numbers corresponding to the single-frame original images;
when the playing mode is an up-down layout mode, a single-frame original image in the multiple frames of original images and the single-frame annotated image corresponding to it are spliced up and down at a preset scaling ratio to generate multiple frames of up-down spliced images, and the multiple frames of up-down spliced images are displayed according to the serial numbers corresponding to the single-frame original images.
In the above embodiment, each single-frame original image in the multiple frames of original images is spliced with its single-frame annotated image to obtain a single-frame left-right (or up-down) spliced image, and the collection of these single-frame spliced images constitutes the multiple frames of left-right (or up-down) spliced images.
In this embodiment, each single-frame original image and its corresponding single-frame annotated image are spliced left-right or up-down at the preset scaling ratio to form multiple frames of left-right or up-down spliced images, so the user can view the original image and the annotated image at the same time, compare them intuitively, and obtain the annotation data corresponding to the original image while also seeing the more complete information in the original image, which improves the user experience.
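A minimal sketch of the left-right and up-down splicing is given below; the scaling ratio of 0.5 is an arbitrary assumption, and the spliced frames would then be shown in serial-number order at the video frame rate (for example with cv2.imshow followed by cv2.waitKey(int(1000 / fps))).

```python
import cv2
import numpy as np

def splice(original, annotated, layout="left-right", scale=0.5):
    """Scale both frames by the preset ratio and splice them into one image."""
    small_orig = cv2.resize(original, None, fx=scale, fy=scale)
    small_anno = cv2.resize(annotated, None, fx=scale, fy=scale)
    if layout == "left-right":
        return np.hstack([small_orig, small_anno])   # original on the left, annotated on the right
    return np.vstack([small_orig, small_anno])       # original on top, annotated below
```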
In a possible implementation, when the playing mode is a picture-in-picture mode, the original image and the annotated image are overlaid according to a preset scaling ratio and pixel position to form a picture-in-picture image; the picture-in-picture images are displayed in sequence in the video playing window according to parameters such as the serial numbers corresponding to the single-frame original images and the video frame rate, forming the video playback effect. It should be noted that the annotated image may be placed at different positions of the original image (e.g., top-left, bottom-left, top-right, bottom-right) according to the user's input; a rough sketch of this mode is given after the notes below.
in one possible implementation, the user may select a video frame rate at which the original image and/or the annotation image are played.
It should be noted that the video frame rate, the playing mode, the scaling ratio, and so on may be selected and changed by the user as needed during playback.
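For the picture-in-picture mode described above, a rough sketch is shown below; shrinking the annotated image and pasting it over the original at the chosen pixel position is one possible arrangement (the roles could equally be swapped), and the default ratio and position are assumptions.

```python
import cv2

def picture_in_picture(original, annotated, scale=0.25, position=(10, 10)):
    """Overlay a shrunken annotated image onto the original at the given pixel position."""
    out = original.copy()
    inset = cv2.resize(annotated, None, fx=scale, fy=scale)
    x, y = position                      # e.g. top-left; bottom-left/top-right/bottom-right also possible
    h, w = inset.shape[:2]
    out[y:y + h, x:x + w] = inset        # overwrite the region with the inset image
    return out
```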
In one possible implementation, after S4, the method further includes:
receiving an intercepting instruction;
intercepting the played multi-frame marked image and/or the multi-frame original image according to the intercepting instruction to obtain an intercepted video;
and saving the intercepted video.
An interception instruction is received, and a video is captured from the multiple frames of annotated images and/or original images being played according to the instruction; the video clip required by the user can then be saved, which improves the user experience.
In one possible implementation, an export instruction may also be received to export the captured video.
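A minimal sketch of saving an intercepted clip is shown below, writing the frames displayed between the start and stop of the interception to a new file with OpenCV's VideoWriter; the codec and output path are assumptions.

```python
import cv2

def save_clip(frames, fps, path="clip.mp4"):
    """frames: an iterable of the displayed images (original, annotated, or spliced)."""
    writer = None
    for frame in frames:
        if writer is None:
            h, w = frame.shape[:2]
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            writer = cv2.VideoWriter(path, fourcc, fps, (w, h))
        writer.write(frame)
    if writer is not None:
        writer.release()
```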
Example 2
Referring to fig. 5, an embodiment of the present application provides a video playing device, including:
the original image acquisition module 1 is used for acquiring serial numbers corresponding to a plurality of frames of original images and a single frame of original image in the plurality of frames of original images;
the marking data acquisition module 2 is used for acquiring marking data corresponding to a plurality of frames of original images; acquiring the annotation data of the single-frame original image in the multi-frame original image according to the serial number corresponding to the single-frame original image and the annotation data corresponding to the multi-frame original image;
the marked image generating module 3 is used for generating a plurality of frames of marked images according to the single frame of original images in the plurality of frames of original images and marked data of the single frame of original images in the plurality of frames of original images;
and the playing module 4 is used for playing a plurality of frames of original images and/or a plurality of frames of marked images.
In a possible implementation, the original image acquisition module 1 is further configured to acquire first encoded data; and decoding the first encoded data to obtain a plurality of frames of original images and serial numbers corresponding to single frames of original images in the plurality of frames of original images.
In a possible embodiment, the labeling data acquisition module 2 is further configured to acquire second encoded data; and decoding the second encoded data to obtain the marking data corresponding to the multi-frame original image.
In a possible implementation manner, the playing module 4 is further configured to obtain a playing mode; and playing the multi-frame marked image and/or the multi-frame original image according to the playing mode.
In a possible implementation manner, the playing module 4 is further configured to play the multiple frames of original images according to the serial numbers corresponding to the single frames of original images in the multiple frames of original images when the playing mode is that only the multiple frames of original images are played; when the playing mode is to only play the multi-frame marked images, the multi-frame marked images are played according to the serial numbers corresponding to the single-frame original images in the multi-frame original images.
In a possible implementation manner, when the playing mode is a left-right layout mode, the playing module 4 is further configured to splice, left and right, corresponding single-frame labeling images in the single-frame original images and the multiple-frame labeling images in the multiple-frame original images according to a preset scaling ratio, so as to generate multiple-frame left and right spliced images; displaying a plurality of frames of left and right spliced images according to the serial numbers and the video frame rates corresponding to single frames of original images in the plurality of frames of original images; when the playing mode is an up-down layout mode, the corresponding single-frame marked images in the single-frame original images and the multi-frame marked images in the multi-frame original images are spliced up and down according to a preset scaling ratio, and multi-frame up-down spliced images are generated; and displaying the multi-frame up-down spliced image according to the serial number and the video frame rate corresponding to the single-frame original image in the multi-frame original image.
In a possible implementation manner, the playing module 4 is further configured to receive an interception instruction, capture a video from the played multiple frames of annotated images and/or multiple frames of original images according to the instruction, and save the captured video.
Example 3
An electronic device includes a memory for storing a computer program and a processor that executes the computer program to cause the electronic device to perform the video playback method of embodiment 1.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include a processor 61, a communication interface 62, a memory 63, and at least one communication bus 64. Wherein the communication bus 64 is used to enable direct connection communication of these components. The communication interface 62 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The processor 61 may be an integrated circuit chip with signal processing capabilities.
The processor 61 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. The general-purpose processor may be a microprocessor, or the processor 61 may be any conventional processor or the like.
The memory 63 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or the like. The memory 63 stores computer-readable instructions which, when executed by the processor 61, cause the apparatus to perform the steps described above in relation to the method embodiments of Figs. 1-4.
Optionally, the electronic device may further include a storage controller and an input-output unit. The memory 63, the storage controller, the processor 61, the peripheral interface, and the input-output unit are electrically connected to each other, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses 64. The processor 61 is configured to execute executable modules stored in the memory 63, such as software functional modules or computer programs included in the device.
The input-output unit is used to provide the user with task creation and to set a selectable start period or a preset execution time for the created task, so as to realize interaction between the user and the server. The input-output unit may be, but is not limited to, a mouse, a keyboard, and the like.
It will be appreciated that the configuration shown in fig. 6 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 6, or have a different configuration than shown in fig. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination thereof.
In addition, an embodiment of the present application further provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the video playing method of Embodiment 1.
The embodiment of the application also provides a computer program product, which when running on a computer, causes the computer to execute the video playing method of the embodiment of the method.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application; various modifications and variations may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the protection scope of the present application.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (9)

1. A video playing method, applied to an electronic device, comprising:
acquiring a plurality of frames of original images and serial numbers corresponding to single frames of original images in the plurality of frames of original images;
acquiring annotation data corresponding to the multi-frame original image;
acquiring the labeling data of the single-frame original image in the multi-frame original image according to the serial number corresponding to the single-frame original image in the multi-frame original image and the labeling data corresponding to the multi-frame original image;
generating a plurality of frames of marked images according to the single frame of original images in the plurality of frames of original images and marked data of the single frame of original images in the plurality of frames of original images;
playing the multi-frame original image and/or the multi-frame marked image;
the step of obtaining the annotation data corresponding to the multi-frame original image comprises the following steps:
acquiring second coded data;
decoding the second encoded data to obtain labeling data corresponding to the multi-frame original image;
the second encoded data is stored after compression in a sub-container, and the sub-container is contained within a user data container in mp4 format video.
2. The video playing method according to claim 1, wherein the step of obtaining serial numbers corresponding to a plurality of frames of original images and a single frame of original image in the plurality of frames of original images includes:
acquiring first coded data;
and decoding the first encoded data to obtain the multi-frame original image and serial numbers corresponding to single-frame original images in the multi-frame original image.
3. The method according to claim 1, wherein the step of playing the multi-frame original image and/or the multi-frame annotation image comprises:
acquiring a playing mode;
and playing the multi-frame marked image and/or the multi-frame original image according to the playing mode.
4. A video playing method according to claim 3, wherein the step of playing the multi-frame annotation image and/or the multi-frame original image according to the playing mode comprises:
when the playing mode is that only the multi-frame original image is played, playing the multi-frame original image according to a serial number corresponding to a single-frame original image in the multi-frame original image;
and when the playing mode is that only the multi-frame marked image is played, playing the multi-frame marked image according to the serial number corresponding to the single-frame original image in the multi-frame original image.
5. The video playing method according to claim 3, wherein the step of playing the multi-frame annotation image and/or the multi-frame original image according to the playing mode further comprises:
when the playing mode is a left-right layout mode, splicing a single-frame original image in the multi-frame original image and a single-frame marked image corresponding to the single-frame original image in the multi-frame marked image to the left-right according to a preset scaling ratio to generate multi-frame left-right spliced images; playing the left and right spliced images of the multiple frames according to the serial numbers corresponding to the single-frame original images in the multiple frames;
when the playing mode is an up-down layout mode, splicing a single-frame original image in the multi-frame original image and a single-frame marked image corresponding to the single-frame original image in the multi-frame marked image up and down according to a preset scaling ratio to generate multi-frame up-down spliced images; and displaying the multi-frame up-down spliced image according to the serial number corresponding to the single-frame original image in the multi-frame original image.
6. The method according to claim 1, wherein the step of playing the multi-frame annotation image and/or the multi-frame original image further comprises:
receiving an intercepting instruction;
intercepting the played multi-frame marked image and/or the multi-frame original image according to the intercepting instruction to obtain an intercepted video;
and saving the intercepted video.
7. A video playback apparatus, characterized by being applied to an electronic device, comprising:
the original image acquisition module is used for acquiring a plurality of frames of original images and serial numbers corresponding to single frames of original images in the plurality of frames of original images;
the marking data acquisition module is used for acquiring marking data corresponding to the multi-frame original image; acquiring the annotation data of the single-frame original image in the multi-frame original image according to the serial number corresponding to the single-frame original image and the annotation data corresponding to the multi-frame original image;
the marked image generation module is used for generating a plurality of marked images according to the single-frame original image in the plurality of original images and marked data of the single-frame original image in the plurality of original images;
the playing module is used for playing the multi-frame original image and/or the multi-frame marked image;
the annotation data acquisition module is also used for acquiring second coding data; decoding the second encoded data to obtain labeling data corresponding to the multi-frame original image;
the second encoded data is stored after compression in a sub-container, and the sub-container is contained within a user data container in mp4 format video.
8. An electronic device, comprising: memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1-6 when the computer program is executed.
9. A computer readable storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the method of any of claims 1-6.
CN202111365787.8A 2021-11-18 2021-11-18 Video playing method and device, electronic equipment and readable storage medium Active CN114079821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111365787.8A CN114079821B (en) 2021-11-18 2021-11-18 Video playing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111365787.8A CN114079821B (en) 2021-11-18 2021-11-18 Video playing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114079821A CN114079821A (en) 2022-02-22
CN114079821B true CN114079821B (en) 2024-02-20

Family

ID=80284130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111365787.8A Active CN114079821B (en) 2021-11-18 2021-11-18 Video playing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114079821B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200952486A (en) * 2008-06-13 2009-12-16 Ind Tech Res Inst Video surveillance system, module and method for annotation and de-annotation thereof

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112441A (en) * 2013-04-18 2014-10-22 深圳市迈普视通科技有限公司 Multi-source tiled display system and display screen gap aberration eliminating method
CN107292944A (en) * 2017-06-26 2017-10-24 上海传英信息技术有限公司 The method for recording and recording system of a kind of screen picture
WO2020233483A1 (en) * 2019-05-22 2020-11-26 腾讯科技(深圳)有限公司 Video coding method and video decoding method
CN110597577A (en) * 2019-05-31 2019-12-20 珠海全志科技股份有限公司 Head-mounted visual equipment and split-screen display method and device thereof
CN111428083A (en) * 2020-03-19 2020-07-17 平安国际智慧城市科技股份有限公司 Video monitoring warning method, device, equipment and storage medium
CN111506769A (en) * 2020-04-21 2020-08-07 浙江大华技术股份有限公司 Video file processing method and device, storage medium and electronic device
CN111666024A (en) * 2020-05-28 2020-09-15 维沃移动通信(杭州)有限公司 Screen recording method and device and electronic equipment
CN112019834A (en) * 2020-07-22 2020-12-01 北京迈格威科技有限公司 Video stream processing method, device, equipment and medium
CN112019767A (en) * 2020-08-07 2020-12-01 北京奇艺世纪科技有限公司 Video generation method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Optimization method for automatic annotation of remote video surveillance images; 武卫翔 (Wu Weixiang); 电子技术与软件工程 (Electronic Technology & Software Engineering), No. 18; full text *

Also Published As

Publication number Publication date
CN114079821A (en) 2022-02-22

Similar Documents

Publication Publication Date Title
US8854474B2 (en) System and method for quick object verification
KR20150053162A (en) Search System and Video Search method
US11894021B2 (en) Data processing method and system, storage medium, and computing device
CN111683267A (en) Method, system, device and storage medium for processing media information
EP3229174A1 (en) Method for video investigation
CN111651966A (en) Data report file generation method and device and electronic equipment
CN112269713A (en) Method, device and equipment for acquiring program running state and storage medium
CN114079821B (en) Video playing method and device, electronic equipment and readable storage medium
KR101984825B1 (en) Method and Apparatus for Encoding a Cloud Display Screen by Using API Information
CN113949920A (en) Video annotation method and device, terminal equipment and storage medium
CN113793323A (en) Component detection method, system, equipment and medium
CN113190680A (en) Unstructured data marking method, device, equipment and storage medium
WO2024002092A1 (en) Method and apparatus for pushing video, and storage medium
US20180121729A1 (en) Segmentation-based display highlighting subject of interest
CN111104549A (en) Method and equipment for retrieving video
CN110889352A (en) Image blurring processing method, computer device, and computer-readable storage medium
CN111369591A (en) Method, device and equipment for tracking moving object
CN112738629B (en) Video display method and device, electronic equipment and storage medium
CN112004065B (en) Video display method, display device and storage medium
CN115690496A (en) Real-time regional intrusion detection method based on YOLOv5
CN112637538B (en) Smart tag method, system, medium, and terminal for optimizing video analysis
CN115168171A (en) Webpage exception handling method and device, electronic equipment and medium
CN107357906B (en) Data processing method and device and image acquisition equipment
CN112768046A (en) Data processing method, medical management system and terminal
CN109214474B (en) Behavior analysis and information coding risk analysis method and device based on information coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant