CN112866640A - Data storage method and device - Google Patents

Data storage method and device

Info

Publication number
CN112866640A
CN112866640A (application CN202110022623.9A)
Authority
CN
China
Prior art keywords
video
data packet
data
packet
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110022623.9A
Other languages
Chinese (zh)
Inventor
万龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Vyagoo Technology Co ltd
Original Assignee
Zhuhai Vyagoo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Vyagoo Technology Co., Ltd.
Priority to CN202110022623.9A
Publication of CN112866640A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the disclosure provide a data storage method and a data storage device. One embodiment of the method comprises: receiving videos captured by a plurality of communicatively connected image capture devices; compiling each received video according to a preset compilation cycle to obtain a data packet for each video, wherein each obtained data packet comprises the device identifier of the corresponding image capture device and the capture time of the video; and combining the obtained data packets into one video file according to the device identifier and capture time included in each data packet, and storing the video file. The embodiment avoids data loss caused by writing multiple data paths simultaneously, and helps improve the stability and reliability of data storage.

Description

Data storage method and device
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a data storage method and device.
Background
Currently, a monitoring system usually needs to connect multiple cameras to cover its monitored areas and acquire audio and/or video from each of them. In the related art, each camera produces its own monitoring data during operation. For the storage medium, writing multiple data paths simultaneously significantly reduces the write speed, and data can even be lost when writes fail to keep up.
Therefore, in the related art, if the monitoring system writes the monitoring data collected by all connected cameras into its storage medium at the same time, the write speed degrades, and data may even be lost because it cannot be written in time.
Disclosure of Invention
The embodiment of the disclosure provides a data storage method and device.
In a first aspect, an embodiment of the present disclosure provides a data storage method, where the method includes: receiving videos collected by a plurality of image collecting devices in communication connection; compiling the received videos respectively according to a preset compiling period to obtain a data packet for each video, wherein the obtained data packet comprises an equipment identifier corresponding to the image acquisition equipment and video acquisition time corresponding to the video; and combining the obtained multiple data packets into a video file according to the equipment identification and the video acquisition time included by each data packet, and storing the video file.
In some embodiments, separately compiling the received plurality of videos to obtain a data packet for each video includes: for each of the received videos, generating the packet header of a data packet by combining the capture time of the video with the device identifier of the image capture device that captured it; using the video itself as the packet data of the data packet; and combining the obtained packet header and packet data to obtain the data packet for the video.
In some embodiments, separately compiling the received plurality of videos to obtain a data packet for each video includes: aiming at a video in a plurality of received videos, combining and generating a packet header of a data packet according to the video acquisition time of the video and the equipment identification of image acquisition equipment for acquiring the video; using the basic code stream corresponding to the video as the packet data of the data packet; and combining the obtained packet header and the packet data to obtain a data packet for the video.
In some embodiments, combining the obtained multiple data packets into one video file according to the device identifier and the video capture time included in each data packet includes: and generating file names according to the video acquisition time, using the obtained multiple data packets as file contents, and determining the combination of the file contents and the generated file names as a video file.
In some embodiments, the method further comprises: in response to receiving a first video playing request which is input by a user and comprises request information, searching a data packet where the request information is located from a plurality of stored video files, and playing a video corresponding to the searched data packet, wherein the request information comprises any one or more of the following items: equipment identification and video acquisition time.
In some embodiments, the method further comprises: in response to receiving a second video playing request comprising N device identifications input by a user, creating N windows; searching a data packet where each equipment identifier is located from a plurality of stored video files; and presenting videos corresponding to the data packets under the N device identifications in the created N windows, wherein one window presents the video corresponding to the data packet under one device identification.
In a second aspect, embodiments of the present disclosure provide a data storage device, the device including: a video receiving unit configured to receive videos captured by a plurality of image capturing devices communicatively connected; the video compiling unit is configured to compile the received videos respectively according to a preset compiling period to obtain a data packet for each video, wherein the obtained data packet comprises an equipment identifier corresponding to the image acquisition equipment and video acquisition time corresponding to the video; and the file storage unit is configured to combine the obtained multiple data packets into one video file according to the equipment identification and the video acquisition time included in each data packet, and store the video file.
In some embodiments, separately compiling the received plurality of videos to obtain a data packet for each video includes: for each of the received videos, generating the packet header of a data packet by combining the capture time of the video with the device identifier of the image capture device that captured it; using the video itself as the packet data of the data packet; and combining the obtained packet header and packet data to obtain the data packet for the video.
In some embodiments, separately compiling the received plurality of videos to obtain a data packet for each video includes: aiming at a video in a plurality of received videos, combining and generating a packet header of a data packet according to the video acquisition time of the video and the equipment identification of image acquisition equipment for acquiring the video; using the basic code stream corresponding to the video as the packet data of the data packet; and combining the obtained packet header and the packet data to obtain a data packet for the video.
In some embodiments, combining the obtained multiple data packets into one video file according to the device identifier and the video capture time included in each data packet includes: and generating file names according to the video acquisition time, using the obtained multiple data packets as file contents, and determining the combination of the file contents and the generated file names as a video file.
In some embodiments, the apparatus further comprises: the video playing unit is configured to respond to a first video playing request which is input by a user and comprises request information, search a data packet where the request information is located from a plurality of stored video files, and play a video corresponding to the searched data packet, wherein the request information comprises any one or more of the following items: equipment identification and video acquisition time.
In some embodiments, the video playback unit is further configured to create N windows in response to receiving a second video playback request comprising N device identifications input by a user; searching a data packet where each equipment identifier is located from a plurality of stored video files; and presenting videos corresponding to the data packets under the N device identifications in the created N windows, wherein one window presents the video corresponding to the data packet under one device identification.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in any of the implementations of the first aspect.
The data storage method and the data storage device provided by the embodiment of the disclosure can receive videos acquired by a plurality of image acquisition devices in communication connection. Then, according to a preset compiling period, compiling the received videos respectively to obtain data packets for the videos. The obtained data packet comprises the equipment identification corresponding to the image acquisition equipment and the video acquisition time corresponding to the video. And finally, combining the obtained multiple data packets into a video file according to the equipment identification and the video acquisition time included by each data packet, and storing the video file. According to the method and the device provided by the embodiment of the disclosure, one video file is obtained by processing a plurality of videos acquired by each image acquisition device, and when data is written into the storage medium, only the video file needs to be written into at the same time, that is, only one path of data needs to be written into at the same time, which is beneficial to improving the data writing speed. In addition, the data loss caused by multi-path data writing can be avoided, and the stability and the reliability of data storage are improved.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a data storage method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of another data storage method provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart of yet another data storage method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a data storage device provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates a flow of one embodiment of a data storage method according to the present disclosure. The data storage method comprises the following steps:
step 101, receiving videos collected by a plurality of image collecting devices which are in communication connection.
In this embodiment, the execution subject of the data storage method may be an electronic device, such as an in-vehicle terminal device, an in-vehicle monitoring device, and the like. The execution main body can receive videos acquired by a plurality of image acquisition devices in communication connection in a wired connection mode or a wireless connection mode. Wherein one image capturing device typically captures one video. Multiple videos may be captured by multiple image capture devices.
It should be noted that the image capturing device may capture video as well as audio. If one or more image capturing devices are used to capture audio, the video received by the execution main body may include audio.
And 102, respectively compiling the received videos according to a preset compiling period to obtain data packets aiming at the videos.
The obtained data packet comprises the device identifier of the corresponding image capture device and the capture time of the video. Here, the device identifier may be any information that identifies the device; as an example, it may be 001 or A. The video capture time generally indicates when the image capture device captured the video. As an example, the capture time may be 20201221092012, indicating that the video was captured at 09:20:12 on December 21, 2020.
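The 14-digit capture-time stamp described above can be handled with standard date parsing. A minimal sketch (the helper names and the `YYYYMMDDHHMMSS` layout are inferred from the example above, not stated as a format by the patent):

```python
from datetime import datetime

# Assumed stamp layout: YYYYMMDDHHMMSS, as in the example 20201221092012.
TS_FORMAT = "%Y%m%d%H%M%S"

def parse_capture_time(stamp: str) -> datetime:
    """Turn a 14-digit capture-time stamp into a datetime."""
    return datetime.strptime(stamp, TS_FORMAT)

def format_capture_time(t: datetime) -> str:
    """Turn a datetime back into the 14-digit stamp."""
    return t.strftime(TS_FORMAT)

t = parse_capture_time("20201221092012")
print(t.year, t.hour, t.minute, t.second)  # 2020 9 20 12
```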
The compilation cycle may be a preset value indicating a cycle length; as an example, it may be 1 minute or 2 minutes. It should be noted that a second-level cycle (for example, 1 second) increases the required computing resources while usually offering little practical benefit. Therefore, in practice, to save computing resources, the compilation cycle is usually not set at the second level.
In this embodiment, the execution body may process the videos received within one compilation cycle at a time, taking the cycle as the unit. For example, if the cycle is 1 second, the execution body may start processing the videos of a given second at the moment that second ends; that is, the videos received within the 5th second are processed when the 5th second ends.
It should be noted that, since the captured video is usually continuous, the video capture time generally refers to the end time of each compilation cycle. In the example above, each video being processed when the 5th second ends would carry the 5th second as its capture time.
Here, the execution main body may respectively compile each video received in one compilation cycle to obtain a plurality of data packets corresponding to the compilation cycle, where each video in the compilation cycle may obtain one data packet. For example, for each video received in a coding cycle, the execution body may code the video as follows: the video, the equipment identification of the image acquisition equipment for acquiring the video and the video acquisition time of the video are directly combined into a data packet.
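The per-cycle packaging just described (directly combining a video with its device identifier and capture time) can be sketched as follows. The dict layout and field names are illustrative assumptions, not the patent's wire format:

```python
# Minimal sketch of the direct combination step: one "packet" per video
# received in a compilation cycle, bundling the raw video bytes with the
# device identifier and the shared capture time of the cycle.
def make_packet(video: bytes, device_id: str, capture_time: str) -> dict:
    return {
        "device_id": device_id,
        "capture_time": capture_time,
        "data": video,
    }

# One cycle's worth of received videos, keyed by (assumed) device identifier.
received = {"001": b"frame-bytes-001", "002": b"frame-bytes-002"}
packets = [make_packet(v, dev, "20201221092012") for dev, v in received.items()]
print(len(packets))  # 2
```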
And 103, combining the obtained multiple data packets into a video file according to the equipment identification and the video acquisition time included in each data packet, and storing the video file.
In this embodiment, the execution main body may integrate videos whose video capturing times belong to the same compiling cycle into the same video file. In practice, video files are usually indexed by video capture time, and in the video files, data packets are usually distinguished by device identification. The method is beneficial to quickly finding the data packet to be searched from the video file, so that the video to be viewed is obtained.
In this embodiment, after obtaining the video file, the execution body may store it. It should be noted that each compilation cycle yields one video file.
According to the method provided by the embodiment of the disclosure, one video file is obtained by processing a plurality of videos acquired by each image acquisition device, and when data is written into the storage medium, only the video file needs to be written into at the same time, that is, only one path of data needs to be written into at the same time, which is beneficial to improving the data writing speed. In addition, the data loss caused by multi-path data writing can be avoided, and the stability and the reliability of data storage are improved.
In some optional implementations of this embodiment, compiling the received videos to obtain a data packet for each video includes: for each of the received videos, generating the packet header of a data packet by combining the capture time of the video with the device identifier of the image capture device that captured it; using the video itself as the packet data of the data packet; and combining the obtained packet header and packet data to obtain the data packet for the video.
Here, the execution body may compile each video received in a compilation cycle to obtain a data packet for that video. Specifically, for each video, the execution body may first combine the capture time of the video and the device identifier of the image capture device that captured it into a packet header. As an example, suppose the capture time is 20201221092012 and the device identifier is A; then "A" may be used as one header field and "20201221092012" as another, and the two fields together form the packet header. The execution body may then take the video itself as the packet data, and the combination of packet header and packet data is the data packet for the video.
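A binary sketch of this first variant, with the header carrying the two fields (device identifier and 14-digit capture time) followed by the video as packet data. The field widths and the length field are our assumptions; the patent does not fix a byte layout:

```python
import struct

# Assumed header layout: 8-byte device id, 14-byte capture time,
# 4-byte payload length (little-endian, no padding).
DEV_LEN, TIME_LEN = 8, 14
HEADER = struct.Struct(f"<{DEV_LEN}s{TIME_LEN}sI")

def pack(video: bytes, device_id: str, capture_time: str) -> bytes:
    header = HEADER.pack(device_id.encode().ljust(DEV_LEN, b"\0"),
                         capture_time.encode(), len(video))
    return header + video  # header fields + video as packet data

def unpack(packet: bytes):
    dev, ts, size = HEADER.unpack_from(packet)
    body = packet[HEADER.size:HEADER.size + size]
    return dev.rstrip(b"\0").decode(), ts.decode(), body

pkt = pack(b"raw-video", "A", "20201221092012")
print(unpack(pkt))  # ('A', '20201221092012', b'raw-video')
```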
The realization mode can conveniently and rapidly obtain the data packet aiming at each video, and is beneficial to improving the data processing efficiency.
In some optional implementation manners of this embodiment, the compiling the received multiple videos to obtain data packets for the videos includes: aiming at a video in a plurality of received videos, combining and generating a packet header of a data packet according to the video acquisition time of the video and the equipment identification of image acquisition equipment for acquiring the video; using the basic code stream corresponding to the video as the packet data of the data packet; and combining the obtained packet header and the packet data to obtain a data packet for the video.
Here, the execution body may obtain a data packet for each video in another way. As before, for each video it first combines the capture time and the device identifier into a packet header; as an example, with capture time 20201221092012 and device identifier A, the two can be concatenated directly to give "A-20201221092012" as the packet header. The execution body then uses the Elementary Stream (ES) corresponding to the video as the packet data, and the combination of packet header and packet data is the data packet for the video. It should be noted that the elementary stream may be obtained with any existing or future technique, which this embodiment does not limit.
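A sketch of this second variant: the header is the device identifier and capture time joined into one string, and the packet data is a compressed stream rather than the raw video. Note the substitution: `zlib` stands in here for a real video encoder producing an elementary stream, purely to illustrate the size reduction the next paragraph describes:

```python
import zlib

# Second packaging variant (sketch): concatenated header string plus a
# compressed payload standing in for the elementary stream.
def pack_es(video: bytes, device_id: str, capture_time: str):
    header = f"{device_id}-{capture_time}"
    es = zlib.compress(video)  # stand-in for the video's elementary stream
    return header, es

raw = b"raw-video" * 100
header, es = pack_es(raw, "A", "20201221092012")
print(header)              # A-20201221092012
print(len(es) < len(raw))  # True: the compressed payload is smaller
```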
In this implementation, since the elementary stream corresponding to the video is data obtained by compressing the video, the data obtained by the compression is usually much smaller than the video itself. Therefore, the elementary stream corresponding to the video is used as the packet data of the data packet, and the size of the data packet corresponding to the video can be effectively reduced. Which contributes to saving storage space.
In an optional implementation manner of each embodiment of the present disclosure, the combining the obtained multiple data packets into one video file according to the device identifier and the video capture time included in each data packet includes: and generating file names according to the video acquisition time, using the obtained multiple data packets as file contents, and determining the combination of the file contents and the generated file names as a video file.
Here, since the captured video is generally continuous, the video capturing time generally refers to the end time of each coding cycle. Therefore, the video acquisition time of each video acquired in the same compiling period is the same.
In this implementation, the execution body may first create a video file and then generate a file name for it; as an example, the capture time may be used directly as the file name. The file content is then assigned: the execution body may directly use the data packets obtained from the videos received in that compilation cycle as the file content. As an example, if 5 videos are received in the current cycle, 5 data packets are obtained, and the file content consists of those 5 packets. Finally, combining the file content and the file name yields the video file.
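The file-assembly step can be sketched as below: the file name is the cycle's shared capture time, and the content is that cycle's packets, one per device. JSON Lines and the `.vf` suffix are illustrative assumptions; the patent does not prescribe a container format:

```python
import json
import tempfile
from pathlib import Path

# Sketch: write one video file per compilation cycle, named by capture time,
# containing one packet (line) per device.
def write_cycle_file(directory: Path, capture_time: str, packets: list) -> Path:
    path = directory / f"{capture_time}.vf"
    with path.open("w") as f:
        for p in packets:
            f.write(json.dumps(p) + "\n")
    return path

# Five videos received in the cycle -> five packets -> one file.
packets = [{"device_id": d, "capture_time": "20201221092012", "data": f"video-{d}"}
           for d in ("001", "002", "003", "004", "005")]
out = write_cycle_file(Path(tempfile.mkdtemp()), "20201221092012", packets)
print(out.name)  # 20201221092012.vf
```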
The realization mode can realize that a plurality of videos collected by each image collecting device are integrated into the same video file. The obtained video file is named by taking video acquisition time as name, and in the video file, each data packet can be distinguished through equipment identification. The data packets to be searched can be quickly found from the video file, so that the video to be viewed is obtained.
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating a data storage method according to an embodiment of the disclosure. The data storage method comprises the following steps:
step 201, receiving videos collected by a plurality of image collecting devices which are connected in a communication mode.
Step 202, compiling the received videos respectively according to a preset compiling period to obtain data packets for each video.
The obtained data packet comprises the equipment identification corresponding to the image acquisition equipment and the video acquisition time corresponding to the video.
Step 203, combining the obtained multiple data packets into a video file according to the device identifier and the video acquisition time included in each data packet, and storing the video file.
In the present embodiment, the specific operations of steps 201-203 are substantially the same as the operations of steps 101-103 in the embodiment shown in fig. 1, and are not repeated herein.
Step 204, in response to receiving a first video playing request including request information and input by a user, searching a data packet where the request information is located from the stored multiple video files, and playing a video corresponding to the searched data packet.
Wherein the request information comprises any one or more of the following items: equipment identification and video acquisition time.
The first video playing request is generally information for requesting to play a video, and the first video playing request includes the request information.
In this embodiment, the execution body may receive a first video playing request input by a user. After receiving it, the execution body may search the currently stored video files for the data packet matching the request information, and then play the video corresponding to that data packet. For example, if the packet data of the data packet is the video itself, the video is played directly; if the packet data is the elementary stream of the video, the elementary stream is first decoded back into the video and then played.
As an example, when the executing entity executes the step of searching for the data packet where the request information is located from the stored multiple video files, if the request information is the device identifier, the executing entity may search for the data packet where the device identifier is located from each currently stored video file, and may obtain multiple data packets. Therefore, the execution main body can play the videos corresponding to the data packets according to the sequence of the video acquisition time.
As another example, when the executing main body executes the step of searching for the data packet in which the request information is located from the stored multiple video files, if the request information is the device identifier and the video capture time, the executing main body may search for the video file in which the video capture time is located from the currently stored multiple video files, and then find out the data packet in which the device identifier is located from the searched video file, so as to obtain one data packet. Thus, the execution main body can directly play the video corresponding to the data packet.
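The two-key lookup in this second example (capture time selects the file, device identifier selects the packet inside it) can be sketched as follows. The JSON-lines layout, `.vf` suffix, and field names are illustrative assumptions, not the patent's storage format:

```python
import json
import tempfile
from pathlib import Path

# Sketch: the capture time is the file name, so the file lookup is direct;
# the device identifier then selects the packet within that file.
def find_packet(directory: Path, device_id: str, capture_time: str):
    path = directory / f"{capture_time}.vf"
    if not path.exists():
        return None
    with path.open() as f:
        for line in f:
            packet = json.loads(line)
            if packet["device_id"] == device_id:
                return packet
    return None

# Build one sample stored file, then query it.
d = Path(tempfile.mkdtemp())
(d / "20201221092012.vf").write_text(
    json.dumps({"device_id": "A", "capture_time": "20201221092012",
                "data": "video-A"}) + "\n")
hit = find_packet(d, "A", "20201221092012")
print(hit["data"])                            # video-A
print(find_packet(d, "B", "20201221092012"))  # None
```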
In this embodiment, each data packet in the video file includes the device identifier and the video capture time, so that the execution main body can quickly and accurately find the data packet required by the user when receiving the video playing request of the user, and thus quickly and accurately play the video that the user wants to view. The video query speed and the video playing speed can be improved.
Referring further to fig. 3, fig. 3 is a schematic flow chart of a data storage method according to an embodiment of the present disclosure. The data storage method comprises the following steps:
step 301, receiving videos collected by a plurality of image collecting devices connected in communication.
Step 302, compiling the received videos respectively according to a preset compiling period to obtain data packets for each video.
The obtained data packet comprises the equipment identification corresponding to the image acquisition equipment and the video acquisition time corresponding to the video.
Step 303, combining the obtained multiple data packets into a video file according to the device identifier and the video acquisition time included in each data packet, and storing the video file.
In the present embodiment, the specific operations of steps 301-303 are substantially the same as the operations of steps 101-103 in the embodiment shown in fig. 1, and are not described herein again.
Step 304, in response to receiving a second video playing request which is input by a user and comprises N device identifications, creating N windows; searching a data packet where each equipment identifier is located from a plurality of stored video files; and presenting videos corresponding to the data packets under the N device identifications in the created N windows.
Wherein, a window presents a video corresponding to the data packet under a device identifier. N is an integer greater than 1.
The second video playing request is generally information for requesting to play a video, and the second video playing request includes N device identifiers.
In this embodiment, the execution body may receive a second video playing request input by the user. Upon receiving it, the execution body may first create as many windows as there are device identifiers in the request, i.e. N windows, using any existing or future technique. Then, for each device identifier, the execution body may search each currently stored video file for the data packets containing that identifier, select one of the N created windows, and play the videos corresponding to those data packets in that window, in order of capture time. For example, if the packet data of a data packet is the video itself, the video is played directly; if the packet data is the elementary stream of the video, the elementary stream is first decoded back into the video and then played.
In this embodiment, each data packet in the video file includes the device identifier and the video capture time, so that the execution main body can quickly and accurately find the data packet required by the user when receiving the video playing request of the user, and thus quickly and accurately play the video that the user wants to view. The video query speed and the video playing speed can be improved. In addition, in the present embodiment, if the user wants to view videos from a plurality of image capturing apparatuses at the same time, the user may input a second play request including a plurality of apparatus identifications to the execution main body. In this way, the execution main body can search the data packet where each device identifier is located for each device identifier at the same time, so that videos from different image acquisition devices are presented in different windows. The method is beneficial to realizing the synchronous playing of multiple paths of videos.
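The gathering step behind this N-window playback can be sketched as below: scan every stored file, collect the packets for each requested device identifier, and order each device's packets by capture time, so each ordered list could feed one window. The JSON-lines layout and `.vf` suffix are illustrative assumptions:

```python
import json
import tempfile
from collections import defaultdict
from pathlib import Path

# Sketch: one ordered packet list per requested device identifier,
# ready to be presented in its own window.
def collect_streams(directory: Path, device_ids: list) -> dict:
    streams = defaultdict(list)
    for path in directory.glob("*.vf"):
        for line in path.read_text().splitlines():
            packet = json.loads(line)
            if packet["device_id"] in device_ids:
                streams[packet["device_id"]].append(packet)
    for packets in streams.values():
        packets.sort(key=lambda p: p["capture_time"])  # playback order
    return dict(streams)

# Two cycles, two devices -> two per-window streams of two packets each.
d = Path(tempfile.mkdtemp())
for ts in ("20201221092012", "20201221092013"):
    lines = [json.dumps({"device_id": dev, "capture_time": ts,
                         "data": f"{dev}@{ts}"}) for dev in ("A", "B")]
    (d / f"{ts}.vf").write_text("\n".join(lines) + "\n")
streams = collect_streams(d, ["A", "B"])
print(sorted(streams), [len(v) for _, v in sorted(streams.items())])  # ['A', 'B'] [2, 2]
```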
With further reference to fig. 4, as an implementation of the method shown in fig. 1, the present disclosure provides an embodiment of a data storage device. This device embodiment corresponds to the method embodiment shown in fig. 1, and the device is particularly applicable to various electronic devices.
As shown in fig. 4, the data storage device of this embodiment includes: a video receiving unit 401 configured to receive videos acquired by a plurality of communicatively connected image acquisition devices; a video compiling unit 402 configured to compile each of the received videos according to a preset compiling period to obtain a data packet for each video, where each obtained data packet includes the device identifier corresponding to the image acquisition device and the video acquisition time corresponding to the video; and a file storage unit 403 configured to combine the obtained plurality of data packets into one video file according to the device identifier and the video acquisition time included in each data packet, and to store the video file.
In some optional implementations of this embodiment, compiling the received videos to obtain a data packet for each video includes: for a video among the received videos, generating a packet header of a data packet by combining the video acquisition time of the video with the device identifier of the image acquisition device that acquired the video; using the video itself as the packet data of the data packet; and combining the obtained packet header and packet data into a data packet for the video.
In some optional implementations of this embodiment, compiling the received videos to obtain a data packet for each video includes: for a video among the received videos, generating a packet header of a data packet by combining the video acquisition time of the video with the device identifier of the image acquisition device that acquired the video; using the basic code stream corresponding to the video as the packet data of the data packet; and combining the obtained packet header and packet data into a data packet for the video.
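A minimal sketch of the packet construction described in these two implementations follows. The fixed binary layout (field widths, byte order, header size) is an assumption for illustration only; the disclosure does not specify one.

```python
import struct

# Assumed header layout: 8-byte device identifier (null-padded ASCII),
# 8-byte acquisition timestamp, 4-byte payload length, all big-endian.
# The payload is either the video itself or its basic code stream.
HEADER = ">8sQI"

def build_packet(device_id: str, capture_time: int, payload: bytes) -> bytes:
    """Combine a packet header (device identifier + video acquisition time)
    with the packet data to form one data packet."""
    header = struct.pack(HEADER, device_id.encode().ljust(8, b"\0"),
                         capture_time, len(payload))
    return header + payload

def parse_packet(blob: bytes):
    """Recover (device_id, capture_time, payload) from a data packet."""
    size = struct.calcsize(HEADER)
    dev, ts, length = struct.unpack(HEADER, blob[:size])
    return dev.rstrip(b"\0").decode(), ts, blob[size:size + length]
```

Because the header carries both fields in a fixed position, a stored packet can later be matched against a device identifier or a time range without decoding its payload.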
In some optional implementations of this embodiment, combining the obtained plurality of data packets into one video file according to the device identifier and the video acquisition time included in each data packet includes: generating a file name according to the video acquisition time, using the obtained plurality of data packets as the file content, and determining the combination of the file content and the generated file name as a video file.
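The file-assembly step might look like the following sketch; the timestamp format and the ".vf" file extension are illustrative assumptions, not part of the disclosure.

```python
import time

def assemble_video_file(packets, capture_time):
    """Generate a file name from the video acquisition time and use the
    obtained data packets, in order, as the file content."""
    # Assumed naming convention, e.g. "20210108_120000.vf".
    name = time.strftime("%Y%m%d_%H%M%S", time.gmtime(capture_time)) + ".vf"
    content = b"".join(packets)
    return name, content
```

Naming the file after the acquisition time lets a later playback request narrow the search to files whose names fall in the requested time range before any packet is parsed.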
In some optional implementations of this embodiment, the apparatus may further include a video playing unit (not shown in the figure). The video playing unit may be configured to, in response to receiving a first video playing request including request information input by the user, search the stored plurality of video files for the data packets containing the request information and play the videos corresponding to the found data packets, where the request information includes any one or more of the following: a device identifier and a video acquisition time.
In some optional implementations of this embodiment, the video playing unit may be further configured to: create N windows in response to receiving a second video playing request including N device identifiers input by the user; search the stored plurality of video files for the data packets containing each device identifier; and present the videos corresponding to the data packets under the N device identifiers in the N created windows, where each window presents the video corresponding to the data packets under one device identifier.
With the device provided by the above embodiment of the present disclosure, the plurality of videos acquired by the respective image acquisition devices are processed into one video file, so that when data is written to the storage medium, only this one video file needs to be written at any given time; that is, only one path of data is written at a time, which helps improve the data writing speed. In addition, data loss caused by writing multiple paths of data simultaneously can be avoided, improving the stability and reliability of data storage.
Referring now to FIG. 5, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device may include a processing means (e.g., a Central Processing Unit (CPU), a graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device. The processing means 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, image capture device, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the steps of: receiving videos collected by a plurality of image collecting devices in communication connection; compiling the received videos respectively according to a preset compiling period to obtain a data packet for each video, wherein the obtained data packet comprises an equipment identifier corresponding to the image acquisition equipment and video acquisition time corresponding to the video; and combining the obtained multiple data packets into a video file according to the equipment identification and the video acquisition time included by each data packet, and storing the video file.
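Putting the three steps together, one compile period of the method could be sketched as below. The text-based packet framing and the single-writer loop are illustrative assumptions chosen to show the stated benefit: the combined file is written over a single path.

```python
import io

def store_one_period(videos_by_device, capture_time, out):
    """One compile period: packetize each device's video, combine the
    packets into one video file, and write it in a single sequential pass,
    so only one path of data is written to the storage medium at a time.

    `videos_by_device` maps a device identifier to the video bytes it
    produced during this period; `out` is any writable binary stream.
    """
    packets = []
    for device_id, video in sorted(videos_by_device.items()):
        # Minimal illustrative framing: id | timestamp | length | payload.
        header = f"{device_id}|{capture_time}|{len(video)}|".encode()
        packets.append(header + video)
    out.write(b"".join(packets))  # single write path to the storage medium

# Example: two cameras' output for one period lands in one buffer/file.
buf = io.BytesIO()
store_one_period({"cam1": b"v1", "cam2": b"v2"}, 1700000000, buf)
```

Because every device's packets are merged before the write, the storage medium never sees concurrent writes from multiple video channels.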
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a video receiving unit, a video compiling unit, and a file storing unit. The names of these units do not in some cases constitute a limitation on the unit itself, and for example, a video receiving unit may also be described as a "unit that receives video captured by a plurality of image capturing apparatuses that are communicatively connected".
The foregoing description presents only the preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with features having similar functions disclosed in (but not limited to) the present disclosure.

Claims (10)

1. A method of data storage, wherein the method comprises:
receiving videos collected by a plurality of image collecting devices in communication connection;
compiling the received videos respectively according to a preset compiling period to obtain a data packet for each video, wherein the obtained data packet comprises an equipment identifier corresponding to the image acquisition equipment and video acquisition time corresponding to the video;
and combining the obtained plurality of data packets into a video file according to the equipment identification and the video acquisition time included by each data packet, and storing the video file.
2. The method of claim 1, wherein the compiling the received plurality of videos to obtain data packets for each video comprises:
for a video among the received plurality of videos, generating a packet header of a data packet by combining the video acquisition time of the video and the device identifier of the image acquisition device that acquired the video; using the video as the packet data of the data packet; and combining the obtained packet header and packet data to obtain a data packet for the video.
3. The method of claim 1, wherein the compiling the received plurality of videos to obtain data packets for each video comprises:
for a video among the received plurality of videos, generating a packet header of a data packet by combining the video acquisition time of the video and the device identifier of the image acquisition device that acquired the video; using the basic code stream corresponding to the video as the packet data of the data packet; and combining the obtained packet header and packet data to obtain a data packet for the video.
4. The method according to claim 1, wherein said combining the obtained plurality of data packets into one video file according to the device identifier and the video capture time included in each data packet comprises:
and generating file names according to the video acquisition time, using the obtained multiple data packets as file contents, and determining the combination of the file contents and the generated file names as the video file.
5. The method according to any one of claims 1-4, wherein the method further comprises:
in response to receiving a first video playing request which is input by a user and comprises request information, searching a data packet where the request information is located from a plurality of stored video files, and playing a video corresponding to the searched data packet, wherein the request information comprises any one or more of the following items: equipment identification and video acquisition time.
6. The method according to any one of claims 1-4, wherein the method further comprises:
in response to receiving a second video playing request comprising N device identifications input by a user, creating N windows; searching a data packet where each equipment identifier is located from a plurality of stored video files; and presenting videos corresponding to the data packets under the N device identifications in the created N windows, wherein one window presents the video corresponding to the data packet under one device identification.
7. A data storage apparatus, wherein the apparatus comprises:
a video receiving unit configured to receive videos captured by a plurality of image capturing devices communicatively connected;
the video compiling unit is configured to compile the received videos respectively according to a preset compiling period to obtain a data packet for each video, wherein the obtained data packet comprises an equipment identifier corresponding to the image acquisition equipment and video acquisition time corresponding to the video;
and the file storage unit is configured to combine the obtained multiple data packets into one video file according to the equipment identification and the video acquisition time included in each data packet, and store the video file.
8. The apparatus of claim 7, wherein the apparatus further comprises:
the video playing unit is configured to respond to a first video playing request which is input by a user and comprises request information, search a data packet where the request information is located from a plurality of stored video files, and play a video corresponding to the searched data packet, wherein the request information comprises any one or more of the following items: equipment identification and video acquisition time.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202110022623.9A 2021-01-08 2021-01-08 Data storage method and device Pending CN112866640A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110022623.9A CN112866640A (en) 2021-01-08 2021-01-08 Data storage method and device


Publications (1)

Publication Number Publication Date
CN112866640A true CN112866640A (en) 2021-05-28

Family

ID=76005351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110022623.9A Pending CN112866640A (en) 2021-01-08 2021-01-08 Data storage method and device

Country Status (1)

Country Link
CN (1) CN112866640A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103179435A (en) * 2013-02-27 2013-06-26 北京视博数字电视科技有限公司 Multi-channel video data multiplexing method and device
CN108287668A (en) * 2018-01-25 2018-07-17 深圳市智物联网络有限公司 Processing method and processing device, computer installation and the readable storage medium storing program for executing of device data
CN110677623A (en) * 2019-10-15 2020-01-10 北京百度网讯科技有限公司 Data processing method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
US20230093621A1 (en) Search result display method, readable medium, and terminal device
CN111629251B (en) Video playing method and device, storage medium and electronic equipment
CN111897740A (en) User interface testing method and device, electronic equipment and computer readable medium
CN112632323A (en) Video playing method, device, equipment and medium
CN111625422B (en) Thread monitoring method, thread monitoring device, electronic equipment and computer readable storage medium
CN115203004A (en) Code coverage rate testing method and device, storage medium and electronic equipment
US20230139416A1 (en) Search content matching method, and electronic device and storage medium
CN111355995A (en) Method and device for determining sound delay time of Bluetooth device and terminal device
CN112102836B (en) Voice control screen display method and device, electronic equipment and medium
WO2023098576A1 (en) Image processing method and apparatus, device, and medium
CN110414625B (en) Method and device for determining similar data, electronic equipment and storage medium
CN111669625A (en) Processing method, device and equipment for shot file and storage medium
CN112866640A (en) Data storage method and device
WO2023134617A1 (en) Template selection method and apparatus, and electronic device and storage medium
CN114584709B (en) Method, device, equipment and storage medium for generating zooming special effects
CN112287171A (en) Information processing method and device and electronic equipment
CN113556480A (en) Vehicle continuous motion video generation method, device, equipment and medium
CN113473236A (en) Processing method and device for screen recording video, readable medium and electronic equipment
CN111694875B (en) Method and device for outputting information
CN112015746A (en) Data real-time processing method, device, medium and electronic equipment
CN111666449A (en) Video retrieval method, video retrieval device, electronic equipment and computer readable medium
CN112311842A (en) Method and device for information interaction
CN110825697B (en) Method and apparatus for formatting a storage device
CN114690988B (en) Test method and device and electronic equipment
CN111625432B (en) Page loading time consumption determination method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination