CN111400544A - Video data storage method, device, equipment and storage medium - Google Patents

Video data storage method, device, equipment and storage medium

Info

Publication number
CN111400544A
CN111400544A (application CN201911240631.XA)
Authority
CN
China
Prior art keywords
video data
video
association
time range
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911240631.XA
Other languages
Chinese (zh)
Other versions
CN111400544B (en)
Inventor
胡威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd
Priority to CN201911240631.XA
Publication of CN111400544A
Application granted
Publication of CN111400544B
Status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Abstract

The application discloses a video data storage method, device, equipment and storage medium, and belongs to the technical field of data storage. The method includes: acquiring I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the positions of the I frames included in that path of video data, and the video retrieval information of each path of video data is at least used for indicating the times at which that path of video data is retrieved; determining the video data having an association relationship in the multiple paths of video data according to the I frame information and/or the video retrieval information of the multiple paths of video data; and storing the multiple paths of video data, and correspondingly storing the video identifiers of the video data having the association relationship. In this way, the association relationship among the multiple paths of video data can be dynamically determined according to the I frame information and/or the video retrieval information, and the video data can be stored according to the association relationship, so that the available storage modes for the video data are enriched and the flexibility of video data storage is improved.

Description

Video data storage method, device, equipment and storage medium
Technical Field
The present application relates to the field of data storage technologies, and in particular, to a method, an apparatus, a device, and a storage medium for storing video data.
Background
Currently, multiple paths of video data can be collected from different directions by multiple cameras and then stored. Since the multiple paths of video data usually have a certain relationship, in order to facilitate subsequent operations on the multiple paths of video data, such as querying the multiple paths of video data, the association relationship among the multiple cameras may be preset, so that when the multiple paths of video data are stored, the video data recorded by the cameras with the association relationship are stored in association.
However, in the above implementation, the association relationship between the multiple paths of video data is preset, which results in a single video storage manner.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for storing video data, which can solve the problem that the storage mode of the video data in the related technology is single. The technical scheme is as follows:
in one aspect, a video data storage method is provided, and the method includes:
acquiring I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the time for which each path of video data is retrieved;
determining video data with an association relation in the multi-channel video data according to the I frame information and/or the video retrieval information of the multi-channel video data;
and storing the multi-channel video data, and correspondingly storing the video identification of the video data with the association relationship.
In a possible implementation manner of the present application, the determining, according to the I-frame information and/or the video retrieval information of the multiple paths of video data, video data having an association relationship in the multiple paths of video data includes:
for first video data in the multiple paths of video data, when it is determined, according to the I frame information of the first video data, that the number of I frames in the first video data within a first time range is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the video data other than the first video data in the multiple paths of video data, that second video data exists in the multiple paths of video data, it is determined that the first video data and the second video data have an association relationship, where the first time range and a second time range have a time intersection, the second video data is video data in which the number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one of the multiple paths of video data;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data is retrieved within a third time range, if it is determined, according to the video retrieval information of the video data other than the first video data in the multiple paths of video data, that third video data exists in the multiple paths of video data, it is determined that the first video data and the third video data have an association relationship, where a time intersection exists between the third time range and a fourth time range, and the third video data refers to video data retrieved within the fourth time range.
In a possible implementation manner of the present application, after correspondingly storing the video identifier of the video data having the association relationship, the method further includes:
determining a first score value of the first video data according to the total number of retrieval times and/or the lock status of the first video data, wherein the lock status includes a locked state or an unlocked state;
inquiring whether video data stored in association with the first video data exists in the multi-channel video data;
when there is video data stored in association with the first video data, the first score value is taken as an associated score value of the video data stored in association with the first video data.
In one possible implementation manner of the present application, the method further includes:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first video data in a first time range and the number of I frames in the second video data in a second time range;
and updating the first scoring value of the first video data and the associated scoring value of the video data stored in association with the first video data according to the association degree.
In a possible implementation manner of the present application, the determining a first score value of the first video data according to the total number of times of retrieval and the lock status of the first video data includes:
multiplying the total number of retrieval times of the first video data by a first scoring threshold corresponding to the number of retrieval times to obtain a retrieval score value of the first video data;
and determining the sum of the retrieval score value of the first video data and a second score threshold corresponding to the lock state to obtain a first score value of the first video data.
In one possible implementation manner of the present application, the method further includes:
detecting a current video storage state, wherein the video storage state comprises storage time and/or storage amount;
when the video storage state meets the cyclic coverage condition of video storage, determining a second score value of the multi-channel video data;
and deleting a first number threshold of paths of video data whose second score values rank lowest among the multiple paths of video data.
In one possible implementation manner of the present application, the determining the second score value of the multi-channel video data includes:
for first video data in the multi-path video data, determining a first score value of the first video data based on the total retrieval times and/or the lock state of the first video data;
detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one of the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association score value, wherein the association score value is determined at least based on the total retrieval times and/or the lock state of the second video data;
and determining the sum of the first score value of the first video data and the associated score value to obtain a second score value of the first video data.
In another aspect, a video data storage apparatus is provided, the apparatus comprising:
the acquisition module is used for acquiring I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the time for retrieving each path of video data;
the determining module is used for determining video data with incidence relation in the multi-channel video data according to the I frame information and/or the video retrieval information of the multi-channel video data;
and the storage module is used for storing the multi-channel video data and correspondingly storing the video identification of the video data with the association relationship.
In one possible implementation manner of the present application, the determining module is configured to:
for first video data in the multiple paths of video data, when it is determined, according to the I frame information of the first video data, that the number of I frames in the first video data within a first time range is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the video data other than the first video data in the multiple paths of video data, that second video data exists in the multiple paths of video data, it is determined that the first video data and the second video data have an association relationship, where the first time range and a second time range have a time intersection, the second video data is video data in which the number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one of the multiple paths of video data;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data is retrieved within a third time range, if it is determined, according to the video retrieval information of the video data other than the first video data in the multiple paths of video data, that third video data exists in the multiple paths of video data, it is determined that the first video data and the third video data have an association relationship, where a time intersection exists between the third time range and a fourth time range, and the third video data refers to video data retrieved within the fourth time range.
In one possible implementation manner of the present application, the storage module is further configured to:
determining a first score value of the first video data according to the total number of retrieval times and/or the lock status of the first video data, wherein the lock status includes a locked state or an unlocked state;
inquiring whether video data stored in association with the first video data exists in the multi-channel video data;
when there is video data stored in association with the first video data, the first score value is taken as an associated score value of the video data stored in association with the first video data.
In one possible implementation manner of the present application, the storage module is further configured to:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first video data in a first time range and the number of I frames in the second video data in a second time range;
and updating the first scoring value of the first video data and the associated scoring value of the video data stored in association with the first video data according to the association degree.
In one possible implementation manner of the present application, the storage module is further configured to:
multiplying the total number of retrieval times of the first video data by a first scoring threshold corresponding to the number of retrieval times to obtain a retrieval score value of the first video data;
and determining the sum of the retrieval score value of the first video data and a second score threshold corresponding to the lock state to obtain a first score value of the first video data.
In one possible implementation manner of the present application, the storage module is further configured to:
detecting a current video storage state, wherein the video storage state comprises storage time and/or storage amount;
when the video storage state meets the cyclic coverage condition of video storage, determining a second score value of the multi-channel video data;
and deleting a first number threshold of paths of video data whose second score values rank lowest among the multiple paths of video data.
In one possible implementation manner of the present application, the storage module is further configured to:
for first video data in the multi-path video data, determining a first score value of the first video data based on the total retrieval times and/or the lock state of the first video data;
detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one of the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association score value, wherein the association score value is determined at least based on the total retrieval times and/or the lock state of the second video data;
and determining the sum of the first score value of the first video data and the associated score value to obtain a second score value of the first video data.
In another aspect, an electronic device is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of any one of the methods of the above aspects.
In another aspect, a computer-readable storage medium is provided, having instructions stored thereon, which when executed by a processor, implement the steps of any of the methods of the above aspects.
In another aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform the steps of any of the methods of the above aspects.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
the method comprises the steps of obtaining I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the time when each path of video data is retrieved. And determining video data with an association relation in the multi-channel video data according to the I frame information and/or the video retrieval information of the multi-channel video data, and correspondingly storing the video identification of the video data with the association relation while storing the multi-channel video data. Therefore, the incidence relation among the multi-channel video data can be dynamically determined according to the I frame information and/or the video retrieval information, and the video data can be stored according to the incidence relation, so that the storage mode of the video data is increased, and the flexibility of the storage of the video data is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow diagram illustrating a method of video storage according to an exemplary embodiment;
FIG. 2 is a block diagram illustrating a framework for video data storage according to an exemplary embodiment;
FIG. 3 is a diagram illustrating an association between video data according to an exemplary embodiment;
FIG. 4 is a diagram illustrating an association between video data according to another exemplary embodiment;
FIG. 5 is a block diagram illustrating a framework for video data storage according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating the structure of a video storage device in accordance with an exemplary embodiment;
fig. 7 is a schematic structural diagram of an electronic device according to another exemplary embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the reference to "at least one" in the embodiments of the present application means one or more; the reference to "comprising" means that the inclusion is not exclusive, i.e. other elements may be included in addition to the elements mentioned; and the reference to "A and/or B" means either or both of A and B.
Before describing the video storage method provided by the embodiment of the present application in detail, the terms and implementation environments related to the embodiment of the present application will be briefly described.
First, terms related to the embodiments of the present application will be briefly described.
I Frame (Intra Coded Frame): an I frame can be decoded into a complete image independently, without depending on other frames. When the picture changes greatly, the number of I frames is relatively large and the number of P frames and B frames is relatively small; when the picture changes slightly or is basically unchanged, the number of I frames is relatively small and the number of P frames and B frames is relatively large.
P Frame (Predictive Coded Frame): a P frame can be decoded into a complete image only by depending on the difference with the last decodable frame (such as an I frame); because only the difference data with respect to the last decodable frame is kept, the data volume is small, and transmission bandwidth and storage capacity can be greatly saved.
B Frame (Bidirectionally Coded Frame): a B frame stores the differences between itself and the decodable frames before and after it, and can be decoded into a complete image only together with the data of those other frames; like the P frame, the data volume of the B frame is very small, so transmission bandwidth and storage capacity can be greatly saved.
Cyclic coverage: since the hardware capacity for storing video data is limited, the storage capacity will be exhausted as new video data is continuously written; therefore, in order to ensure that new video data can be written, some of the stored video data needs to be overwritten, that is, some video data needs to be deleted. Of course, the condition for triggering cyclic coverage may also be related to the storage time of the video data; for example, when video data has been stored for a certain time (e.g. 30 days), the earlier video data needs to be overwritten by the newly written video data to ensure that new video data can continue to be stored, and typically the video data whose storage time is farthest from the current time is overwritten.
Data locking: when a user considers certain video data important and does not want it to be deleted during cyclic coverage, the video data can be locked; locked video data is generally not deleted when cyclic coverage is executed.
Dynamic association: by dynamically analyzing each path of the multiple paths of video data, the association relationship among the multiple paths of video data can be continuously confirmed and corrected as time changes.
Next, an implementation environment according to the embodiment of the present application will be briefly described.
The video data storage method provided by the embodiment of the application can be executed by the electronic equipment. As an example, the electronic device may be configured with multiple cameras by itself to capture multiple channels of video data through the configured multiple cameras. As another example, the electronic device may also be connected with a plurality of cameras to capture multiple channels of video data through the connected plurality of cameras. Further, the electronic device can be configured with a video acquisition module, and the configured video acquisition module is used for controlling the plurality of cameras to acquire the plurality of paths of video data.
In some embodiments, the electronic device may further include a storage module and a cache module, the storage module may include at least one hard disk, and the cache module may be configured to cache some temporary video data. Further, the electronic device may be provided with a user interaction interface, so that a user may lock, retrieve, and otherwise operate on the stored video data through the user interaction interface. By way of example, the electronic device may be an in-vehicle device, a computer device, and the like, which is not limited in this application.
After the terms and implementation environments related to the embodiments of the present application are described, a detailed description will be given below of a video storage method provided by the embodiments of the present application with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating a video storage method according to an exemplary embodiment, where the video storage method may be applied to the electronic device, and the method may include the following implementation steps:
step 101: the method comprises the steps of obtaining I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the time when each path of video data is retrieved.
As an example, the I-frame information may include generation time stamp information of each I-frame in the multi-path video data, and in fact, the time stamp information may be used to determine the position of the corresponding I-frame in the video data.
As an example, the video retrieval information may include time information corresponding to when each of the plurality of paths of video data is retrieved.
That is, the electronic device determines the generation times of the I frames in each path of video data; or determines the times at which each path of video data is retrieved; or determines both the generation times of the I frames in each path of video data and the times at which each path of video data is retrieved.
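As a purely illustrative sketch (not part of the patent), the acquired information might be organized as follows in Python; the path identifiers and field layout are assumptions made only for this example:

```python
# Hypothetical shapes of the acquired information; the path identifiers
# and field layout are assumptions made only for illustration.

# I frame information: generation timestamps (in seconds) of the I frames
# in each path of video data, i.e. their positions in the stream.
i_frame_info = {
    "cam_a": [0.0, 2.0, 2.5, 3.0, 10.0],   # many I frames around t = 2-3 s
    "cam_b": [0.5, 2.2, 2.8, 3.1, 12.0],
    "cam_x": [0.0, 8.0, 16.0],
}

# Video retrieval information: the times (in seconds) at which each path
# of video data was retrieved.
video_retrieval_info = {
    "cam_a": [100.0, 350.0],
    "cam_b": [102.0],
    "cam_x": [900.0],
}
```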
Step 102: and determining the video data with the association relation in the multi-channel video data according to the I frame information and/or the video retrieval information of the multi-channel video data.
That is, the electronic device may detect whether the multiple paths of video data have an association relationship and which video data have an association relationship therebetween according to the I-frame information of the multiple paths of video data. Alternatively, the electronic device may also detect whether the multiple paths of video data have an association relationship and which video data have an association relationship therebetween according to the video retrieval information of the multiple paths of video data. Still alternatively, the electronic device may further detect whether the multiple paths of video data have an association relationship and which video data have an association relationship therebetween according to the I-frame information and the video retrieval information of the multiple paths of video data.
As an example, according to the I-frame information and/or the video retrieval information of the multiple paths of video data, the specific implementation of determining that there is an association relationship in the multiple paths of video data may include any one of the following manners:
the first implementation mode comprises the following steps: when determining the video data with the association relation in the multi-path video data according to the I frame information of the multi-path video data, for a first video data in the multi-path video data, when the number of I frames in the first video data in a first time range is determined to be greater than or equal to a first reference threshold value according to the I frame information of the first video data, if the second video data exists in the multi-path video data according to the I frame information of other video data except the first video data in the multi-path video data, determining that the first video data and the second video data have an association relationship, the first time range and the second time range have a time intersection, the second video data is video data in which the number of I frames in the second time range is greater than or equal to a second reference threshold, and the first video data is any one of the multiple paths of video data.
The first reference threshold may be set by a user according to actual needs, or may also be set by the electronic device by default, which is not limited in the embodiment of the present application.
The second reference threshold may be set by a user according to actual needs, or may be set by the electronic device by default, which is not limited in the embodiment of the present application.
Because I frames carry a large amount of picture information, a large number of I frames indicates that the monitored picture changes greatly. Therefore, when the number of I frames of the first video data within the first time range is large, the picture captured within the first time range by the first camera recording the first video data changes greatly. If there is second video data with many I frames in a second time range near the first time range in the multiple paths of video data, it indicates that the picture captured within the second time range by the second camera recording the second video data also changes greatly. At this time, it can be determined that the first video data and the second video data have a certain association relationship; for example, the first video data and the second video data may be recorded by two cameras that monitor the same scene from different directions. Therefore, it can be determined that the first video data and the second video data have an association relationship, and further, the first video data and the second video data can be taken as one association group. Thus, in this implementation, one or more association groups may be determined, as shown in fig. 3, where video data a and video data b are one association group, and video data x and video data y are another association group.
It is understood that the greater the time intersection between the first time range and the second time range, i.e. the closer the two time ranges are, the higher the degree of association between the first video data and the second video data; in addition, the more times the association between the first video data and the second video data occurs, the higher the degree of association.
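A minimal Python sketch of this first implementation mode, assuming the data layout sketched above; the function names, window size and reference thresholds are illustrative assumptions rather than values taken from the patent:

```python
def count_i_frames(timestamps, start, end):
    """Count the I frames whose generation time falls within [start, end]."""
    return sum(start <= t <= end for t in timestamps)

def i_frame_associations(i_frame_info, start, window=5.0,
                         first_reference_threshold=3,
                         second_reference_threshold=3):
    """Return pairs of paths whose I-frame counts both reach their reference
    thresholds in intersecting time ranges (an illustrative sketch)."""
    first_range = (start, start + window)
    # The second time range is chosen so that it intersects the first one.
    second_range = (start - window / 2, start + 1.5 * window)
    groups = []
    paths = list(i_frame_info)
    for i, a in enumerate(paths):
        if count_i_frames(i_frame_info[a], *first_range) < first_reference_threshold:
            continue
        for b in paths[i + 1:]:
            if count_i_frames(i_frame_info[b], *second_range) >= second_reference_threshold:
                groups.append((a, b))   # a and b form one association group
    return groups
```

With the example data sketched earlier, i_frame_associations(i_frame_info, start=2.0) would report ("cam_a", "cam_b") as one association group.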
The second implementation mode comprises the following steps: when it is determined, according to the video retrieval information of the first video data, that the first video data is retrieved within a third time range, if it is determined, according to the video retrieval information of the video data other than the first video data in the multiple paths of video data, that third video data exists in the multiple paths of video data, it is determined that the first video data and the third video data have an association relationship, where a time intersection exists between the third time range and a fourth time range, and the third video data refers to video data retrieved within the fourth time range.
That is, if the first video data is retrieved within a third time range and there is third video data retrieved within a fourth time range near the third time range, for example, if the third time range is [t1, t1+x] and the fourth time range is [t1-d1, t1+d1], it may be determined that there is some association relationship between the first video data and the third video data; therefore, the first video data and the third video data may be associated and may further be regarded as one association group. Thus, in this implementation, one or more association groups may be determined, as shown in fig. 4, where video data a and video data b are one association group, and video data x and video data y are another association group.
It should be noted that the third time range is only used as an example for description. In another embodiment, the association relationship may also be determined based on a time point; for example, when it is determined, according to the video retrieval information of the multiple paths of video data, that the first video data is retrieved at a target time and that the second video data is retrieved within a fourth time range, it is determined that an association relationship exists between the first video data and the second video data, where the fourth time range includes the target time. For example, if the target time is t2, the fourth time range is [t2-d2, t2+d2], and d2 can be set according to actual requirements, for example, d2 is 5 minutes. It is understood that the smaller d2 is, the higher the degree of association between the first video data and the second video data; in addition, the more times the association between the first video data and the second video data occurs, the higher the degree of association.
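A similarly hedged sketch of the second implementation mode, treating two paths as associated when a retrieval time of one falls within a window of width d around a retrieval time of the other; d plays the role of the d1/d2 parameters above and its value here is an assumption:

```python
def retrieval_associations(video_retrieval_info, d=300.0):
    """Pairs of paths that were retrieved within d seconds of each other
    (an illustrative sketch of the retrieval-based association)."""
    groups = []
    paths = list(video_retrieval_info)
    for i, a in enumerate(paths):
        for b in paths[i + 1:]:
            if any(abs(ta - tb) <= d
                   for ta in video_retrieval_info[a]
                   for tb in video_retrieval_info[b]):
                groups.append((a, b))
    return groups
```

With the example retrieval data above, only ("cam_a", "cam_b") would be reported, since they were retrieved about two seconds apart.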
The third implementation mode comprises the following steps: when determining video data having an association relationship in the multiple paths of video data according to the I-frame information and the video retrieval information of the multiple paths of video data, the specific implementation may include: for first video data in the multi-channel video data, determining a first numerical value corresponding to the first video data according to I frame information of the multi-channel video data, and determining a second numerical value corresponding to the first video data according to video retrieval information of the multi-channel video data, wherein the first numerical value is provided with a first weight value, and the second numerical value is provided with a second weight value. And performing preset operation based on the first numerical value, the second numerical value, the first weight value and the second weight value to obtain an operation numerical value, and then determining video data in the multi-channel video data, which has an association relation with the first video data, according to the operation numerical value.
Wherein, the first weight value and the second weight value can be set according to actual requirements.
For example, when it is determined, according to the I frame information of the first video data, that the number of I frames in the first video data within a first time range is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the other video data in the multiple paths of video data except the first video data, that second video data exists in the multiple paths of video data, a first value corresponding to the first video data is determined according to the number of I frames within the first time range (e.g., the number of I frames is taken as the first value), where the second video data is video data in which the number of I frames within a second time range is greater than or equal to a second reference threshold. When it is determined, according to the video retrieval information of the first video data, that the first video data is retrieved within a third time range, if it is determined, according to the video retrieval information of the other video data in the multiple paths of video data except the first video data, that third video data exists in the multiple paths of video data, a second value corresponding to the first video data is determined according to the number of times the first video data is retrieved within the third time range (for example, the number of times the first video data is retrieved is taken as the second value), where the third video data is video data retrieved within a fourth time range. Then, the first value is multiplied by the first weight value, the second value is multiplied by the second weight value, and the two products are added to obtain an operation value. If the obtained operation value is greater than a set threshold, it is determined that the first video data and the second video data have an association relationship.
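Read this way, the third implementation mode is a weighted combination of the two signals. A minimal sketch, assuming the number of I frames and the number of retrievals are the first and second values, and that the weights and the decision threshold are freely chosen parameters:

```python
def combined_association(i_frame_count, retrieval_count,
                         first_weight=0.6, second_weight=0.4,
                         decision_threshold=5.0):
    """Weighted combination of the I-frame signal (first value) and the
    retrieval signal (second value); association is assumed when the
    operation value exceeds the decision threshold (a sketch)."""
    operation_value = i_frame_count * first_weight + retrieval_count * second_weight
    return operation_value > decision_threshold
```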
Step 103: and storing the multi-channel video data, and correspondingly storing the video identification of the video data with the association relationship.
The video identifier may be used to uniquely identify a path of video data.
When it is determined that some paths of video data among the multiple paths of video data have an association relationship, the video identifiers of the video data having the association relationship may be stored correspondingly in the process of storing the multiple paths of video data; for example, a correspondence relationship between the video identifiers may be established, that is, each group of correspondence relationships corresponds to one association group.
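Correspondingly storing the video identifiers can be illustrated with a simple mapping from each identifier to the identifiers associated with it; the structure below is an assumption for illustration, not the patent's concrete storage format:

```python
from collections import defaultdict

# video identifier -> set of video identifiers stored in association with it
association_store = defaultdict(set)

def store_association(video_id_a, video_id_b):
    """Record that two paths of video data form one association group."""
    association_store[video_id_a].add(video_id_b)
    association_store[video_id_b].add(video_id_a)

def associated_with(video_id):
    """Query the video identifiers stored in association with video_id."""
    return association_store[video_id]
```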
Further, after the video identifiers of the video data with the association relationship are correspondingly stored, the following operations may also be included.
Step 104: a first score value of the first video data is determined based on the total number of times the first video data has been retrieved and/or its lock status, the lock status including a locked state or an unlocked state.
As mentioned above, the first video data is any one of the plurality of video data. The first score value may be used to indicate the importance of the first video data, that is, the importance of each video data in the plurality of video data may be determined according to the total number of times of retrieval and/or the lock status of the first video data.
Specifically, the first score value of each path of video data may be determined according to the total number of times of retrieval of each path of video data, or the first score value of each path of video data may also be determined according to the lock state of each path of video data, or the first score value of each path of video data may also be determined according to the total number of times of retrieval and the lock state of each path of video data.
As an example, determining the first score value of the first video data according to the total number of retrieval times and the lock status of the first video data may include: and multiplying the total retrieval times of the first video data by a first scoring threshold corresponding to the retrieval times to obtain a retrieval scoring value of the first video data, and determining the sum of the retrieval scoring value of the first video data and a second scoring threshold corresponding to the lock state to obtain the first scoring value of the first video data.
The first scoring threshold may be set by a user according to actual needs, or may be set by the electronic device by default, which is not limited in the embodiment of the present application.
The second scoring threshold may be set by a user according to actual needs, or may be set by the electronic device by default, which is not limited in the embodiment of the present application.
In addition, when the lock states are different, the corresponding second scoring thresholds are typically different. As an example, when the lock status is the locked state, it indicates that the corresponding video data is important and the user generally does not want it to be deleted during cyclic coverage, so the second scoring threshold corresponding to the locked state can be set to a relatively large value; conversely, when the lock status is the unlocked state, it indicates that the video data is less important, so the second scoring threshold corresponding to the unlocked state may be set to a relatively small value, for example, 0.
That is, the first score value of each video data is determined according to the total number of times of retrieval and the lock state of each video data. When the stored first video data is detected to be retrieved, the electronic device may count the total retrieval times of the first video data, and thus, when the first score value of the first video data needs to be determined, the electronic device may obtain the total retrieval times of the first video data, and multiply the total retrieval times by a first score threshold value to obtain a corresponding retrieval score value in a retrieval dimension. It is understood that the larger the total number of times of retrieval of the first video data, the more frequently the user retrieves the first video data, so that the larger the user's demand for the first video data can be determined, and the more important the first video data can be determined, and therefore, the retrieval score value can be used as an index for determining the first score value of the first video data. In addition, whether the first video data is locked or not can be considered, if the first video data is locked, the user does not want the first video data to be deleted, and therefore the importance of the first video data is large, and therefore the lock state can be used as another index for determining the first score value of the first video data. In implementation, the sum of the retrieval score value and the second scoring threshold corresponding to the lock status is determined as the first score value of the first video data.
For example, assuming that the first scoring threshold is 1, the second scoring threshold corresponding to the locked state is 70, and the second scoring threshold corresponding to the unlocked state is 0, then for first video data in the multiple paths of video data, if the total number of times the first video data has been retrieved is 10 and the lock status of the first video data is the locked state, it may be determined that the first score value of the first video data is 10 × 1 + 70 = 80.
It should be noted that, the above description is only given by taking the example of determining the first score value of each path of video data according to the total number of times of retrieval and the lock state of each path of video data. In another embodiment, when the first score value of each path of video data is determined according to the total retrieval times of each path of video data, the total retrieval times of each path of video data may be multiplied by a first score threshold corresponding to the retrieval times, and the result of the multiplication may be determined as the first score value of each path of video data. Or, in another embodiment, when the first score value of each path of video data is determined according to the lock state of each path of video data, the lock state of each path of video data may be detected, and the second score threshold corresponding to the lock state of each path of video data is determined as the first score value of each path of video data.
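Putting the worked example above into code, a hedged sketch of the first score value computation (the scoring thresholds are the assumed values from the example):

```python
def first_score(total_retrievals, locked,
                first_scoring_threshold=1, locked_scoring_threshold=70):
    """First score value = retrieval score value + lock score (a sketch)."""
    retrieval_score = total_retrievals * first_scoring_threshold
    lock_score = locked_scoring_threshold if locked else 0
    return retrieval_score + lock_score

assert first_score(10, True) == 80   # matches the worked example above
```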
Step 105: and inquiring whether video data stored in association with the first video data exists in the multi-path video data.
As described above, in the process of storing video data, the video identifiers of the video data having an association relationship are correspondingly stored, so that it can be queried from the stored association relationship whether there are other video identifiers having an association relationship with the video identifier of the first video data according to the video identifier of the first video data, if so, it is determined that there is video data stored in association with the first video data in the multiple paths of video data, otherwise, it is determined that there is no video data stored in association with the first video data in the multiple paths of video data.
Step 106: when there is video data stored in association with the first video data, the first score value is taken as an associated score value of the video data stored in association with the first video data.
In the stored multiple paths of video data, there may be second video data having an association relationship with the first video data, wherein the number of the second video data may be one or more. It is understood that if the first video data is important, the second video data having an association relationship with the first video data is also important, so that, in order to ensure consistency in importance among the plurality of paths of video data having an association relationship, after the first score value of the first video data is determined, the first score value can be used as the association score value of the second video data having an association relationship with the first video data, so that the existing second video data is as important as the first video data, as shown in fig. 2.
Further, according to the number of I frames in the first video data in a first time range and the number of I frames in the second video data in a second time range, the association degree between the first video data and the second video data is determined, and according to the association degree, the first scoring value of the first video data and the association scoring value of the video data stored in association with the first video data are updated.
In an implementation, if the number of I frames in the first video data within the first time range is greater than the first reference threshold, and the number of I frames in the second video data within the second time range is greater than the second reference threshold, it indicates that the picture of the first video data changes frequently within the first time range and the picture of the second video data changes frequently within the second time range; at this time, it may be determined that the degree of association is high. In this case, the first score value may be increased by a specified multiple, and the associated score value of the video data stored in association with the first video data may also be increased by the specified multiple, so as to increase the importance of the first video data and the second video data.
The specified multiple may be set by a user according to actual needs, or may be set by the electronic device by default, which is not limited in the embodiment of the present application.
Of course, in another embodiment, the association degree between the first video data and the second video data may also be determined according to the size of the intersection of the first time range and the second time range, which may be referred to above.
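One possible reading of the update step, sketched below: when both paths exceed their I-frame reference thresholds within the intersecting time ranges, both score values are scaled up by the specified multiple. The thresholds and the multiple are illustrative assumptions:

```python
def update_scores_by_association(first_score_value, associated_score_value,
                                 i_frames_in_first_range, i_frames_in_second_range,
                                 first_reference_threshold=3,
                                 second_reference_threshold=3,
                                 specified_multiple=2.0):
    """Scale both score values when both paths exceed their I-frame
    thresholds, i.e. the degree of association is high (a sketch)."""
    if (i_frames_in_first_range > first_reference_threshold
            and i_frames_in_second_range > second_reference_threshold):
        first_score_value *= specified_multiple
        associated_score_value *= specified_multiple
    return first_score_value, associated_score_value
```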
Further, the method can also perform cyclic coverage on the multiple paths of video data according to the association relationship among the multiple paths of video data, and the specific implementation includes: detecting a current video storage state, wherein the video storage state includes storage time and/or storage amount; when the video storage state meets the cyclic coverage condition of video storage, determining second score values of the multiple paths of video data; and deleting a first number threshold of paths of video data whose second score values rank lowest among the multiple paths of video data.
The first number threshold may be set by a user according to actual needs, or may also be set by the electronic device by default, which is not limited in the embodiment of the present application.
As an example, during video storage, the electronic device may periodically detect the current video storage state, for example, once every duration threshold.
Of course, the above description is only given by taking the example that the electronic device periodically detects the current video storage state. In some embodiments, the current video storage status may also be detected after receiving a video data storage request, for example, the video data storage request may carry video data to be written, and as an example, the video data storage request may be triggered by a user through a reference operation, for example, the reference operation may be a click operation, a slide operation, or the like. That is, the electronic device may also detect the current video storage status upon detecting that new video data is to be written.
The duration threshold may be set by a user according to actual needs, or may be set by the default of the electronic device, which is not limited in the embodiment of the present application.
It should be appreciated that detecting the current video storage status may include detecting an occupancy of storage capacity in the current storage module, i.e., detecting how much storage space has been occupied by video data and how much storage space remains, and/or detecting an earliest storage time and/or a latest storage time of currently stored video data.
Then, the electronic device may determine whether a loop coverage condition of video storage is satisfied according to the detected video storage state, and as an example, the implementation may include at least one of the following ways:
the first implementation mode comprises the following steps: and when the cyclic coverage condition comprises a reference capacity threshold, determining the current video storage capacity, and if the current video storage capacity is greater than or equal to the reference capacity threshold, determining that the current video storage state meets the cyclic coverage condition of video storage.
The reference capacity threshold may be set by a user according to actual needs, or may be set by the electronic device by default, which is not limited in the embodiment of the present application.
That is, if the current video storage capacity is greater than or equal to the reference capacity threshold, which indicates that the storage space is about to be or is already full, it may be determined that the video storage status satisfies the loop coverage condition of video storage, that is, the loop coverage processing needs to be performed.
The second implementation mode comprises the following steps: when the cyclic coverage condition includes a reference time threshold, determining the earliest storage time among the currently stored video data, and when the difference between the earliest storage time and the current time is greater than or equal to the reference time threshold, determining that the current video storage state meets the cyclic coverage condition of video storage.
The reference time threshold may be set by a user according to actual needs, or may be set by default by the electronic device, which is not limited in the embodiment of the present application.
That is, if the earliest stored video data among the stored multiple paths of video data is already sufficiently far from the current time, it may be determined that the stored video data needs to be processed, that is, the current video storage state satisfies the cyclic coverage condition of video storage.
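A compact sketch of the two trigger conditions, assuming the video storage state is summarized by the used capacity and the earliest storage time; the reference capacity threshold and reference time threshold are illustrative values:

```python
import time

def needs_cyclic_coverage(used_bytes, earliest_storage_time,
                          reference_capacity=500 * 2**30,      # e.g. 500 GiB
                          reference_seconds=30 * 24 * 3600):   # e.g. 30 days
    """True when either the capacity condition or the time condition of
    cyclic coverage is satisfied (thresholds are illustrative)."""
    capacity_hit = used_bytes >= reference_capacity
    time_hit = (time.time() - earliest_storage_time) >= reference_seconds
    return capacity_hit or time_hit
```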
When it is determined that the video storage state satisfies the cyclic coverage condition of video storage, it is determined that cyclic coverage processing needs to be performed on the stored video data. For this reason, the second score values of the multiple paths of video data may be determined; since the second score value may be used to indicate the importance of the corresponding video data, the lower the second score value is, the less important the corresponding video data is, and thus some video data of lower importance may be deleted so that new video data can continue to be written. For example, when the first number threshold is 1, the video data with the lowest second score value in the multiple paths of video data is deleted, and the other video data are retained.
As an example, determining the second score value of the multi-channel video data may include: for a first video data in the multi-path video data, a first scoring value of the first video data is determined based on the total retrieval times and/or the lock state of the first video data. And detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one of the multiple paths of video data, and acquiring an association score value when the second video data stored in association with the first video data exists, wherein the association score value is determined at least based on the total retrieval times and/or the lock state of the second video data. And determining the sum of the first scoring value of the first video data and the associated scoring value to obtain a second scoring value of the first video data.
It is understood that, if there is second video data in the stored multi-channel video data, where the first video data has an association relationship with the second video data, the association score value of the first video data may exist, and therefore, before determining the second score value of the first video data, the electronic device may further detect whether there is an association score value of the first video data, and when there is an association score value of the first video data, determine the sum of the first score value of the first video data and the association score value as the second score value of the first video data.
That is, when determining the second score value, it may first be determined whether there is an association score value corresponding to second video data having an association relationship with the first video data; as shown in fig. 2, whether there is an association score value can be determined from the association relationship recorded by the association relationship module. If the first video data has an association score value, then when the second score value of the first video data is determined, the association score value is added to the first score value of the first video data, and the sum of the two is used as the second score value of the first video data.
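A hedged sketch of the overwrite pass as described above: the second score value is the first score value plus any association score value, and the paths whose second score values rank lowest are deleted; the scoring thresholds and the first number threshold are assumptions:

```python
def cyclic_coverage_pass(paths, first_number_threshold=1,
                         first_scoring_threshold=1, locked_scoring_threshold=70):
    """paths: dict of video_id -> (total_retrievals, locked, associated_score).
    Deletes the first_number_threshold paths whose second score value is lowest."""
    second_scores = {}
    for vid, (retrievals, locked, associated_score) in paths.items():
        first = retrievals * first_scoring_threshold + (locked_scoring_threshold if locked else 0)
        second_scores[vid] = first + associated_score  # second score = first score + associated score
    victims = sorted(second_scores, key=second_scores.get)[:first_number_threshold]
    for vid in victims:
        del paths[vid]   # stand-in for actually deleting the stored video data
    return victims
```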
Further, since the multiple paths of video data are generally stored in the storage module, such as a hard disk, the data needs to be read from the hard disk when the user retrieves it, which may result in a slow response speed. Therefore, when it is detected that a user retrieves first video data among the stored multiple paths of video data, the index information and part of the data of the second video data having an association relationship with the first video data can be loaded into the cache module, that is, the data related to the first video data is preloaded, as shown in fig. 5, so that the user can obtain data directly from the cache module when retrieving the data related to the first video data, without reading from the storage module, thereby improving the response speed when the user retrieves the related video data.
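The preloading idea can be sketched as a retrieval hook that copies the index information of the associated paths into a cache so that follow-up queries avoid the hard disk; the cache and the loader callback below are purely illustrative:

```python
cache = {}

def on_retrieve(video_id, associations, load_index_from_disk):
    """associations: dict of video_id -> set of associated video identifiers.
    When video_id is retrieved, the index information of its associated paths
    is preloaded into the cache so that follow-up queries avoid the hard disk."""
    for other in associations.get(video_id, ()):
        if other not in cache:
            cache[other] = load_index_from_disk(other)
    if video_id not in cache:
        cache[video_id] = load_index_from_disk(video_id)
    return cache[video_id]
```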
In the embodiment of the application, I frame information and/or video retrieval information of multiple paths of video data are acquired; the I frame information of each path of video data is used for indicating the positions of the I frames included in that path of video data, and the video retrieval information of each path of video data is at least used for indicating the times at which that path of video data is retrieved. The video data having an association relationship in the multiple paths of video data are determined according to the I frame information and/or the video retrieval information of the multiple paths of video data, and the video identifiers of the video data having the association relationship are stored correspondingly while the multiple paths of video data are stored. In this way, the association relationship among the multiple paths of video data can be dynamically determined according to the I frame information and/or the video retrieval information, and the video data can be stored according to the association relationship, so that the available storage modes for the video data are enriched and the flexibility of video data storage is improved.
Fig. 6 is a schematic diagram illustrating the structure of a video data storage apparatus according to an exemplary embodiment. The apparatus may be implemented by software, hardware, or a combination of the two, and may include:
an obtaining module 610, configured to obtain I-frame information and/or video retrieval information of multiple paths of video data, where the I-frame information of each path of video data is used to indicate a position of an I-frame included in each path of video data, and the video retrieval information of each path of video data is used to indicate at least a time when each path of video data is retrieved;
a determining module 620, configured to determine, according to the I frame information and/or the video retrieval information of the multiple paths of video data, video data having an association relationship in the multiple paths of video data;
the storage module 630 is configured to store the multiple paths of video data, and correspondingly store the video identifiers of the video data having the association relationship.
In one possible implementation manner of the present application, the determining module 620 is configured to:
for first video data in the multiple paths of video data, when it is determined, according to the I frame information of the first video data, that the number of I frames in the first video data within a first time range is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the other video data in the multiple paths of video data except the first video data, that second video data exists, determining that the first video data and the second video data have an association relationship, wherein the first time range and a second time range have a time intersection, the second video data is video data whose number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one of the multiple paths of video data;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data is retrieved within a third time range, if it is determined, according to the video retrieval information of the other video data in the multiple paths of video data except the first video data, that third video data exists, determining that the first video data and the third video data have an association relationship, wherein the third time range and a fourth time range have a time intersection, and the third video data refers to video data retrieved within the fourth time range.
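Purely as an illustrative sketch (not part of the disclosure; the thresholds, names, and time-range representation below are assumptions), the two association checks performed by the determining module could look like this:

# Time ranges are (start, end) tuples; i_frame_times maps a video id to a list of
# I-frame timestamps; retrieval_times maps a video id to the timestamps at which
# the video was retrieved.
def ranges_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def count_in_range(times, time_range):
    start, end = time_range
    return sum(1 for t in times if start <= t <= end)

def associated_by_i_frames(first_id, second_id, i_frame_times, first_range, second_range,
                           first_threshold=30, second_threshold=30):
    """First and second video data are associated when both contain enough I frames
    within overlapping time ranges (thresholds are assumed example values)."""
    return (ranges_overlap(first_range, second_range)
            and count_in_range(i_frame_times[first_id], first_range) >= first_threshold
            and count_in_range(i_frame_times[second_id], second_range) >= second_threshold)

def associated_by_retrieval(first_id, third_id, retrieval_times, third_range, fourth_range):
    """First and third video data are associated when both were retrieved within
    overlapping time ranges."""
    return (ranges_overlap(third_range, fourth_range)
            and count_in_range(retrieval_times[first_id], third_range) > 0
            and count_in_range(retrieval_times[third_id], fourth_range) > 0)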
In a possible implementation manner of the present application, the storage module 630 is further configured to:
determining a first score value of the first video data according to the total retrieval times and/or the lock state of the first video data, wherein the lock state is either a locked state or an unlocked state;
inquiring whether video data stored in association with the first video data exists in the multi-channel video data;
when there is video data stored in association with the first video data, the first score value is taken as an associated score value of the video data stored in association with the first video data.
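The propagation of the first score value to associated videos can be sketched as follows (illustrative only; the records layout is an assumption introduced here):

# Illustrative sketch: after the first score value of the first video data is computed,
# it is recorded as the association score value of every video stored in association
# with it. records is an assumed dict: video id -> {"first_score", "association_score",
# "associated_ids"}.
def propagate_association_score(first_id, records):
    first_score = records[first_id]["first_score"]
    for other_id in records[first_id]["associated_ids"]:
        if other_id in records:  # only videos actually stored in association
            records[other_id]["association_score"] = first_score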
In a possible implementation manner of the present application, the storage module 630 is further configured to:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first video data in a first time range and the number of I frames in the second video data in a second time range;
and updating the first score value of the first video data and the association score value of the video data stored in association with the first video data according to the degree of association.
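The patent does not fix a formula for the degree of association or for the score update, so the following is only one plausible reading, stated entirely as an assumption:

# Assumed formulation: the degree of association is derived from the two I-frame counts
# and both scores are raised in proportion to it.
def association_degree(i_frames_first, i_frames_second):
    """Normalised to [0, 1]: high when both time ranges contain similar, non-zero I-frame counts."""
    if max(i_frames_first, i_frames_second) == 0:
        return 0.0
    return min(i_frames_first, i_frames_second) / max(i_frames_first, i_frames_second)

def update_scores(first_score, association_score, degree, weight=5.0):
    """Raise the first score value and the association score value according to the degree."""
    return first_score + weight * degree, association_score + weight * degree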
In a possible implementation manner of the present application, the storage module 630 is further configured to:
multiplying the total retrieval times of the first video data by a first score threshold corresponding to the retrieval times to obtain a retrieval score value of the first video data;
and determining the sum of the retrieval score value of the first video data and a second score threshold corresponding to the lock state to obtain a first score value of the first video data.
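As a minimal sketch of this formula (the threshold values below are assumed example parameters, not values taken from the patent):

def first_score_value(total_retrievals, locked,
                      first_score_threshold=1.0,     # per-retrieval weight
                      second_score_threshold=10.0):  # bonus added in the locked state
    """First score value = retrieval score value + lock-state score threshold."""
    retrieval_score = total_retrievals * first_score_threshold
    lock_score = second_score_threshold if locked else 0.0
    return retrieval_score + lock_score

# e.g. first_score_value(total_retrievals=7, locked=True)  # -> 17.0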
In a possible implementation manner of the present application, the storage module 630 is further configured to:
detecting a current video storage state, wherein the video storage state comprises storage time and/or storage amount;
when the video storage state meets the cyclic coverage condition of video storage, determining a second score value of the multi-channel video data;
and deleting, from the multiple paths of video data, a first-quantity-threshold number of video data whose second score values are ranked last.
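An illustrative sketch of this cyclic-overwrite step follows (not part of the disclosure; the overwrite condition, the storage API, and the delete count standing in for the "first quantity threshold" are assumptions):

def videos_to_delete(second_scores, delete_count):
    """second_scores: video id -> second score value. Returns the delete_count ids
    whose second score values rank last (lowest scores)."""
    ranked = sorted(second_scores, key=second_scores.get)  # ascending by score
    return ranked[:delete_count]

def run_cyclic_overwrite(storage_used, capacity, second_scores, storage, delete_count=4):
    """When the storage amount satisfies the assumed overwrite condition, delete the
    lowest-ranked videos via an assumed storage.delete() call."""
    if storage_used >= capacity:
        for video_id in videos_to_delete(second_scores, delete_count):
            storage.delete(video_id)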
In a possible implementation manner of the present application, the storage module 630 is further configured to:
for first video data in the multi-path video data, determining a first score value of the first video data based on the total retrieval times and/or the lock state of the first video data;
detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one of the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association score value, wherein the association score value is determined at least based on the total retrieval times and/or the lock state of the second video data;
and determining the sum of the first score value of the first video data and the associated score value to obtain a second score value of the first video data.
In the embodiment of the application, I frame information and/or video retrieval information of multiple paths of video data are obtained, where the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is used for indicating at least the time when each path of video data is retrieved. Video data having an association relationship in the multiple paths of video data is determined according to the I frame information and/or the video retrieval information, and the video identifiers of the video data having the association relationship are stored correspondingly while the multiple paths of video data are stored. In this way, the association relationship among the multiple paths of video data can be determined dynamically according to the I frame information and/or the video retrieval information, and the video data can be stored according to the association relationship, which enriches the storage modes of the video data and improves the flexibility of video data storage.
It should be noted that: in the storage apparatus for video data provided in the foregoing embodiment, when the storage method for video data is implemented, only the division of the above functional modules is illustrated, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the storage apparatus for video data and the storage method embodiment for video data provided by the foregoing embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.
Fig. 7 shows a block diagram of an electronic device 700 according to an exemplary embodiment of the present disclosure. The electronic device 700 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 700 may also be referred to as a user equipment, a portable electronic device, a laptop electronic device, a desktop electronic device, or by other names.
In general, the electronic device 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the wake-up state, also referred to as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the video data storage method provided by the method embodiments herein.
In some embodiments, the electronic device 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to collect touch signals on or above its surface. The touch signals may be input to the processor 701 as control signals for processing.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of an electronic apparatus, and a rear camera is disposed on a rear surface of the electronic apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the electronic device 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 709 is used to supply power to various components in the electronic device 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the electronic device 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the user with respect to the electronic device 700. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of electronic device 700 and/or an underlying layer of touch display 705. When the pressure sensor 713 is disposed on a side frame of the electronic device 700, a user holding signal of the electronic device 700 may be detected, and the processor 701 may perform left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used to collect a user's fingerprint. The processor 701 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the electronic device 700. When a physical button or a manufacturer logo is disposed on the electronic device 700, the fingerprint sensor 714 may be integrated with the physical button or the manufacturer logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the electronic device 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the electronic device 700 gradually decreases, the processor 701 controls the touch display screen 705 to switch from the bright screen state to the dark screen state; when the proximity sensor 716 detects that the distance between the user and the front surface of the electronic device 700 gradually increases, the processor 701 controls the touch display screen 705 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 does not constitute a limitation of the electronic device 700 and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
Embodiments of the present application further provide a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video data storage method provided in the foregoing embodiments.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the video data storage method provided by the above embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A method for storing video data, the method comprising:
acquiring I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the time for which each path of video data is retrieved;
determining video data with an association relation in the multi-channel video data according to the I frame information and/or the video retrieval information of the multi-channel video data;
and storing the multi-channel video data, and correspondingly storing the video identification of the video data with the association relationship.
2. The method according to claim 1, wherein the determining the video data having the association relationship in the multiple paths of video data according to the I-frame information and/or the video retrieval information of the multiple paths of video data comprises:
for first video data in the multiple paths of video data, when it is determined, according to the I frame information of the first video data, that the number of I frames in the first video data within a first time range is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the other video data in the multiple paths of video data except the first video data, that second video data exists, determining that the first video data and the second video data have an association relationship, wherein the first time range and a second time range have a time intersection, the second video data is video data whose number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one of the multiple paths of video data;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data is retrieved within a third time range, if it is determined, according to the video retrieval information of the other video data in the multiple paths of video data except the first video data, that third video data exists, determining that the first video data and the third video data have an association relationship, wherein the third time range and a fourth time range have a time intersection, and the third video data refers to video data retrieved within the fourth time range.
3. The method according to claim 2, wherein after correspondingly storing the video identifiers of the video data with the association relationship, the method further comprises:
determining a first score value of the first video data according to the total retrieval times and/or the lock state of the first video data, wherein the lock state is either a locked state or an unlocked state;
inquiring whether video data stored in association with the first video data exists in the multi-channel video data;
when there is video data stored in association with the first video data, the first score value is taken as an associated score value of the video data stored in association with the first video data.
4. The method of claim 3, wherein the method further comprises:
determining a degree of association between the first video data and the second video data according to the number of I frames in the first video data in a first time range and the number of I frames in the second video data in a second time range;
and updating the first score value of the first video data and the association score value of the video data stored in association with the first video data according to the degree of association.
5. The method of claim 3, wherein the determining a first score value of the first video data according to the total retrieval times and/or the lock state of the first video data comprises:
multiplying the total retrieval times of the first video data by a first score threshold corresponding to the retrieval times to obtain a retrieval score value of the first video data;
and determining the sum of the retrieval score value of the first video data and a second score threshold corresponding to the lock state to obtain a first score value of the first video data.
6. The method of claim 1, wherein the method further comprises:
detecting a current video storage state, wherein the video storage state comprises storage time and/or storage amount;
when the video storage state meets the cyclic coverage condition of video storage, determining a second score value of the multi-channel video data;
and deleting, from the multiple paths of video data, a first-quantity-threshold number of video data whose second score values are ranked last.
7. The method of claim 6, wherein the determining a second score value of the multi-channel video data comprises:
for first video data in the multi-path video data, determining a first score value of the first video data based on the total retrieval times and/or the lock state of the first video data;
detecting whether second video data stored in association with the first video data exists or not, wherein the first video data is any one of the multiple paths of video data;
when second video data stored in association with the first video data exists, acquiring an association score value, wherein the association score value is determined at least based on the total retrieval times and/or the lock state of the second video data;
and determining the sum of the first score value of the first video data and the associated score value to obtain a second score value of the first video data.
8. An apparatus for storing video data, the apparatus comprising:
the acquisition module is used for acquiring I frame information and/or video retrieval information of multiple paths of video data, wherein the I frame information of each path of video data is used for indicating the position of an I frame included in each path of video data, and the video retrieval information of each path of video data is at least used for indicating the time for retrieving each path of video data;
the determining module is used for determining video data with incidence relation in the multi-channel video data according to the I frame information and/or the video retrieval information of the multi-channel video data;
and the storage module is used for storing the multi-channel video data and correspondingly storing the video identification of the video data with the association relationship.
9. The apparatus of claim 8, wherein the determination module is to:
for first video data in the multiple paths of video data, when it is determined, according to the I frame information of the first video data, that the number of I frames in the first video data within a first time range is greater than or equal to a first reference threshold, if it is determined, according to the I frame information of the other video data in the multiple paths of video data except the first video data, that second video data exists, determining that the first video data and the second video data have an association relationship, wherein the first time range and a second time range have a time intersection, the second video data is video data whose number of I frames within the second time range is greater than or equal to a second reference threshold, and the first video data is any one of the multiple paths of video data;
and/or,
when it is determined, according to the video retrieval information of the first video data, that the first video data is retrieved within a third time range, if it is determined, according to the video retrieval information of the other video data in the multiple paths of video data except the first video data, that third video data exists, determining that the first video data and the third video data have an association relationship, wherein the third time range and a fourth time range have a time intersection, and the third video data refers to video data retrieved within the fourth time range.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of any of the methods of claims 1-7.
11. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of any of the methods of claims 1-7.
CN201911240631.XA 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium Active CN111400544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911240631.XA CN111400544B (en) 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911240631.XA CN111400544B (en) 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111400544A true CN111400544A (en) 2020-07-10
CN111400544B CN111400544B (en) 2023-09-19

Family

ID=71433992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911240631.XA Active CN111400544B (en) 2019-12-06 2019-12-06 Video data storage method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111400544B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7024100B1 (en) * 1999-03-26 2006-04-04 Matsushita Electric Industrial Co., Ltd. Video storage and retrieval apparatus
JP2005084701A (en) * 2003-09-04 2005-03-31 Ntt Comware Corp Multiplex relating method, device and program of multimedia data for multimedia data searching index
US20130028574A1 (en) * 2011-07-29 2013-01-31 Xerox Corporation Systems and methods for enriching audio/video recordings
CN103984710A (en) * 2014-05-05 2014-08-13 深圳先进技术研究院 Video interaction inquiry method and system based on mass data
CN104281651A (en) * 2014-09-16 2015-01-14 福建星网锐捷安防科技有限公司 Method and system for searching large volume of video data
CN104639860A (en) * 2014-12-31 2015-05-20 安科智慧城市技术(中国)有限公司 Storage method and device of supervision video
US20160345035A1 (en) * 2015-05-18 2016-11-24 Zepp Labs, Inc. Multi-angle video editing based on cloud video sharing
CN106326439A (en) * 2016-08-26 2017-01-11 东方网力科技股份有限公司 Real-time video storage and retrieval method and device
CN107493458A (en) * 2017-09-13 2017-12-19 郑州云海信息技术有限公司 A kind of method and system for accessing monitor video
CN109327680A (en) * 2018-09-21 2019-02-12 北京世纪东方国铁科技股份有限公司 Multi-path video data concurrently stores and read method
CN110198432A (en) * 2018-10-30 2019-09-03 腾讯科技(深圳)有限公司 Processing method, device, computer-readable medium and the electronic equipment of video data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
M. SMITH, A. KHOTANZAD: "Efficient Indexing and Storage Algorithm for Visual Databases", pages 162 - 166 *
刘晓军 (LIU Xiaojun): "Compression and Storage Control Technology of Multi-channel Video Streams" *

Also Published As

Publication number Publication date
CN111400544B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN109874312B (en) Method and device for playing audio data
CN110022489B (en) Video playing method, device and storage medium
CN108132790B (en) Method, apparatus and computer storage medium for detecting a garbage code
CN113127181A (en) Memory management method, device and storage medium
WO2020088542A1 (en) Data management method, apparatus and system, and server, terminal and storage medium
CN112084811A (en) Identity information determining method and device and storage medium
CN110673944B (en) Method and device for executing task
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN115766490A (en) Calibration data acquisition method, calibration data storage method, device and equipment
CN109547847B (en) Method and device for adding video information and computer readable storage medium
CN111400002A (en) Application process and processor core binding method and terminal
CN111459410B (en) Memory space allocation method and device, electronic equipment and storage medium
CN111813322A (en) Method, device and equipment for creating storage pool and storage medium
CN107944024B (en) Method and device for determining audio file
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN111427629A (en) Application starting method and device, vehicle equipment and storage medium
CN111428080A (en) Storage method, search method and device for video files
CN115344537A (en) Storage space allocation method and device, file system and storage medium
CN111711841B (en) Image frame playing method, device, terminal and storage medium
CN111708669B (en) Analysis method, device, equipment and storage medium for system operation
CN110032421B (en) Method and device for managing atlas in memory, terminal and storage medium
CN113301422A (en) Method, terminal and storage medium for acquiring video cover
CN114388001A (en) Multimedia file playing method, device, equipment and storage medium
CN108664421B (en) Method, device and storage medium for acquiring multimedia data
CN110519319B (en) Method and device for splitting partitions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant