CN113347502A - Video review method, video review device, electronic equipment and medium - Google Patents

Video review method, video review device, electronic equipment and medium

Info

Publication number
CN113347502A
CN113347502A (application CN202110612998.0A)
Authority
CN
China
Prior art keywords
video
target
event
duration
target event
Prior art date
Legal status
Granted
Application number
CN202110612998.0A
Other languages
Chinese (zh)
Other versions
CN113347502B (en)
Inventor
陈辉
杜沛力
张智
熊章
胡国湖
Current Assignee
Ningbo Xingxun Intelligent Technology Co ltd
Original Assignee
Ningbo Xingxun Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Xingxun Intelligent Technology Co ltd filed Critical Ningbo Xingxun Intelligent Technology Co ltd
Priority to CN202310208827.0A (CN116208821A)
Priority to CN202110612998.0A (CN113347502B)
Publication of CN113347502A
Application granted
Publication of CN113347502B
Status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the technical field of video surveillance and addresses a problem in the prior art: the video clip of a specific event is difficult to locate, which is inconvenient for the user. It provides a video review method, a video review device, an electronic device, and a medium. The method comprises: acquiring a request instruction for reviewing a target video corresponding to a target event; determining a base video associated with the target event according to the request instruction; and synthesizing the video data associated with the target event in the base video according to a preset video synthesis rule to generate the target video, which is then output. Because the base video associated with the target event is determined from the user's request instruction and the target video is synthesized from it, the user does not need to search blindly, which saves time; and because the target video is synthesized on demand under the request instruction rather than stored in advance, storage space on the device is saved.

Description

Video review method, video review device, electronic equipment and medium
Technical Field
The invention relates to the technical field of video surveillance, and in particular to a video review method, a video review device, an electronic device, and a medium.
Background
Video surveillance is the physical basis for real-time monitoring of key departments and important sites across many industries. Through a video surveillance system, management departments can obtain effective image and sound data, monitor and record sudden abnormal events as they unfold, and direct a timely response. Video surveillance is an important component of security systems. A traditional surveillance system consists of front-end cameras, transmission cables, and a video surveillance platform; the cameras, which may be network digital cameras or analog cameras, collect the front-end video signal. Because of its intuitiveness, accuracy, timeliness, and rich information content, video surveillance is widely used in many settings.
In recent years, with the rapid development of computer, network, and image processing and transmission technologies, video surveillance has advanced as well. Current video surveillance equipment usually records continuously, producing large video files that store massive amounts of data. The resulting technical problem is that the video segment in which a specific event occurs is difficult to find, which is inconvenient for the user.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video review method, device, electronic device, and medium to solve the prior-art problem that the video segment in which a specific event occurs is difficult to find, which is inconvenient for the user.
The technical scheme adopted by the invention is as follows:
the invention provides a video review method, which comprises the following steps:
s1: acquiring a request instruction for reviewing a target video corresponding to a target event;
s2: determining at least one continuous basic video associated with the target event according to the request instruction;
s3: and synthesizing video data associated with the target event in the base video according to a preset video synthesis rule to generate the target video, and outputting the target video corresponding to the target event.
Preferably, the S2 includes:
s21: acquiring label information of the target event and duration corresponding to the target video;
s22: determining a first basic video where the target event is located according to the time information of the tag information;
s23: determining at least one continuous base video associated with the target event according to the corresponding duration of the target video and the position of the target event in the first base video;
and the duration corresponding to the target video is less than or equal to the duration of the basic video.
Preferably, the S23 includes:
s231: dividing the first basic video into a plurality of video segments according to the duration corresponding to the target video;
s232: determining a target video segment of a first basic video to which the target event belongs according to the time information of the target event;
s233: determining at least one continuous base video associated with the target event according to the position information of the target video segment compared with the first base video and the corresponding duration of the target video.
Preferably, the S231 includes:
s2311: acquiring a target duration equal to one half of the duration of the target video;
s2312: and segmenting the first basic video according to the target duration to obtain the plurality of video segments.
Preferably, the S3 includes:
s31: acquiring a preset video synthesis rule of the target video and a time length corresponding to the target video;
s32: extracting each frame image of target video data associated with the target event in each basic video according to the preset video synthesis rule, and synthesizing the target video;
and the total duration of each frame of image of the target video data is equal to the duration corresponding to the target video.
Preferably, before S1, the method further comprises:
s01: acquiring a duration threshold corresponding to the target event;
s02: timing the special event in the target area to generate a reference duration corresponding to the timed duration;
s03: and when the reference duration meets the requirement of the duration threshold, taking a key frame image of the special event corresponding to the reference duration as an index image of the target event corresponding to the special event.
Preferably, after S03, the method further comprises:
s04: acquiring all index images in a specified time period;
s05: identifying each index image, and obtaining the confidence of each index image;
s06: when the confidence of the index image meets the requirement of a confidence threshold, extracting the index image meeting the requirement of the confidence threshold and establishing an index image list.
Preferably, the S06 includes:
s061: acquiring the user's historical video-viewing data, determining the event content in target videos that interests the user, and determining historical key frame images within that event content;
s062: assigning a weight value according to the degree of similarity between the key frame image of the special event and the historical key frame images;
s063: sorting the index images in the index image list according to the confidence and the weight value of each index image.
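Steps S061 to S063 can be sketched as follows. The patent does not specify how confidence and weight are combined or how the weight is derived from similarity, so the linear weight and the product score below are illustrative assumptions, as are all names.

```python
# Hypothetical sketch of S06/S061-S063: filter index images by a confidence
# threshold, then rank them by confidence and a similarity-based weight.
# The combined score (confidence * weight) is an assumption; the patent does
# not specify the combination rule.
from dataclasses import dataclass

@dataclass
class IndexImage:
    name: str
    confidence: float  # from S05: recognition confidence in [0, 1]
    similarity: float  # similarity to the user's historical key frames, [0, 1]

def weight_from_similarity(similarity: float) -> float:
    """S062: assign a weight value to the degree of similarity (assumed linear)."""
    return similarity

def rank_index_images(images, confidence_threshold=0.5):
    """S06/S063: keep images meeting the confidence threshold, then sort by
    descending combined score of confidence and weight."""
    kept = [im for im in images if im.confidence >= confidence_threshold]
    return sorted(
        kept,
        key=lambda im: im.confidence * weight_from_similarity(im.similarity),
        reverse=True,
    )
```

With this scoring, an image the user is likely interested in (high similarity) can outrank a slightly more confident but less relevant one, while low-confidence detections are dropped before ranking.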
The present invention also provides an apparatus comprising:
an instruction acquisition module, configured to acquire a request instruction for reviewing a target event;
a base video positioning module, configured to determine at least one continuous base video associated with the target event according to the request instruction;
a target video synthesis module, configured to synthesize the video data associated with the target event in the base video according to a preset video synthesis rule and output the target video corresponding to the target event.
The present invention also provides an electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the method of any of the above.
The invention also provides a medium having stored thereon computer program instructions which, when executed by a processor, implement the method of any of the above.
In conclusion, the beneficial effects of the invention are as follows:
The invention provides a video review method, device, electronic device, and medium. A request instruction for reviewing the target video corresponding to a target event is acquired; at least one continuous base video associated with the target event is determined according to the request instruction; and the video data associated with the target event in the base video is synthesized according to a preset video synthesis rule to generate and output the target video. Because the user's request instruction determines the base video associated with the target event, from which the target video is synthesized, the user does not need to search a large amount of recorded base video for footage related to the event, which saves valuable time. Furthermore, because the target video is synthesized on demand, only when the user requests a review, rather than being stored in advance, storage space on the device is saved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. Those skilled in the art can derive other drawings from them without creative effort, and such drawings also fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a video review method in example 1 according to a first embodiment of the present invention;
fig. 2 is a schematic flowchart of determining a base video in example 1 according to a first embodiment of the present invention;
fig. 3 is a schematic flowchart of determining a base video according to duration and location in example 1 according to a first embodiment of the present invention;
fig. 4 is a flowchart illustrating a process of segmenting a base video according to an embodiment 1 of the present invention;
fig. 5 is a schematic flowchart of synthesizing a target video in example 1 according to a first embodiment of the present invention;
FIG. 6 is a schematic flow chart illustrating the generation of an index image in example 1 according to a first embodiment of the present invention;
fig. 7 is a schematic flowchart of creating an index image list in example 1 according to a first embodiment of the present invention;
fig. 8 is a schematic flowchart of sorting index images in the index image list in example 1 according to a first embodiment of the present invention;
fig. 9 is a schematic flowchart of storing a real-time video in example 1 according to a first embodiment of the present invention;
fig. 10 is a schematic flowchart of a video extracting and splicing method in example 2 according to a first embodiment of the present invention;
fig. 11 is a block diagram of an apparatus in example 3 according to a second embodiment of the present invention;
fig. 12 is a block diagram of a video extracting and splicing apparatus in example 4 according to a second embodiment of the present invention;
fig. 13 is a schematic structural diagram of an electronic device in a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described completely below with reference to the drawings.
It is noted that relational terms such as "first" and "second" are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between them. Terms such as "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships as shown in the drawings; they are used only for convenience and simplicity of description, do not indicate or imply that the referenced devices or elements must have or operate in a particular orientation, and are therefore not to be construed as limiting the invention.
The terms "comprises", "comprising", and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises it. Unless they conflict, the embodiments of the present invention and the individual features of the embodiments may be combined with each other within the scope of the present invention.
Implementation mode one
Example 1
Referring to fig. 1, fig. 1 is a schematic flowchart of a video review method in embodiment 1 of the present invention. The video review method of embodiment 1 of the present invention includes:
s1: acquiring a request instruction for reviewing a target video corresponding to a target event;
specifically, a user can issue a request instruction for reviewing the target video corresponding to a target event by tapping an APP on a mobile terminal. Target events include events arising from an infant's daily activities, such as crying, screaming, patting, and rolling over in bed. The video surveillance device records the target event and generates an index tag associated with it on the APP side; for example, the APP's video review interface displays an image corresponding to the target event.
S2: determining at least one continuous basic video associated with the target event according to the request instruction;
specifically, target events are recorded in chronological order, and at least one continuous base video associated with a target event is determined from the event's time information or from its recognizable action information; the real-time video recorded over time by the surveillance device is stored as the base videos. Determining the base videos takes into account the start time of the target event, the duration of the target video, and the preset video synthesis rule. For example, suppose each base video is one minute long and the target video is thirty seconds long. If the target event starts at second 10 of minute 5 and the thirty seconds are taken forward from that point (second 10 to second 40 of minute 5), the whole span lies inside the fifth base video, so only the fifth base video needs to be extracted. If instead fifteen seconds are taken before the start and fifteen after, the situation differs: the first five of the fifteen preceding seconds fall in the last five seconds of the fourth base video rather than in the fifth, so the fourth and fifth base videos, two consecutive base videos, must both be taken to synthesize the target video.
Further, if the target event starts at second 50 of minute 5 and thirty seconds of video data are taken from that point onward, two consecutive base videos are needed: the fifth and the sixth.
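The interval arithmetic described above can be sketched in a few lines. This is an illustrative model, not code from the patent; the function and parameter names are assumptions.

```python
# A sketch of the base-video selection described above: given the event start,
# the target-video duration, and the base-video length, compute which
# consecutive base videos cover the requested interval.

def base_videos_for_interval(start_s: float, duration_s: float,
                             before_s: float = 0.0, base_len_s: float = 60.0):
    """Return the 0-based indices of the consecutive base videos covering
    [start_s - before_s, start_s - before_s + duration_s)."""
    begin = max(start_s - before_s, 0.0)
    end = begin + duration_s
    first = int(begin // base_len_s)
    last = int((end - 1e-9) // base_len_s)  # end is exclusive
    return list(range(first, last + 1))

# The worked example from the text (minute 5 = seconds 300-359):
# thirty seconds taken forward from second 310 stay within base video 5,
assert base_videos_for_interval(310, 30) == [5]
# fifteen seconds before and after second 310 span base videos 4 and 5,
assert base_videos_for_interval(310, 30, before_s=15) == [4, 5]
# and thirty seconds from second 350 span base videos 5 and 6.
assert base_videos_for_interval(350, 30) == [5, 6]
```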
S3: and synthesizing video data associated with the target event in the base video according to a preset video synthesis rule to generate the target video, and outputting the target video corresponding to the target event.
Specifically, after video data associated with a target event is located, the video data is cut and spliced according to a preset video synthesis rule to synthesize a target video, and the target video is output to a user terminal, so that a user can view an event video which the user wants to see.
Specifically, a user triggers a request instruction for reviewing a target video corresponding to a target event by clicking and pressing an index tag of the target event in an App-end video review interface, determines a basic video associated with the target event through the request instruction, and synthesizes the target video to be reviewed through the basic video, so that on one hand, the user does not need to search videos related to the event and needing to be reviewed in a large number of recorded basic videos, and valuable time of the user is saved.
In one embodiment, referring to fig. 2, the S2 includes:
s21: acquiring label information of the target event and duration corresponding to the target video;
specifically, when a target event occurs, the corresponding base video is marked: an event tag is generated to record additional information, producing tag information corresponding to the target event. The duration of the target video to be generated is less than or equal to the duration of a base video. The rationale is twofold: the user's time and attention are limited, so the target video should not be too long; and if the target video were longer than a base video, the base video would simply be extracted as a whole, with no splicing involved. The target video duration is therefore preferably at most the base video duration.
S22: determining a first basic video where the target event is located according to the time information of the tag information;
specifically, the tag information of the event tag includes time information. Because the base videos are recorded continuously in chronological order, the specific base video corresponding to the target event can be determined from the time information contained in the event's tag information.
S23: determining at least one base video associated with the target event according to the corresponding duration of the target video and the position of the target event in the first base video;
and the duration corresponding to the target video is less than or equal to the duration of the basic video.
Specifically, there may be more than one base video associated with the target event, for example, each base video may be recorded for a time of one minute, if the target video is set to have a duration of 30 seconds, if the event starts to occur at the 50 th second of one base video, the duration of 30 seconds spans the first base video, and the second base video following the first base video, specifically from the 50 th second of the first base video to the first 20 seconds of the second base video. Therefore, when determining the base video associated with the target event, more than one base video associated with the target event is determined, and at this time, an appropriate base video is selected according to the progress process of the target event to be viewed, and corresponding video data is extracted and spliced to be presented to the target video closely associated with the target event.
Referring to fig. 3, the S23 includes:
s231: dividing the first basic video into a plurality of video segments according to the duration corresponding to the target video;
specifically, to obtain video data that better fits the target event and a target video that is closer to the user's actual needs, this scheme divides the first base video into several video segments and searches them step by step for the most suitable video data.
S232: determining a target video segment of a first basic video to which the target event belongs according to the time information of the target event;
specifically, in practice only one base video or two adjacent consecutive base videos ever need to be taken, and segmenting makes distinguishing these two cases quicker and simpler. For example, each base video is one minute long and the target video is set to 30 seconds. If the target event starts at second 25 of minute 5, the fifth base video is located and divided every fifteen seconds into four segments: second 0 to 14, second 15 to 29, second 30 to 44, and second 45 to 59 of minute 5. Taking fifteen seconds before and fifteen seconds after the event start, two consecutive base videos are needed when the start falls in the first or fourth segment, and only the one base video containing the event is needed when the start falls in the second or third segment.
S233: determining at least one continuous base video associated with the target event according to the position information of the target video segment compared with the first base video and the corresponding duration of the target video.
Specifically, according to the position of the target video segment in the first base video and the time length of the target video, it can be inferred whether the target video can be completely extracted from the existing first base video or needs to be extracted from an adjacent base video. If the front and rear occurrence processes of the target event need to be completely understood, the front and rear basic videos can be extracted and spliced according to the time sequence. If the time span of the target event is larger, more video data can be extracted and spliced, so that the user has better video review experience.
In an embodiment, referring to fig. 4, the S231 includes:
s2311: acquiring a target time length corresponding to one half of the time length of the target video;
s2312: and segmenting the first basic video according to the target duration to obtain the plurality of video segments.
Specifically, the first basic video is segmented through the target duration, so that each video segment is not too short while the first basic video is reasonably segmented, the segmentation of the basic video is not too scattered, the cutting and splicing efficiency is improved, and the data processing pressure is reduced. In the process of generating the target video, two video segments associated with the target event are selected, and the two video segments are extracted and spliced to obtain the target video, so that the target video is generated more simply and more rapidly. And quickly judging whether the base video is required to be extracted or two adjacent base videos are required to be extracted according to the position of the starting time of the target event in a specific base video, namely the segmented video in which the starting time of the target event is specifically positioned.
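The half-duration segmentation and the resulting one-or-two-base-videos decision can be sketched as follows. The encoding of the decision as -1/0/+1 and all names are illustrative assumptions, not taken from the patent.

```python
# Sketch of S231/S232 plus the decision rule above: split the first base video
# into segments of half the target-video duration, find which segment contains
# the event start, and decide whether a neighbouring base video is also needed
# (an edge segment implies spillover when half the target duration is taken
# before the start and half after).

def split_points(base_len_s: float, target_len_s: float):
    """S2311/S2312: segment boundaries at intervals of target_len_s / 2."""
    step = target_len_s / 2
    points, t = [], 0.0
    while t < base_len_s:
        points.append((t, min(t + step, base_len_s)))
        t += step
    return points

def needs_neighbour(offset_in_base_s: float, base_len_s: float, target_len_s: float):
    """Return (segment_index, neighbour): neighbour is -1 if the previous base
    video is also needed, +1 if the next one is, 0 if this base video suffices."""
    segments = split_points(base_len_s, target_len_s)
    idx = next(i for i, (a, b) in enumerate(segments) if a <= offset_in_base_s < b)
    if idx == 0:
        return idx, -1   # start too near the beginning: take the previous video too
    if idx == len(segments) - 1:
        return idx, +1   # start too near the end: take the next video too
    return idx, 0
```

For the worked example (one-minute base videos, 30-second target video), a start at second 25 lands in the second segment and needs no neighbour, while a start at second 50 lands in the fourth segment and pulls in the next base video.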
In one embodiment, referring to fig. 5, the S3 includes:
s31: acquiring a preset video synthesis rule of the target video and a time length corresponding to the target video;
s32: extracting each frame image of target video data associated with the target event in each basic video according to the preset video synthesis rule, and synthesizing the target video;
and the total duration of each frame of image of the target video data is equal to the duration corresponding to the target video.
Specifically, the target video data are extracted according to continuous video frames, and the target video data are processed and the target video is generated by combining the set target video synthesis rule and the target video duration. As for the target video synthesis rule, the difficulty of video processing, the computational power of the processor, the actual requirements of the user, and the like can be combined to actually determine, for example, if the video data processing is difficult and the computational power of the processor is small, the continuous video frames can be intercepted and spliced at the moment, so that the target video can be generated quickly, and more system resource consumption is avoided.
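The frame-splicing rule of S31/S32 can be modeled minimally as follows. A real system would decode and re-encode streams (for instance with ffmpeg); this sketch models only the rule that the spliced frames' total duration equals the target duration. All names are illustrative.

```python
# Minimal sketch of S31/S32: treat each base video as a list of frames and
# splice the frames associated with the target event so that the total
# duration equals the target-video duration.

def synthesize_target(base_videos, start, duration, fps=25):
    """Extract `duration` seconds of frames beginning at absolute second
    `start` from consecutive base videos (each a chronological frame list
    starting at second 0 overall)."""
    all_frames = [f for video in base_videos for f in video]  # chronological concat
    first = int(start * fps)
    count = int(duration * fps)
    target = all_frames[first:first + count]
    assert len(target) == count, "base videos do not cover the requested span"
    return target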
To this end, in an embodiment, referring to fig. 6, the S1 includes:
S01: acquiring a duration threshold corresponding to the target event;
S02: timing the special event in the target area to generate a reference duration corresponding to the timed duration;
S03: when the reference duration meets the requirement of the duration threshold, taking a key frame image of the special event corresponding to the reference duration as an index image of the target event corresponding to the special event.
Specifically, to avoid generating the target video too sensitively and to spare the user the trouble of false alarms, this scheme records a reference duration for the target event and generates an index image of the target event only when the reference duration meets a certain duration threshold, so that the user can conveniently call up the index image later. In addition, the video data corresponding to the reference duration may be used directly as the video data of the target video, or video data before or after the reference duration may be obtained and combined with the video data within the reference duration to generate the target video, meeting the review requirements of different users. As mentioned, the target events include events arising from the daily activities of an infant, such as crying and the infant flapping and rolling in bed; more specifically, the special events are events further filtered out of the target events, namely events related to the infant crying and screaming. Because infant crying is an event the user is relatively concerned about, bearing on the infant's adaptation to, and safety in, its environment, the user's attention to such a special event is generally higher, and attending to it helps the user better protect the infant. In this embodiment, therefore, a frame image extracted from the special event is adopted as the index image of the target event corresponding to that special event, which helps the user review the moment the infant cried, facilitates analysis of the cause of the crying, improves pertinence, and supports scientific infant care.
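A minimal sketch of the duration-threshold gating described here (the class, its fields, and the reset-on-silence policy are illustrative assumptions; the patent does not prescribe an implementation):

```python
class SpecialEventTimer:
    """Times a special event (e.g. infant crying) and promotes a key
    frame to an index image only once the event has persisted past a
    duration threshold, suppressing momentary false alarms."""

    def __init__(self, threshold_s):
        self.threshold_s = threshold_s
        self.start_t = None      # start time of the ongoing event
        self.key_frame = None    # key frame remembered at event onset

    def update(self, t, event_active, frame):
        """Feed one detection result; return an index image or None."""
        if not event_active:
            self.start_t = None  # event ended: reset the timer
            return None
        if self.start_t is None:
            self.start_t = t     # event just started
            self.key_frame = frame
        reference_duration = t - self.start_t
        if reference_duration >= self.threshold_s:
            return self.key_frame  # reference duration met: index it
        return None
```

With a 5-second threshold, a cry detected at t = 0 yields no index image until the detection has persisted to t = 5; a cry that stops after 3 seconds is never indexed.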
It should be further noted that, referring to fig. 7, the S03 is followed by:
S04: acquiring all index images within a specified time period;
S05: identifying each index image and obtaining the confidence of each index image;
S06: when the confidence of an index image meets the requirement of a confidence threshold, extracting the index images meeting the requirement of the confidence threshold and establishing an index image list.
Specifically, all of the index images include index images of several different categories, such as a category for the infant crying or a category for the infant crawling and playing; the specified time period may be, for example, the 24 hours of a day, within which many special events may occur. The special events are events further screened out of the target events that are of the same type, such as events related to the infant crying and screaming, or events related to the infant crawling and playing on the bed. Each index image is compared with the images in a database; because the judgment basis differs between index images, their identification carries a certain deviation, and a confidence for each index image can be calculated accordingly from that deviation. When the confidence of an index image meets the requirement of the confidence threshold, that index image is extracted, so that the related base video can be located accurately from the index image and the association between the index image and the target video is stronger.
For example, all of the acquired index images are compared with images in the database, and the similarity value between each index image and the database images is calculated as its confidence. Suppose one index image yields a confidence of 0.6 and the confidence threshold is set to 0.8; because 0.6 is lower than 0.8, the confidence does not meet the threshold requirement and the index image is discarded. If another index image has a confidence of 0.9, then because 0.9 is higher than 0.8 its confidence is judged to meet the threshold requirement and it is extracted. All index images meeting the confidence-threshold requirement are extracted and an index image list is established.
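The confidence-threshold screening in this example can be illustrated with a short sketch (the dictionary layout and the 0.8 threshold follow the example above; the function name is an assumption):

```python
def build_index_list(index_images, conf_threshold=0.8):
    """Keep only index images whose similarity-based confidence
    meets the configured threshold."""
    return [img for img in index_images
            if img["confidence"] >= conf_threshold]

images = [
    {"id": "cry_1", "confidence": 0.6},  # discarded: 0.6 < 0.8
    {"id": "cry_2", "confidence": 0.9},  # kept:      0.9 >= 0.8
]
index_list = build_index_list(images)
```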
Further, referring to fig. 8, the S06 includes:
S061: acquiring historical video viewing data of a user, determining event content in target videos the user is interested in, and determining historical key frame images in the event content;
Specifically, the event content of the target videos the user is interested in can be determined from the user's historical viewing data. For example, if the user clicks a certain type of target video many times and its total playing time is long, that type of target video is judged to interest the user; the event content in those target videos is then determined, and a key frame in them is extracted as a historical key frame image. This allows later identification of whether a special event interests the user: the key frame image of the special event is compared with the historical key frame image to determine their degree of similarity.
S062: giving a weight value according to the degree of similarity between the key frame image of the special event and the historical key frame image;
S063: sorting each index image in the index image list according to the confidence and the weight value of each index image.
Specifically, if the similarity between the key frame image of the special event and the historical key frame image is 80%, a weight value of 0.8 is assigned; if the similarity is 100%, a weight value of 1 is assigned. The index images are then sorted according to their weight values and confidences. For example, if one index image has a confidence of 0.6 and a weight value of 0.8, multiplying the two gives 0.48; if another has a confidence of 0.9 and a weight value of 1, multiplying gives 0.9, and the latter index image is arranged before the former in the index image list. In this embodiment, to ensure that the index images the user can view are more closely associated with the target video, the confidence of each index image is obtained and each category of index images is sorted accordingly. For example, within the category of infant-crying index images, the higher-confidence index images may be arranged toward the front of the index image list so that the user can more quickly click through to the most relevant video; alternatively, they may be arranged toward the back of the list. Either way, the position of a high-confidence index image in the list is easy to find, so the user can quickly locate the closely related base video and then synthesize the more relevant target video from it.
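The confidence-times-weight ordering in this example can be sketched as follows (field names are illustrative; the scores 0.48 and 0.9 match the example above):

```python
def rank_index_images(index_list):
    """Sort index images by confidence multiplied by the interest
    weight derived from the user's viewing history, descending."""
    return sorted(index_list,
                  key=lambda img: img["confidence"] * img["weight"],
                  reverse=True)

ranked = rank_index_images([
    {"id": "a", "confidence": 0.6, "weight": 0.8},  # score 0.48
    {"id": "b", "confidence": 0.9, "weight": 1.0},  # score 0.90
])
# "b" is listed before "a" in the index image list
```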
In addition, similar measures can be taken for the category of index images of the infant crawling and playing, so that higher-confidence index images are easy to find in the index bar, which alleviates the poor user experience caused by inaccurate identification and comparison of index images. By placing the higher-confidence index images of each category at prominent positions in the index image list, the user can generate more relevant target videos from the clicked index images; the index images at the prominent positions then match each category of events more closely, giving the user a better experience and impression. Furthermore, by combining the user's historical viewing data, the scheme not only ensures a close association between the index image and the target video but also effectively screens out the videos the user is more interested in.
Specifically, since the user may not watch every video to completion, each historical target video has a corresponding viewing progress and click count. From the click count it is determined that the user is more interested in a certain historical video; then, according to the user's viewing progress on it, the historical videos with many clicks but lagging viewing progress are screened out as videos to be watched. The videos to be watched are marked so that the user can easily identify them and conveniently review the events, and the index information of the videos to be watched can be pushed to the user side to remind the user to check in time.
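A minimal sketch of this screening of to-watch videos (the click-count and progress thresholds are illustrative assumptions; the patent fixes neither):

```python
def flag_unfinished(history, min_clicks=3, max_progress=0.5):
    """Mark historical videos the user clicked often but never
    finished as 'to watch'."""
    return [v["id"] for v in history
            if v["clicks"] >= min_clicks and v["progress"] < max_progress]

history = [
    {"id": "crying_0610", "clicks": 5, "progress": 0.3},  # flagged
    {"id": "crawl_0611",  "clicks": 1, "progress": 0.2},  # too few clicks
    {"id": "crying_0612", "clicks": 6, "progress": 0.9},  # nearly finished
]
to_watch = flag_unfinished(history)
```

The flagged identifiers could then be marked in the index image list or pushed to the user side, as the text describes.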
It should be further noted that, referring to fig. 9, the S1 is preceded by:
S01': acquiring a video storage rule;
S02': storing real-time videos according to the video storage rule to obtain a plurality of base videos;
wherein the storage position of the base videos comprises a video acquisition terminal and/or a mobile terminal.
Specifically, a corresponding video storage rule is adopted according to the storage requirements of the video data. With this rule, the real-time video collected by the video monitoring probe can be stored reasonably, yielding a plurality of base videos, so that the video data is stored more conveniently and reliably and is easy to find and use later.
In summary, the video review method provided by this embodiment acquires a request instruction for reviewing a target video corresponding to a target event; determines at least one base video associated with the target event according to the request instruction; and synthesizes the video data associated with the target event in the base video according to a preset video synthesis rule to generate and output the target video corresponding to the target event. On the one hand, the user sends a request instruction for reviewing a video, the base video associated with the target event is determined from the instruction, and the target video to be reviewed is synthesized from the base video, so the user need not search a large number of recorded base videos for videos related to the event to be reviewed, saving the user's valuable time. On the other hand, the target video is synthesized, under the user's request instruction, only when the user needs to review it; since it is not stored in advance, storage space on the device is saved. In short, the target video can be generated in many ways, and technical schemes with multiple extraction modes can be provided without departing from the substance of the invention. It should be noted that simple changes to the technical schemes of the present application require no creative effort from those skilled in the art and therefore all fall within the protection scope of the present invention; they are not described again here.
Example 2
In embodiment 1, a specific video segment is extracted and a corresponding video processing operation is performed to obtain the target video. When the duration of a target event spans the length of a segmented video, multiple base videos must be called and processed to generate the target video, which not only makes the extracted data large and inconvenient to handle but also makes the extracted video long and the event review insufficiently focused, ultimately affecting the user experience. In some embodiments, a method of extracting, splitting, and splicing video data is adopted so that the obtained video is more suitable and more targeted as the target video. To this end, embodiment 2 of the present invention provides a video extraction and splicing method; referring to fig. 10, the method includes:
S400: demultiplexing the base video data into bare stream data;
Specifically, the recorded base video is in MP4 format, and the MP4 video data is demultiplexed by technical means into H.264 or H.265 bare stream data, where H.264 is a digital video compression format jointly proposed by the International Organization for Standardization and the International Telecommunication Union; it is prior art and is not described further here.
S410: marking the bare stream data in temporal order and determining the video frame numbers;
Specifically, marking the bare stream data in temporal order involves the video frame rate, that is, the frequency at which images in units of frames appear continuously. In this scheme, the recorded base video has a frame rate of 20 frames per second, so there are 20 frame images per second: at the 1st second the frame number reaches 20, and at the 2nd second it reaches 20 × 2 = 40. The frame number is therefore the number of elapsed seconds multiplied by the frame rate, which unambiguously determines the correspondence between the frame numbers of the bare stream data and time.
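The frame-number correspondence described here can be expressed directly (the 20 fps frame rate follows the scheme above):

```python
FRAME_RATE = 20  # frames per second, as recorded in this scheme

def time_to_frame(seconds):
    """Map a timestamp to the cumulative frame number: second 1 ends
    at frame 20, second 2 at 20 * 2 = 40, and so on."""
    return int(seconds * FRAME_RATE)

# time 1 s -> frame 20; time 2 s -> frame 40
```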
S420: splicing the bare stream data associated with the target event according to the video frame numbers to determine the target video data.
Specifically, a specific start time point and a set duration are selected, and the corresponding frame numbers are extracted to obtain the corresponding target data. The target data can be stored locally to await extraction and viewing by the user, or pushed by technical means to the user's terminal device, such as a mobile phone or tablet, so that the user can check it in time. More specifically, after the target data is collected, it can be remultiplexed into MP4-format data and stored locally, or the MP4 data can be sent to the user-side device for viewing; the MP4 data sent to the user side is arranged in temporal order, target videos of multiple time nodes can be displayed, and the user side can select which target video to view.
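A simplified sketch of extracting a frame span from the marked bare stream (representing the stream as a Python list is an illustrative simplification; a real bare stream would consist of H.264/H.265 access units, and the function name is an assumption):

```python
def extract_clip(frames, start_s, duration_s, frame_rate=20):
    """Slice the frames of a bare stream between a start time point
    and start + duration, using the time-to-frame-number mapping."""
    first = int(start_s * frame_rate)
    last = int((start_s + duration_s) * frame_rate)
    return frames[first:last]

stream = [f"frame_{i}" for i in range(1200)]  # 60 s of video at 20 fps
clip = extract_clip(stream, start_s=10, duration_s=5)
# 5 s at 20 fps -> 100 frames, beginning at frame_200
```

The extracted clip would then be remultiplexed into an MP4 container for local storage or pushed to the user's device, as described above.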
With the video extraction and splicing method of this embodiment, the base video data is demultiplexed into bare stream data; the bare stream data is marked in temporal order and the video frame numbers are determined; and the bare stream data associated with the target event is spliced according to the frame numbers to determine the target data. Video data is thus processed more flexibly: the video data at specific moments can be extracted and flexibly cut and spliced for event review, and the corresponding video data can be extracted and spliced as the event develops, yielding an event-related video that better satisfies the user.
The second embodiment is as follows:
Example 3
An embodiment of the present invention further provides an apparatus, as shown in fig. 11, including:
an instruction acquisition module, configured to acquire a request instruction for reviewing a target event;
a base video positioning module, configured to determine at least one continuous base video associated with the target event according to the request instruction; and
a target video synthesis module, configured to synthesize the video data associated with the target event in the base video according to a preset video synthesis rule and output the target video corresponding to the target event.
With the apparatus of this embodiment, a request instruction for reviewing the target video corresponding to the target event is acquired; at least one base video associated with the target event is determined according to the request instruction; and the video data associated with the target event in the base video is synthesized according to a preset video synthesis rule to generate and output the target video corresponding to the target event. The user sends a request instruction for reviewing a video, the base video associated with the target event is determined from the instruction, and the target video to be reviewed is then synthesized from the base video.
Example 4
In embodiment 3, a specific video segment is extracted and a corresponding video processing operation is performed to obtain the target video. When the duration of a target event spans the length of a segmented video, multiple base videos must be called and processed to generate the target video, which not only makes the extracted data large and inconvenient to handle but also makes the extracted video long and the event review insufficiently focused, ultimately affecting the user experience. In some embodiments, a method of extracting, splitting, and splicing video data is adopted so that the obtained video is more suitable and more targeted as the target video. To this end, embodiment 4 of the present invention provides a video extraction and splicing apparatus; referring to fig. 12, the apparatus includes:
a demultiplexing module, configured to demultiplex the base video data into bare stream data;
a frame number marking module, configured to mark the bare stream data in temporal order and determine the video frame numbers; and
a splicing module, configured to splice the bare stream data associated with the target event according to the video frame numbers to determine the target video data.
With the video extraction and splicing apparatus of this embodiment, the base video data is demultiplexed into bare stream data; the bare stream data is marked in temporal order and the video frame numbers are determined; and the bare stream data from the start time point to the set time is spliced according to the frame numbers to determine the target data. Video data is thus processed more flexibly: the video data at a specific time point can be extracted and flexibly cut and spliced for event review, and the corresponding video data can be extracted and spliced as the event develops, yielding an event-related video that better satisfies the user.
The third embodiment is as follows:
the present invention provides an electronic device and storage medium, as shown in fig. 13, comprising at least one processor, at least one memory, and computer program instructions stored in the memory.
Specifically, the processor may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device includes at least one of the following: a wearable device with an intelligent camera and a mobile device with an intelligent camera.
The memory may include mass storage for data or instructions. By way of example and not limitation, the memory may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is non-volatile solid-state memory. In a particular embodiment, the memory includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), flash memory, or a combination of two or more of these.
The processor reads and executes the computer program instructions stored in the memory to implement any of the video review methods and video extraction and splicing methods of the above embodiments.
In one example, the electronic device may also include a communication interface and a bus. The processor, the memory and the communication interface are connected through a bus and complete mutual communication.
The communication interface is mainly used for realizing communication among modules, devices, units and/or equipment in the embodiment of the invention.
A bus comprises hardware, software, or both that couple the components of the electronic device to one another. By way of example and not limitation, the bus may include an accelerated graphics port (AGP) or other graphics bus, an enhanced industry standard architecture (EISA) bus, a front side bus (FSB), a HyperTransport (HT) interconnect, an industry standard architecture (ISA) bus, an InfiniBand interconnect, a low pin count (LPC) bus, a memory bus, a micro channel architecture (MCA) bus, a peripheral component interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a video electronics standards association local bus (VLB), another suitable bus, or a combination of two or more of these. A bus may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the invention, any suitable bus or interconnect is contemplated.
In summary, embodiments of the present invention provide a video review method, a video extraction and splicing method, an apparatus, a device, and a storage medium.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A method for reviewing video, the method comprising:
S1: acquiring a request instruction for reviewing a target video corresponding to a target event;
S2: determining at least one continuous basic video associated with the target event according to the request instruction;
S3: and synthesizing video data associated with the target event in the base video according to a preset video synthesis rule to generate the target video, and outputting the target video corresponding to the target event.
2. The video review method of claim 1, wherein the S2 includes:
S21: acquiring label information of the target event and a duration corresponding to the target video;
S22: determining a first basic video where the target event is located according to the time information of the label information;
S23: determining at least one continuous basic video associated with the target event according to the duration corresponding to the target video and the position of the target event in the first basic video;
and the duration corresponding to the target video is less than or equal to the duration of the basic video.
3. The video review method of claim 2, wherein the S23 includes:
S231: dividing the first basic video into a plurality of video segments according to the duration corresponding to the target video;
S232: determining a target video segment of the first basic video to which the target event belongs according to the time information of the target event;
S233: determining at least one continuous basic video associated with the target event according to the position information of the target video segment relative to the first basic video and the duration corresponding to the target video.
4. The video review method of claim 3, wherein the S231 comprises:
S2311: acquiring a target duration corresponding to one half of the duration of the target video;
S2312: and segmenting the first basic video according to the target duration to obtain the plurality of video segments.
5. The video review method according to any one of claims 1 to 4, wherein the S3 includes:
S31: acquiring a preset video synthesis rule of the target video and a duration corresponding to the target video;
S32: extracting each frame image of target video data associated with the target event in each basic video according to the preset video synthesis rule, and synthesizing the target video;
wherein the total duration of the frame images of the target video data is equal to the duration corresponding to the target video.
6. The video review method of claim 5, wherein the step S1 is preceded by:
S01: acquiring a duration threshold corresponding to the target event;
S02: timing the special event in the target area to generate a reference duration corresponding to the timed duration;
S03: and when the reference duration meets the requirement of the duration threshold, taking a key frame image of the special event corresponding to the reference duration as an index image of the target event corresponding to the special event.
7. The video review method of claim 6, wherein the step S03 is followed by the step of:
S04: acquiring all index images within a specified time period;
S05: identifying each index image and obtaining the confidence of each index image;
S06: when the confidence of the index image meets the requirement of a confidence threshold, extracting the index image meeting the requirement of the confidence threshold and establishing an index image list.
8. The video review method of claim 7, wherein the S06 includes:
S061: acquiring historical video viewing data of a user, determining event content in target videos the user is interested in, and determining historical key frame images in the event content;
S062: giving a weight value according to the degree of similarity between the key frame image of the special event and the historical key frame image;
S063: and sorting each index image in the index image list according to the confidence and the weight value of each index image.
9. An apparatus, comprising:
an instruction acquisition module, configured to acquire a request instruction for reviewing a target event;
a basic video positioning module, configured to determine at least one continuous basic video associated with the target event according to the request instruction; and
a target video synthesis module, configured to synthesize the video data associated with the target event in the basic video according to a preset video synthesis rule and output the target video corresponding to the target event.
10. An electronic device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory that, when executed by the processor, implement the method of any of claims 1-7.
11. A medium having stored thereon computer program instructions, which, when executed by a processor, implement the method of any one of claims 1-7.
CN202110612998.0A 2021-06-02 2021-06-02 Video review method, video review device, electronic equipment and medium Active CN113347502B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202310208827.0A CN116208821A (en) 2021-06-02 2021-06-02 Target video capturing method, device, equipment and medium based on image screening
CN202110612998.0A CN113347502B (en) 2021-06-02 2021-06-02 Video review method, video review device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110612998.0A CN113347502B (en) 2021-06-02 2021-06-02 Video review method, video review device, electronic equipment and medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310208827.0A Division CN116208821A (en) 2021-06-02 2021-06-02 Target video capturing method, device, equipment and medium based on image screening

Publications (2)

Publication Number Publication Date
CN113347502A true CN113347502A (en) 2021-09-03
CN113347502B CN113347502B (en) 2023-03-14

Family

ID=77472998

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310208827.0A Pending CN116208821A (en) 2021-06-02 2021-06-02 Target video capturing method, device, equipment and medium based on image screening
CN202110612998.0A Active CN113347502B (en) 2021-06-02 2021-06-02 Video review method, video review device, electronic equipment and medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310208827.0A Pending CN116208821A (en) 2021-06-02 2021-06-02 Target video capturing method, device, equipment and medium based on image screening

Country Status (1)

Country Link
CN (2) CN116208821A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336795A (en) * 2013-06-09 2013-10-02 Huazhong University of Science and Technology Video indexing method based on multiple features
US20140341476A1 (en) * 2013-05-15 2014-11-20 Google Inc. Associating classifications with images
CN108769604A (en) * 2018-06-13 2018-11-06 Shenzhen Lumi United Technology Co., Ltd. Method, device, terminal device and storage medium for processing surveillance video
CN110225369A (en) * 2019-07-16 2019-09-10 Baidu Online Network Technology (Beijing) Co., Ltd. Video selection playback method, device, equipment and readable storage medium
CN111881320A (en) * 2020-07-31 2020-11-03 Goertek Technology Co., Ltd. Video query method, device, equipment and readable storage medium
CN111988638A (en) * 2020-08-19 2020-11-24 Beijing ByteDance Network Technology Co., Ltd. Method and device for acquiring spliced video, electronic equipment and storage medium
CN112200067A (en) * 2020-10-09 2021-01-08 Ningbo Polytechnic Intelligent video event detection method, system, electronic equipment and storage medium
CN112235613A (en) * 2020-09-17 2021-01-15 Baidu Online Network Technology (Beijing) Co., Ltd. Video processing method and device, electronic equipment and storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115129927A (en) * 2022-08-17 2022-09-30 Guangdong Longyan Digital Technology Co., Ltd. Monitoring video stream backtracking method, electronic equipment and storage medium
CN115714877A (en) * 2022-11-17 2023-02-24 Geng Hongyi Multimedia information processing method and device, electronic equipment and storage medium
CN115714877B (en) * 2022-11-17 2023-06-27 Geng Hongyi Multimedia information processing method and device, electronic equipment and storage medium
CN117676245A (en) * 2024-01-31 2024-03-08 Shenzhen Jijia Innovation Technology Co., Ltd. Context video generation method and device
CN117676245B (en) * 2024-01-31 2024-06-11 Shenzhen Jijia Innovation Technology Co., Ltd. Context video generation method and device
CN117743635A (en) * 2024-02-01 2024-03-22 Shenzhen Jijia Innovation Technology Co., Ltd. Intelligent security service method and device
CN117743635B (en) * 2024-02-01 2024-06-11 Shenzhen Jijia Innovation Technology Co., Ltd. Intelligent security service method and device
CN117830911A (en) * 2024-03-06 2024-04-05 Yimaitong (Shenzhen) Intelligent Technology Co., Ltd. Intelligent identification method and device for intelligent camera, electronic equipment and medium
CN117830911B (en) * 2024-03-06 2024-05-28 Yimaitong (Shenzhen) Intelligent Technology Co., Ltd. Intelligent identification method and device for intelligent camera, electronic equipment and medium

Also Published As

Publication number Publication date
CN116208821A (en) 2023-06-02
CN113347502B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
CN113347502B (en) Video review method, video review device, electronic equipment and medium
US7949207B2 (en) Video structuring device and method
US8503523B2 (en) Forming a representation of a video item and use thereof
CN102314916B (en) Video processing method and system
CN102752574A (en) Video monitoring system and method
JPS6324786A (en) Method, apparatus and system for recognizing broadcasting segment
CN105659279B (en) Information processing apparatus, information processing method, and computer program
CN104850969A (en) Alarm condition linkage management system for audio and video evidence of law enforcement recorders
CN110598008B (en) Method and device for detecting quality of recorded data and storage medium
CN111881320A (en) Video query method, device, equipment and readable storage medium
US20200402076A1 (en) Data processing method and apparatus, and storage medium
CN115103157A (en) Video analysis method and device based on edge cloud cooperation, electronic equipment and medium
CN101674466B (en) Multi-information fusion intelligent video monitoring fore-end system
CN111416960A (en) Video monitoring system based on cloud service
CN110543584A (en) Method, device, processing server and storage medium for establishing face index
CN112419639A (en) Video information acquisition method and device
CN114139016A (en) Data processing method and system for a smart residential community
CN113473166A (en) Data storage system and method
US20170046343A1 (en) System and method for removing contextually identical multimedia content elements
CN109120896B (en) Security video monitoring guard system
US9275140B2 (en) Method of optimizing the search for a scene on the basis of a stream of images archived in a video database
CN113099283B (en) Method for synchronizing monitoring picture and sound and related equipment
CN114996080A (en) Data processing method, device, equipment and storage medium
KR20200007563A (en) Machine Learning Data Set Preprocessing Method for Energy Consumption Analysis
CN116189706A (en) Data transmission method, device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant