CN108769604B - Monitoring video processing method and device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN108769604B
Authority
CN
China
Prior art keywords: video, time, monitoring, video frame, dynamic
Prior art date
Legal status
Active
Application number
CN201810609711.7A
Other languages
Chinese (zh)
Other versions
CN108769604A
Inventor
刘备
凌志文
赵子钧
廖子明
Current Assignee
Shenzhen Lutuo Technology Co Ltd
Original Assignee
Shenzhen Lutuo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Lutuo Technology Co Ltd filed Critical Shenzhen Lutuo Technology Co Ltd
Priority to CN201810609711.7A
Publication of CN108769604A
Application granted
Publication of CN108769604B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N 21/4325 Content retrieval operation from a local storage medium, e.g. hard-disk, by playing back content from the storage medium
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334 Recording operations
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of the invention disclose a surveillance video processing method and device, a terminal device, and a storage medium, relating to the technical field of monitoring. The surveillance video processing method comprises the following steps: when a surveillance video acquisition operation of a user is acquired, sending a surveillance video acquisition request to a server; receiving a surveillance video file returned by the server based on the surveillance video acquisition request, wherein the surveillance video file comprises dynamic video frames and their time indices; and marking the time regions in which dynamic video frames exist according to the time indices of the dynamic video frames. The method enables the user to quickly locate and view the dynamic video frames in the surveillance video.

Description

Monitoring video processing method and device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of monitoring technologies, and in particular, to a method and an apparatus for processing a monitoring video, a terminal device, and a storage medium.
Background
With the advance of science and the economy, monitoring technology has spread from a few important sites to almost every corner of daily life. In particular, it has been widely adopted as an important part of the rapidly emerging smart home field.
In the smart home field, the entire monitoring process is generally recorded by a camera and the recorded surveillance video is then stored. When the surveillance video needs to be reviewed, the relevant personnel can only watch the whole video and cannot skip the static pictures in it to quickly reach the dynamic pictures, which seriously reduces review efficiency.
Disclosure of Invention
In view of the above problems, the present invention provides a method and an apparatus for processing a surveillance video, a terminal device, and a storage medium, so as to facilitate quick review of a dynamic video by a user.
In a first aspect, an embodiment of the present invention provides a method for processing a surveillance video, where the method includes: when the monitoring video acquisition operation of a user is acquired, sending a monitoring video acquisition request to a server; receiving a surveillance video file returned by the server based on the surveillance video acquisition request, wherein the surveillance video file comprises: dynamic video frames and their time indices; and marking the time region with the dynamic video frame according to the time index of the dynamic video frame.
In a second aspect, an embodiment of the present invention provides a device for processing a surveillance video, where the device includes: the system comprises a request sending module, a file receiving module and a video labeling module, wherein the request sending module is used for sending a monitoring video acquisition request to a server when the monitoring video acquisition operation of a user is acquired; the file receiving module is configured to receive a surveillance video file returned by the server based on the surveillance video acquisition request, where the surveillance video file includes: dynamic video frames and their time indices; and the video labeling module is used for labeling the time region with the dynamic video frame according to the time index of the dynamic video frame.
In a third aspect, an embodiment of the present invention provides a terminal device, including a memory and a processor, where the memory is coupled to the processor, and the memory stores instructions, and when the instructions are executed by the processor, the processor executes the method for processing a surveillance video provided in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium having a program code executable by a processor, where the program code causes the processor to execute the method for processing a surveillance video provided in the first aspect.
Compared with the prior art, the method, the device, the terminal device and the storage medium for processing the surveillance video provided by the first aspect of the invention send the surveillance video acquisition request to the server according to the surveillance video acquisition operation of the user, then receive the surveillance video file returned by the server based on the surveillance video acquisition request, and finally mark the time region of the dynamic video frame in the surveillance video file according to the time index of the dynamic video frame in the surveillance video file. Therefore, the processing method of the monitoring video marks the time region corresponding to the dynamic video frame during display according to the dynamic video frame and the time index thereof acquired from the server, so that when a user views the displayed monitoring video, the user can quickly search and view the dynamic video according to the marked time region, and the viewing efficiency of the user on the monitoring video file is improved.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a block diagram showing a structure of an interactive system provided in a first embodiment of the present invention;
fig. 2 is a flowchart illustrating a processing method of monitoring video according to a first embodiment of the present invention;
fig. 3 is a flowchart illustrating a processing method of monitoring video according to a second embodiment of the present invention;
Fig. 4 is a schematic diagram of a first interface provided by an embodiment of the invention;
Fig. 5 is a schematic diagram of a second interface provided by an embodiment of the invention;
Fig. 6 is a schematic diagram of a third interface provided by an embodiment of the invention;
fig. 7 is a flowchart illustrating a method for processing a surveillance video according to a third embodiment of the present invention;
Fig. 8 is a schematic diagram of a fourth interface provided by an embodiment of the invention;
Fig. 9 is a schematic diagram of a fifth interface provided by an embodiment of the invention;
fig. 10 is a block diagram showing a configuration of a surveillance video processing apparatus according to a fourth embodiment of the present invention;
fig. 11 is a block diagram showing another configuration of a surveillance video processing apparatus according to a fourth embodiment of the present invention;
fig. 12 is a block diagram illustrating a terminal device for executing a surveillance video processing method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As monitoring technology has been applied to the smart home field, the smart camera, as an important part of the smart home system, has moved ever closer to people's daily lives. Its applications have also become increasingly diverse, covering, for example, security, elderly care, and pet monitoring. With the adoption of high-pixel image sensors, camera images have become clearer and video files have become larger.
However, in current smart homes, when monitoring is implemented with a smart camera, the whole monitoring process is usually shot and recorded, and the interior of a room is still most of the time, so the recorded surveillance video often contains a large number of still pictures. As a result, a user viewing the recorded video file spends a lot of time watching still pictures that usually carry no valuable information, which wastes a great deal of the user's time.
There are two conventional ways to deal with this situation. The first is to shorten the viewing time by fast-forward playback, which requires the user to stay highly attentive during fast-forwarding, so part of the useful information is sometimes missed through a momentary lapse of attention. The second is for the user to skip parts of the footage by manually dragging the playback progress bar, which shortens the viewing time but is even more likely to miss the required information.
Therefore, having identified these technical problems in smart home monitoring, the inventors propose the surveillance video processing method and apparatus, terminal device, and storage medium of the embodiments of the present invention.
First, an interactive system corresponding to a terminal device executing the method for processing a surveillance video according to the embodiment of the present application is introduced below.
Referring to fig. 1, the interactive system 10 may include: terminal device 100, server 200 and smart home devices 300. The terminal device 100 and the smart home device 300 perform data communication or interaction with the server 200. The server 200 may be a web server, a database server, a cloud server, or the like. The terminal device 100 may be a Personal Computer (PC), a tablet PC, a smart phone, a Personal Digital Assistant (PDA), or the like.
The smart home device 300 may include a camera, a gateway, and a plurality of sensors. The gateway may be built into the camera. The sensors may be ZigBee sensors. Through its ZigBee module, Bluetooth module, and the like, the gateway in the camera can process messages from the various ZigBee/Bluetooth sensors, convert the received ZigBee signals into Wi-Fi signals, and communicate with the cloud server over Wi-Fi. With the gateway built in, the camera can serve as the central hub of network information processing for the smart home device 300.
Further, the plurality of sensors may include a human body sensor, a door and window sensor, a temperature and humidity sensor, a water immersion sensor, a natural gas alarm, a smoke alarm, and the like. The human body sensor detects the presence of a person by infrared sensing; the door and window sensor detects door opening and closing events; the temperature and humidity sensor detects indoor temperature and humidity; the water immersion sensor detects the presence of water through its electrodes to determine whether water is leaking indoors; the smoke and natural gas sensors detect natural gas leaks, fire hazards, and the like. Of course, the specific types and number of sensors are not limited in the embodiments of the present application; other types, such as a light intensity sensor, may also be included.
In this embodiment, the smart home device 300 may send the acquired data, for example video captured by the camera or data collected by the sensors, to the server 200, and the terminal device 100 may obtain from the server 200 the data uploaded by the smart home device 300.
In addition, the user may configure different trigger scenarios or automations through the APP on the terminal device 100; when a trigger scenario or the execution condition of an automation is met, the camera starts recording and sends the recording file to the server 200.
For example, when the door and window sensor detects that the door is opened, it triggers the camera to shoot for a certain period of time, and the surveillance video shot during that period is uploaded to the server 200. The door and window sensor transmits the detected information to the ZigBee module of the camera through its own ZigBee module, the Wi-Fi module of the camera transmits the information to the server 200 over Wi-Fi, and after receiving the information the server 200 pushes an application message to the terminal device 100. At the same time, the camera records the time at which the door opening event occurred in a file.
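The event-triggered recording flow just described can be illustrated with a short sketch. This is a minimal illustration under stated assumptions: the handler name, the fixed 30-second recording window, and the upload interface are hypothetical and are not specified by the patent.

```python
import time

RECORD_SECONDS = 30  # assumed recording window after a trigger event


class FakeCamera:
    """Stand-in for the camera with the built-in gateway."""

    def record(self, duration):
        # A real camera would capture `duration` seconds of video here.
        return b"video-bytes"


class FakeServer:
    """Stand-in for the cloud server 200."""

    def upload(self, clip, metadata):
        print("uploaded %d bytes: %s at %.0f" % (len(clip), metadata["event"], metadata["time"]))


def on_sensor_event(event_type, camera, server):
    """Hypothetical handler run when a ZigBee sensor message (e.g. a
    door-open event) reaches the gateway inside the camera."""
    event_time = time.time()                      # the camera also logs the event time
    clip = camera.record(duration=RECORD_SECONDS)
    server.upload(clip, metadata={"event": event_type, "time": event_time})


on_sensor_event("door_opened", FakeCamera(), FakeServer())
```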
According to the surveillance video processing method and apparatus, terminal device, and storage medium provided by the embodiments of the present application, the dynamic video frames of the surveillance video and their time indexes are obtained from the server, and the regions containing dynamic video frames are then marked according to the time indexes, so that a user can quickly review the dynamic video. Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
First embodiment
Referring to fig. 2, a first embodiment of the invention provides a method for processing a surveillance video. According to the processing method of the monitoring video, the dynamic video frame and the time index thereof are obtained from the server, and then the time region with the dynamic video frame is labeled according to the time index, so that a user can quickly look up the dynamic video according to the labeling of the time region. The processing method of the monitoring video specifically comprises the following steps:
step S110: and when the monitoring video acquisition operation of the user is acquired, sending a monitoring video acquisition request to the server.
When the user is not in the room where the smart home device is installed and needs to check the surveillance video shot by the camera, the surveillance video shot by the camera can be obtained from the server using the terminal device. Specifically, when the user needs to view the surveillance video, a surveillance video acquisition operation can be performed on the terminal device to obtain it.
The terminal device can detect the user's operations; when it detects that the user has performed a surveillance video acquisition operation, it generates a surveillance video acquisition request and sends the surveillance video acquisition request to the server.
The monitoring video acquiring request can carry identification information, so that the server can search the monitoring video corresponding to the identification information according to the monitoring video acquiring request, namely the monitoring video shot by the camera of the intelligent home device corresponding to the identification information. The identification information may be address information of the smart home device, such as a physical address and an IP address of the smart home device, or may be a device number of the smart home device.
In the embodiment of the application, the terminal device can be bound with the intelligent home device, so that the monitoring video acquisition request sent to the server each time can carry identification information of the intelligent home device.
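As a rough illustration of the request described above, the sketch below assembles an acquisition request carrying the bound device's identification information. The field names and the use of JSON are assumptions made for illustration; the patent does not define a wire format.

```python
import json
from typing import Optional


def build_acquisition_request(device_id: str, date: Optional[str] = None) -> str:
    """Build a (hypothetical) surveillance video acquisition request.

    device_id: identification information of the bound smart home device,
               e.g. its device number, physical address, or IP address.
    date:      optional date chosen in the calendar (see the second embodiment).
    """
    payload = {"type": "get_surveillance_video", "device_id": device_id}
    if date is not None:
        payload["date"] = date
    return json.dumps(payload)


print(build_acquisition_request("cam-00:1A:2B:3C:4D:5E", date="2018-06-14"))
```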
Step S120: and receiving a monitoring video file returned by the server based on the monitoring video acquisition request.
In an embodiment of the present application, the monitoring video file includes: dynamic video frames and their time indices.
And after receiving the monitoring video acquisition request, the server returns the monitoring video to the terminal equipment. In the embodiment of the application, the server stores the monitoring video uploaded by the intelligent household equipment. The monitoring video uploaded by the intelligent household equipment can be the monitoring video shot by a camera.
Furthermore, while shooting video, the camera can decide whether to save the video file according to a motion detection algorithm. Specifically, when a motion detection event is detected, the dynamic video frames corresponding to the motion detection event and their time index are stored. In this way the camera does not store still pictures in which no motion detection event occurs, which reduces the size of the stored surveillance video and saves the camera's storage space. The camera sends the surveillance video to the server at a certain frequency, so in this case the surveillance video stored by the camera, and uploaded to the server, comprises only the dynamic video frames and their time index.
The motion detection algorithm may work as follows: extract the characteristic values of a target frame and of the current frame, obtain the difference between these two characteristic values, and consider a motion detection event to be triggered when the difference exceeds a threshold preset by the motion detection algorithm.
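A minimal sketch of this frame-difference check, assuming the characteristic value is simply the mean pixel intensity of a grayscale frame and the threshold is an arbitrary example value; the actual feature extraction used by the camera is not specified.

```python
from typing import Sequence

MOTION_THRESHOLD = 12.0  # assumed threshold; the real value is camera-specific


def feature_value(frame: Sequence[int]) -> float:
    """Characteristic value of a frame; here, its mean pixel intensity."""
    return sum(frame) / len(frame)


def motion_detected(target_frame: Sequence[int], current_frame: Sequence[int]) -> bool:
    """A motion detection event is triggered when the difference between the
    characteristic values of the target frame and the current frame exceeds
    the preset threshold."""
    return abs(feature_value(target_frame) - feature_value(current_frame)) > MOTION_THRESHOLD


# Example: a mostly dark reference frame vs. a frame with a bright region.
reference = [10] * 64
current = [10] * 48 + [200] * 16
print(motion_detected(reference, current))  # True
```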
Of course, the camera can also save and upload the monitoring video of the whole time period. However, in the monitoring process, the motion frame detection is also performed according to the motion detection algorithm, and the time index of the dynamic video frame corresponding to the motion detection event is separately stored. Therefore, the monitoring video uploaded to the server is a full-time shooting video (including a dynamic video frame when a dynamic detection event occurs), and the file uploaded to the server also includes a time index of the dynamic video frame. The time index of the dynamic video frame may be a shooting time corresponding to each frame of the dynamic video.
Step S130: and marking the time region with the dynamic video frame according to the time index of the dynamic video frame.
In one embodiment, after obtaining from the server the surveillance video containing the dynamic video frames and their time indexes, the terminal device marks, according to those time indexes, the time regions in which dynamic video frames exist on the time progress bar of the displayed viewing interface of the surveillance video. A progress bar displays, in graphical form and in real time, the speed, degree of completion, amount of remaining work, and possibly the processing time still needed as a computer handles a task; the time progress bar in the embodiments of the present application graphically displays the playback time of the video file being played.
It can be understood that, when the monitoring video uploaded to the server by the camera only includes the dynamic video frames and the time indexes thereof, the time progress bar only includes the marked time regions in which the dynamic video frames exist, and the other time regions in the time progress bar do not have video frames.
When the monitoring video uploaded to the server by the camera is the monitoring video of the whole time period, the time region with the dynamic video frame is marked in the time progress bar, and the other regions except the marked time region with the dynamic video frame in the time progress bar are the time regions corresponding to the static pictures.
Therefore, after the time regions containing dynamic video frames are marked on the progress bar, the user can learn from the marks on the time progress bar when the motion detection events occurred, and can select a marked time region to view the dynamic video frames corresponding to a motion detection event.
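The annotation step can be sketched as follows: the per-frame time indexes (shooting timestamps) are merged into contiguous time regions, and those regions are what gets highlighted on the time progress bar. The gap tolerance and the (start, end) representation are illustrative assumptions.

```python
from typing import List, Tuple

MAX_GAP_SECONDS = 2.0  # assumed: frames closer together than this share a region


def merge_time_index(frame_times: List[float]) -> List[Tuple[float, float]]:
    """Collapse per-frame shooting times into (start, end) regions that can
    be marked on the time progress bar."""
    regions: List[Tuple[float, float]] = []
    for t in sorted(frame_times):
        if regions and t - regions[-1][1] <= MAX_GAP_SECONDS:
            regions[-1] = (regions[-1][0], t)   # extend the current region
        else:
            regions.append((t, t))              # start a new region
    return regions


# Frames recorded around 09:00:00 and 09:05:00 (seconds since midnight).
times = [32400.0, 32400.5, 32401.0, 32700.0, 32700.5]
print(merge_time_index(times))  # [(32400.0, 32401.0), (32700.0, 32700.5)]
```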
In another embodiment, in addition to marking the time regions containing dynamic video frames according to their time indexes, the dates on which dynamic video frames exist may be marked in a calendar according to the time indexes of the dynamic video.
Furthermore, in a calendar displayed on the screen of the terminal device, the shooting time of each dynamic video frame can be obtained from its time index, and the dates on which dynamic video exists are then marked according to those shooting times. Specifically, such a date may be color-marked, for example by placing a colored dot below it. In this way, when the user browses the dates of the surveillance video, the user can quickly see on which dates dynamic video exists and view the dynamic video frames of the date of interest, which saves the user's time and improves the user experience.
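Deriving the dates to be dotted in the calendar from the per-frame time indexes could look like the sketch below; treating each time index as a Unix timestamp is an assumption.

```python
from datetime import datetime
from typing import Iterable, Set


def dates_with_dynamic_video(frame_times: Iterable[float]) -> Set[str]:
    """Return the calendar dates (YYYY-MM-DD) on which at least one dynamic
    video frame was shot, so those dates can be color-marked (dotted) in
    the displayed calendar."""
    return {datetime.fromtimestamp(t).strftime("%Y-%m-%d") for t in frame_times}


# Two frames on one day and one frame on the following day.
print(sorted(dates_with_dynamic_video([1528934400.0, 1528938000.0, 1529020800.0])))
```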
In the method for processing a surveillance video according to the first embodiment of the present invention, after a surveillance video including a dynamic video frame and a time index thereof is obtained from a server, a time region where the dynamic video exists is marked according to the time index. The user can quickly know the time region of the dynamic video frame, namely the time of the motion detection event, according to the label, so that the user can conveniently select the labeled time region to check the dynamic video frame, the time for checking the static picture in the monitoring video by the user is saved, and the checking efficiency of the monitoring video is improved.
Second embodiment
Fig. 3 is a schematic flow chart illustrating a method for processing a surveillance video according to a second embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 3, the processing method of the surveillance video may specifically include the following steps:
step S210: and sending a video recording log acquisition request to the server.
In the embodiment of the application, when a user needs to check a monitoring video of a certain day, an operation of obtaining a video log may be performed in the terminal device, for example, a control corresponding to the video log in the display interface is clicked. After detecting the operation of acquiring the video log, the terminal equipment responds to the operation of acquiring the video log to generate a video log acquisition request and send the video log acquisition request to the server.
The video log obtaining request may also include identification information of the smart home device, so that the server may subsequently return a video log corresponding to the identification information, that is, a video log corresponding to a camera of the smart home device corresponding to the identification information.
Step S220: and receiving video recording log information returned by the server based on the video recording log acquisition request.
And after receiving the video recording log acquisition request, the server returns video recording log information to the terminal equipment according to the video recording log acquisition request. The video recording log information returned by the server may include the date of the existence of the monitoring video. Of course, the specific content of the video recording log information is not limited in the embodiment of the present application.
And the terminal equipment receives the video recording log information returned by the server so as to obtain the video recording log information.
Step S230: and based on the video recording log information, marking the date with the monitoring video in a calendar so that a user can determine the date needing to view the monitoring video based on the mark in the calendar.
And after the terminal equipment acquires the video recording log information, displaying the calendar according to the video recording log information. Specifically, the dates of the monitoring videos in the displayed calendar interface are labeled.
Further, the marking of the date of the existence of the monitoring video in the calendar may include: color marking the date of the existence of the monitoring video; the color-marked area is set as a selectable area.
It can be understood that, in the calendar after the labeling, the date of the existence of the monitoring video and the date of the nonexistence of the monitoring video have different color marks, so that the user can quickly know the date of the existence of the monitoring video according to the calendar after the labeling. In addition, the marked area is set as a selectable area, so that a user can select the date with the monitoring video to acquire the monitoring video corresponding to the date.
In the embodiment of the present application, the date on which the monitoring video exists may be marked by dotting below the date on which the monitoring video exists, and as shown in fig. 4, there are marked points with colors below the dates 2, 4, 6, 9, 10, 12, 17, 18, 20, 23, 24, and 26 on which the video files exist. Of course, the specific color marking mode is not limited in the embodiment of the present application, and may be other modes, for example, the date box may be selected and then color filled.
In addition, while the date with the monitoring video is set as the selectable area, the date without the monitoring video can also be set as the non-selectable area, so that the user can only select the date with the monitoring video in the calendar to acquire the monitoring video.
Step S240: when a selection operation of a user on the marked date is acquired, a monitoring video acquisition request is generated based on the selection operation, wherein the monitoring video acquisition request comprises the date corresponding to the selection operation.
After the date with the monitoring video in the calendar is labeled, the selection operation of the user in the calendar is detected, and when the selection operation of the user on the labeled date is detected, namely the clicking operation of the user on the labeled date is detected, the selection operation of the date is responded, and the monitoring video acquisition request is generated.
In the embodiment of the application, the generated monitoring video acquisition request carries identification information of the intelligent home device, and also includes a date corresponding to the selection operation, that is, a date selected by the user, so that the monitoring video of the date selected by the user can be acquired according to the monitoring video acquisition request.
Step S250: and sending the monitoring video acquisition request to the server.
After the monitoring video acquisition request is generated, the monitoring video acquisition request is sent to a server so as to acquire the monitoring video on the date selected by the user.
Step S260: and receiving a monitoring video file returned by the server based on the monitoring video acquisition request.
The surveillance video file includes: dynamic video frames and their time indices.
Step S270: and in the displayed time progress bar, setting a time area with the dynamic video frame as a first selectable area according to the time index corresponding to the dynamic video frame.
In the embodiment of the present application, when the time regions containing dynamic video frames are marked on the displayed progress bar according to the time indexes of the dynamic video frames, a time region containing dynamic video frames can first be set as a first selectable region on the progress bar of the displayed viewing interface, so that the user can select a time region containing dynamic video to view the dynamic video frames within the selected region.
It can be understood that, when only a dynamic video frame and a time index thereof exist in a received monitoring video returned by the server, the first selectable regions existing in the time progress bar are all time regions corresponding to the dynamic video frame. When the received monitoring video returned by the server is the monitoring video of the whole period, the time area corresponding to the dynamic video frame may be set as the first selectable area, and the time area corresponding to the still picture may also be set as the selectable area.
Step S280: and displaying the first selectable area and other areas except the first selectable area in the timeline in different colors.
In the time progress bar, when a time region in which a dynamic video frame exists is marked, in addition to setting the time region corresponding to the dynamic video frame as a first selectable region, the first selectable region and other regions can be displayed in different colors in the progress bar, and the other regions are regions other than the first selectable region in the time progress bar.
As shown in fig. 5, when only a dynamic video frame and a time index thereof exist in a surveillance video, in the time progress bar 502, a time region corresponding to the dynamic video frame is a first selectable region 503, a region other than the first selectable region in the time progress bar is another region 504, and the first selectable region 503 and the another region 504 are displayed in different colors.
As shown in fig. 6, when the monitored video is a full-time monitored video, the time region corresponding to the dynamic video frame in the time progress bar 502 is a first selectable region 503, the other regions 504 except the first selectable region in the time progress bar are time regions corresponding to the still pictures, and the first selectable region 503 and the other regions 504 are displayed in different colors.
Therefore, in the time progress bar, the time zone corresponding to the dynamic video frame has different colors from other zones, so that a user can quickly know the time zone with the dynamic video frame, namely the time of motion detection.
In this embodiment of the present application, the method for processing a surveillance video may further include:
step S290: and displaying the monitoring video.
As one way, displaying the monitoring video may include: acquiring a video frame selection operation made by a user according to a label in the time progress bar; and displaying the video frame corresponding to the video frame selection operation.
It can be understood that, after the time region corresponding to the dynamic video in the progress bar is labeled, the selection operation of the user on the labeled time region may be detected, and when the video frame selection operation made by the user is detected, the video frame corresponding to the time region selected by the user is played and displayed.
The video frame selection operation may be an operation of moving the current playing time to a time region corresponding to the dynamic video frame in the progress bar by the user, and the dynamic video frame at the time position is played and displayed according to the time position to which the current playing time is moved. The video frame selection operation may also be an operation in which the user directly clicks a certain time region corresponding to the selected dynamic video frame, and the dynamic video frame in the time region is played and displayed according to the time region clicked by the user.
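Resolving the position the user dragged or clicked to into the labeled region that contains it amounts to a simple interval lookup; the (start, end) region representation follows the earlier merging sketch and is an assumption.

```python
from typing import List, Optional, Tuple

Region = Tuple[float, float]


def region_for_position(position: float, regions: List[Region]) -> Optional[Region]:
    """Return the labeled (selectable) time region containing the position the
    user dragged or clicked to, or None if the position falls outside every
    labeled region (e.g. in a still-picture area)."""
    for start, end in regions:
        if start <= position <= end:
            return (start, end)
    return None


dynamic_regions = [(32400.0, 32401.0), (32700.0, 32700.5)]
print(region_for_position(32400.4, dynamic_regions))  # (32400.0, 32401.0)
print(region_for_position(32550.0, dynamic_regions))  # None
```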
Of course, when the surveillance video is a full-time surveillance video, the user may select other time regions other than the labeled time region to obtain event occurrence information related to the motion detection event by combining the video frames before and after the motion detection event.
As another mode, displaying the monitoring video may include: and displaying all marked video frames in the time progress bar based on the time index.
It will be appreciated that only the annotated video frames may be displayed for playback. When the video frame is played and displayed, all the labeled video frames can be displayed based on the time index, so that a user only views the labeled video frames without spending time to view other video frames, and the time of the user is saved.
The marked video frames are displayed based on the time index, and the time index of each frame of dynamic video can be understood to include the shooting time of each frame of dynamic video, and each frame of dynamic video is displayed according to the precedence relationship of the shooting time of each frame of dynamic video, so that all the marked video frames are played with a video effect.
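Displaying all labeled frames based on the time index reduces to sorting them by shooting time and emitting them in order; a minimal sketch, assuming each frame is paired with its timestamp.

```python
from typing import Iterable, Iterator, Tuple

Frame = bytes  # stand-in for decoded frame data


def play_annotated_frames(indexed_frames: Iterable[Tuple[float, Frame]]) -> Iterator[Frame]:
    """Yield the labeled dynamic video frames in shooting-time order, so that
    showing them one after another plays back as continuous video."""
    for _shoot_time, frame in sorted(indexed_frames, key=lambda item: item[0]):
        yield frame


frames = [(32401.0, b"frame-3"), (32400.0, b"frame-1"), (32400.5, b"frame-2")]
print(list(play_annotated_frames(frames)))  # [b'frame-1', b'frame-2', b'frame-3']
```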
In the method for processing a surveillance video provided in the second embodiment of the present application, a video log is obtained from the server and the dates on which surveillance video exists are marked in a calendar, so that the user can select the date to be viewed according to the marks; a surveillance video acquisition request is then sent to the server according to the selected date and the surveillance video is obtained; in the time progress bar, the time regions corresponding to dynamic video frames are set as selectable regions and distinguished from the other regions by color, which provides an additional way of displaying the surveillance video. This processing method effectively saves the user's viewing time and improves the user's viewing efficiency for the surveillance video.
Third embodiment
Referring to fig. 7, fig. 7 is a flowchart illustrating a processing method of a surveillance video according to a third embodiment of the present application. As will be described in detail with respect to the flow shown in fig. 7, the processing method of the surveillance video may specifically include the following steps:
step S310: and when the monitoring video acquisition operation of the user is acquired, sending a monitoring video acquisition request to the server.
Step S320: and receiving a monitoring video file returned by the server based on the monitoring video acquisition request.
In this embodiment of the present application, the monitoring video file may further include, in addition to the dynamic video frame and the time index thereof: video frames corresponding to the abnormal events and time indexes thereof.
In this application embodiment, the smart home devices may detect abnormal events through various sensors, for example, a door and window sensor detects that a door is opened or closed, a temperature and humidity sensor detects that temperature and/or humidity are abnormal, and a smoke sensor detects a fire hazard. When an abnormal event is detected, the camera can store the monitoring video and the occurrence time of the abnormal event, namely, the video frame and the time index corresponding to the abnormal event are stored, and the video frame and the time index corresponding to the abnormal event are uploaded to the server.
Of course, if the camera is shooting at the full time, that is, the stored monitoring video is the monitoring video at the full time, the time index of the video frame corresponding to the abnormal event may be stored, and the time index of the video frame corresponding to the abnormal event may be sent to the server.
Therefore, the terminal device can acquire the monitoring video from the server, wherein the monitoring video comprises the video frame corresponding to the abnormal event and the time index thereof in addition to the dynamic video frame and the time index thereof.
Step S330: and marking the time region with the dynamic video frame in the displayed time progress bar according to the time index of the dynamic video frame.
Step S340: and marking the time region of the abnormal event in the displayed time progress bar according to the time index of the video frame corresponding to the abnormal event.
In the embodiment of the present application, step S340 may include:
in the displayed time progress bar, setting a time area where the abnormal event exists as a second selectable area according to the time index of the video frame corresponding to the abnormal event; and displaying the second selectable area and other areas except the second selectable area in the time progress bar in different colors.
The terminal device can label the time region corresponding to the dynamic video frame in the time progress bar, and label the time region of the video frame corresponding to the abnormal event in the time progress bar according to the time index of the video frame corresponding to the abnormal event.
It can be understood that, when the monitoring video uploaded to the server by the camera only includes a dynamic video frame and a time index thereof, and a video frame corresponding to an abnormal event and a time index thereof, the time progress bar includes, in addition to the time region marked with the dynamic video frame, a time region marked with the video frame corresponding to the abnormal event, and the other time regions in the time progress bar do not have the video frame.
When the monitoring video uploaded to the server by the camera is the monitoring video of the whole time period, the time region with the dynamic video frame is marked in the time progress bar, the time region with the video frame corresponding to the abnormal event is also marked, and other regions except the marked time region with the dynamic video frame and the marked time region with the video frame corresponding to the abnormal event in the time progress bar are the time regions corresponding to the static pictures.
Specifically, when the time region of the video frame corresponding to the abnormal event is labeled, the time region is set as a second selectable region, and the second selectable region is distinguished from other regions in different colors, so that a user can quickly know the time region where the abnormal event occurs, and can select the time region corresponding to the abnormal event so as to view the video frame corresponding to the abnormal event.
As shown in fig. 8, when the surveillance video contains only the dynamic video frames and their time indexes together with the video frames corresponding to abnormal events and their time indexes, in the time progress bar 502 the time region corresponding to the dynamic video frames is the first selectable region 503, the time region corresponding to the abnormal event is the second selectable region 505, the regions outside the two selectable regions are the other regions 504, and the first selectable region 503, the second selectable region 505, and the other regions 504 are all displayed in different colors.
As shown in fig. 9, when the surveillance video is a full-time surveillance video, in the time progress bar 502 the time region corresponding to the dynamic video frames is the first selectable region 503, the time region corresponding to the video frames of the abnormal event is the second selectable region 505, the other regions 504 outside the two selectable regions are the time regions corresponding to still pictures, and the first selectable region 503, the second selectable region 505, and the other regions 504 are all displayed in different colors.
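Combining the two kinds of labels on one progress bar can be sketched as below: each labeled region carries a category that the interface maps to a color, and everything else is rendered as "other". The color names are illustrative, non-overlapping regions are assumed, and the patent only requires that the three kinds of region be shown in different colors.

```python
from typing import Dict, List, Tuple

Region = Tuple[float, float]

COLORS: Dict[str, str] = {"dynamic": "blue", "abnormal": "red", "other": "gray"}


def annotate_timeline(duration: float,
                      dynamic: List[Region],
                      abnormal: List[Region]) -> List[Tuple[Region, str]]:
    """Return (region, color) pairs covering the whole time progress bar:
    first selectable regions (dynamic frames), second selectable regions
    (abnormal events), and the remaining unselectable areas."""
    labeled = sorted([(r, COLORS["dynamic"]) for r in dynamic] +
                     [(r, COLORS["abnormal"]) for r in abnormal])
    timeline: List[Tuple[Region, str]] = []
    cursor = 0.0
    for (start, end), color in labeled:
        if start > cursor:
            timeline.append(((cursor, start), COLORS["other"]))
        timeline.append(((start, end), color))
        cursor = end
    if cursor < duration:
        timeline.append(((cursor, duration), COLORS["other"]))
    return timeline


print(annotate_timeline(100.0, dynamic=[(10.0, 20.0)], abnormal=[(50.0, 55.0)]))
```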
In the method for processing a surveillance video according to the third embodiment of the present invention, after the time region of the progress bar in which the dynamic video frame exists and the time region of the video frame corresponding to the abnormal event exist are labeled, the user can know the time of the occurrence of the motion detection event and the abnormal event according to the label in the time progress bar, and can select the labeled time region to view the dynamic video frame corresponding to the motion detection event and/or the abnormal event.
Fourth embodiment
A fourth embodiment of the present invention provides a processing apparatus 400 for monitoring video, please refer to fig. 10, wherein the processing apparatus 400 for monitoring video includes: a request sending module 410, a file receiving module 420, and a video annotation module 430. The request sending module 410 is configured to send a surveillance video obtaining request to a server when obtaining a surveillance video obtaining operation of a user; the file receiving module 420 is configured to receive a surveillance video file returned by the server based on the surveillance video obtaining request, where the surveillance video file includes: dynamic video frames and their time indices; the video annotation module 430 is configured to annotate a time region in which the dynamic video frame exists according to the time index of the dynamic video frame.
In the embodiment of the present application, please refer to fig. 11, the video annotation module 430 may include: a first region setting unit 431 and a first color marking unit 432. The first area setting unit 431 is configured to set, in a displayed timeline, a time area where the dynamic video frame exists as a first selectable area according to a time index corresponding to the dynamic video frame; the first color marking unit 432 is configured to show the first selectable region and other regions of the timeline except the first selectable region in different colors.
In the embodiment of the present application, please refer to fig. 11, the processing apparatus 400 for monitoring video may further include: an exception annotation module 440. The exception annotation module 440 is configured to: and marking the time region of the abnormal event in the displayed time progress bar according to the time index of the video frame corresponding to the abnormal event.
Further, referring to fig. 11, the exception labeling module 440 includes: a second region setting unit 441 and a second color marking unit 442. The second area setting unit 441 is configured to set, in the displayed timeline, a time area where the abnormal event exists as a second selectable area according to the time index of the video frame corresponding to the abnormal event; the second color marking unit 442 is configured to display the second selectable region and other regions of the timeline except the second selectable region in different colors.
In this embodiment of the application, please refer to fig. 11, the processing apparatus for monitoring video may further include: a log request module 450, a log receiving module 460, and a log annotation module 470. The log request module is used for sending a video log acquisition request to the server; the log receiving module is used for receiving video log information returned by the server based on the video log obtaining request; and the log marking module is used for marking the date with the monitoring video in a calendar based on the video recording log information so that a user can determine the date needing to view the monitoring video based on the mark in the calendar.
Further, the request sending module 410 is specifically configured to: when a selection operation of a user on the marked date is acquired, a monitoring video acquisition request is generated based on the selection operation, wherein the monitoring video acquisition request comprises the date corresponding to the selection operation; and sending the monitoring video acquisition request to the server.
In this embodiment of the application, please refer to fig. 11, the processing apparatus for monitoring video may further include: a video presentation module 480. The video presentation module 480 may be configured to: acquiring a video frame selection operation made by a user according to the label in the time progress bar; and displaying the video frame corresponding to the video frame selection operation.
In the embodiment of the present application, the video presentation module 480 may also be configured to: and displaying all marked video frames in the time progress bar based on the time index.
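A structural sketch of the processing apparatus 400, with one small class per core module named above; the method bodies are placeholders and the signatures are assumptions, not the patent's definitions.

```python
from typing import Dict, List, Optional


class RequestSendingModule:
    def send(self, device_id: str, date: Optional[str] = None) -> None:
        print(f"sending surveillance video acquisition request for {device_id}, date={date}")


class FileReceivingModule:
    def receive(self) -> Dict[str, List[float]]:
        # Returns the surveillance video file: dynamic frames and their time indexes.
        return {"time_index": [32400.0, 32400.5, 32401.0]}


class VideoAnnotationModule:
    def annotate(self, time_index: List[float]) -> None:
        print(f"marking {len(time_index)} frame times on the time progress bar")


class SurveillanceVideoProcessingApparatus:
    """Mirrors the module layout of apparatus 400 (request sending, file
    receiving, video annotation); the exception, log, and presentation
    modules of Fig. 11 would be added in the same style."""

    def __init__(self) -> None:
        self.request_sending = RequestSendingModule()
        self.file_receiving = FileReceivingModule()
        self.video_annotation = VideoAnnotationModule()

    def handle_user_operation(self, device_id: str, date: Optional[str] = None) -> None:
        self.request_sending.send(device_id, date)
        video_file = self.file_receiving.receive()
        self.video_annotation.annotate(video_file["time_index"])


SurveillanceVideoProcessingApparatus().handle_user_operation("cam-001", "2018-06-14")
```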
To sum up, compared with the prior art, the method, the apparatus, the terminal device and the storage medium for processing a surveillance video according to the first aspect of the present invention send a surveillance video acquisition request to a server according to a surveillance video acquisition operation of a user, then receive a surveillance video file returned by the server based on the surveillance video acquisition request, and finally mark a time region in which a dynamic video frame in the surveillance video file exists according to a time index of the dynamic video frame in the surveillance video file. Therefore, according to the processing method of the monitoring video, the time region corresponding to the dynamic video frame is marked during displaying according to the dynamic video frame and the time index thereof obtained from the server, so that when a user views the displayed monitoring video, the user can quickly search the dynamic video and view the dynamic video according to the marked time region, and the viewing efficiency of the user on the monitoring video file is improved.
Referring to fig. 12, based on the foregoing surveillance video processing method and apparatus, an embodiment of the present invention further provides a terminal device 100. The terminal device 100 may generally include one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, and a power module 108. Those of ordinary skill in the art will understand that the structure shown in fig. 12 is merely illustrative and does not limit the structure of the terminal device 100. For example, the terminal device 100 may include more or fewer components than shown in fig. 12, or have a configuration different from that shown in fig. 12.
Those skilled in the art will appreciate that all the other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to these peripherals through a plurality of peripheral interfaces 110. The peripheral interface 110 may be implemented based on the following standards: Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present invention is not limited to these standards. In some examples, the peripheral interface 110 may include only a bus; in other examples, it may also include other elements, such as one or more controllers. These controllers may also be separate from the peripheral interface 110 and integrated within the processor 102 or within the corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal device 100 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF module 106 may communicate with various networks such as the Internet, an intranet, or a wireless network, or communicate with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data rates for GSM Evolution (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), any other suitable protocol for instant messaging, and even protocols that have not yet been developed.
The power module 108 is used to provide power supply to the processor 102 and other components. Specifically, the power module 108 may include a power management system, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator, and any other components associated with the generation, management, and distribution of power within the terminal device 100.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (mobile terminal) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one, or a combination, of the following techniques known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one of, or a combination of, the steps of the method embodiments. In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for processing surveillance video, the method comprising:
when a surveillance video acquisition operation of a user is acquired, sending a surveillance video acquisition request to a server;
receiving a surveillance video file returned by the server based on the surveillance video acquisition request, wherein the surveillance video file comprises: dynamic video frames and their time indices;
in a displayed time progress bar, setting a time area in which the dynamic video frame exists as a first selectable area according to the time index corresponding to the dynamic video frame, and setting a time area corresponding to a static picture in the surveillance video file as a non-selectable area, wherein the time progress bar displays the playing time of the surveillance video in graphical form;
displaying the first selectable area and areas of the time progress bar other than the first selectable area in different colors;
acquiring a video frame selection operation made by the user according to the mark in the time progress bar;
and determining the time position to which the current playing time moves according to the video frame selection operation, and playing and displaying the dynamic video frame at the time position.
2. The method of claim 1, wherein the surveillance video file further comprises a video frame corresponding to an abnormal event and a time index thereof, and the method further comprises:
and marking the time region of the abnormal event in the displayed time progress bar according to the time index of the video frame corresponding to the abnormal event.
3. The method according to claim 2, wherein the marking the time region of the abnormal event in the displayed time progress bar according to the time index of the video frame corresponding to the abnormal event comprises:
in the displayed time progress bar, setting a time area where the abnormal event exists as a second selectable area according to the time index of the video frame corresponding to the abnormal event;
and displaying the second selectable area and areas of the time progress bar other than the second selectable area in different colors.
4. The method of claim 1, further comprising:
and in a calendar, marking the dates on which the dynamic video frames exist according to the time indices of the dynamic video frames.
5. The method according to any one of claims 1 to 3, wherein before sending a surveillance video acquisition request to a server when acquiring a surveillance video acquisition operation of a user, the method further comprises:
sending a video log acquisition request to the server;
receiving video log information returned by the server based on the video log acquisition request;
and based on the video recording log information, marking, in a calendar, the dates on which surveillance video exists, so that the user can determine, based on the marks in the calendar, the date for which the surveillance video needs to be viewed.
6. The method according to claim 5, wherein the sending a surveillance video acquisition request to a server when acquiring a surveillance video acquisition operation of a user comprises:
generating, when a selection operation of the user on a marked date is acquired, a surveillance video acquisition request based on the selection operation, wherein the surveillance video acquisition request comprises the date corresponding to the selection operation;
and sending the surveillance video acquisition request to the server.
7. The method according to any one of claims 1-3, further comprising:
and displaying all marked video frames in the time progress bar based on the time index.
8. A surveillance video processing apparatus, comprising: a request sending module, a file receiving module, a video marking module and a video display module, wherein,
the request sending module is used for sending a surveillance video acquisition request to a server when a surveillance video acquisition operation of a user is acquired;
the file receiving module is configured to receive a surveillance video file returned by the server based on the surveillance video acquisition request, where the surveillance video file includes: dynamic video frames and their time indices;
the video annotation module comprises a first area setting unit and a first color marking unit, wherein the first area setting unit is used for, in a displayed time progress bar, setting a time area in which a dynamic video frame exists as a first selectable area according to the time index corresponding to the dynamic video frame, and setting a time area corresponding to a static picture in the surveillance video file as a non-selectable area; the first color marking unit is used for displaying the first selectable area and areas of the time progress bar other than the first selectable area in different colors; and the time progress bar is used for displaying the playing time of the surveillance video in graphical form;
the video display module is used for acquiring a video frame selection operation made by a user according to the mark in the time progress bar, determining a time position to which the current playing time moves according to the video frame selection operation, and playing and displaying the dynamic video frame at the time position.
9. A terminal device comprising a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform the method of any one of claims 1-7.
10. A computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the method of any one of claims 1-7.
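The following is a minimal, hypothetical Kotlin sketch of the timeline-marking logic described in claims 1-3, not the patented implementation: given the time indices of dynamic video frames (and, optionally, of abnormal-event frames), it cuts the progress bar into segments, marks which segments are selectable, assigns each a display color, and rejects seek operations that land on non-selectable static areas. All type and function names (TimeRange, buildTimeline, resolveSeek) and the color values are illustrative assumptions that do not appear in the patent.

```kotlin
// A half-open time interval [startMs, endMs) on the progress bar, in milliseconds.
data class TimeRange(val startMs: Long, val endMs: Long)

// One rendered segment of the progress bar: its extent, whether the user may
// seek into it, and the colour used to draw it.
data class TimelineSegment(val range: TimeRange, val selectable: Boolean, val color: String)

// Builds the coloured segments of a time progress bar from the time indices of
// dynamic video frames and (optionally) abnormal-event frames. Segments holding
// dynamic frames form the "first selectable area", segments holding abnormal
// events form the highlighted "second selectable area", and everything else
// (static picture) is rendered as a non-selectable area in a neutral colour.
fun buildTimeline(
    durationMs: Long,
    dynamicRanges: List<TimeRange>,
    abnormalRanges: List<TimeRange> = emptyList(),
    dynamicColor: String = "#2D9CDB",   // blue: dynamic, playable content
    abnormalColor: String = "#EB5757",  // red: abnormal events
    idleColor: String = "#BDBDBD"       // grey: static picture, not selectable
): List<TimelineSegment> {
    // Collect every boundary so the bar can be cut into homogeneous pieces.
    val cuts = sortedSetOf(0L, durationMs)
    (dynamicRanges + abnormalRanges).forEach {
        cuts += it.startMs.coerceIn(0L, durationMs)
        cuts += it.endMs.coerceIn(0L, durationMs)
    }
    fun covered(t: Long, ranges: List<TimeRange>) =
        ranges.any { t >= it.startMs && t < it.endMs }

    return cuts.zipWithNext().map { (start, end) ->
        val mid = start + (end - start) / 2
        when {
            covered(mid, abnormalRanges) ->
                TimelineSegment(TimeRange(start, end), selectable = true, color = abnormalColor)
            covered(mid, dynamicRanges) ->
                TimelineSegment(TimeRange(start, end), selectable = true, color = dynamicColor)
            else ->
                TimelineSegment(TimeRange(start, end), selectable = false, color = idleColor)
        }
    }
}

// Maps a tap on the progress bar to a playback position, refusing taps that
// land on non-selectable (static) areas, mirroring claim 1's selection handling.
fun resolveSeek(tapMs: Long, segments: List<TimelineSegment>): Long? =
    segments.firstOrNull { tapMs >= it.range.startMs && tapMs < it.range.endMs }
        ?.takeIf { it.selectable }
        ?.let { tapMs }

fun main() {
    val segments = buildTimeline(
        durationMs = 60_000,
        dynamicRanges = listOf(TimeRange(5_000, 20_000), TimeRange(40_000, 55_000)),
        abnormalRanges = listOf(TimeRange(10_000, 12_000))
    )
    segments.forEach(::println)
    println(resolveSeek(11_000, segments)) // 11000 - inside an abnormal-event area
    println(resolveSeek(30_000, segments)) // null - static picture, not selectable
}
```

The seek resolver returns null rather than snapping to the nearest selectable segment; snapping to the closest dynamic frame would be an equally plausible reading of claim 1 and is a deliberate simplification here.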
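Claims 4-6 describe marking, in a calendar, the dates on which video exists and generating an acquisition request when a marked date is selected. The sketch below, again with hypothetical names (VideoLogEntry, markCalendar, onDateSelected) and the assumption that the server's video log lists one entry per date, shows one plausible client-side shape of that logic; it is not the implementation disclosed in the patent.

```kotlin
import java.time.LocalDate

// One entry of the recording log returned by the server: the date and whether
// any dynamic video was recorded on it. (Hypothetical field names.)
data class VideoLogEntry(val date: LocalDate, val hasDynamicVideo: Boolean)

// A calendar cell as the client would render it: marked dates are the ones the
// user can tap to request that day's surveillance video.
data class CalendarDay(val date: LocalDate, val marked: Boolean)

// Request sent to the server when the user taps a marked date (claim 6).
data class SurveillanceVideoRequest(val date: LocalDate)

// Marks, for one month, every date on which the log says video exists.
fun markCalendar(year: Int, month: Int, log: List<VideoLogEntry>): List<CalendarDay> {
    val recordedDates = log.filter { it.hasDynamicVideo }.map { it.date }.toSet()
    val first = LocalDate.of(year, month, 1)
    return (0 until first.lengthOfMonth()).map { offset ->
        val day = first.plusDays(offset.toLong())
        CalendarDay(day, marked = day in recordedDates)
    }
}

// Turns a tap on a calendar day into an acquisition request, or null when the
// day carries no recording and therefore cannot be selected.
fun onDateSelected(day: CalendarDay): SurveillanceVideoRequest? =
    if (day.marked) SurveillanceVideoRequest(day.date) else null

fun main() {
    val log = listOf(
        VideoLogEntry(LocalDate.of(2018, 6, 13), hasDynamicVideo = true),
        VideoLogEntry(LocalDate.of(2018, 6, 14), hasDynamicVideo = false)
    )
    val june = markCalendar(2018, 6, log)
    println(june.filter { it.marked }.map { it.date }) // [2018-06-13]
    println(onDateSelected(june.first { it.marked }))  // request for 2018-06-13
}
```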
CN201810609711.7A 2018-06-13 2018-06-13 Monitoring video processing method and device, terminal equipment and storage medium Active CN108769604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810609711.7A CN108769604B (en) 2018-06-13 2018-06-13 Monitoring video processing method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108769604A (en) 2018-11-06
CN108769604B (en) 2021-01-26

Family

ID=64022574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810609711.7A Active CN108769604B (en) 2018-06-13 2018-06-13 Monitoring video processing method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108769604B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020102955A1 (en) * 2018-11-20 2020-05-28 浙江宇视科技有限公司 Monitoring video storage method and apparatus, and video storage device
CN110929097A (en) * 2019-11-19 2020-03-27 浙江大华技术股份有限公司 Video recording display method, device and storage medium
CN113038265B (en) * 2021-03-01 2022-09-20 创新奇智(北京)科技有限公司 Video annotation processing method and device, electronic equipment and storage medium
CN113286126A (en) * 2021-05-28 2021-08-20 Oppo广东移动通信有限公司 Monitoring data processing method, system and related device
CN113347502B (en) * 2021-06-02 2023-03-14 宁波星巡智能科技有限公司 Video review method, video review device, electronic equipment and medium
CN113747124A (en) * 2021-08-31 2021-12-03 深圳Tcl新技术有限公司 Video monitoring method and device, storage medium and electronic equipment
CN114040244B (en) * 2021-10-14 2024-01-05 北京激浊扬清文化科技有限公司 Method, system, equipment and medium for reducing error event in edge vision scene
CN114116094B (en) * 2021-11-10 2024-02-27 上海鹰觉科技有限公司 Method and system for automatically collecting samples

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015178234A1 (en) * 2014-05-22 2015-11-26 株式会社日立国際電気 Image search system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101370126B (en) * 2008-09-24 2012-02-08 中兴通讯股份有限公司 Broadcast control method and system for monitoring video
CN103945156A (en) * 2014-04-16 2014-07-23 深圳英飞拓科技股份有限公司 Alarm video replaying method and system
CN104573037B (en) * 2015-01-16 2018-03-09 北京中电兴发科技有限公司 A kind of method and system by more Color time axle quick search monitoring videos
US9898665B2 (en) * 2015-10-29 2018-02-20 International Business Machines Corporation Computerized video file analysis tool and method
US10063937B2 (en) * 2016-11-21 2018-08-28 Yieldmo, Inc. Methods for serving a video advertisement to a user

Also Published As

Publication number Publication date
CN108769604A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108769604B (en) Monitoring video processing method and device, terminal equipment and storage medium
CN107454465B (en) Video playing progress display method and device and electronic equipment
CN106296724B (en) Method and system for determining track information of target person and processing server
US10008092B2 (en) Method, apparatus, and storage medium for alerting cooking state of electric cooker
US20220067379A1 (en) Category labelling method and device, and storage medium
US10558511B2 (en) Method and device for evaluating system fluency, and UE
US20240112470A1 (en) Methods and systems for detection of anomalous motion in a video stream and for creating a video summary
CN108062507B (en) Video processing method and device
JP2016534666A (en) Video backup method, apparatus, program, and recording medium
CN110225141B (en) Content pushing method and device and electronic equipment
US11172428B2 (en) Broadcasting and discovering methods, broadcasting and discovering devices and storage medium
AU2018360470A1 (en) Intelligent self-powered camera
CN109005446A (en) A kind of screenshotss processing method and processing device, electronic equipment, storage medium
CN109714609A (en) Live information processing method, equipment and storage medium
CN107316207B (en) Method and device for acquiring display effect information
EP3737139A1 (en) Information reporting method and apparatus
US10178252B2 (en) Photographing process remaining time reminder method and system
CN108965606B (en) Method and device for determining ambient temperature
CN105488965A (en) Alarm method and device
CN109040672B (en) Video playing method and device
CA2994761C (en) Systems and methods for smart home data storage
US10885343B1 (en) Repairing missing frames in recorded video with machine learning
CN109544852B (en) Restaurant fire monitoring method and device
US20230215256A1 (en) Determining areas of interest in video based at least on a user's interactions with the video
CN109041101B (en) WIFI cut-off processing method, terminal, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant