WO2018086527A1 - 一种视频处理方法及装置 - Google Patents

一种视频处理方法及装置 Download PDF

Info

Publication number
WO2018086527A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame image
time period
attribute
image
frame
Prior art date
Application number
PCT/CN2017/109915
Other languages
English (en)
French (fr)
Inventor
董婷
Original Assignee
中兴通讯股份有限公司
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2018086527A1 publication Critical patent/WO2018086527A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/223Analysis of motion using block-matching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • This document relates to, but is not limited to, the field of communication technology, and in particular to a video processing method and apparatus.
  • Embodiments of the present invention provide a video processing method and apparatus.
  • the embodiment of the invention provides a video processing method, including:
  • when the attribute of the time period in which the start time is located is a static time period, the processing step includes: acquiring one frame image corresponding to the static time period, and displaying the acquired frame image;
  • when the attribute of the time period in which the start time is located is a dynamic time period, the processing step includes: acquiring a dynamic video starting from the start time, and playing the acquired dynamic video.
  • before receiving the viewing instruction input by the user, or after receiving the viewing instruction input by the user, the method further includes: determining an attribute of one or more time periods included in the target video, where the attribute of a time period is either a static time period or a dynamic time period.
  • the step of determining an attribute of one or more time periods included in the target video includes:
  • sampling the target video to obtain multiple frame images of the target video, and performing color value comparison on the multi-frame images to determine an image attribute of each frame, the image attribute being either a dynamic frame image or a static frame image;
  • if the image attribute of M consecutive frames is a dynamic frame image, determining that the attribute of the time period corresponding to those M consecutive frame images is a dynamic time period, where M is an integer greater than 3;
  • if the target video further includes other time periods, determining that the attribute of the other time periods of the target video is a static time period.
  • the step of performing color value comparison on the multi-frame images to determine the image attribute of each frame includes:
  • setting the Nth frame image in the multi-frame images as a reference frame image, and comparing the color values of the (N+1)th frame image with the color values of the reference frame image;
  • if the color value comparison result satisfies a first preset condition, determining that the image attribute of the (N+1)th frame is a static frame image, and continuing to compare the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined;
  • if the color value comparison result does not satisfy the first preset condition, determining that the image attribute of the (N+1)th frame is a dynamic frame image, setting the (N+1)th frame image as the reference frame image, and continuing to compare the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined, where N is an integer greater than or equal to 1.
  • the step of comparing the color value of the (N+1)th frame image with the color value of the reference frame image includes:
  • dividing the reference frame image and the (N+1)th frame image into N blocks that do not overlap each other, where N is an integer greater than zero;
  • comparing the color value of each pixel in every block of the (N+1)th frame image with the color value of the pixels in the block at the same position in the reference frame image;
  • if the ratio of the number of pixels with different color values in the block at the same position to the total number of pixels in that block is less than a first preset value, determining that the block is a changed block; if the ratio is greater than or equal to the first preset value, determining that the block is an unchanged block;
  • integrating adjacent changed blocks in the (N+1)th frame image into a search block, setting the position corresponding to the search block in the Nth frame image as a reference block, and calculating the absolute residual value between the search block and the reference block;
  • if the absolute residual value is greater than a second preset value, determining that the color value comparison result of the (N+1)th frame image and the reference frame image satisfies the first preset condition, so that the image attribute of the (N+1)th frame is a static frame image; if the absolute residual value is less than or equal to the second preset value, determining that the comparison result does not satisfy the first preset condition, so that the image attribute of the (N+1)th frame is a dynamic frame image.
  • the step of acquiring the dynamic video from the start time and playing the acquired dynamic video includes:
  • if the attribute of the time period in which the start time is located is a dynamic time period, acquiring one preview image corresponding to the dynamic time period and displaying the acquired preview image;
  • receiving a browsing instruction input by the user according to the preview image, acquiring the dynamic video starting from the start time, and playing the acquired dynamic video.
  • the method further includes at least one of the following steps:
  • identifying, with a first identifier, the time periods of the target video whose attribute is a static time period;
  • identifying, with a second identifier, the time periods of the target video whose attribute is a dynamic time period.
  • the embodiment of the invention further provides a video processing device, including:
  • the instruction receiving module is configured to receive a viewing instruction input by the user, where the viewing instruction carries a starting moment;
  • An attribute determining module configured to determine an attribute of a time period in which the starting time is located
  • the processing module is configured to acquire information of the target video corresponding to the attribute of the time period, and process the acquired information of the target video.
  • the processing module includes:
  • the first processing sub-module is configured to acquire a frame image corresponding to the static time period when the attribute of the time period in which the starting time is located is a static time period, and display the acquired one-frame image.
  • the processing module further includes:
  • the second processing sub-module is configured to: when the attribute of the time period in which the starting time is located is a dynamic time period, acquire a dynamic video starting from the starting time, and play the acquired dynamic video.
  • the device further comprises:
  • the determining module is configured to determine an attribute of one or more time periods included in the target video; wherein the attributes of the time period are divided into a static time period or a dynamic time period.
  • the determining module includes:
  • a sampling sub-module configured to sample the target video to obtain a multi-frame image of the target video
  • An attribute determining submodule configured to perform color value comparison on the multi-frame image to determine an image attribute of each frame; wherein the image attribute is divided into a dynamic frame image or a static frame image;
  • the dynamic determination sub-module is configured to determine that the attribute of the time period corresponding to the continuous M-frame image is a dynamic time period if the image attribute of the continuous M-frame is a dynamic frame image; wherein M is an integer greater than 3;
  • the static determination submodule is configured to determine that the attribute of the other time period of the target video is a static time period if the target video further includes other time segments.
  • the attribute determining submodule includes:
  • a color value comparison unit configured to set the Nth frame image in the multi-frame images as the reference frame image, and to compare the color values of the (N+1)th frame image with the color values of the reference frame image;
  • a first determining unit configured to determine that the image attribute of the (N+1)th frame is a static frame image if the color value comparison result satisfies the first preset condition, and to continue comparing the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined;
  • a second determining unit configured to determine that the image attribute of the (N+1)th frame is a dynamic frame image and to set the (N+1)th frame image as the reference frame image if the color value comparison result does not satisfy the first preset condition, and to continue comparing the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined, where N is an integer greater than or equal to 1.
  • color value comparison unit comprises:
  • a block subunit configured to divide the reference frame image and the (N+1)th frame image into N blocks that do not overlap each other; wherein N is an integer greater than zero;
  • a first comparison subunit configured to compare a color value of a pixel point of each of the blocks of the (N+1)th frame image with a color value of a block of the same position of the reference frame image
  • a determining subunit configured to determine that the block is a changed block if the ratio of the number of pixels with different color values in the block at the same position to the total number of pixels in that block is less than the first preset value, and to determine that the block is an unchanged block if the ratio is greater than or equal to the first preset value;
  • a calculating unit configured to integrate adjacent changed blocks in the (N+1)th frame image into a search block, and set a position corresponding to the search block in the Nth frame image as a reference block; and calculate the search block and The absolute value of the residual between the reference blocks;
  • a second comparison subunit configured to determine that the color value comparison result of the (N+1)th frame image and the reference frame image satisfies the first preset condition, and therefore that the image attribute of the (N+1)th frame is a static frame image, if the absolute residual value is greater than the second preset value; and to determine that the comparison result does not satisfy the first preset condition, and therefore that the image attribute of the (N+1)th frame is a dynamic frame image, if the absolute residual value is less than or equal to the second preset value.
  • the second processing submodule includes:
  • a preview unit configured to: if the attribute of the time period in which the starting time is located is a dynamic time period, acquire a frame preview image corresponding to the dynamic time segment and control the acquired one frame preview image display;
  • the playing unit is configured to receive a browsing instruction input by the user according to the preview image, acquire a dynamic video starting from the starting moment, and control the acquired dynamic video playing.
  • the device further comprises at least one of the following modules:
  • the first identifier module is configured to identify, by using the first identifier, a time period of the target video whose attribute is a static time period;
  • the second identifier module is configured to identify, by using the second identifier, a time period of the target video whose attribute is a dynamic time period.
  • in the embodiments of the present invention, the video that the user wants to view is processed according to the attribute of the time period in which the start time of that video is located, and the attribute of a time period is either a static time period or a dynamic time period: only one frame image is displayed when the user views a static time period, and the dynamic video is played only when the user views a dynamic time period. When the terminal device acquires or stores the target video, a static time period therefore occupies only one frame image, which reduces the space occupied by the target video; and, while ensuring that key information is not missed, the user is spared unnecessary time spent viewing video within static time periods, which reduces browsing time and improves the efficiency of video browsing.
  • FIG. 1 is a flow chart showing the steps of a video processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of frame blocking in a video processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of frame comparison in a video processing method according to an embodiment of the present invention.
  • FIG. 4 is a structural diagram of a video processing apparatus according to an embodiment of the present invention.
  • in order to reduce the data traffic consumed by the user or the storage space occupied, intelligent monitoring products compress the collected video before storing it; the mainstream storage techniques operate on single frames and shrink the storage space at the cost of image quality.
  • this type of compression improves on the per-frame storage mode of the video data, thereby reducing the space occupied by the file.
  • however, it is difficult for the compression ratio to achieve a much greater breakthrough as the technology improves.
  • an embodiment of the present invention provides a video processing method, including:
  • Step 11 Receive a viewing instruction input by a user, where the viewing instruction carries a starting moment.
  • the start time is the start time of the video that the user wants to view. For example, if the user clicks "04:30" on the time axis of the target video, the viewing instruction triggered at that moment requests the target video from the 4th minute and 30th second onward; that is, the start time carried in the viewing instruction is 4 minutes 30 seconds.
  • Step 12 Determine an attribute of a time period in which the starting time is located
  • Step 13 Acquire information of the target video corresponding to the attribute of the time period, and process the obtained information of the target video.
  • the information of the target video may be obtained by the video collection unit, or may be obtained by other means, for example, downloading from a server, etc., and is not specifically limited herein.
  • the video capture unit may be a common CCD (charge-coupled device) camera or another video capture device; a CCD camera is small in size, light in weight, unaffected by magnetic fields, and resistant to vibration and impact.
  • in the embodiment of the present invention, the viewing instruction is input by the user according to the user's own needs and is generally triggered via the time axis. For example, when the user clicks "04:30" on the time axis of the target video, the attribute of the time period in which the 4 minutes 30 seconds mark is located needs to be determined, and the information of the target video is obtained according to the attribute of that time period. It should be noted that the information of the target video may be one frame image or a video segment, which is not specifically limited here.
  • optionally, when the attribute of the time period in which the start time is located is a static time period, step 13 includes:
  • Step 131 Acquire a frame image corresponding to the static time period, and display the acquired one frame image.
  • that is, when the attribute of the time period in which the start time is located is a static time period, one frame image corresponding to that static time period is acquired, and that image of the static time period is displayed.
  • optionally, when the attribute of the time period in which the start time is located is a dynamic time period, step 13 includes:
  • Step 132 Acquire a dynamic video starting from the starting moment, and play the obtained dynamic video.
  • that is, when the attribute of the time period in which the start time is located is a dynamic time period, the dynamic video starting from the start time is acquired and played. For example, a video starting at 4 minutes 30 seconds is played: if the user performs no other operation during the dynamic time period, playback continues to the end time of the dynamic time period; if the user does operate during the dynamic time period, playback is ended directly according to the user operation.
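For illustration, the dispatch in steps 131 and 132 can be sketched as follows. This is a minimal sketch and not the patent's implementation; the names (TimePeriod, show_image, play_video, handle_viewing_instruction) are illustrative assumptions, and the display/playback calls are simple placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TimePeriod:
    start: float      # seconds from the beginning of the target video
    end: float
    attribute: str    # "static" or "dynamic"

def show_image(label: str) -> None:
    print(f"display single frame {label}")                 # placeholder for the UI call

def play_video(label: str, offset: float) -> None:
    print(f"play video {label} from +{offset:.1f}s")       # placeholder for the player call

def handle_viewing_instruction(start_time: float, periods: List[TimePeriod]) -> None:
    # Step 12: determine the attribute of the time period containing the start time.
    period = next(p for p in periods if p.start <= start_time < p.end)
    # Step 13: fetch and process the corresponding target-video information.
    if period.attribute == "static":
        show_image(f"frame@{period.start}")                # a static period is stored as one frame
    else:
        play_video(f"clip@{period.start}", offset=start_time - period.start)

periods = [TimePeriod(0.0, 180.0, "dynamic"), TimePeriod(180.0, 300.0, "static")]
handle_viewing_instruction(270.0, periods)                 # "04:30" on the time axis = 270 s
```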
  • optionally, before step 11 or after step 11, the method further includes:
  • Step 10 Determine an attribute of one or more time periods included in the target video; wherein the attributes of the time period are divided into a static time period or a dynamic time period.
  • it should be noted that the attributes of the one or more time periods may be determined directly by a third party (the third party may be a server, a system, or the like) and obtained directly by the terminal; that is, the static or dynamic time periods are determined after the camera captures the video data and before the data are stored.
  • alternatively, the attributes of the one or more time periods may be determined by the terminal itself; that is, all the video data collected by the camera are saved before the user previews, the data are processed only at preview time, and the user is then provided with the dynamic or static information for the selected time period.
  • step 10 includes:
  • Step 101 Sample the target video to obtain multiple frame images of the target video.
  • optionally, the sampling process takes the time axis as the reference and samples at a fixed time interval along it to obtain the multi-frame images (a short sketch of such sampling follows the step list below).
  • Step 102 performing color value comparison on the multi-frame image to determine an image attribute of each frame, where the image attribute is divided into a dynamic frame image or a static frame image;
  • Step 103 If the image attribute of the continuous M frame is a dynamic frame image, determine that the attribute of the time period corresponding to the continuous M frame image is a dynamic time period; where M is an integer greater than 3;
  • Step 104 If the target video further includes other time segments, determine that the attributes of the other time segments of the target video are static time segments.
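A minimal sketch of the fixed-interval sampling mentioned in step 101, assuming the OpenCV (cv2) library is available; the one-second interval, the function name sample_frames, and the list-of-(timestamp, frame) return format are illustrative choices, not details given by the patent.

```python
import cv2  # OpenCV; any frame-grabbing library would serve equally well

def sample_frames(path: str, interval_s: float = 1.0):
    """Step 101: sample the target video at a fixed time interval along the time axis."""
    cap = cv2.VideoCapture(path)
    frames, t = [], 0.0
    while True:
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)   # seek to the next sampling instant
        ok, frame = cap.read()
        if not ok:                                   # past the end of the video (or open failed)
            break
        frames.append((t, frame))                    # keep each sample with its timestamp
        t += interval_s
    cap.release()
    return frames
```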
  • color values of two frames of images are compared to determine image attributes of each frame; wherein the image attributes are divided into dynamic frame images or static frame images.
  • the static frame image indicates that the compared image is the same as the reference frame image; and the dynamic frame image indicates that the compared image is different from the reference frame image.
  • if M consecutive frames are dynamic frames, the attribute of the time period corresponding to those M consecutive frames is a dynamic time period, and the video within that time period is uploaded to the server or stored locally; the attributes of the other time periods are considered static time periods, and any single frame image from a static time period is uploaded to the cloud server or stored locally.
  • optionally, a static time period means that every frame image within the period is the same or substantially the same, where "substantially the same" is defined by a preset sensitivity; a dynamic time period means that adjacent frame images within the period are neither the same nor substantially the same, that is, there are differences between adjacent frames.
  • for example, suppose the video to be processed records the activity of a pet dog at home between 9 a.m. and 5 p.m., the dog sleeps without moving between 12 noon and 3 p.m., and moves around during the other time periods; then 9 a.m. to 12 noon is a dynamic time period, 12 noon to 3 p.m. is a static time period, and 3 p.m. to 5 p.m. is a dynamic time period.
  • since all frame images within a static time period are the same or substantially the same, only one frame image needs to be stored when the video is saved, which saves storage space.
  • correspondingly, when the user wants to view the video within a static time period, only that stored frame image is displayed, which reduces browsing time and improves browsing efficiency.
  • because adjacent frame images within a dynamic time period differ from one another, the video of that time period must be stored to ensure that no key information is missed.
  • correspondingly, when the user wants to view the video within a dynamic time period, the stored target video of that period is displayed, so that key information is not missed while video browsing efficiency is still guaranteed.
  • the data may be saved locally or uploaded to the cloud server through wifi according to different user saving policies.
  • optionally, the storage location includes a cloud server, local SD card storage, or network-attached storage (NAS); the options are not listed exhaustively here.
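The grouping of per-frame attributes into time periods (steps 103 and 104) can be sketched as below. This is an illustration under stated assumptions: the function name classify_periods, the boolean frame_flags input, and the merging of adjacent same-type runs are choices made for the sketch; the patent only requires that a run of more than three consecutive dynamic frames forms a dynamic time period and that everything else is static.

```python
from typing import List, Tuple

def classify_periods(frame_flags: List[bool], timestamps: List[float],
                     m: int = 4) -> List[Tuple[float, float, str]]:
    """
    Steps 103-104: a run of at least M consecutive dynamic frames (M > 3) forms a
    dynamic time period; all remaining time is static. frame_flags[i] is True when
    frame i was judged to be a dynamic frame image.
    """
    periods, i, n = [], 0, len(frame_flags)
    while i < n:
        j = i
        while j < n and frame_flags[j] == frame_flags[i]:
            j += 1                                    # extend the current run of equal flags
        attr = "dynamic" if frame_flags[i] and (j - i) >= m else "static"
        periods.append((timestamps[i], timestamps[j - 1], attr))
        i = j
    if not periods:
        return periods
    merged = [periods[0]]
    for start, end, attr in periods[1:]:
        if attr == merged[-1][2]:
            merged[-1] = (merged[-1][0], end, attr)   # merge adjacent periods of the same type
        else:
            merged.append((start, end, attr))
    return merged

# Example: 10 frames sampled once per second, with 5 consecutive dynamic frames.
flags = [False, False, True, True, True, True, True, False, False, False]
print(classify_periods(flags, [float(t) for t in range(10)]))
# -> [(0.0, 1.0, 'static'), (2.0, 6.0, 'dynamic'), (7.0, 9.0, 'static')]
```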
  • step 102 in the foregoing embodiment of the present invention includes:
  • Step 1021 Set an Nth frame image of the multi-frame image as a reference frame image, and compare a color value of the N+1th frame image with a color value of the reference frame image.
  • Step 1022 If the color value comparison result satisfies the first preset condition, determine that the image attribute of the (N+1)th frame is a static frame image, and continue to compare the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined;
  • Step 1023 If the color value comparison result does not satisfy the first preset condition, determine that the image attribute of the (N+1)th frame is a dynamic frame image, set the (N+1)th frame image as the reference frame image, and continue to compare the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined, where N is an integer greater than or equal to 1.
  • for example, when N equals 1, the video capture module stores the video data within time period T1 (that is, the target video) locally, organized by time; the start frame of T1 is taken as the reference frame, or a basic scene selected by the user may be taken as the reference frame.
  • the second frame image is then compared with the reference frame image by color value. If the color value vector difference between the second frame image and the reference frame image is less than a threshold (that is, the color value comparison result satisfies the first preset condition), the second frame is a non-abnormal frame, i.e. the second frame image is a static frame image; if the difference is greater than or equal to the threshold (that is, the comparison result does not satisfy the first preset condition), the second frame is an abnormal frame, i.e. the second frame image is a dynamic frame image.
  • optionally, when the second frame is an abnormal frame, the second frame is set as the reference frame, i.e. the second frame image becomes the reference frame image, and the third frame image is compared with the second frame image by color value until the attribute of the last frame image has been determined; when the second frame is a non-abnormal frame, the first frame remains the reference frame, and the third frame image is compared with the first frame image by color value until the attribute of the last frame image has been determined.
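The frame-labelling loop of steps 1021-1023 reduces to the sketch below, where is_static stands in for whatever color-value comparison is used (for example the block-based comparison described next); the function name label_frames and the boolean return convention are assumptions made for illustration.

```python
from typing import Callable, List, Sequence

def label_frames(frames: Sequence, is_static: Callable[[object, object], bool]) -> List[bool]:
    """
    Steps 1021-1023: compare each frame with the current reference frame. If the
    color-value comparison satisfies the first preset condition the frame is a static
    frame and the reference is kept; otherwise the frame is a dynamic frame and it
    becomes the new reference. Returns True for dynamic frames, False for static ones.
    """
    flags = [False]                       # the first frame serves as the initial reference
    reference = frames[0]
    for frame in frames[1:]:
        if is_static(frame, reference):   # comparison result satisfies the preset condition
            flags.append(False)
        else:
            flags.append(True)
            reference = frame             # an abnormal (dynamic) frame becomes the new reference
    return flags

# Example with plain numbers standing in for frames: "static" means the values are close.
print(label_frames([10, 10, 11, 40, 41, 10], lambda a, b: abs(a - b) < 5))
# -> [False, False, False, True, False, True]
```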
  • step 1021 includes:
  • dividing the reference frame image and the (N+1)th frame image into N blocks that do not overlap each other, where N is an integer greater than zero;
  • comparing the color value of each pixel in every block of the (N+1)th frame image with the color value of the pixels in the block at the same position in the reference frame image;
  • if the ratio of the number of pixels with different color values in the block at the same position to the total number of pixels in that block is less than a first preset value, determining that the block is a changed block; if the ratio is greater than or equal to the first preset value, determining that the block is an unchanged block;
  • integrating adjacent changed blocks in the (N+1)th frame image into a search block, setting the position corresponding to the search block in the Nth frame image as a reference block, and calculating the absolute residual value between the search block and the reference block;
  • if the absolute residual value is greater than a second preset value, determining that the color value comparison result of the (N+1)th frame image and the reference frame image satisfies the first preset condition, so that the image attribute of the (N+1)th frame is a static frame image; if the absolute residual value is less than or equal to the second preset value, determining that the comparison result does not satisfy the first preset condition, so that the image attribute of the (N+1)th frame is a dynamic frame image.
  • for example, the reference frame is first set as the model frame, and the second frame image is set as the comparison (reference) frame. FIG. 2 is a schematic diagram of frame blocking: the model frame and the comparison frame are uniformly divided into non-overlapping blocks of m×n pixels each. The two frames are then compared block by block: if the color value comparison result of every pixel at the corresponding position is less than a set value, the comparison proceeds to the next block; if there is a pixel at a corresponding position whose color value comparison result is greater than or equal to the set value, the vector difference between the two sub-blocks is compared, and the frame is determined to be a static frame if that result is less than the set value, or a dynamic frame if it is greater than or equal to the set value.
  • optionally, the two frame images are uniformly divided into non-overlapping blocks of m×n pixels each, and the color value of every pixel in each pair of corresponding blocks is compared. If the ratio of the number of pixels with different color values at the same position to the total number of pixels in the block is less than a limit value, the comparison continues with the next block; if that ratio is greater than or equal to the limit value, the block is marked as a changed block. All sub-blocks are traversed and compared, the coordinates of the changed blocks are marked, and adjacent changed blocks are integrated into one search block whose coordinates are (X2, Y2), the coordinates of the reference block being (0, Y1), as shown in FIG. 3; one or more sub-blocks in the reference frame are numbered, e.g. 1, 2, 3, ..., and the residual value between the two frames is calculated on this basis.
  • DBVn represents the absolute value of the residual of the search block and the reference block n
  • DBV represents the sum of the current residual values
  • n represents the number of pixels
  • (x, y) represents the pixel coordinates
  • M1 represents the first preset threshold and M2 represents the second preset threshold.
  • optionally, when the residual value DBV1 between the search block and sub-block 1 of the reference frame is calculated, the comparison ends if DBV1 > M1 and DBV1 < M2. If DBV1 is less than or equal to M1, or DBV1 is greater than or equal to M2, the second sub-block is selected from reference block n to calculate DBV2; if the condition is met the matching ends, and if not, the remaining sub-blocks of reference block n are selected in turn to calculate their residual values, following the steps above, until the preset condition is met or the last sub-block has been matched.
  • if the matching process ends abnormally, i.e. the DBV between the sub-blocks is greater than the preset value, the dynamic flag of the frame is set to 1, i.e. the frame is a dynamic frame; that frame is then used as the new model frame and the comparison continues until the motion vector data between two frames is smaller than the set value, at which point the dynamic flag of the frame is set to 0, i.e. the frame is a static frame.
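A rough sketch of the block-based comparison, assuming numpy and following the embodiment's convention that a large residual marks a dynamic frame. The 16-pixel block size, the 20% pixel ratio, the 25.0 residual threshold, the bounding-box integration of changed blocks, and the mean-absolute-difference stand-in for the residual DBV are all illustrative assumptions; the patent leaves the exact preset values and the residual formula unspecified here.

```python
import numpy as np

def block_compare(ref: np.ndarray, cur: np.ndarray,
                  block: int = 16, pixel_ratio: float = 0.2,
                  residual_thresh: float = 25.0) -> bool:
    """
    Return True if `cur` should be treated as a dynamic frame relative to `ref`.
    Both frames are HxWx3 uint8 arrays of the same shape; the thresholds are
    illustrative stand-ins for the patent's preset values.
    """
    h, w = ref.shape[:2]
    changed = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            r = ref[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            c = cur[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            differing = np.any(r != c, axis=2)        # pixels whose color values differ
            if differing.mean() >= pixel_ratio:       # enough differing pixels: changed block
                changed[by, bx] = True
    if not changed.any():
        return False                                  # no changed block: treat as static
    # Integrate adjacent changed blocks into one search region (their bounding box here),
    # take the co-located region of the reference frame as the reference block, and use
    # the mean absolute difference as a stand-in for the residual DBV.
    ys, xs = np.nonzero(changed)
    y0, y1 = ys.min() * block, (ys.max() + 1) * block
    x0, x1 = xs.min() * block, (xs.max() + 1) * block
    dbv = np.abs(cur[y0:y1, x0:x1].astype(np.int32) -
                 ref[y0:y1, x0:x1].astype(np.int32)).mean()
    return dbv > residual_thresh                      # large residual: dynamic frame

# Tiny example: two 32x32 frames differing only in the top-left 16x16 region.
a = np.zeros((32, 32, 3), dtype=np.uint8)
b = a.copy()
b[:16, :16] = 200
print(block_compare(a, b))   # -> True
```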
  • next, the dynamic flag bit of each frame is traversed. If the flags are all 1 for a continuous time period t, the video of that time period is saved, and the start time t1 and end time t2 of the time period are recorded. The video between t1 and t2 is saved at the same time, and the key time point array Ti[i, Ti1, Ti2] is updated, where i is the video tag, i.e. the i-th video, Ti1 is the start time of video i, and Ti2 is the end time of video i.
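The traversal of the per-frame dynamic flags into the key-time-point array Ti[i, Ti1, Ti2] might look like the following sketch; the function name key_time_points and the tuple output format are assumptions, and the minimum-duration check on t is omitted for brevity.

```python
from typing import List, Tuple

def key_time_points(flags: List[int], timestamps: List[float]) -> List[Tuple[int, float, float]]:
    """
    Traverse the per-frame dynamic flag bits (1 = dynamic, 0 = static) and emit one
    entry (i, Ti1, Ti2) per continuous run of 1s, i.e. per dynamic video i with start
    time Ti1 and end time Ti2; the video data between Ti1 and Ti2 would be saved
    alongside this array.
    """
    entries, i, k = [], 0, 0
    while k < len(flags):
        if flags[k] == 1:
            start = k
            while k < len(flags) and flags[k] == 1:
                k += 1
            i += 1
            entries.append((i, timestamps[start], timestamps[k - 1]))
        else:
            k += 1
    return entries

# Frames sampled once per second, dynamic between t=2..4 and t=7..8.
print(key_time_points([0, 0, 1, 1, 1, 0, 0, 1, 1, 0], [float(t) for t in range(10)]))
# -> [(1, 2.0, 4.0), (2, 7.0, 8.0)]
```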
  • step 132 in the foregoing embodiment of the present invention includes:
  • Step 1321 If the attribute of the time period in which the start time is located is a dynamic time period, acquire a frame preview image corresponding to the dynamic time segment and display the acquired one frame preview image;
  • Step 1322 Receive a browsing instruction input by the user according to the preview image, acquire a dynamic video starting from the starting moment, and play the acquired dynamic video.
  • for example, when the user selects a preview on the mobile phone, the time-point array information is downloaded from the cloud and parsed, and preview images of the static intervals and the dynamic video intervals are downloaded. If there is no abnormality and no person moving in the monitored area during a time period, the user only consumes the traffic of one picture; if the time period is a dynamic time period, the user can decide from the preview image of that period whether to view it. If viewing is required, the browsing instruction is triggered and the dynamic video of that time period is then downloaded from the cloud.
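On the client side, the preview flow described above could be sketched as below; KeyTime, preview, and the user_wants_full flag standing in for the browsing instruction are illustrative names, and the print calls are placeholders for the actual download and display logic.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class KeyTime:
    index: int    # video tag i
    t1: float     # start time of dynamic video i
    t2: float     # end time of dynamic video i

def preview(start_time: float, key_times: List[KeyTime], user_wants_full: bool = False) -> None:
    """Client flow: parse the key-time array, show one preview, download video only on request."""
    hit: Optional[KeyTime] = next((k for k in key_times if k.t1 <= start_time <= k.t2), None)
    if hit is None:
        print("static period: download a single stored picture")   # costs the traffic of one image
        return
    print(f"dynamic period {hit.index}: download one preview image")
    if user_wants_full:                                             # the browsing instruction
        print(f"downloading dynamic video {hit.index} ({hit.t1}-{hit.t2}) from the cloud")

preview(270.0, [KeyTime(1, 120.0, 300.0)], user_wants_full=True)
```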
  • optionally, so that the user can clearly and simply distinguish the static time periods from the dynamic time periods, the method further includes at least one of the following steps:
  • Step 14 Identify, by using the first identifier, a time period of the target video whose attribute is a static time period;
  • Step 15 Identify, by using the second identifier, a time period of the target video whose attribute is a dynamic time period.
  • for example, a static time period is displayed in gray and a dynamic time period is highlighted; clicking a gray area previews the still picture of that period, while clicking a highlighted area previews the video of that period.
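A small sketch of steps 14 and 15, attaching an identifier to each period so the time axis can be rendered with gray (first identifier) and highlighted (second identifier) segments; the function name mark_timeline and the dictionary marker format are assumptions.

```python
from typing import Dict, List, Tuple

def mark_timeline(periods: List[Tuple[float, float, str]]) -> List[Dict[str, object]]:
    """
    Steps 14-15: attach an identifier to each time period so the time axis can render
    static periods (first identifier, e.g. gray) differently from dynamic periods
    (second identifier, e.g. highlighted).
    """
    markers = []
    for start, end, attribute in periods:
        identifier = "gray" if attribute == "static" else "highlight"
        markers.append({"start": start, "end": end, "id": identifier})
    return markers

print(mark_timeline([(0.0, 180.0, "dynamic"), (180.0, 300.0, "static")]))
```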
  • it should be noted that the video processing method provided by the embodiment of the present invention may be completed after the camera captures the video data and before the data are stored, or it may be completed before the user previews, i.e. all the video data collected by the camera are saved and the frame comparison is performed only at preview time, so that the user is provided with the dynamic and static information of the video within the selected time period and can preview and download in a targeted manner.
  • in summary, in the video processing method provided by the embodiment of the present invention, the video that the user wants to view is processed according to the attribute of the time period in which the start time of that video is located; optionally, the attribute of a time period is either a static time period or a dynamic time period.
  • an embodiment of the present invention provides a video processing apparatus, including:
  • the instruction receiving module 41 is configured to receive a viewing instruction input by the user, where the viewing instruction carries a starting moment;
  • An attribute determining module 42 is configured to determine an attribute of a time period in which the starting time is located;
  • the processing module 43 is configured to acquire information of the target video corresponding to the attribute of the time period, and process the acquired information of the target video.
  • the processing module 43 in the embodiment of the present invention includes:
  • a first processing sub-module configured to acquire one frame image corresponding to the static time period and display the acquired frame image when the attribute of the time period in which the start time is located is a static time period.
  • processing module 43 in the embodiment of the present invention further includes:
  • the second processing sub-module is configured to: when the attribute of the time period in which the starting time is located is a dynamic time period, acquire a dynamic video starting from the starting time, and play the acquired dynamic video.
  • the device in the embodiment of the present invention further includes:
  • the determining module is configured to determine an attribute of one or more time periods included in the target video; wherein the attributes of the time period are divided into a static time period or a dynamic time period.
  • the determining module in the embodiment of the present invention includes:
  • a sampling sub-module configured to sample the target video to obtain a multi-frame image of the target video
  • An attribute determining submodule configured to perform color value comparison on the multi-frame image to determine an image attribute of each frame; wherein the image attribute is divided into a dynamic frame image or a static frame image;
  • the dynamic determination sub-module is configured to determine that the attribute of the time period corresponding to the continuous M-frame image is a dynamic time period if the image attribute of the continuous M-frame is a dynamic frame image; wherein M is an integer greater than 3;
  • the static determination submodule is configured to determine that the attribute of the other time period of the target video is a static time period if the target video further includes other time segments.
  • the attribute determining submodule in the embodiment of the present invention includes:
  • a color value comparison unit configured to set an Nth frame image of the multi-frame image as a reference frame image, and compare a color value of the (N+1)th frame image with a color value of the reference frame image;
  • a first determining unit configured to determine that the image attribute of the (N+1)th frame is a static frame image if the color value comparison result satisfies the first preset condition, and to continue comparing the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined;
  • a second determining unit configured to determine that the image attribute of the (N+1)th frame is a dynamic frame image and to set the (N+1)th frame image as the reference frame image if the color value comparison result does not satisfy the first preset condition, and to continue comparing the color values of the (N+2)th frame image with the color values of the reference frame image until the image attribute of the last frame in the multi-frame images has been determined, where N is an integer greater than or equal to 1.
  • the color value comparison unit in the embodiment of the present invention includes:
  • a block subunit configured to divide the reference frame image and the (N+1)th frame image into N blocks that do not overlap each other; wherein N is an integer greater than zero;
  • a first comparison subunit configured to compare a color value of a pixel point of each of the blocks of the (N+1)th frame image with a color value of a block of the same position of the reference frame image
  • a determining subunit configured to determine that the block is a changed block if the ratio of the number of pixels with different color values in the block at the same position to the total number of pixels in that block is less than the first preset value, and to determine that the block is an unchanged block if the ratio is greater than or equal to the first preset value;
  • a calculating unit configured to integrate adjacent changed blocks in the (N+1)th frame image into a search block, and set a position corresponding to the search block in the Nth frame image as a reference block; and calculate the search block and The absolute value of the residual between the reference blocks;
  • a second comparison subunit configured to determine that the color value comparison result of the (N+1)th frame image and the reference frame image satisfies the first preset condition, and therefore that the image attribute of the (N+1)th frame is a static frame image, if the absolute residual value is greater than the second preset value; and to determine that the comparison result does not satisfy the first preset condition, and therefore that the image attribute of the (N+1)th frame is a dynamic frame image, if the absolute residual value is less than or equal to the second preset value.
  • the second processing submodule in the embodiment of the present invention includes:
  • a preview unit configured to: if the attribute of the time period in which the starting time is located is a dynamic time period, acquire a frame preview image corresponding to the dynamic time segment and control the acquired one frame preview image display;
  • the playing unit is configured to receive a browsing instruction input by the user according to the preview image, acquire a dynamic video starting from the starting moment, and control the acquired dynamic video playing.
  • the apparatus in the embodiment of the present invention further includes at least one of the following modules:
  • the first identifier module is configured to identify, by using the first identifier, a time period of the target video whose attribute is a static time period;
  • the second identifier module is configured to identify, by using the second identifier, a time period of the target video whose attribute is a dynamic time period.
  • the video processing apparatus processes the video that the user wants to view according to the attribute of the time period in which the start time of that video is located, and the attribute of a time period is either a static time period or a dynamic time period: only one frame image is displayed when the user views a static time period, and the dynamic video is played only when the user views a dynamic time period. When the terminal device acquires or stores the target video, a static time period therefore occupies only one frame image, which reduces the space occupied by the target video; and, while ensuring that key information is not missed, the user is spared unnecessary time spent viewing video within static time periods, which reduces browsing time and improves the efficiency of video browsing.
  • it should be noted that the video processing device provided by the embodiment of the present invention is a processing device that applies the video processing method provided by the foregoing embodiments; all embodiments of the video processing method are applicable to the video processing device, and the same or similar beneficial effects can be achieved.
  • the embodiment of the invention further provides a computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the method described in the foregoing embodiments.
  • the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information, such as computer readable instructions, data structures, program modules, or other data.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and can include any information delivery media.
  • in the embodiments of the present invention, the video that the user wants to view is processed according to the attribute of the time period in which its start time is located, the attribute of a time period being either a static time period or a dynamic time period. Only one frame image is displayed when the user views a static time period, and the dynamic video is played only when the user views a dynamic time period; when the terminal device acquires or stores the target video, a static time period occupies only one frame image, which reduces the space occupied by the target video. While ensuring that key information is not missed, this also spares the user unnecessary time spent viewing video within static time periods, reducing browsing time and improving the efficiency of video browsing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Analysis (AREA)

Abstract

A video processing method and apparatus, the method comprising: receiving a viewing instruction input by a user, the viewing instruction carrying a start time (11); determining an attribute of the time period in which the start time is located (12); and acquiring information of a target video corresponding to the attribute of the time period and processing the acquired information of the target video (13).

Description

一种视频处理方法及装置 技术领域
本文涉及但不限于通信技术领域,特别是指一种视频处理方法及装置。
背景技术
在智能家居监控设备的使用产品过程中发现,所录制视频的回放相当的费时,且就家庭使用情景而言,大部分视频片段都是无变化,对用户无意义的。
智能监控行业的视频预览大多为定时向客户端推送图片,或者监控图像出现异常时向客户端发送报警信息。如果想通过手机端回放所录视频,均需要对视频进行完整的下载,既耗费有限的数据资源,也浪费时间,效率低下。
发明内容
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。
本发明实施例提供一种视频处理方法及装置。
本发明实施例提供一种视频处理方法,包括:
接收用户输入的查看指令,所述查看指令中携带一起始时刻;
确定所述起始时刻所处的时间段的属性;
获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理。
其中,当所述起始时刻所处的时间段的属性为静态时间段时,所述获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理的步骤,包括:
获取与所述静态时间段对应的一帧图像,并显示获取的所述一帧图像。
其中,当所述起始时刻所处的时间段的属性为动态时间段时,所述获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理的步骤,包括:
获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频。
其中,所述接收用户输入的查看指令之前或者接收用户输入的查看指令之后,所述方法还包括:
确定目标视频包含的一个或多个时间段的属性;其中,时间段的属性分为静态时间段或者动态时间段。
其中,所述确定目标视频包含的一个或多个时间段的属性的步骤,包括:
对目标视频进行取样得到目标视频的多帧图像;
对所述多帧图像进行色值比对,确定每帧的图像属性;其中,所述图像属性分为动态帧图像或者静态帧图像;
若连续M帧的图像属性为动态帧图像,确定连续M帧图像对应的时间段的属性为动态时间段;其中,M为大于3的整数;
若所述目标视频还包括其他时间段,确定所述目标视频的其他时间段的属性为静态时间段。
其中,所述对所述多帧图像进行色值比对,确定每帧的图像属性的步骤,包括:
设置所述多帧图像中第N帧图像为基准帧图像,并将第N+1帧图像的色值与所述基准帧图像的色值进行比对;
若色值比对结果满足第一预设条件,确定所述第N+1帧的图像属性为静态帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定;
若色值对比结果不满足所述第一预设条件,确定所述第N+1帧的图像属性为动态帧图像,并设置所述第N+1帧图像为基准帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定,其中,N为大于或者等于1的整数。
其中,所述将第N+1帧图像的色值与所述基准帧图像的色值进行比对的步骤,包括:
将所述基准帧图像和第N+1帧图像分别划分成互不重叠的N个分块;其中,N为大于零的整数;
将第N+1帧图像的每个分块的像素点的色值分别与所述基准帧图像的相同位置的分块的像素点的色值进行比对;
若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例小于第一预设值,确定该分块为变更块;若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例大于或等于第一预设值,确定该分块为未变更块;
将第N+1帧图像中相邻的变更块整合为搜索块,并将第N帧图像中与所述搜索块对应的位置设置为参考块;并计算所述搜索块与参考块之间的残差绝对值;
若所述残差绝对值大于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果满足第一预设条件,则所述第N+1帧的图像属性为静态帧图像;若所述残差绝对值小于或等于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果不满足第一预设条件,则确定所述第N+1帧的图像属性为动态帧图像。
其中,所述获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频的步骤,包括:
若所述起始时刻所处的时间段的属性为动态时间段,获取与所述动态时间段对应的一帧预览图像并显示获取的所述一帧预览图像;
接收用户根据所述预览图像输入的浏览指令,获取从所述起始时刻开始的动态视频并播放获取的所述动态视频。
其中,所述确定目标视频包含的一个或多个时间段的属性之后,所述方法还包括以下至少一种步骤:
利用第一标识对属性为静态时间段的目标视频的时间段进行标识;
利用第二标识对属性为动态时间段的目标视频的时间段进行标识。
本发明实施例还提供一种视频处理装置,包括:
指令接收模块,设置为接收用户输入的查看指令,所述查看指令中携带一起始时刻;
属性确定模块,设置为确定所述起始时刻所处的时间段的属性;
处理模块,设置为获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理。
其中,所述处理模块包括:
第一处理子模块,设置为当所述起始时刻所处的时间段的属性为静态时间段时,获取与所述静态时间段对应的一帧图像,并显示获取的所述一帧图像。
其中,所述处理模块还包括:
第二处理子模块,设置为当所述起始时刻所处的时间段的属性为动态时间段时,获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频。
其中,所述装置还包括:
确定模块,设置为确定目标视频包含的一个或多个时间段的属性;其中,时间段的属性分为静态时间段或者动态时间段。
其中,所述确定模块包括:
取样子模块,设置为对目标视频进行取样得到目标视频的多帧图像;
属性确定子模块,设置为对所述多帧图像进行色值比对,确定每帧的图像属性;其中,所述图像属性分为动态帧图像或者静态帧图像;
动态确定子模块,设置为若连续M帧的图像属性为动态帧图像,确定连续M帧图像对应的时间段的属性为动态时间段;其中,M为大于3的整数;
静态确定子模块,设置为若所述目标视频还包括其他时间段,确定所述目标视频的其他时间段的属性为静态时间段。
其中,所述属性确定子模块包括:
色值比对单元,设置为设置所述多帧图像中第N帧图像为基准帧图像, 并将第N+1帧图像的色值与所述基准帧图像的色值进行比对;
第一确定单元,设置为若色值比对结果满足第一预设条件,确定所述第N+1帧的图像属性为静态帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定;
第二确定单元,设置为若色值对比结果不满足所述第一预设条件,确定所述第N+1帧的图像属性为动态帧图像,并设置所述第N+1帧图像为基准帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定,其中,N为大于或者等于1的整数。
其中,所述色值比对单元包括:
分块子单元,设置为将所述基准帧图像和第N+1帧图像分别划分成互不重叠的N个分块;其中,N为大于零的整数;
第一比对子单元,设置为将第N+1帧图像的每个分块的像素点的色值分别与所述基准帧图像的相同位置的分块的像素点的色值进行比对;
确定子单元,设置为若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例小于第一预设值,确定该分块为变更块;若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例大于或第一预设值,确定该分块为未变更块;
计算单元,设置为将第N+1帧图像中相邻的变更块整合为搜索块,并将第N帧图像中与所述搜索块对应的位置设置为参考块;并计算所述搜索块与参考块之间的残差绝对值;
第二比对子单元,设置为若所述残差绝对值大于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果满足第一预设条件,则所述第N+1帧的图像属性为静态帧图像;若所述残差绝对值小于或等于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果不满足第一预设条件,则确定所述第N+1帧的图像属性为动态帧图像。
其中,所述第二处理子模块包括:
预览单元,设置为若所述起始时刻所处的时间段的属性为动态时间段,获取与所述动态时间段对应的一帧预览图像并控制获取的所述一帧预览图像显示;
播放单元,设置为接收用户根据所述预览图像输入的浏览指令,获取从所述起始时刻开始的动态视频并控制获取的所述动态视频播放。
其中,所述装置还包括以下至少一种模块:
第一标识模块,设置为利用第一标识对属性为静态时间段的目标视频的时间段进行标识;
第二标识模块,设置为利用第二标识对属性为动态时间段的目标视频的时间段进行标识。
本发明实施例至少具有如下有益效果:
本发明实施例的视频处理方法及装置中,根据用户想要查看的视频的起始时刻所处的时间段的属性来对用户想要查看的视频进行相应处理,时间段的属性分为静态时间段或者动态时间段,用户查看静态时间段时仅显示一帧图像,而用户查看动态时间段时才播放动态视频;则终端设备获取或者存储目标视频时由于静态时间段仅为一帧图像,减小了目标视频所占的空间;且在保证关键信息不被错过的同时,避免用户花费不必要的时间去查看静态时间段内的视频,减少视频浏览的时间,提高视频浏览的效率。
在阅读并理解了附图和详细描述后,可以明白其他方面。
附图概述
图1表示本发明实施例提供的视频处理方法的步骤流程图;
图2表示本发明实施例提供的视频处理方法中帧分块示意图;
图3表示本发明实施例提供的视频处理方法中帧比对示意图;
图4表示本发明实施例提供的视频处理装置的结构图。
本发明的实施方式
下面将结合附图及可选实施例进行详细描述。
为了减少用户所消耗的数据流量,或者减少存储空间的占用,智能监控产品会对采集到的视频进行压缩存储处理。其主流的存储技术基于单帧,是建立在牺牲画质的基础上来实现存储空间的缩小。
该类压缩方法基于视频数据每一帧的存储模式对其进行改进,进而减小文件所占空间。然而其压缩规模随着技术的提升而很难有更大限度的突破。另一方面,从用户的使用角度出发,除了提高存储空间(SD卡,云存储空间等)的使用效率外并无其他明显改进。比如如下场景:用户想在手机客户端查看早9点至下午5点家里宠物狗的活动情况,那么就必须看完8小时的视频才能完整了解该时间内的影像信息,若快进,则不能保证关键信息不被错过。
如图1所示,本发明实施例提供一种视频处理方法,包括:
步骤11,接收用户输入的查看指令,所述查看指令中携带一起始时刻。
该起始时刻为用户想要查看的视频的起始时刻;例如,点击目标视频的时间轴上的“04:30”,此时触发的查看信令为查看目标视频的第4分30秒起的视频,即查看指令中携带的起始时刻为第4分30秒。
步骤12,确定所述起始时刻所处的时间段的属性;
步骤13,获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理。
本步骤中,目标视频的信息可以通过视频采集单元获得,也可以通过其他途径获得,例如从一服务器下载等,在此不作具体限定。关于通过视频采集单元获得的方式,该视频采集单元可以为常见的CCD(电荷耦合元件)摄像头,也可以是其他视频采集设备;其中,CCD摄像头具有体积小、重量轻、不受磁场影响、具有抗震动和撞击等特性。
本发明实施例中,用户输入的查看指令是用户根据其自身的需求输入的,一般通过时间轴触发;例如,点击目标视频的时间轴上的“04:30”,从而需要判断4分30秒所处的时间段的属性,并根据其时间段的属性来获取目标视频的信息;需要说明的是,该目标视频的信息可以为一帧图像也可 以为一段视频,在此不作具体限定。
可选的,本发明的上述实施例中当所述起始时刻所处的时间段的属性为静态时间段时,步骤13包括:
步骤131,获取与所述静态时间段对应的一帧图像,并显示获取的所述一帧图像。
即起始时刻所处的时间段的属性为静态时间段时,则获取与静态时间段对应的一帧图像,并显示静态时间段内的上述图像。
可选的,本发明的上述实施例中当所述起始时刻所处的时间段的属性为动态时间段时,步骤13包括:
步骤132,获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频。
即起始时刻所处的时间段的属性为动态时间段时,则获取从起始时刻开始的动态视频并播放;例如播放从4分30秒开始的视频,若动态时间段内用户没有其他操作,则播放至动态时间段的结束时刻;若动态时间段内有用户操作,则根据用户操作直接结束视频的播放。
可选的,本发明实施例中步骤11之前或者步骤11之后该方法还包括:
步骤10,确定目标视频包含的一个或多个时间段的属性;其中,时间段的属性分为静态时间段或者动态时间段。
需要说明的是,一个或多个时间段的属性可以直接第三方(第三方可以为服务器、系统等)确定,终端可以直接获取;即确定静态时间段或者动态时间段的过程在摄像头捕获视频数据之后,存储之前。或者一个或多个时间段的属性可以由终端自身确定,即确定静态时间段或者动态时间段的过程是在用户预览之前,即将摄像头采集的视频数据全部保存,仅在预览时进行处理,给用户提供所选时间段内的动态信息或静态信息。
可选的,静态时间段和动态时间段中至少一种的确定方法都是相同的,本发明实施例利用帧比对的方式来确定静态时间段或者动态时间段。即步骤10包括:
步骤101,对目标视频进行取样得到目标视频的多帧图像。
可选的,对目标视频进行取样的过程包括:以时间轴为基准,在时间轴内以固定时间为间隔取样,得到多帧图像。
步骤102,对所述多帧图像进行色值比对,确定每帧的图像属性;其中,所述图像属性分为动态帧图像或者静态帧图像;
步骤103,若连续M帧的图像属性为动态帧图像,确定连续M帧图像对应的时间段的属性为动态时间段;其中,M为大于3的整数;
步骤104,若所述目标视频还包括其他时间段,确定所述目标视频的其他时间段的属性为静态时间段。
本发明实施例中将两帧图像的色值进行比对,从而确定每帧的图像属性;其中图像属性分为动态帧图像或者静态帧图像。静态帧图像则表明被比对的图像与基准帧图像相同;而动态帧图像则表明被比对的图像与基准帧图像不同。若连续M帧为动态帧,则该连续M帧对应的时间段的属性为动态时间段,则将该时间段内的视频上传至服务器存储或存储至本地;并认定其他时间段的属性为静态时间段,则上传静态时间段内的任意一帧图像至云服务器存储或者存储至本地。
可选的,静态时间段指该时间段内容每帧图像均相同或大致相同,其大致相同的定义由预先设定的敏感度确定;而动态时间段指该时间段内相邻帧图像均不相同或者均不大致相同,即相邻两帧图像之间存在差异。例如,待处理视频为早9点至下午5点之间家里宠物狗的活动情况,假设中午12点至下午3点之间该宠物狗在睡觉,没有挪动位置;而其他时间段内宠物狗均在走动;则早9点至中午12点之间为动态时间段,中午12点至下午3点之间为静态时间段,下午3点至下午5点之间为动态时间段。
由于静态时间段内,任意一帧图像均相同或大致相同,此时为了节省存储空间,保存视频时仅需存储任意一帧图像即可。相应的,用户想要查看静态时间段内的视频时仅显示已存储的上述任意一帧图像,从而减少浏览视频的时间,提高浏览效率。而由于动态时间段内相邻帧图像均不相同或者不大致相同,即相邻两帧图像之间存在差异;此时为了保证不错过任何关键信息,故需存储此时间段内的视频。相应的,用户想要查看动态时间段内的视频时则显示已存储的上述时间段内的目标视频,从而避免关键信息被错过,保证 视频浏览效率。
可选的,存储目标视频或者帧图像时,根据用户保存策略的不同,可以将数据保存于本地,或者通过wifi上传至云服务器等。存储位置包括云服务器、本地SD卡存储或者网络附属存储NAS存储,在此不一一举例。
可选的,本发明的上述实施例中步骤102包括:
步骤1021,设置所述多帧图像中第N帧图像为基准帧图像,并将第N+1帧图像的色值与所述基准帧图像的色值进行比对;
步骤1022,若色值比对结果满足第一预设条件,确定所述第N+1帧的图像属性为静态帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定;
步骤1023,若色值对比结果不满足所述第一预设条件,确定所述第N+1帧的图像属性为动态帧图像,并设置所述第N+1帧图像为基准帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定,其中,N为大于或者等于1的整数。
例如,N等于1时;视频采集模块以时间为单位,将时间段T1内的视频数据(即目标视频)存储至本地;取T1的起始帧为基准帧,也可以取用户选择的基础场景为基准帧。继而将第2帧图像与基准帧图像进行色值比对,若第2帧图像的色值与基准帧图像的色值之间的色值向量差小于阈值(即色值比对结果满足第一预设条件),则第2帧为非异常帧,即第2帧图像为静态帧图像;若所述第二帧图像的色值与基准帧图像的色值之间的色值向量差大于或者等于所述阈值(即色值比对结果不满足第一预设条件),则第2帧为异常帧,即第2帧图像为动态帧图像。
可选的,当第2帧为异常帧时,需将第2帧设为基准帧,即设置第2帧图像为基准帧图像,并将第3帧图像与第2帧图像进行色值比对,直到确定最后一帧图像的属性;而当第2帧为非异常帧时,则第1帧仍为基准帧,此时需将第3帧图像与第1帧图像进行色值比对,直到确定最后一帧图像的属性。
可选的,本发明的上述实施例中,步骤1021包括:
将所述基准帧图像和第N+1帧图像分别划分成互不重叠的N个分块;其中,N为大于零的整数;
将第N+1帧图像的每个分块的像素点的色值分别与所述基准帧图像的相同位置的分块的像素点的色值进行比对;
若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例小于第一预设值,确定该分块为变更块;若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例大于或等于第一预设值,确定该分块为未变更块;
将第N+1帧图像中相邻的变更块整合为搜索块,并将第N帧图像中与所述搜索块对应的位置设置为参考块;并计算所述搜索块与参考块之间的残差绝对值;
若所述残差绝对值大于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果满足第一预设条件,则所述第N+1帧的图像属性为静态帧图像;若所述残差绝对值小于或等于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果不满足第一预设条件,则确定所述第N+1帧的图像属性为动态帧图像。
例如,首先设置基准帧为模型帧,第二帧图像为参考帧。如图2所示为帧分块示意图,将模型帧与参考帧进行互补重叠的m×n的均匀分块,每个分块的大小为m×n像素;然后将模型帧与参考帧进行分块比对,若对应位置的每个像素点的色值比对结果小于设定值,则继续进行下一分块的比对;若存在对应位置的像素点的色值比对结果大于或等于设定值,比较该两个字块的向量差,若比对结果小于设定值,则判定为静态帧,若比对结果大于或等于设定值,判定为动态帧。
可选的,将两帧图像进行互不重叠的mxn的均匀分块(每块大小为mxn像素);再将两帧图像对应分块的每个像素点的色值进行比对,若相同位置的色值不同像素点的个数与该块总像素点的个数比例小于限定值,则继续比对下一分块;若相同位置的色值不同像素点的个数与该块总像素点的个 数比例大于或等于限定值,则标记该块为变更块;遍历对比所有的子块并标记变更块的坐标,将相邻变更块整合为一个搜索块,其坐标为(X2,Y2),而参考块的坐标为(0,Y1),如图3所示;将参考帧内一个或多个子块进行编号,如1,2,3…;以此计算两帧间的残差值。其中,DBVn表示搜索块与参考块n的残差绝对值,DBV表示当前残差值之和,n表示像素个数,(x,y)表示像素坐标。M1表示第一预制阈值,M2表示第二预制阈值。
可选的,当计算搜索块与参考帧子块1的残差值DBV1时,若DBV1>M1且DBV1<M2,则比对结束,若DBV1小于或等于M1,或DBV1大于或等于M2,继续从参考块n中选择第二子块计算DBV2,若满足条件,则结束匹配,若不满足条件,继续从参考块n中选择剩余的子块计算残差值,并按照上述步骤处理,直到符合预制条件或最后一个子块匹配完毕。若匹配过程异常结束,即子块间DBV大于预设值,并将该帧的动态标记为置1,即该帧为动态帧;同时将该帧作为新的模型帧,继续进行比对,直到两帧间的运动向量数据小于设定值,并将该帧的动态标记为置0,即该帧为静态帧。
其次,遍历每个帧的动态标记位,若连续t时间段内均为1,则将该时间段的视频保存,并记录该时间段的起始时间t1及终止时间t2。同时保存t1至t2间的视频,并更新关键时间点三维数组:Ti[i,Ti1,Ti2]。其中,i为视频标记,即第i个视频,Ti1为视频i的起始时间,Ti2为视频i的终止时间。
可选的,本发明的上述实施例中步骤132包括:
步骤1321,若所述起始时刻所处的时间段的属性为动态时间段,获取与所述动态时间段对应的一帧预览图像并显示获取的所述一帧预览图像;
步骤1322,接收用户根据所述预览图像输入的浏览指令,获取从所述起始时刻开始的动态视频并播放获取的所述动态视频。
例如,用户在手机端选择预览时,从云端下载时间点数组信息并进行解析。下载静止区间及动态视频区间的预览图。若该时间段内所监控区域无异常情况或无人员走动,用户只消耗一张图片的流量;若该时间段为动态时间段,用户可根据该动态时间段内的预览图像决定是否需要查看,若需要查看则触发浏览指令,继而从云端下载该时间段内的动态视频。
可选的,为了用户能够简单明了的区分所述静态时间段和所述动态时间段,故所述方法还包括以下至少一种步骤:
步骤14,利用第一标识对属性为静态时间段的目标视频的时间段进行标识;
步骤15,利用第二标识对属性为动态时间段的目标视频的时间段进行标识。
例如,静态时间段利用灰色显示,而动态时间段利用高亮显示。用户点击灰色区域,可预览该时间段内的静止图片信息;而用户单击高亮区域,可预览该时间段内的视频。
需要说明的是,本发明实施例提供的视频处理方法可以在摄像头捕获视频数据之后,存储之前完成;也可以在用户预览之前完成,即摄像头采集的视频数据全部保存,仅在预览时进行帧比对,给用户提供所选时间段内的视频动态、静态信息,用户可以有针对性的预览及下载。
综上,本发明实施例提供的视频处理方法中,根据用户想要查看的视频的起始时刻所处的时间段的属性来对用户想要查看的视频进行相应处理,可选的,时间段的属性分为静态时间段或者动态时间段,用户查看静态时间段时仅显示一帧图像,而用户查看动态时间段时才播放动态视频;则终端设备获取或者存储目标视频时由于静态时间段仅为一帧图像,减小了目标视频所占的空间;且在保证关键信息不被错过的同时,避免用户花费不必要的时间去查看静态时间段内的视频,减少视频浏览的时间,提高视频浏览的效率。
如图4所示,本发明实施例提供一种视频处理装置,包括:
指令接收模块41,设置为接收用户输入的查看指令,所述查看指令中携带一起始时刻;
属性确定模块42,设置为确定所述起始时刻所处的时间段的属性;
处理模块43,设置为获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理。
可选的,本发明实施例中所述处理模块43包括:
第一处理子模块,设置为当所述起始时刻所处的时间段的属性为静态时 间段时,获取与所述静态时间段对应的一帧图像,并显示获取的所述一帧图像。
可选的,本发明实施例中所述处理模块43还包括:
第二处理子模块,设置为当所述起始时刻所处的时间段的属性为动态时间段时,获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频。
可选的,本发明实施例中所述装置还包括:
确定模块,设置为确定目标视频包含的一个或多个时间段的属性;其中,时间段的属性分为静态时间段或者动态时间段。
可选的,本发明实施例中所述确定模块包括:
取样子模块,设置为对目标视频进行取样得到目标视频的多帧图像;
属性确定子模块,设置为对所述多帧图像进行色值比对,确定每帧的图像属性;其中,所述图像属性分为动态帧图像或者静态帧图像;
动态确定子模块,设置为若连续M帧的图像属性为动态帧图像,确定连续M帧图像对应的时间段的属性为动态时间段;其中,M为大于3的整数;
静态确定子模块,设置为若所述目标视频还包括其他时间段,确定所述目标视频的其他时间段的属性为静态时间段。
可选的,本发明实施例中所述属性确定子模块包括:
色值比对单元,设置为设置所述多帧图像中第N帧图像为基准帧图像,并将第N+1帧图像的色值与所述基准帧图像的色值进行比对;
第一确定单元,设置为若色值比对结果满足第一预设条件,确定所述第N+1帧的图像属性为静态帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定;
第二确定单元,设置为若色值对比结果不满足所述第一预设条件,确定所述第N+1帧的图像属性为动态帧图像,并设置所述第N+1帧图像为基准帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对, 直至完成所述多帧图像中的最后一帧的图像属性的确定,其中,N为大于或者等于1的整数。
可选的,本发明实施例中所述色值比对单元包括:
分块子单元,设置为将所述基准帧图像和第N+1帧图像分别划分成互不重叠的N个分块;其中,N为大于零的整数;
第一比对子单元,设置为将第N+1帧图像的每个分块的像素点的色值分别与所述基准帧图像的相同位置的分块的像素点的色值进行比对;
确定子单元,设置为若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例小于第一预设值,确定该分块为变更块;若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例大于或等于第一预设值,确定该分块为未变更块;
计算单元,设置为将第N+1帧图像中相邻的变更块整合为搜索块,并将第N帧图像中与所述搜索块对应的位置设置为参考块;并计算所述搜索块与参考块之间的残差绝对值;
第二比对子单元,设置为若所述残差绝对值大于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果满足第一预设条件,则所述第N+1帧的图像属性为静态帧图像;若所述残差绝对值小于或等于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果不满足第一预设条件,则确定所述第N+1帧的图像属性为动态帧图像。
可选的,本发明实施例中所述第二处理子模块包括:
预览单元,设置为若所述起始时刻所处的时间段的属性为动态时间段,获取与所述动态时间段对应的一帧预览图像并控制获取的所述一帧预览图像显示;
播放单元,设置为接收用户根据所述预览图像输入的浏览指令,获取从所述起始时刻开始的动态视频并控制获取的所述动态视频播放。
可选的,本发明实施例中所述装置还包括以下至少一种模块:
第一标识模块,设置为利用第一标识对属性为静态时间段的目标视频的时间段进行标识;
第二标识模块,设置为利用第二标识对属性为动态时间段的目标视频的时间段进行标识。
综上,本发明实施例提供的视频处理装置中据用户想要查看的视频的起始时刻所处的时间段的属性来对用户想要查看的视频进行相应处理,,时间段的属性分为静态时间段或者动态时间段,用户查看静态时间段时仅显示一帧图像,而用户查看动态时间段时才播放动态视频;则终端设备获取或者存储目标视频时由于静态时间段仅为一帧图像,减小了目标视频所占的空间;且在保证关键信息不被错过的同时,避免用户花费不必要的时间去查看静态时间段内的视频,减少视频浏览的时间,提高视频浏览的效率。
需要说明的是,本发明实施例提供的视频处理装置是应用上述实施例提供的视频处理方法的处理装置,上述视频处理方法的所有实施例均适用于该视频处理装置,且均能达到相同或相似的有益效果。
本发明实施例还提供了一种计算机可读存储介质,存储有计算机可执行指令,所述计算机可执行指令被处理器执行时实现上述实施例所述的方法。
本领域普通技术人员可以理解,上文中所公开方法中的全部或某些步骤、系统、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理单元的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。某些组件或所有组件可以被实施为由处理器,如数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除 和不可移除介质。计算机存储介质包括但不限于RAM、ROM、EEPROM、闪存或其他存储器技术、CD-ROM、数字多功能盘(DVD)或其他光盘存储、磁盒、磁带、磁盘存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,本领域技术人员公知的是,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。
以上所述是本发明的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明所述原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本发明的保护范围。
工业实用性
本发明实施例根据用户想要查看的视频的起始时刻所处的时间段的属性来对用户想要查看的视频进行相应处理,时间段的属性分为静态时间段或者动态时间段,用户查看静态时间段时仅显示一帧图像,而用户查看动态时间段时才播放动态视频;则终端设备获取或者存储目标视频时由于静态时间段仅为一帧图像,减小了目标视频所占的空间;且在保证关键信息不被错过的同时,避免用户花费不必要的时间去查看静态时间段内的视频,减少视频浏览的时间,提高视频浏览的效率。

Claims (18)

  1. 一种视频处理方法,包括:
    接收用户输入的查看指令,所述查看指令中携带一起始时刻(11);
    确定所述起始时刻所处的时间段的属性(12);
    获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理(13)。
  2. 根据权利要求1所述的方法,其中,所述获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理的步骤(13),包括:
    当所述起始时刻所处的时间段的属性为静态时间段时,获取与所述静态时间段对应的一帧图像,并显示获取的所述一帧图像。
  3. 根据权利要求1所述的方法,其中,所述获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理的步骤(13),包括:
    当所述起始时刻所处的时间段的属性为动态时间段时,获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频。
  4. 根据权利要求1所述的方法,所述方法还包括:
    所述接收用户输入的查看指令之前或者接收用户输入的查看指令之后,确定目标视频包含的一个或多个时间段的属性;其中,时间段的属性分为静态时间段或者动态时间段。
  5. 根据权利要求4所述的方法,其中,所述确定目标视频包含的一个或多个时间段的属性的步骤,包括:
    对目标视频进行取样得到目标视频的多帧图像;
    对所述多帧图像进行色值比对,确定每帧的图像属性;其中,所述图像属性分为动态帧图像或者静态帧图像;
    若连续M帧的图像属性为动态帧图像,确定连续M帧图像对应的时间段的属性为动态时间段;其中,M为大于3的整数;
    若所述目标视频还包括其他时间段,确定所述目标视频的其他时间段的属性为静态时间段。
  6. 根据权利要求5所述的方法,其中,所述对所述多帧图像进行色值比对,确定每帧的图像属性的步骤,包括:
    设置所述多帧图像中第N帧图像为基准帧图像,并将第N+1帧图像的色值与所述基准帧图像的色值进行比对;
    若色值比对结果满足第一预设条件,确定所述第N+1帧的图像属性为静态帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定;
    若色值对比结果不满足所述第一预设条件,确定所述第N+1帧的图像属性为动态帧图像,并设置所述第N+1帧图像为基准帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定,其中,N为大于或者等于1的整数。
  7. 根据权利要求6所述的方法,其中,所述将第N+1帧图像的色值与所述基准帧图像的色值进行比对的步骤,包括:
    将所述基准帧图像和第N+1帧图像分别划分成互不重叠的N个分块;其中,N为大于零的整数;
    将第N+1帧图像的每个分块的像素点的色值分别与所述基准帧图像的相同位置的分块的像素点的色值进行比对;
    若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例小于第一预设值,确定该分块为变更块;若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例大于或等于第一预设值,确定该分块为未变更块;
    将第N+1帧图像中相邻的变更块整合为搜索块,并将第N帧图像中与所述搜索块对应的位置设置为参考块;并计算所述搜索块与参考块之间的残差绝对值;
    若所述残差绝对值大于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果满足第一预设条件,则所述第N+1帧的图像属性为静 态帧图像;若所述残差绝对值小于或等于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果不满足第一预设条件,则确定所述第N+1帧的图像属性为动态帧图像。
  8. 根据权利要求3所述的方法,其中,所述获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频的步骤,包括:
    若所述起始时刻所处的时间段的属性为动态时间段,获取与所述动态时间段对应的一帧预览图像并显示获取的所述一帧预览图像;
    接收用户根据所述预览图像输入的浏览指令,获取从所述起始时刻开始的动态视频并播放获取的所述动态视频。
  9. 根据权利要求4所述的方法,所述方法还包括以下至少一种步骤:
    所述确定目标视频包含的一个或多个时间段的属性之后,利用第一标识对属性为静态时间段的目标视频的时间段进行标识;
    所述确定目标视频包含的一个或多个时间段的属性之后,利用第二标识对属性为动态时间段的目标视频的时间段进行标识。
  10. 一种视频处理装置,包括:
    指令接收模块(41),设置为接收用户输入的查看指令,所述查看指令中携带一起始时刻;
    属性确定模块(42),设置为确定所述起始时刻所处的时间段的属性;
    处理模块(43),设置为获取与所述时间段的属性对应的目标视频的信息,并对获取的所述目标视频的信息进行处理。
  11. 根据权利要求10所述的装置,其中,所述处理模块(43)包括:
    第一处理子模块,设置为当所述起始时刻所处的时间段的属性为静态时间段时,获取与所述静态时间段对应的一帧图像,并显示获取的所述一帧图像。
  12. 根据权利要求10所述的装置,其中,所述处理模块(43)还包括:
    第二处理子模块,设置为当所述起始时刻所处的时间段的属性为动态时间段时,获取从所述起始时刻开始的动态视频,并播放获取的所述动态视频。
  13. 根据权利要求10所述的装置,所述装置还包括:
    确定模块,设置为确定目标视频包含的一个或多个时间段的属性;其中,时间段的属性分为静态时间段或者动态时间段。
  14. 根据权利要求13所述的装置,其中,所述确定模块包括:
    取样子模块,设置为对目标视频进行取样得到目标视频的多帧图像;
    属性确定子模块,设置为对所述多帧图像进行色值比对,确定每帧的图像属性;其中,所述图像属性分为动态帧图像或者静态帧图像;
    动态确定子模块,设置为若连续M帧的图像属性为动态帧图像,确定连续M帧图像对应的时间段的属性为动态时间段;其中,M为大于3的整数;
    静态确定子模块,设置为若所述目标视频还包括其他时间段,确定所述目标视频的其他时间段的属性为静态时间段。
  15. 根据权利要求14所述的装置,其中,所述属性确定子模块包括:
    色值比对单元,设置为设置所述多帧图像中第N帧图像为基准帧图像,并将第N+1帧图像的色值与所述基准帧图像的色值进行比对;
    第一确定单元,设置为若色值比对结果满足第一预设条件,确定所述第N+1帧的图像属性为静态帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定;
    第二确定单元,设置为若色值对比结果不满足所述第一预设条件,确定所述第N+1帧的图像属性为动态帧图像,并设置所述第N+1帧图像为基准帧图像,继续将第N+2帧图像的色值与所述基准帧图像的色值进行比对,直至完成所述多帧图像中的最后一帧的图像属性的确定,其中,N为大于或者等于1的整数。
  16. 根据权利要求15所述的装置,其中,所述色值比对单元包括:
    分块子单元,设置为将所述基准帧图像和第N+1帧图像分别划分成互不重叠的N个分块;其中,N为大于零的整数;
    第一比对子单元,设置为将第N+1帧图像的每个分块的像素点的色值分别与所述基准帧图像的相同位置的分块的像素点的色值进行比对;
    确定子单元,设置为若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例小于第一预设值,确定该分块为变更块;若相同位置的分块中色值不同的像素点的个数占该分块中总像素点的个数的比例大于或等于第一预设值,确定该分块为未变更块;
    计算单元,设置为将第N+1帧图像中相邻的变更块整合为搜索块,并将第N帧图像中与所述搜索块对应的位置设置为参考块;并计算所述搜索块与参考块之间的残差绝对值;
    第二比对子单元,设置为若所述残差绝对值大于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果满足第一预设条件,则所述第N+1帧的图像属性为静态帧图像;若所述残差绝对值小于或等于第二预设值,确定第N+1帧图像与所述基准帧图像的色值比对结果不满足第一预设条件,则确定所述第N+1帧的图像属性为动态帧图像。
  17. 根据权利要求12所述的装置,其中,所述第二处理子模块包括:
    预览单元,设置为若所述起始时刻所处的时间段的属性为动态时间段,获取与所述动态时间段对应的一帧预览图像并控制获取的所述一帧预览图像显示;
    播放单元,设置为接收用户根据所述预览图像输入的浏览指令,获取从所述起始时刻开始的动态视频并控制获取的所述动态视频播放。
  18. 根据权利要求13所述的装置,所述装置还包括以下至少一种模块:
    第一标识模块,设置为利用第一标识对属性为静态时间段的目标视频的时间段进行标识;
    第二标识模块,设置为利用第二标识对属性为动态时间段的目标视频的时间段进行标识。
PCT/CN2017/109915 2016-11-08 2017-11-08 一种视频处理方法及装置 WO2018086527A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611034680.4A CN108062507B (zh) 2016-11-08 2016-11-08 一种视频处理方法及装置
CN201611034680.4 2016-11-08

Publications (1)

Publication Number Publication Date
WO2018086527A1 true WO2018086527A1 (zh) 2018-05-17

Family

ID=62110131

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109915 WO2018086527A1 (zh) 2016-11-08 2017-11-08 一种视频处理方法及装置

Country Status (2)

Country Link
CN (1) CN108062507B (zh)
WO (1) WO2018086527A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672154A (zh) * 2020-12-15 2021-04-16 上海信联信息发展股份有限公司 直播视频播放方法、装置、服务器和计算机可读存储介质
CN113129360A (zh) * 2019-12-31 2021-07-16 北京字节跳动网络技术有限公司 视频内对象的定位方法、装置、可读介质及电子设备
CN114283356A (zh) * 2021-12-08 2022-04-05 上海韦地科技集团有限公司 一种移动图像的采集分析系统及方法
CN115514985A (zh) * 2022-09-20 2022-12-23 广东省宏视智能科技有限公司 一种视频处理方法、装置、电子设备及存储介质

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110662106B (zh) * 2019-09-18 2021-08-27 浙江大华技术股份有限公司 一种录像回放的方法及装置
CN110809166B (zh) * 2019-10-31 2022-02-11 北京字节跳动网络技术有限公司 视频数据处理方法、装置和电子设备
CN111050132B (zh) * 2019-12-17 2021-10-15 浙江大华技术股份有限公司 监控设备的监控预览图生成方法、装置、终端设备及存储装置
CN113535993A (zh) * 2021-07-30 2021-10-22 北京字跳网络技术有限公司 作品封面显示方法、装置、介质和电子设备
CN114374845B (zh) * 2021-12-21 2022-08-02 北京中科智易科技有限公司 自动压缩加密的存储系统和设备
CN115886717B (zh) * 2022-08-18 2023-09-29 上海佰翊医疗科技有限公司 一种眼裂宽度的测量方法、装置和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101060624A (zh) * 2007-05-08 2007-10-24 杭州华三通信技术有限公司 视频数据的处理方法及存储设备
CN104394345A (zh) * 2014-12-10 2015-03-04 马人欢 一种安防监控视频存储与回放方法
CN104394379A (zh) * 2014-12-05 2015-03-04 北京厚吉科技有限公司 监控录像快速预览系统及快速预览方法
US20150125130A1 (en) * 2013-11-01 2015-05-07 Alpha Networks Inc. Method for network video recorder to accelerate history playback and event locking
CN105025269A (zh) * 2015-07-26 2015-11-04 杜春辉 一种低流量传输图像的方法
CN105227927A (zh) * 2015-10-15 2016-01-06 桂林电子科技大学 一种监控视频数据存储方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6548203B2 (ja) * 2013-03-18 2019-07-24 任天堂株式会社 情報処理プログラム、情報処理装置、情報処理システム、および、パノラマ動画表示方法
CN106027893A (zh) * 2016-05-30 2016-10-12 广东欧珀移动通信有限公司 控制Live Photo生成的方法、装置及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101060624A (zh) * 2007-05-08 2007-10-24 杭州华三通信技术有限公司 视频数据的处理方法及存储设备
US20150125130A1 (en) * 2013-11-01 2015-05-07 Alpha Networks Inc. Method for network video recorder to accelerate history playback and event locking
CN104394379A (zh) * 2014-12-05 2015-03-04 北京厚吉科技有限公司 监控录像快速预览系统及快速预览方法
CN104394345A (zh) * 2014-12-10 2015-03-04 马人欢 一种安防监控视频存储与回放方法
CN105025269A (zh) * 2015-07-26 2015-11-04 杜春辉 一种低流量传输图像的方法
CN105227927A (zh) * 2015-10-15 2016-01-06 桂林电子科技大学 一种监控视频数据存储方法及装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129360A (zh) * 2019-12-31 2021-07-16 北京字节跳动网络技术有限公司 视频内对象的定位方法、装置、可读介质及电子设备
CN113129360B (zh) * 2019-12-31 2024-03-08 抖音视界有限公司 视频内对象的定位方法、装置、可读介质及电子设备
CN112672154A (zh) * 2020-12-15 2021-04-16 上海信联信息发展股份有限公司 直播视频播放方法、装置、服务器和计算机可读存储介质
CN114283356A (zh) * 2021-12-08 2022-04-05 上海韦地科技集团有限公司 一种移动图像的采集分析系统及方法
CN114283356B (zh) * 2021-12-08 2022-11-29 上海韦地科技集团有限公司 一种移动图像的采集分析系统及方法
CN115514985A (zh) * 2022-09-20 2022-12-23 广东省宏视智能科技有限公司 一种视频处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN108062507A (zh) 2018-05-22
CN108062507B (zh) 2024-02-27

Similar Documents

Publication Publication Date Title
WO2018086527A1 (zh) 一种视频处理方法及装置
US8738622B2 (en) Processing captured images having geolocations
US8031238B2 (en) Image-capturing apparatus, image-capturing method, and computer program product
CN103297682A (zh) 运动图像拍摄设备和使用摄像机装置的方法
KR102180474B1 (ko) 백업 정보를 디스플레이 하여 이미지 파일을 관리하는 장치 및 방법
CN105185121A (zh) 一种虚拟卡口并行识别车牌的方法
US20220415147A1 (en) Devices, systems, and methods for remote video retrieval
US20160261906A1 (en) Method and system for synchronizing usage information between device and server
US20140082208A1 (en) Method and apparatus for multi-user content rendering
CN111582024B (zh) 视频流处理方法、装置、计算机设备和存储介质
TWI589158B (zh) 監控資料的原始畫面儲存系統及其儲存方法
US9955162B2 (en) Photo cluster detection and compression
JP2019012466A (ja) ドライブレコーダ運用システム、ドライブレコーダ、運用方法および運用プログラム
US10194072B2 (en) Method and apparatus for remote detection of focus hunt
JP2012527801A (ja) デジタル画像を撮像する方法及び撮像装置
US10165223B2 (en) Pre-selectable video file playback system and method, and computer program product
US11398091B1 (en) Repairing missing frames in recorded video with machine learning
US10382717B2 (en) Video file playback system capable of previewing image, method thereof, and computer program product
CN113596582A (zh) 一种视频预览方法、装置及电子设备
WO2017101125A1 (zh) 一种监控系统中的图像管理方法及系统
JP2005184095A (ja) 撮像装置、動画撮影方法、及び撮影制御プログラム
US12014612B2 (en) Event detection, event notification, data retrieval, and associated devices, systems, and methods
CN112905821B (zh) 图像显示方法、装置、设备及存储介质
US10861495B1 (en) Methods and systems for capturing and transmitting media
KR101380501B1 (ko) 모바일 단말을 기반으로 한 영상 관제 시스템 및 이의 관제 영상 녹화 중계 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17869832

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17869832

Country of ref document: EP

Kind code of ref document: A1