CN111385512B - Video data processing method and device - Google Patents

Video data processing method and device

Info

Publication number
CN111385512B
Authority
CN
China
Prior art keywords
target area
target
original image
video frame
restoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811638680.4A
Other languages
Chinese (zh)
Other versions
CN111385512A (en)
Inventor
石立用
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201811638680.4A priority Critical patent/CN111385512B/en
Publication of CN111385512A publication Critical patent/CN111385512A/en
Application granted granted Critical
Publication of CN111385512B publication Critical patent/CN111385512B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording
    • H04N 5/91: Television signal processing therefor
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a video data processing method and device. The method includes: saving a target area original image in a video frame and a source attribute of the target area original image, and occluding the target area in the video frame, where the source attribute of the target area original image is used to identify the target video frame to which the target area original image belongs and the position of the target area original image in the target video frame; and restoring the video frame whose target area is occluded according to the target area original image. The method makes it possible to restore an occluded video picture while the video occlusion requirement is still met.

Description

Video data processing method and device
Technical Field
The present application relates to video monitoring technologies, and in particular, to a method and an apparatus for processing video data.
Background
With the rapid development of video monitoring technology, video monitoring is deployed more and more widely. In some specific video monitoring scenes, a specific area in a video picture may be occluded. For example, OSD (On-Screen Display) time and/or information about the deployment location of the monitoring device, such as a certain floor of a certain shopping mall on a certain street in a certain city, is superimposed on a specific area of some video pictures. Such information brings great convenience when viewing the monitoring picture, but the superimposed information blocks a specific area of the video picture, and when that area contains important information, the original picture needs to be restored.
Therefore, how to recover a video picture with a certain area being blocked is a technical problem to be solved urgently.
Disclosure of Invention
In view of the above, the present application provides a video data processing method and apparatus.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided a video data processing method, including:
storing a target area original image and a source attribute of the target area original image in a video frame, and shielding a target area in the video frame; the target area is an area needing to be shielded in a video frame, and the source attribute of the target area original image is used for identifying a target video frame to which the target area original image belongs and the position of the target area original image in the target video frame;
and restoring the video frame with the shielding target area according to the target area original image.
According to a second aspect of embodiments of the present application, there is provided a video data processing apparatus comprising:
the device comprises a storage unit, a processing unit and a processing unit, wherein the storage unit is used for storing a target area original image and a source attribute of the target area original image in a video frame, the target area is an area needing to be shielded in the video frame, and the source attribute of the target area original image is used for identifying a target video frame to which the target area original image belongs and a position in the target video frame;
the shielding unit is used for shielding a target area in the video frame;
and the recovery unit is used for recovering the video frame with the shielded target area according to the original image of the target area.
According to a third aspect of the embodiments of the present application, there is provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the video data processing method when executing the program stored in the memory.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of the above-mentioned video data processing method.
According to the video data processing method, the target area original image in the video frame and the source attribute of the target area original image are saved, the target area in the video frame is occluded, and the video frame whose target area is occluded is later restored according to the target area original image, so that an occluded video picture can be restored while the video occlusion requirement is met.
Drawings
Fig. 1 is a schematic flow chart diagram illustrating a video data processing method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating occlusion of video data according to an exemplary embodiment of the present application;
fig. 3 is a schematic flowchart illustrating a process of matching target area original images according to an exemplary embodiment of the present application;
FIG. 4 is a flowchart illustrating matching of a target area original image to a video frame according to an exemplary embodiment of the present application;
FIG. 5 is a flow chart illustrating video frame recovery according to an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of a video data processing apparatus according to an exemplary embodiment of the present application;
FIG. 7 is a schematic structural diagram of a video data processing apparatus according to yet another exemplary embodiment of the present application;
fig. 8 is a schematic structural diagram of a video data processing apparatus according to yet another exemplary embodiment of the present application;
fig. 9 is a schematic structural diagram of a video data processing apparatus according to yet another exemplary embodiment of the present application;
fig. 10 is a schematic structural diagram of a video data processing apparatus according to yet another exemplary embodiment of the present application;
fig. 11 is a schematic structural diagram of a video data processing apparatus according to yet another exemplary embodiment of the present application;
fig. 12 is a schematic diagram illustrating a hardware structure of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, which is a schematic flow chart of a video data processing method according to an embodiment of the present application, the video data processing method may include the following steps:
s100, storing a target area original image and source attributes of the target area original image in a video frame, and shielding a target area in the video frame; the target area is an area needing to be shielded in the video frame, and the source attribute of the target area original image is used for identifying the target video frame to which the target area original image belongs and the position of the target area original image in the target video frame.
In the embodiment of the present application, for any video frame that needs to be occluded, the original image of the area that needs to be occluded (referred to as the target area herein) in the video frame may be acquired, for example, the target area original image may be cropped out of the video frame, and the target area original image and the source attribute of the target area original image may be saved.
The source attributes of the target area artwork may include a frame attribute for identifying the video frame (referred to herein as the target video frame) to which the target area artwork belongs and an area attribute for identifying the position of the target area artwork in the target video frame.
For example, the identifier of the video capture device that acquires the target video frame and the timestamp of the target video frame may be used as the frame attributes of the target area original image, and the coordinates of the target area original image in the target video frame of the specified size may be used as the area attributes of the target area original image.
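As an illustration of how such a record might be organized, the following is a minimal sketch in Python; the class name TargetAreaPatch and its field names are hypothetical and are not fixed by the present application.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TargetAreaPatch:
    """One saved target area original image together with its source attribute."""
    pixels: np.ndarray   # the cropped target area original image
    # frame attributes: identify the target video frame the original image belongs to
    device_id: str       # identifier of the video capture device
    timestamp: int       # timestamp of the target video frame
    # area attributes: identify the position of the original image in that frame
    x: int               # left coordinate in the full-size video frame
    y: int               # top coordinate in the full-size video frame
    width: int
    height: int
```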
It should be noted that, in the embodiment of the present application, after the target area in the video frame is occluded, the occluded video frame may also be saved.
In order to improve video security, the video frame after being shielded and the original image of the target area may be separately stored, for example, in different files.
And S110, restoring the video frame with the shielding target area according to the original image of the target area.
In the embodiment of the application, when the video frame with the shielding of the target area needs to be restored, the video frame with the shielding of the target area can be restored according to the stored original image of the target area.
It can be seen that, in the method flow shown in fig. 1, when a video frame is subjected to target area occlusion, the target area original image and the source attribute of the target area original image are stored, and when a video frame with an occluded target area is restored, the video frame with an occluded target area is restored according to the stored target area original image, so that under the condition that a video occlusion requirement is met, restoration of a video frame with an occluded target area is realized.
In an embodiment of the present application, the restoring, according to the original image of the target area, the video frame with the blocked target area may include:
determining a matched first target area original image according to the recovery condition;
and restoring the first video frame with the shielding target area according to the first target area original image.
In this embodiment, the recovery condition is used to determine which video frames with an occluded target area need to be restored.
The recovery condition may include a time range or attribute information of a target (such as a human face or a license plate) to be recovered.
For example, when the recovery condition includes a time range, a target area original image whose timestamp falls within the time range may be determined, according to the timestamp information of the saved target area original images, as a matched target area original image, and a video frame whose occluded target area matches that original image may be determined as a video frame to be recovered.
For example, when the recovery condition includes attribute information of the target to be recovered, the saved target area original image may be queried according to the recovery condition to determine the target area original image matching the recovery condition, and determine a video frame in which the target area matching the target area original image is blocked as the video frame to be recovered.
The attribute information of the target to be restored may include, but is not limited to, one or more of the following:
the image of the target to be restored, the model of the target to be restored and the characteristic information of the target to be restored.
It should be noted that, in this embodiment, when the recovery condition is the picture of the target to be recovered, the picture of the target to be recovered and the stored target area original image may be respectively modeled, and a model corresponding to the picture of the target to be recovered and a model of the target area original image are compared to determine the target area original image matched with the picture of the target to be recovered;
when the recovery condition is the model of the target to be recovered, modeling the stored original image of the target area, and comparing the model of the target to be recovered with the model of the original image of the target area to determine the original image of the target area matched with the model of the target to be recovered;
when the recovery condition is the feature information of the target to be recovered (such as the license plate number of the license plate), the stored original image of the target area may be analyzed to extract the feature information of the original image of the target area, and the feature information of the target to be recovered is compared with the feature information of the original image of the target area to determine the original image of the target area matched with the feature information of the target to be recovered.
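A minimal sketch of this three-way matching is given below; the `kind`, `picture`, `model` and `features` fields of the recovery condition and the hypothetical `toolkit` object (standing in for an existing modeling/analysis library) are assumptions, not names fixed by the application.

```python
def match_by_condition(condition, patches, toolkit):
    """Select the saved target area original images matching a recovery condition.

    condition.kind is assumed to be one of "picture", "model" or "feature";
    toolkit.build_model, toolkit.compare_models and toolkit.extract_features
    stand in for an existing modeling/analysis library.
    """
    matched = []
    for p in patches:
        if condition.kind == "picture":
            # model both the picture to be restored and the saved original image
            ok = toolkit.compare_models(toolkit.build_model(condition.picture),
                                        toolkit.build_model(p.pixels))
        elif condition.kind == "model":
            # only the saved original image still needs to be modeled
            ok = toolkit.compare_models(condition.model, toolkit.build_model(p.pixels))
        else:
            # feature information, e.g. a license plate number
            ok = condition.features == toolkit.extract_features(p.pixels)
        if ok:
            matched.append(p)
    return matched
```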
In this embodiment, when the matching target area original image (referred to as a first target area original image herein) is determined according to the restoration condition, a video frame (referred to as a first video frame herein) in which the matching target area is occluded may be restored according to the first target area original image.
In an example, the restoring the first video frame with the matching target area occluded according to the first target area original graph may include:
determining a first video frame matched with the first target area original image according to the source attribute of the first target area original image;
and restoring the first video frame according to the first target area original image.
In this example, after the first target area artwork is determined, a first video frame that matches the first target area artwork may be determined based on frame features in the source attribute of the first target area artwork.
For a first video frame matched with any first target area original image, the position of the first target area original image in the first video frame can be determined according to the area attribute of the first target area original image, and the first video frame can be restored according to the first target area original image.
It should be noted that, in the embodiment of the present application, for a video frame with any target area being blocked, when a plurality of target areas exist in the video frame, a part or all of the target areas of the video frame may be restored according to the original image of the target areas, that is, a part of the target areas existing in the video frame is allowed to be kept in a blocked state, and another part of the target areas is in a restored state, so that flexibility of video restoration is improved.
In addition, in the embodiment of the present application, for a video frame having a plurality of target areas, the video frame may be restored multiple times, and the restored target areas may be different each time the video frame is restored, so that the flexibility of video restoration may be improved.
For example, assuming that target areas 1 to 3 exist in a video frame, only the target area 1 may be restored in one restoration of the video frame; in another recovery of the video frame, the target areas 2 and 3 may be recovered; in another recovery of the video frame, the target areas 1 and 3 may be recovered.
In an embodiment of the present application, the restoring of a part or all of the target area of the video frame according to the target area original image may include:
determining a corresponding target recovery strategy according to the identity authentication information carried in the received recovery request; the recovery strategy comprises recovering part or all of a target area of a video frame;
and recovering the video frame according to the target recovery strategy and the target area original image.
In this embodiment, when video frame recovery is required, identity authentication information carried in a received recovery request may be acquired, and a corresponding recovery policy (referred to as a target recovery policy herein) may be determined according to the identity authentication information.
In the embodiment of the present application, the recovery policy may include recovering a part or all of the target area of the video frame.
In an example, determining a corresponding target recovery policy according to authentication information carried in a received recovery request may include:
determining a target authority level corresponding to the identity authentication information carried in the recovery request;
and determining a corresponding target recovery strategy according to the target authority level.
In this example, the corresponding relationship between the permission levels and the recovery policies may be configured in advance, for example, the recovery policy corresponding to the high permission level is to recover all the target areas of the video frame; the recovery strategy corresponding to the low authority level is to recover a part of target areas of the video frames.
Accordingly, when a recovery request is received, the authentication information carried in the recovery request may be obtained, and the authority level (referred to as a target authority level herein) corresponding to the authentication information may be determined.
After the target permission level is determined, the corresponding relation between the preset permission level and the recovery strategy can be inquired according to the target permission level so as to determine the target recovery strategy corresponding to the target permission level.
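For illustration only, a minimal sketch of this lookup follows; the level names, the policy constants and the `resolve_level` helper are assumptions, since the application does not fix how identity authentication information is mapped to a permission level.

```python
# Hypothetical pre-configured correspondence between permission levels and recovery policies.
RECOVERY_POLICIES = {
    "high": "recover_all_target_areas",
    "low": "recover_partial_target_areas",
}

def determine_target_recovery_policy(authentication_info, resolve_level):
    """Determine the target recovery policy for a recovery request.

    resolve_level() maps identity authentication information to a permission
    level; it stands in for whatever authentication back end is deployed.
    """
    target_level = resolve_level(authentication_info)
    return RECOVERY_POLICIES[target_level]
```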
In this embodiment, after the target restoration policy is determined, the video frame may be restored according to the target restoration policy and the target area original image.
In one example, restoring the video frame according to the target restoration policy and the target area original image may include:
when the target recovery strategy is to recover a part of target areas of the video frame, acquiring attribute information of a target to be recovered;
and determining the matched target area original image according to the attribute information of the target to be restored, and restoring the video frame according to the matched target area original image.
In this example, when the target recovery policy is to recover a partial target area of a video frame, the attribute information of the target to be recovered may be acquired.
For example, the attribute information of the target to be restored carried in the received recovery request may be acquired, or prompt information may be output to prompt input of the attribute information of the target to be restored, and the attribute information of the target to be restored input in response to the prompt information may be received.
After the attribute information of the target to be restored is obtained, the matched target area original image may be determined according to the attribute information of the target to be restored, and the video frame may be restored according to the matched target area original image; for the specific implementation, refer to the relevant description in the foregoing embodiments, which is not repeated here.
It should be noted that, in this embodiment, when the target recovery policy is to recover all target areas of the video frame, all target areas of the video frame may be recovered according to the saved target area original image, and specific implementation thereof is not described herein again.
In another embodiment of the present application, the restoring of part or all of the target area of the video frame according to the target area original drawing may include:
determining a corresponding target authority level according to the identity authentication information carried in the received recovery request;
restoring the video frame according to the original image of the target area with the authority level not exceeding the target authority level; or, restoring the target area with the authority level not exceeding the target authority level in the video frame according to the original image of the target area.
In this embodiment, when video frame recovery is required, the authentication information carried in the received recovery request may be obtained, and the authority level (i.e., the target authority level) corresponding to the authentication information may be determined.
In one example, permission levels may be set in advance for each target area original image, for example, different permission levels may be set for different types of target area original images.
For example, when the original image of the target area includes a human face, the corresponding permission level is a high permission level; when the original image of the target area does not include a human face (such as license plate information or other trademark or brand information), the corresponding permission level is a low permission level.
Accordingly, in this example, after the target permission level is determined, the target area original images whose permission level does not exceed the target permission level may be obtained from the saved target area original images, and the video frame may be restored according to those target area original images.
In another example, different permission levels may be set in advance for each target area in the video frame, for example, different permission levels may be set for each target area according to the type of target present in the target area.
Accordingly, in this example, after the target permission level is determined, a target area with the permission level not exceeding the target permission level in the video frame may be determined, and the target area with the permission level not exceeding the target permission level in the video frame may be restored according to the saved target area original image.
For example, suppose the permission of the current user to restore occlusions is determined to be restricted level one, and restricted level one only allows all occlusions of a single target in the video to be restored. If the faces of 15 persons are occluded across the 200 frames of the whole video, and the user selects target person A, who wears red clothes, by mouse click, touch screen, voice instruction or the like, then the face of target person A is restored in all 19 of the 200 frames in which it appears. For another example, when an instruction to restore occlusions is received and the current permission is determined to be level two, suppose that in the 100 frames of the whole video the faces of 5 persons are at face level 1 and the faces of 4 persons are at face level 2; if permission level two is higher than face level 1 and at the same level as face level 2, the faces of all 9 persons at face levels 1 and 2 are restored after the instruction is received. It is understood that the definition of the levels may be adjusted as required: in an alternative embodiment, a level with a larger sequence number may be defined as having higher permission or as having lower permission (for example, browsing permission 1 may be set higher than browsing permission 2), and levels may also be named in other ways, for example the permission of the code "sky" may be higher than that of the code "white cloud"; this is not limited herein.
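A minimal sketch of this level-filtered restoration is shown below. It assumes each saved TargetAreaPatch (see the earlier sketch) also carries a `level` field, that the occluded frame is a numpy array, and that the deployment-specific ordering of levels is supplied through the `level_not_exceeding` callable, since, as noted above, whether a larger sequence number means higher or lower permission is a configuration choice.

```python
def restore_by_permission(frame, patches, target_level, level_not_exceeding):
    """Restore only the occluded target areas whose permission level does not
    exceed the target permission level of the current recovery request.

    frame is the occluded video frame (numpy array); patches are the
    TargetAreaPatch objects bound to it, each assumed to carry a `level`
    field; level_not_exceeding(level, target_level) encodes the level ordering.
    """
    restored = frame.copy()
    for patch in patches:
        if level_not_exceeding(patch.level, target_level):
            restored[patch.y:patch.y + patch.height,
                     patch.x:patch.x + patch.width] = patch.pixels
    return restored
```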
In order to enable those skilled in the art to better understand the technical solutions provided in the embodiments of the present application, the technical solutions provided in the embodiments of the present application are described below with reference to specific examples.
In this embodiment, the flow of occlusion and recovery of video frames is as follows:
1. occlusion of video frames
As shown in fig. 2, in this embodiment, the occlusion process of the video frame may include the following steps:
step S200, judging whether a target area needing to be shielded exists in any video frame. If yes, go to step S210; otherwise, ending the current flow.
In this embodiment, the target area to be occluded may include a fixed occlusion area and a dynamic occlusion area.
The fixed occlusion area may be a pre-configured area that needs to be occluded, such as an area for superimposing OSD time or/and a deployment location of the monitoring device.
The dynamic occlusion region may be determined by analyzing the video picture; for example, if a human face in the video picture needs to be occluded (for example, by applying a mosaic), the human face appearing in the video picture may be identified by analyzing the video picture, and the corresponding region may be determined as a region that needs to be occluded.
Step S210, the target area original image and the frame feature and area feature of the target area original image are saved.
In this embodiment, after the target area that needs to be occluded in the video frame is determined, the target area original image may be cropped out of the video frame and saved.
In this embodiment, when storing any target area original image, feature information for identifying the video frame to which the target area original image belongs (i.e., frame features) and feature information for identifying the position of the target area original image in that frame (i.e., area features) may be stored in association with it.
And step S220, shielding the target area in the video frame.
In this embodiment, after the target area original image in the video frame is acquired and the target area original image and the frame features and the area features of the target area original image are saved, the target area in the video frame may be blocked.
It should be noted that, in this embodiment, for any video frame, when a plurality of target areas to be blocked exist in the video frame, for any target area, the original image of the target area and the blocking of the target area may be stored and blocked in the manners described in step S210 to step S220, which are not described herein in detail.
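The following is a minimal sketch of steps S210 to S220 for one target area, assuming the video frame is a numpy array and reusing the TargetAreaPatch sketch above; `patch_store` is a hypothetical storage interface kept separate from the occluded video file, and the solid-color fill is only one possible way of occluding (a mosaic or an OSD overlay could be applied instead).

```python
def occlude_target_area(frame, device_id, timestamp, x, y, w, h, patch_store):
    """Crop and save the target area original image with its frame and area
    features (step S210), then occlude the target area in the frame (step S220)."""
    original = frame[y:y + h, x:x + w].copy()   # crop the target area original image
    patch_store.save(TargetAreaPatch(original, device_id, timestamp, x, y, w, h))
    frame[y:y + h, x:x + w] = 0                 # occlude the target area in the frame
    return frame
```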
2. Recovery of video frames
1. Matching of target area original drawings
In this embodiment, when video frame restoration is required, the original image of the target area for video frame restoration may be determined first.
As shown in fig. 3, in this embodiment, taking face restoration as an example, that is, restoring a face occluded in a video frame as an example, assuming that a restoration condition is a target face image, the matching process of the original image of the target area may include the following steps:
and step S300, acquiring an original image set of the target area.
In this embodiment, the set of target area artwork may include saved target area artwork.
And step S310, acquiring a target face image set.
In this embodiment, the set of target face images may include one or more frames of face images.
Step S320, determining whether the target area original image set and the target face image set are both non-empty. If yes, go to step S330; otherwise, the current flow is ended.
And step S330, selecting one element from the target area original image set or the target face image set, and matching with all elements in the other set.
In this embodiment, when the target area original image set and the target face image set are both non-empty, one element may be selected from one set and matched with all elements in the other set.
For example, one element (i.e., the target face image) may be selected from the target face image set and matched with all elements in the target area original image set.
The specific implementation of face image matching may refer to related descriptions in the prior art, and details of the embodiments of the present application are not repeated herein.
For convenience of description, in the following description, an example is taken in which a target face image is selected from a target face image set, and the selected target face image is respectively matched with each target area original image in a target area original image set.
Step S340, determining whether there is an element in the other set whose similarity to the selected element exceeds a preset similarity threshold. If yes, go to step S350; otherwise, the selected element is deleted from the set, and the process goes to step S320.
In this embodiment, when the similarity between each target area original image in the target area original image set and the selected target face image does not exceed the preset similarity threshold, the selected target face image is deleted from the target face image set, and when the target face image set is not empty, the target face image is reselected.
Step S350, adding the matched target area original image to the set to be restored, deleting the selected elements and the matched elements from the respective sets, and proceeding to step S320.
In this embodiment, when there is a target area original image whose similarity with the selected target face image exceeds a preset similarity threshold in the target area original image set, the matched target area original image (i.e., the target area original image whose similarity with the selected target face image exceeds the preset similarity threshold) may be added to the set to be restored, and the matched target area original image and the selected target face image may be deleted from the respective sets.
It should be noted that, for any target face image, there may be one or more matching target area artwork.
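A condensed sketch of the Fig. 3 flow follows; the `similarity` function stands in for an existing face comparison routine (modeling and comparison are left to whatever face library is deployed), and identity rather than equality is used when removing matched elements.

```python
def match_target_faces(target_faces, patches, similarity, threshold):
    """Match target face images against saved target area original images.

    Returns the set to be restored: every saved original image whose
    similarity with some target face image exceeds the preset threshold.
    """
    to_restore = []
    faces = list(target_faces)
    remaining = list(patches)
    while faces and remaining:
        face = faces.pop(0)   # select one target face image
        matched = [p for p in remaining if similarity(face, p.pixels) > threshold]
        if matched:
            to_restore.extend(matched)
            # delete the matched original images from their set (by identity)
            remaining = [p for p in remaining
                         if all(p is not m for m in matched)]
    return to_restore
```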
2. Matching of target area original image and video frame
In this embodiment, after the matched target area original image (the target area original image added to the set to be restored) is obtained according to the method flow shown in fig. 3, video frame matching may be performed according to the target area original image in the set to be restored, so as to determine a video frame that can be restored.
As shown in fig. 4, in this embodiment, the process of matching the target area artwork with the video frame may include the following steps:
and S400, acquiring a video frame set.
In this embodiment, the set of video frames may include saved video data.
In order to ensure the fluency of the video data, the stored video data may include video frames without occlusion.
And step S410, acquiring a set to be recovered.
In this embodiment, the set to be restored includes the original image of the target area matched according to the flow shown in fig. 3.
Step S420, judging whether the video frame set and the set to be recovered are both non-empty; if yes, go to step S430; otherwise, the current flow is ended.
In this embodiment, in order to ensure the continuity of the video in the to-be-processed set, when the to-be-restored set is empty but the video frame set is not empty, all the video frames in the video frame set may be added to the to-be-processed set.
The video frames in the set to be processed are ordered according to the time from first to last (i.e. the time stamps are from small to large).
And step S430, selecting the elements with the highest ranking from the two sets respectively.
In this embodiment, the frame feature of a target area original image is its timestamp, that is, the timestamp of the video frame to which the original image belongs; the timestamps of different video frames are different.
The elements in the video frame set and the set to be restored are ordered in time from first to last (i.e., from the smaller timestamp to the larger).
Step S440, determining whether the selected video frame matches the original image of the selected target area. If yes, go to step S450; otherwise, go to step S470.
In this embodiment, after selecting a video frame from the video frame set and selecting a target area original image from the set to be restored, the time stamp of the selected video frame and the time stamp of the selected target area original image may be compared; if the two are the same, determining that the selected video frame is matched with the original image of the selected target area; otherwise, the selected video frame is determined to be not matched with the selected target area original image.
And S450, binding the matched video frame and the target area original image, and adding the video frame and the target area original image to a set to be processed.
In this embodiment, when it is determined that the selected video frame matches the selected target area original image, the selected video frame and the selected target area original image may be bound and added to the set to be processed.
It should be noted that, in this embodiment, when the set to be processed contains multiple video frames with the same timestamp (each bound to a target area original image), these video frames may be merged into one video frame (containing multiple occluded target areas) to which the multiple target area original images are bound.
Step S460, delete the original image of the selected target area from the collection to be restored, and go to step S420.
Step S470 is to determine whether the selected video frame is earlier than the original image of the selected target area. If yes, go to step S480; otherwise, go to step S490.
Step S480, adding the selected video frame to the set to be processed, deleting the selected video frame from the video frame set, and proceeding to step S420.
In this embodiment, because the elements in both the video frame set and the set to be restored are sorted by timestamp from small to large, when the selected video frame is earlier than the selected target area original image (that is, its timestamp is smaller), its timestamp is also smaller than the timestamps of all remaining target area original images in the set to be restored, so the video frame cannot match any of them; at this point the selected video frame may be deleted from the video frame set.
In addition, in order to ensure the continuity of the video in the to-be-processed set, the selected video frame may be added to the to-be-processed set.
Step S490, delete the original image of the selected target area from the collection to be restored, and go to step S420.
In this embodiment, when the selected target area original image is earlier than the selected video frame, that is, the timestamp of the selected target area original image is smaller than that of the selected video frame, it may be determined that the target area original image does not match any video frame in the video frame set, and at this time the selected target area original image may be deleted from the set to be restored.
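A condensed sketch of the Fig. 4 merge is given below; both input lists are assumed to be sorted by timestamp from small to large and to expose a `timestamp` attribute, and unmatched video frames are kept (without bound original images) so that the video in the set to be processed stays continuous.

```python
def bind_patches_to_frames(frames, patches):
    """Merge the video frame set and the set to be restored by timestamp,
    producing the set to be processed as (frame, bound original images) pairs."""
    pending = []
    i, j = 0, 0
    while i < len(frames):
        if j >= len(patches) or frames[i].timestamp < patches[j].timestamp:
            pending.append((frames[i], []))     # keep the frame; nothing to restore
            i += 1
        elif frames[i].timestamp == patches[j].timestamp:
            bound = []
            # several original images may share one timestamp; merge them onto one frame
            while j < len(patches) and patches[j].timestamp == frames[i].timestamp:
                bound.append(patches[j])
                j += 1
            pending.append((frames[i], bound))
            i += 1
        else:
            j += 1                              # original image matches no remaining frame
    return pending
```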
3. Recovery of video frames
In this embodiment, when the video frame matched with the target area original image is determined according to the flow of fig. 4, the video frame may be restored according to the binding between the video frame and the target area original image.
As shown in fig. 5, in this embodiment, the recovery process of the video frame may include the following steps:
and S500, acquiring a set to be processed.
Step S510, determining whether the to-be-processed set is empty. If yes, ending the current process; otherwise, go to step S520.
Step S520, selecting a first video frame in the set to be processed.
In this embodiment, the video frames in the to-be-processed set are ordered in time from first to last.
Step S530, determining whether the video frame is bound with the target area original image. If yes, go to step S540; otherwise, the video frame is output and deleted from the set to be processed, and the process goes to step S510.
And step S540, restoring the video frame according to the area characteristics of the original image of the target area.
In this embodiment, for a video frame to which the target area original image is bound, the position of the target area original image in the video frame may be determined according to the area characteristics of the target area original image bound to the video frame, and the target area original image may be restored to the video frame according to the determined position.
For any video frame, when a plurality of target area original images are bound to the video frame, the corresponding target areas of the video frame can be restored sequentially according to the target area original images.
Step S550, the restored video frame is output, and the process goes to step S510.
In this embodiment, for each video frame in the set to be processed, recovery and output may be performed in a frame-by-frame recovery and frame-by-frame output manner, so that continuity of the video is ensured while video frame recovery is achieved.
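As a final sketch, the frame-by-frame restoration and output of Fig. 5 might look as follows; each frame object is assumed to expose its decoded picture as a numpy array in an `image` attribute, and `output` stands in for whatever sink (video writer, streaming callback) is used.

```python
def restore_and_output(pending, output):
    """Restore and output the set to be processed frame by frame, in time order."""
    for frame_obj, bound in pending:
        img = frame_obj.image.copy()
        for patch in bound:
            # paste the target area original image back at its recorded position
            img[patch.y:patch.y + patch.height,
                patch.x:patch.x + patch.width] = patch.pixels
        output(img)
```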
In the embodiment of the application, the target area original image in the video frame and the source attribute of the target area original image are saved, the target area in the video frame is shielded, and then the video frame with the shielded target area is restored according to the target area original image, so that the restoration of the video frame with the shielded target area is realized under the condition that the requirement of video shielding is met.
The methods provided herein are described above. The following describes the apparatus provided in the present application:
referring to fig. 6, a schematic structural diagram of a video data processing apparatus according to an embodiment of the present application is shown in fig. 6, where the video data processing apparatus may include:
a saving unit 610, configured to save a target area original image and a source attribute of the target area original image in a video frame, where the target area is an area that needs to be blocked in the video frame, and the source attribute of the target area original image is used to identify a target video frame to which the target area original image belongs and a position in the target video frame;
an occlusion unit 620, configured to occlude a target area in the video frame;
and a restoring unit 630, configured to restore, according to the target area original image, a video frame with a target area being blocked.
In an alternative embodiment, as shown in fig. 7, the apparatus further comprises:
a first determining unit 640, configured to determine a matched first target area original image according to a recovery condition;
the restoring unit 630 is specifically configured to restore the first video frame with the matched target area being blocked according to the first target area original image.
In an optional implementation manner, the recovery condition is attribute information of the target to be recovered;
the restoring unit 630 is specifically configured to query the saved target area original image according to the attribute information of the target to be restored, so as to determine the target area original image matched with the attribute information of the target to be restored.
In an optional embodiment, the attribute information of the target to be restored includes one or more of the following:
the image of the target to be restored, the model of the target to be restored and the characteristic information of the target to be restored.
In an alternative embodiment, as shown in fig. 8, the apparatus further comprises:
a second determining unit 650, configured to determine, according to the source attribute of the first target area original image, a first video frame matched with the first target area original image;
the restoring unit 630 is specifically configured to restore the first video frame according to the first target area original image.
In an optional implementation manner, the restoring unit 630 is specifically configured to restore, when multiple target areas exist in a video frame in which any target area is occluded, a part of or all of the target areas of the video frame according to the target area original drawing.
In an alternative embodiment, as shown in fig. 9, the apparatus further comprises:
a third determining unit 660, configured to determine a corresponding target recovery policy according to the authentication information carried in the received recovery request; the recovery strategy comprises recovering part or all of a target area of a video frame;
the restoring unit 630 is specifically configured to restore the video frame according to the target restoration policy and the target area original image.
In an optional implementation manner, the third determining unit 660 is specifically configured to determine a target permission level corresponding to the identity authentication information; and determining a corresponding target recovery strategy according to the target authority level.
In an alternative embodiment, as shown in fig. 10, the apparatus further comprises:
an obtaining unit 670, configured to obtain attribute information of a target to be restored when the target restoration policy is to restore a partial target area of a video frame;
and the restoring unit is specifically configured to determine a matched target area original image according to the attribute information of the target to be restored, and restore the video frame according to the matched target area original image.
In an alternative embodiment, as shown in fig. 11, the apparatus further comprises:
a fourth determining unit 680, configured to determine a corresponding target permission level according to the authentication information carried in the received recovery request;
the restoring unit 630 is specifically configured to restore the video frame according to the original image of the target area whose authority level does not exceed the target authority level; or restoring the target area with the authority level not exceeding the target authority level in the video frame according to the target area original image.
Fig. 12 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure. The electronic device may include a processor 1201, a communication interface 1202, a memory 1203, and a communication bus 1204. The processor 1201, the communication interface 1202, and the memory 1203 communicate with each other via a communication bus 1204. Wherein, the memory 1203 stores a computer program; the processor 1201 can execute the video data processing method described above by executing the program stored on the memory 1203.
The memory 1203 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the memory 1203 may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
Embodiments of the present application also provide a machine-readable storage medium, such as the memory 1203 in fig. 12, storing a computer program, which can be executed by the processor 1201 in the electronic device shown in fig. 12 to implement the video data processing method described above.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (20)

1. A method of processing video data, comprising:
storing a target area original image and a source attribute of the target area original image in a video frame, shielding a target area in the video frame, and storing the shielded video frame; the source attribute of the target area original image is used for identifying a target video frame to which the target area original image belongs and a position of the target area original image in the target video frame;
restoring the video frame with the shielding target area according to the target area original image; for any target area original image, the restoring a video frame with a blocked target area according to the target area original image includes:
and restoring the target video frame according to the position of the target area original image in the target video frame to which the target area original image belongs.
2. The method of claim 1, wherein the restoring the video frame with the occluded target area according to the target area original image comprises:
determining a matched first target area original image according to the recovery condition;
and restoring the first video frame with the shielding target area according to the first target area original image.
3. The method according to claim 2, wherein the recovery condition is attribute information of an object to be recovered;
the determining of the matched first target area original image according to the recovery condition includes:
and inquiring the saved target area original image according to the attribute information of the target to be restored so as to determine the target area original image matched with the attribute information of the target to be restored.
4. The method of claim 3, wherein the attribute information of the target to be restored comprises one or more of:
the image of the target to be restored, the model of the target to be restored and the characteristic information of the target to be restored.
5. The method of claim 2, wherein the restoring the first video frame with the matching target area occluded according to the first target area artwork comprises:
determining a first video frame matched with the first target area original image according to the source attribute of the first target area original image;
and restoring the first video frame according to the first target area original image.
6. The method of claim 1, wherein the restoring the video frame with the occluded target area according to the target area original image comprises:
and for a video frame with any one blocked target area, when a plurality of target areas exist in the video frame, restoring part or all of the target areas of the video frame according to the target area original image.
7. The method of claim 6, wherein said restoring some or all of the target area of the video frame based on the target area artwork comprises:
determining a corresponding target recovery strategy according to the identity authentication information carried in the received recovery request; the recovery strategy comprises recovering part or all of a target area of a video frame;
and recovering the video frame according to the target recovery strategy and the target area original image.
8. The method according to claim 7, wherein the determining a corresponding target recovery policy according to the authentication information carried in the received recovery request includes:
determining a target authority level corresponding to the identity authentication information;
and determining a corresponding target recovery strategy according to the target authority level.
9. The method of claim 7 or 8, wherein the restoring the video frame according to the target restoration policy and the target area artwork comprises:
when the target recovery strategy is to recover a part of target areas of the video frame, acquiring attribute information of a target to be recovered;
and determining a matched target area original image according to the attribute information of the target to be restored, and restoring the video frame according to the matched target area original image.
10. The method of claim 6, wherein said restoring some or all of the target area of the video frame according to the target area artwork comprises:
determining a corresponding target authority level according to the identity authentication information carried in the received recovery request;
restoring the video frame according to the original image of the target area with the authority level not exceeding the target authority level; or, restoring the target area with the authority level not exceeding the target authority level in the video frame according to the target area original image.
11. A video data processing apparatus, comprising:
the device comprises a storage unit, a processing unit and a processing unit, wherein the storage unit is used for storing a target area original image and a source attribute of the target area original image in a video frame, the target area is an area needing to be shielded in the video frame, and the source attribute of the target area original image is used for identifying a target video frame to which the target area original image belongs and a position in the target video frame;
the shielding unit is used for shielding a target area in the video frame;
the storage unit is also used for storing the shielded video frames;
the recovery unit is used for recovering the video frames with the sheltered target area according to the original image of the target area; and for any target area original image, restoring the target video frame according to the position of the target area original image in the target video frame to which the target area original image belongs.
12. The apparatus of claim 11, further comprising:
the first determining unit is used for determining the matched first target area original image according to the recovery condition;
the restoring unit is specifically configured to restore the first video frame with the blocked target area according to the first target area original image.
13. The apparatus according to claim 12, wherein the recovery condition is attribute information of a target to be recovered;
the recovery unit is specifically configured to query the stored target area original image according to the attribute information of the target to be recovered, so as to determine the target area original image matched with the attribute information of the target to be recovered.
14. The apparatus of claim 13, wherein the attribute information of the target to be recovered comprises one or more of:
the image of the target to be restored, the model of the target to be restored and the characteristic information of the target to be restored.
15. The apparatus of claim 12, further comprising:
the second determining unit is used for determining a first video frame matched with the first target area original image according to the source attribute of the first target area original image;
the restoring unit is specifically configured to restore the first video frame according to the first target area original image.
16. The apparatus of claim 11,
the restoring unit is specifically configured to, for a video frame with any one target area being blocked, restore, when multiple target areas exist in the video frame, a part or all of the target areas of the video frame according to the target area original image.
17. The apparatus of claim 16, further comprising:
a third determining unit, configured to determine a corresponding target recovery policy according to the authentication information carried in the received recovery request; the recovery strategy comprises recovering part or all of a target area of a video frame;
and the restoring unit is specifically configured to restore the video frame according to the target restoring policy and the target area original image.
18. The apparatus of claim 17,
the third determining unit is specifically configured to determine a target permission level corresponding to the identity authentication information; and determining a corresponding target recovery strategy according to the target authority level.
19. The apparatus of claim 17 or 18, further comprising:
the acquisition unit is used for acquiring the attribute information of the target to be recovered when the target recovery strategy is used for recovering a part of target area of the video frame;
and the restoring unit is specifically configured to determine a matched target area original image according to the attribute information of the target to be restored, and restore the video frame according to the matched target area original image.
20. The apparatus of claim 16, further comprising:
the fourth determining unit is used for determining the corresponding target authority level according to the identity authentication information carried in the received recovery request;
the recovery unit is specifically configured to recover the video frame according to an original image of a target area with an authority level not exceeding the target authority level; or restoring the target area with the authority level not exceeding the target authority level in the video frame according to the target area original image.
CN201811638680.4A 2018-12-29 2018-12-29 Video data processing method and device Active CN111385512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811638680.4A CN111385512B (en) 2018-12-29 2018-12-29 Video data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811638680.4A CN111385512B (en) 2018-12-29 2018-12-29 Video data processing method and device

Publications (2)

Publication Number Publication Date
CN111385512A CN111385512A (en) 2020-07-07
CN111385512B (en) 2022-11-01

Family

ID=71222464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811638680.4A Active CN111385512B (en) 2018-12-29 2018-12-29 Video data processing method and device

Country Status (1)

Country Link
CN (1) CN111385512B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101389005A (en) * 2007-09-11 2009-03-18 华为技术有限公司 Method and apparatus blocking special position of image
EP2102584A1 (en) * 2006-12-20 2009-09-23 Scanalyse Pty Ltd A system and method for orientating scan cloud data relative to base reference data
EP2605983A1 (en) * 2010-08-20 2013-06-26 Skylife Technology Holdings LLC Supply packs and methods and systems for manufacturing supply packs
CN108024144A (en) * 2017-11-28 2018-05-11 网宿科技股份有限公司 Video broadcasting method, terminal and computer-readable recording medium
CN108040230A (en) * 2017-12-19 2018-05-15 司马大大(北京)智能系统有限公司 A kind of monitoring method and device for protecting privacy
CN108447080A (en) * 2018-03-02 2018-08-24 哈尔滨工业大学深圳研究生院 Method for tracking target, system and storage medium based on individual-layer data association and convolutional neural networks
CN109040824A (en) * 2018-08-28 2018-12-18 百度在线网络技术(北京)有限公司 Method for processing video frequency, device, electronic equipment and readable storage medium storing program for executing

Also Published As

Publication number Publication date
CN111385512A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
US10062406B2 (en) Video masking processing method and apparatus
WO2021179898A1 (en) Action recognition method and apparatus, electronic device, and computer-readable storage medium
KR101611440B1 (en) Method and apparatus for processing image
US11449544B2 (en) Video search device, data storage method and data storage device
US20100289924A1 (en) Imager that adds visual effects to an image
EP2742442B1 (en) A method for detecting a copy of a reference video, corresponding apparatus for extracting a spatio-temporal signature from video data and corresponding computer readable storage medium
KR20180035869A (en) Method, device, terminal device and storage medium
CN102156707A (en) Video abstract forming and searching method and system
CN104980681A (en) Video acquisition method and video acquisition device
JP2008146191A (en) Image output device and image output method
JP7419080B2 (en) computer systems and programs
US20190012363A1 (en) Information processing device, data processing method therefor, and recording medium
JP5192437B2 (en) Object region detection apparatus, object region detection method, and object region detection program
JP6234146B2 (en) RECORDING CONTROL DEVICE, RECORDING CONTROL METHOD, AND PROGRAM
US8823833B2 (en) Imager that adds visual effects to an image and records visual effects information in an image file
JP6214762B2 (en) Image search system, search screen display method
US20200026866A1 (en) Method and device for covering private data
CN111385512B (en) Video data processing method and device
JP2007213183A (en) Device, method, and program for classifying digital image data
CN106250426A (en) A kind of photo processing method and terminal
JP5962383B2 (en) Image display system and image processing apparatus
US8682834B2 (en) Information processing apparatus and information processing method
JP4888111B2 (en) Subject recognition device, image search method, and subject recognition program
CN110876092B (en) Video abstract generation method and device, electronic equipment and readable storage medium
JP2008020944A (en) Image processing method, program, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant