WO2020108573A1 - Video image occlusion method, apparatus, device, and storage medium - Google Patents
Video image occlusion method, apparatus, device, and storage medium
- Publication number
- WO2020108573A1 (PCT/CN2019/121644)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06V20/40—Scenes; Scene-specific elements in video content
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
- G06T3/00—Geometric image transformations in the plane of the image
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Definitions
- the present disclosure relates to the technical field of video processing, and in particular, to a video image occlusion method, device, equipment, and storage medium.
- a method for realizing video image occlusion in the related art is as follows: the video under the current monitoring scene is collected by a camera, and the same fixed area in each frame of the video image of the video is masked according to a pre-configured fixed area.
- the above technology blocks the fixed area in the video image by configuring the fixed area, so as to achieve the effect of blocking the target in the fixed area.
- When the position of the target that needs to be blocked changes, such as the target moving out of the fixed area, blocking the fixed area in the video image no longer covers that target, resulting in privacy leakage. Therefore, a video image occlusion method that can accurately and effectively occlude the target that needs to be occluded is urgently needed.
- Embodiments of the present disclosure provide a video image occlusion method, device, equipment, and storage medium, which can solve the problem in the related art that a target needing occlusion is not occluded.
- the technical solution is as follows:
- a video image occlusion method includes:
- acquiring the motion trajectory information of each target appearing in the video based on the multi-frame video images of the video, the motion trajectory information of each target including the position information and size information of each target in the multi-frame video images;
- the area where the first target is located in the multi-frame video images is blocked, and the area where the first target is located is the area corresponding to the position information and size information of the first target.
- the acquiring, based on the multi-frame video images of the video, the motion trajectory information of each target appearing in the video includes:
- target detection is performed on the video image to determine multiple targets in the video image
- the method further includes:
- the method further includes:
- the position information and size information of each object in the plurality of objects in the video image are stored in correspondence with the unique identifier of each object.
- the method further includes:
- the video image is a video image other than the first frame video image in the multi-frame video image
- determine a known target and an unknown target among the multiple targets, the known target being a target contained in the previous frame video image of the video image
- the method further includes:
- the position information and size information of the unknown target in the video image are stored in correspondence with the unique identification of the unknown target.
- the method further includes:
- the image features of the multiple targets are stored in correspondence with the unique identifiers of the multiple targets.
- the extracting image features of the plurality of targets in the video image includes:
- the evaluation information including at least one of posture, size, imaging condition, occlusion condition, and shooting angle;
- the determining the first target that needs to be blocked among the respective targets includes:
- the target corresponding to the first selection event is determined to be the first target, and the first selection event is used to select a target to be blocked from each of the targets;
- a target other than the target corresponding to the second selection event is determined as the first target, and the second selection event is used to select, from the respective targets, a target that does not need to be blocked.
- the blocking the area where the first target is located in the multi-frame video image according to the movement trajectory information of the first target includes:
- the region where the target and the first target are located in the multi-frame video image is blocked.
- the determining, among the respective targets, the target that is the same real target as the first target includes:
- according to the similarity between the respective targets and the first target, determining the target that is the same real target as the first target among the respective targets.
- the determining, according to the similarity between each target and the first target, the target that is the same real target among the various targets and the first target includes:
- the target corresponding to the target confirmation event is determined to be the same real target as the first target, and the target confirmation event is used to select, from the respective targets, the target that is the same real target as the first target.
- the determining, according to the similarity between each target and the first target, the target that is the same real target among the various targets and the first target includes:
- the target whose similarity to the first target is greater than a preset threshold is determined to be the same real target as the first target.
- In a second aspect, a video image occlusion device includes:
- an obtaining module, used to obtain the motion trajectory information of each target appearing in the video based on the multi-frame video images of the video, where the motion trajectory information of each target includes the position information and size information of each target in the multi-frame video images;
- a determining module configured to determine a first target that needs to be occluded among the various targets
- an occlusion module, configured to occlude, according to the motion trajectory information of the first target, the area where the first target is located in the multi-frame video images, the area where the first target is located being the area corresponding to the position information and size information of the first target.
- the acquiring module is configured to, for each frame of video image in the multi-frame video images, perform target detection on the video image to determine multiple targets in the video image, and acquire the position information and size information of the multiple targets in the video image.
- the device further includes:
- a first generating module configured to generate a unique identifier of each target in the plurality of targets when the video image is the first frame of video images in the multi-frame video images;
- the first storage module is configured to store position information and size information of each of the plurality of objects in the video image in correspondence with the unique identification of each object.
- the device further includes:
- the determining module is further configured to determine a known target and an unknown target among the multiple targets when the video image is a video image other than the first frame video image in the multi-frame video image.
- the known target is a target included in the previous frame video image of the video image
- the unknown target is a target that is not included in the previous video image
- a second generation module used to generate a unique identifier of the unknown target
- a second storage module, configured to store the position information and size information of the known target in the video image in correspondence with the unique identification of the known target, and store the position information and size information of the unknown target in the video image in correspondence with the unique identification of the unknown target.
- the device further includes:
- An extraction module for extracting image features of the multiple targets in the video image
- the third storage module is configured to store the image features of the multiple targets in correspondence with the unique identifiers of the multiple targets.
- the extraction module is used to obtain evaluation information of the multiple targets in the video image, where the evaluation information includes at least one of posture, size, imaging condition, occlusion condition, and shooting angle; select, from the multiple targets, the target whose evaluation information meets a preset evaluation condition; and extract the image features of the selected target.
- the determination module is used to display the respective targets; when a first selection event is detected, determine the target corresponding to the first selection event as the first target, the first selection event being used to select a target that needs to be blocked from the respective targets; and when a second selection event is detected, determine a target other than the target corresponding to the second selection event as the first target, the second selection event being used to select a target that does not require occlusion from the respective targets.
- the occlusion module is used to determine, among the respective targets, the target that is the same real target as the first target, and to block, based on the position information and size information of that target and the first target in the multi-frame video images, the areas where that target and the first target are located in the multi-frame video images.
- the determination module is used to compare the image features of the first target with the image features of the respective targets to obtain the similarity between the respective targets and the first target, and to determine, according to the similarity between the respective targets and the first target, the target that is the same real target as the first target among the respective targets.
- the determination module is configured to arrange and display the respective targets according to the similarity between the respective targets and the first target, where the greater the similarity, the higher the ranking; and, when a target confirmation event is detected, determine the target corresponding to the target confirmation event to be the same real target as the first target, the target confirmation event being used to select, from the respective targets, the target that is the same real target as the first target.
- the determining module is configured to determine, according to the similarity between the respective targets and the first target, a target whose similarity to the first target is greater than a preset threshold as the target that is the same real target as the first target.
- an electronic device including a processor and a memory, where the memory is used to store at least one instruction, and the processor is used to execute the at least one instruction stored in the memory to implement the method steps described in any implementation manner of the first aspect.
- a computer-readable storage medium, where at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is executed by a processor to implement the method steps described in any implementation manner of the first aspect.
- a computer program product containing instructions, which when run on a computer, causes the computer to implement the method steps described in any one of the implementation manners of the first aspect above.
- The motion trajectory information of each target appearing in the video is acquired. Since the motion trajectory information of a target may include the position information and size information of the target in the multi-frame video images, after the first target that needs to be blocked is determined, the area where the first target is located in the multi-frame video images can be blocked according to the motion trajectory information of the first target. The area where the first target is located in each frame of video image may be different, so the target that needs to be blocked is accurately and effectively occluded.
- FIG. 1 is a flowchart of a video image occlusion method provided by an embodiment of the present disclosure
- FIG. 2 is a flowchart of a video image occlusion method provided by an embodiment of the present disclosure
- FIG. 3 is a schematic flowchart of generating an occlusion video according to an embodiment of the present disclosure
- FIG. 4 is a schematic structural diagram of a video image blocking device according to an embodiment of the present disclosure.
- FIG. 5 is a schematic structural diagram of a video image occlusion device provided by an embodiment of the present disclosure.
- FIG. 6 is a schematic structural diagram of a video image blocking device according to an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of a video image blocking device according to an embodiment of the present disclosure.
- FIG. 8 is a schematic structural diagram of an electronic device 800 provided by an embodiment of the present disclosure.
- the video image occlusion method provided by an embodiment of the present disclosure may be performed by an electronic device.
- the electronic device may be equipped with a camera, or may be connected to a camera through a data cable, to perform video monitoring through the camera and obtain the video; that is, the electronic device may have a video capture function.
- the electronic device can also receive the video to be processed from other front-end devices.
- the electronic device may be a smart camera device, a computer device, or the like, which is not limited in the embodiments of the present disclosure.
- FIG. 1 is a flowchart of a video image occlusion method provided by an embodiment of the present disclosure. Referring to Figure 1, the method includes:
- according to the motion trajectory information of the first target, block the area where the first target is located in the multi-frame video images, the area where the first target is located being the area corresponding to the position information and size information of the first target.
- The motion trajectory information of each target appearing in the video is acquired. Since the motion trajectory information of a target may include the position information and size information of the target in the multi-frame video images, after the first target that needs to be blocked is determined, the area where the first target is located in the multi-frame video images can be blocked according to the motion trajectory information of the first target; the area where the first target is located in each frame of video image may be different, so the target that needs to be blocked is accurately and effectively occluded.
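The per-frame blocking step above can be sketched in a few lines. This is an illustrative sketch, not the claimed implementation: the trajectory format `{frame_index: (x, y, w, h)}` and the choice of a black fill (rather than a mosaic or blur) are assumptions.

```python
import numpy as np

def occlude_target(frames, trajectory):
    """Block the area where a target is located in each frame.

    frames: list of H x W x 3 uint8 video images.
    trajectory: dict mapping frame index -> (x, y, w, h), i.e. the
    target's position information and size information in that frame.
    (This dict format is an assumed illustration, not the patent's.)
    """
    for i, (x, y, w, h) in trajectory.items():
        # Fill the target's bounding box with black pixels; a mosaic
        # or blur could be substituted for the fill.
        frames[i][y:y + h, x:x + w] = 0
    return frames
```

Because the trajectory stores a box per frame, the blocked region follows the target as it moves, which is the point of trajectory-based occlusion versus a fixed area.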
- acquiring the movement track information of each target appearing in the video includes:
- target detection is performed on the video image to determine multiple targets in the video image
- the method further includes:
- the method further includes:
- the position information and size information of each object in the plurality of objects in the video image are stored in correspondence with the unique identification of each object.
- the method further includes:
- the video image is a video image other than the first frame video image in the multi-frame video image
- determine a known target and an unknown target among the multiple targets, the known target being a target contained in the previous frame video image of the video image, and the unknown target being a target not included in the previous frame video image
- the method further includes:
- the position information and size information of the unknown target in the video image are stored in correspondence with the unique identification of the unknown target.
- the method further includes:
- the image features of the multiple targets are stored in correspondence with the unique identifiers of the multiple targets.
- extracting image features of the multiple targets in the video image includes:
- the evaluation information including at least one of posture, size, imaging condition, occlusion condition, and shooting angle;
- determining the first target that needs to be blocked among the various targets includes:
- the target corresponding to the first selection event is determined as the first target, and the first selection event is used to select a target that needs to be occluded from each target;
- a target other than the target corresponding to the second selection event is determined as the first target, and the second selection event is used to select a target that does not require occlusion from the respective targets.
- blocking the area where the first target is located in the multi-frame video image according to the movement trajectory information of the first target includes:
- the region where the target and the first target are located in the multi-frame video image is blocked.
- the determining, among the respective targets, the target that is the same real target as the first target includes:
- the target that is the same real target as the first target among the respective targets is determined.
- determining the target that is the same real target as the first target in each target includes:
- the respective targets are arranged and displayed according to their similarity to the first target; the greater the similarity, the higher the ranking;
- the target corresponding to the target confirmation event is determined to be the same real target as the first target, and the target confirmation event is used to select, from the respective targets, the target that is the same real target as the first target.
- determining the target that is the same real target as the first target in each target includes:
- the target whose similarity to the first target is greater than a preset threshold is determined to be the same real target as the first target.
- FIG. 2 is a flowchart of a video image occlusion method provided by an embodiment of the present disclosure. Referring to Figure 2, the method includes:
- the motion trajectory information of each target includes the position information and size information of each target in the multi-frame video image.
- Each target may include multiple types, for example, the type of the target may be a person, a face or a human body. Of course, the type of the target may also be a thing. The embodiment of the present disclosure does not specifically limit the type of the target.
- the multi-frame video image may include all the video images in the video, or the multi-frame video image may also include part of the video images in the video.
- the multi-frame video images may be extracted from the video according to a preset acquisition strategy. For example, the first frame video image to the n-th frame video image in the video can be acquired to obtain the multi-frame video images, where n can be set according to actual needs.
- this step 201 and subsequent steps may be performed by an electronic device.
- Before performing this step 201, the electronic device needs to acquire the multi-frame video images of the video. Taking the electronic device having a video collection function as an example, the electronic device can perform video collection on a certain monitoring area to obtain the video, and obtain the multi-frame video images of the video. Of course, the electronic device can also receive the video sent by a front-end device (such as a surveillance camera), and acquire the multi-frame video images of the video.
- this step 201 may include: for each video image in the multi-frame video image, performing target detection on the video image to determine multiple targets in the video image; acquiring the multiple targets Position information and size information in the video image.
- After the electronic device performs target detection on the first frame video image and determines multiple targets in the first frame video image, a unique identification of each of the multiple targets can be generated; the unique identification can be represented by an ID (Identification). After acquiring the position information and size information of the multiple targets in the first frame video image, the electronic device may store the position information and size information of each of the multiple targets in the video image in correspondence with the unique identification of each target.
- For the video image currently subject to target detection, when the video image is a video image other than the first frame video image in the multi-frame video images, after the electronic device performs target detection on the video image and determines the multiple targets in the video image, the known target and the unknown target among the multiple targets can be determined, and a unique identification of the unknown target can be generated. After acquiring the position information and size information of the multiple targets in the video image, the electronic device may store the position information and size information of the known target in the video image in correspondence with the unique identification of the known target, and store the position information and size information of the unknown target in the video image in correspondence with the unique identification of the unknown target.
- The known target is a target contained in the previous frame video image of the video image, that is, a target whose position information and size information in the previous frame video image have been obtained; the unknown target is a target not contained in the previous frame video image, that is, a target whose position information and size information in the previous frame video image have not been obtained.
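The bookkeeping described above (generating unique IDs and storing per-frame position and size information in correspondence with each ID) can be sketched as follows; the class and method names are illustrative assumptions, not terms from the disclosure.

```python
import itertools

class TrackStore:
    """Minimal sketch: store, per unique target identification (ID),
    the target's position information and size information in each
    frame, as the embodiment describes."""

    def __init__(self):
        self._next_id = itertools.count(1)
        # unique ID -> {frame_index: (x, y, w, h)}
        self.tracks = {}

    def new_target(self):
        """Generate a unique identification for a newly detected
        (unknown) target."""
        return next(self._next_id)

    def record(self, target_id, frame_index, box):
        """Store position/size info in correspondence with the ID."""
        self.tracks.setdefault(target_id, {})[frame_index] = box
```

Storing boxes keyed by ID is what later lets the occlusion step look up the first target's full trajectory from its ID alone.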
- For the first frame video image in the multi-frame video images, the electronic device generates a unique identifier for every detected target after performing target detection on the first frame video image. For each frame of video image after the first frame video image, after performing target detection on the current video image, the electronic device determines which of the detected targets were already detected in the previous frame video image (i.e., known targets) and which were not detected in the previous video image (i.e., unknown targets). The electronic device considers an unknown target to be a new target, so a new unique identifier is generated for the unknown target.
- the above process is actually the process of target detection and target tracking.
- For example, if a target enters the video picture at a first moment and leaves the video picture at a second moment, the target can be detected in the video image collected at the first moment and in the video images collected at moments between the first moment and the second moment, but the target will not be detected in the video image collected at the second moment.
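The disclosure does not fix a particular tracking algorithm for splitting detections into known and unknown targets. A common choice, shown here purely as an assumed illustration, is to match each current detection against the previous frame's boxes by intersection-over-union (IoU):

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def split_known_unknown(detections, previous, threshold=0.5):
    """Classify current detections as known (matched to a box from the
    previous frame video image) or unknown (a new target).

    previous: unique ID -> (x, y, w, h) box from the previous frame.
    The 0.5 threshold is an assumption, not a value from the patent.
    """
    known, unknown = {}, []
    for box in detections:
        best_id = max(previous, key=lambda t: iou(previous[t], box),
                      default=None)
        if best_id is not None and iou(previous[best_id], box) >= threshold:
            known[best_id] = box   # keeps its existing unique ID
        else:
            unknown.append(box)    # gets a new unique ID
    return known, unknown
```

A production tracker would also handle mutual-exclusion of matches and missed detections; this sketch only shows the known/unknown split the text describes.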
- The electronic device may extract image features of the multiple targets from the video image and store the image features of the multiple targets in correspondence with the unique identifiers of the multiple targets.
- the electronic device may use a feature extraction model to extract image features that can describe the target in the image, and the image features may be represented by a string of binary codes.
- the feature extraction model can be obtained by training with a large number of samples using machine learning methods.
- the electronic device may extract image features for each of the multiple targets, or extract image features only for some of the targets. For example, the electronic device can first evaluate the target in the video image, and then extract the image feature of the target that meets the preset evaluation condition.
- A target that meets the preset evaluation condition generally allows extraction of image features that accurately describe the target, which can reduce the resource consumption caused by meaningless image feature extraction.
- the preset evaluation condition can be set by the user according to actual needs, or can be set by the electronic device by default, which is not limited in the embodiments of the present disclosure.
- The electronic device may obtain evaluation information of the multiple targets in the video image, the evaluation information including at least one of posture, size, imaging condition, occlusion condition, and shooting angle; select, from the multiple targets, the targets whose evaluation information meets the preset evaluation conditions; and extract image features of the selected targets.
- the posture refers to the posture of the target in the image, such as sitting, standing, etc.
- the size refers to the size of the target imaged in the image
- the imaging conditions can include whether the target is imaged in the image with or without shadow
- the occlusion condition can include different degrees of occlusion, such as no occlusion, partial occlusion, and severe occlusion
- the shooting angle can include shooting height, shooting direction, and shooting distance.
- selecting the target whose evaluation information meets the preset evaluation conditions from the multiple targets includes: selecting, according to the evaluation information of the multiple targets and the types of the multiple targets, the target whose evaluation information meets the preset evaluation conditions.
- For example, the two types of targets, human face and human body, can correspond to different preset evaluation conditions.
- Each target in this step 201 may be each target to which all IDs belong, and targets with different IDs among the respective targets may be the same real target; that is, the same real target may have multiple IDs.
- the electronic device considers the target as multiple targets and generates multiple IDs through the target tracking algorithm.
- For example, if target A enters the monitoring area at time t0, the electronic device generates ID1 for target A and tracks it. If target A leaves the monitoring area at time t1, the position information, size information, and image features of target A in the video images collected before time t1 are stored in correspondence with ID1 as one target.
- If target A enters the monitoring area again, the electronic device considers target A to be a new target, generates a new ID2 for target A, tracks it, and re-extracts image features.
- target A may include two IDs ID1 and ID2.
- The above description takes the case where each video image in the multiple video images includes multiple targets as an example. Some video images in the multiple video images may include only one target; the realization principle in this case is the same as or similar to that of the above-mentioned video image including multiple targets, that is, the target can be processed in the same way according to the method provided by the embodiments of the present disclosure, which will not be described here.
- Determining the first target that needs to be blocked among the respective targets includes: displaying each target; when a first selection event is detected, determining the target corresponding to the first selection event as the first target, the first selection event being used to select a target to be occluded from each target; and when a second selection event is detected, determining a target other than the second target as the first target, the second selection event being used to select, from the respective targets, the second target that does not require occlusion.
- each target in step 201 it may be each target to which all IDs belong.
- a partial image of each target may be displayed on the user interaction interface, such as a partial image containing the target captured from a certain frame of video image. If the electronic device has no display function, the partial image of each target may be sent to the user device, and the user device displays the partial image of each target on the user interaction interface.
- the electronic device interacts with the user and displays all the objects in the video to the user through the user interaction interface.
- the user can browse to select the target that needs to be blocked or does not need to be blocked.
- Based on the user's selection, the electronic device can determine the first target that needs to be blocked or the second target that does not need to be blocked.
- the first target is an irrelevant target that the user does not want to pay attention to or is not interested in
- the second target is a target that the user wants to pay attention to or is interested in.
- the electronic device considers the target to be multiple targets, and some of the targets may be the same real target.
- the electronic device can find the target that is the same real target as the first target from the respective targets through target comparison.
- the target comparison may be to use a preset calculation method to obtain the similarity between the image features of the two targets.
- the electronic device may determine the target that is the same real target as the first target through the similarity between each target and the first target. Specifically, the electronic device may compare the image features of the first target with the image features of the respective targets to obtain the similarity between the respective targets and the first target, and then, according to that similarity, determine the targets among the various targets that are the same real target as the first target.
- the electronic device can obtain, according to the ID of the first target, one or more image features of the first target stored in correspondence with that ID, and then compare them with the image features of all targets in the video.
- the image features of two targets are compared in pairs to calculate their similarity.
- the electronic device may use the Euclidean distance to calculate the similarity between two image features. The smaller the Euclidean distance, the greater the similarity.
- the calculation of the similarity is not limited to the Euclidean distance. For example, it may be a cosine similarity.
- the embodiment of the present disclosure does not specifically limit the calculation method of the similarity.
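As an illustration (not part of the claimed embodiments), both measures can be written in a few lines; mapping the Euclidean distance d to 1 / (1 + d) is one assumed convention for converting a distance into a similarity in which smaller distance means greater similarity:

```python
import math

def euclidean_similarity(a, b):
    """Similarity derived from Euclidean distance: distance 0 maps to
    1.0, and larger distances map to smaller similarities."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors; 1.0 means the
    vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```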
- the electronic device determines, according to the similarity between each target and the first target, the target among the various targets that is the same real target as the first target, in ways including but not limited to the following two possible implementations:
- in the first way, the various targets are arranged and displayed according to their similarity to the first target, with greater similarity ranked higher; when a target confirmation event is detected, the target corresponding to the target confirmation event is determined to be the same real target as the first target, where the target confirmation event is used to select, from the various targets, the targets that are the same real target as the first target.
- that is, the electronic device determines the targets that are the same real target according to the user's confirmation operation. For the case described in step 201 where one real target has multiple IDs, the compared targets are displayed in order of similarity from high to low; based on the comparison results, the user confirms whether the top-ranked targets are the same real target and makes selections, for example selecting multiple targets (multiple IDs) that are all the same real target as the first target.
- in the second way, a target whose similarity to the first target is greater than a preset threshold is determined to be the same real target as the first target.
- the preset threshold may be set by the user according to actual needs, or may be set by the electronic device by default, which is not limited in the embodiments of the present disclosure.
- the electronic device determines the target that is the same real target as the first target according to the similarity between each target and the first target, which can reduce user operations.
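Both ways can be sketched over an assumed mapping from target ID to similarity with the first target (the IDs and scores below are hypothetical):

```python
def rank_by_similarity(similarities):
    """First way: order candidate target IDs by similarity to the first
    target, highest first, for display and user confirmation."""
    return sorted(similarities, key=similarities.get, reverse=True)

def match_by_threshold(similarities, threshold):
    """Second way: automatically treat every candidate whose similarity
    exceeds the preset threshold as the same real target."""
    return {tid for tid, sim in similarities.items() if sim > threshold}
```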
- the electronic device may consider that the target and the first target are the same real target, and are both targets that need to be blocked.
- the area where the target is located is the area corresponding to the position information and size information of the target.
- the electronic device can obtain the stored motion trajectory information of the first target and of all targets that are the same real target as the first target, for use in the privacy occlusion operation.
- Privacy occlusion refers to the use of some technical means to block sensitive targets in pictures or videos.
- the electronic device can block all targets that are the same real target as the first target in each frame of the multi-frame image, thereby generating a blocked video.
- a blocked video refers to the video obtained after blocking a specific target in the original video.
- the electronic device may block the target in each frame of the video image according to the position information and size information of the target in each frame of the video image.
- the blocking method includes but is not limited to superimposing an opaque blocking block on the corresponding area.
- the blocking block may be set to a certain color or take a mosaic form, as long as it can play a blocking role.
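A minimal sketch of the opaque-block form, with a frame represented as a nested list of pixel values so the example is self-contained (a real implementation would operate on decoded image arrays); the trajectory format, a frame index mapped to an (x, y, w, h) box, mirrors the position and size information described above, and replacing the solid fill with per-block averages would yield the mosaic form:

```python
def occlude_region(frame, x, y, w, h, fill=0):
    """Superimpose an opaque block: overwrite the w-by-h region whose
    top-left corner is (x, y), clipped to the frame bounds."""
    for row in range(max(y, 0), min(y + h, len(frame))):
        for col in range(max(x, 0), min(x + w, len(frame[row]))):
            frame[row][col] = fill
    return frame

def occlude_video(frames, trajectory, fill=0):
    """Apply a target's per-frame boxes (frame index -> (x, y, w, h))
    to the corresponding frames, producing the blocked video."""
    for idx, box in trajectory.items():
        occlude_region(frames[idx], *box, fill=fill)
    return frames
```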
- step 203 and step 204 are one possible implementation of blocking, according to the motion trajectory information of the first target, the area where the first target is located in the multi-frame video images, where the area where the first target is located is the area corresponding to the position information and size information of the first target. By covering all the targets that are the same real target as the first target, the comprehensiveness of the blocking can be ensured.
- blocking the area where the first target is located in the multi-frame video images may also be implemented in other ways; for example, the areas to be blocked in the multi-frame video images may be determined directly based on the motion trajectory information of the first target, and the determined areas are then blocked.
- FIG. 3 is a schematic flowchart of generating an occlusion video according to an embodiment of the present disclosure.
- the entire process may be implemented by a data extraction unit, a data storage unit, an occlusion processing unit, and a user interaction unit, where the data extraction unit performs target detection, target tracking, target evaluation, and target feature extraction on the video.
- the data extraction unit is responsible for extracting all the targets in a given video and tracking them.
- a unique identification (ID) is generated for each tracked target.
- each target is tracked in time sequence; for example, the position and size of each target in each frame of the video images are acquired. During target tracking, the target is evaluated in real time.
- the evaluation indicators for real-time evaluation may include, but are not limited to, posture, size, imaging conditions, occlusion, and shooting angle; according to the target type, one or more targets that meet the evaluation conditions are selected for image feature extraction.
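One way to realize this evaluation, under the assumption (not stated in the disclosure) that each indicator is scored in [0, 1] and combined by a weighted sum; the weights and threshold below are illustrative only:

```python
# Illustrative weights over the evaluation indicators named above.
WEIGHTS = {"posture": 0.3, "size": 0.2, "imaging": 0.2,
           "occlusion": 0.2, "angle": 0.1}

def evaluate(observation):
    """Combine per-indicator scores in [0, 1] into one evaluation value."""
    return sum(WEIGHTS[key] * observation[key] for key in WEIGHTS)

def select_for_extraction(observations, threshold=0.7):
    """Keep only the observations whose evaluation meets the preset
    condition; image features would be extracted from these frames."""
    return [obs for obs in observations if evaluate(obs) >= threshold]
```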
- the data storage unit may store data in the form of metadata (Metadata), and store the ID, motion track information, and image characteristics of each target extracted by the data extraction unit.
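An in-memory sketch of such a store; a deployed system would persist this as metadata, but the record layout, an ID mapped to its motion trajectory and image features, follows the description above:

```python
class MetadataStore:
    """Per-ID records: motion trajectory (frame index -> (x, y, w, h))
    and a list of extracted image-feature vectors."""

    def __init__(self):
        self._records = {}

    def _record(self, target_id):
        return self._records.setdefault(
            target_id, {"trajectory": {}, "features": []})

    def add_position(self, target_id, frame_idx, box):
        # Store position/size info in correspondence with the unique ID.
        self._record(target_id)["trajectory"][frame_idx] = box

    def add_features(self, target_id, feature_vector):
        # Store image features in correspondence with the unique ID.
        self._record(target_id)["features"].append(feature_vector)

    def trajectory(self, target_id):
        return self._records[target_id]["trajectory"]

    def features(self, target_id):
        return self._records[target_id]["features"]
```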
- the occlusion processing unit, combined with the user interaction unit, provides a semi-automatic privacy occlusion operation: a target preview is provided, the user selects the target of interest, the image features of that target are obtained and compared against the others, the user manually reviews and screens the comparison results, the trajectory information of the matched targets is then obtained, and the occlusion video is finally generated.
- the embodiments of the present disclosure take steps 201 to 204 being executed by one electronic device as an example; that is, the electronic device may integrate the functions implemented by the data extraction unit, the data storage unit, the occlusion processing unit, and the user interaction unit in FIG. 3.
- the steps 201 to 204 may also be performed by different devices, that is, the functions implemented by the units in FIG. 3 may be integrated on different devices, respectively.
- in this way, simple occlusion can be achieved even when the same real target enters and exits the same scene multiple times.
- the method can also be used to occlude the same target across different scenes. For example, if the different scenes are videos shot by different cameras in the same monitoring area, the same target in the monitoring area can be blocked in the different videos through the above steps 201 to 204.
- the technical solution provided by the embodiments of the present disclosure can significantly increase the amount of video that can be processed and reduce the cost of manual user operations.
- in the embodiments of the present disclosure, the motion trajectory information of each target appearing in the video is acquired. Since the motion trajectory information of a target may include the position information and size information of the target in the multi-frame video images, after the first target that needs to be blocked is determined, the area where the first target is located in the multi-frame video images can be blocked according to the motion trajectory information of the first target. Because the area where the first target is located can differ from frame to frame, the target that needs to be blocked is occluded accurately and effectively.
- FIG. 4 is a schematic structural diagram of a video image blocking device provided by an embodiment of the present disclosure. Referring to FIG. 4, the device includes:
- the obtaining module 401 is used to obtain, based on the multi-frame video images of a video, the motion trajectory information of each target appearing in the video, where the motion trajectory information of each target includes the position information and size information of each target in the multi-frame video images;
- the determining module 402 is used to determine the first target that needs to be occluded among the various targets;
- the occlusion module 403 is configured to occlude, according to the motion trajectory information of the first target, the area where the first target is located in the multi-frame video images, where the area where the first target is located is the area corresponding to the position information and size information of the first target.
- the acquisition module is used to: for each video image in the multi-frame video images, perform target detection on the video image to determine multiple targets in the video image; and acquire the position information and size information of the multiple targets in the video image.
- the device further includes:
- the first generating module 404 is configured to generate a unique identifier of each target in the multiple targets when the video image is the first frame of the multiple frame video images;
- the first storage module 405 is configured to store position information and size information of each of the plurality of objects in the video image in correspondence with the unique identification of each object.
- the device further includes:
- the determining module 402 is further configured to determine known targets and unknown targets among the multiple targets when the video image is a video image other than the first frame in the multi-frame video images, where a known target is a target contained in the previous video image of the video image, and an unknown target is a target not contained in the previous video image;
- the second generation module 406 is used to generate a unique identifier of the unknown target;
- the second storage module 407 is configured to store the position information and size information of the known target in the video image in correspondence with the unique identifier of the known target, and to store the position information and size information of the unknown target in the video image in correspondence with the unique identifier of the unknown target.
- the device further includes:
- an extraction module 408, configured to extract image features of the multiple targets in the video image;
- the third storage module 409 is configured to store the image features of the multiple targets in correspondence with the unique identifiers of the multiple targets.
- the extraction module 408 is used to obtain evaluation information of the multiple targets in the video image, the evaluation information including at least one of posture, size, imaging conditions, occlusion, and shooting angle; select, from the multiple targets, targets whose evaluation information meets preset evaluation conditions; and extract image features of the selected targets.
- the determination module 402 is used to display each target; when a first selection event is detected, determine the target corresponding to the first selection event as the first target, where the first selection event is used to select a target that needs to be occluded from the various targets; and when a second selection event is detected, determine the targets other than the target corresponding to the second selection event as the first target, where the second selection event is used to select a target that does not need to be occluded from the various targets.
- the occlusion module 403 is used to determine the target that is the same real target as the first target in each target; according to the position information of the target and the first target in the multi-frame video image And size information, to block the area where the target and the first target are located in the multi-frame video image.
- the determination module 402 is used to compare the image features of the first target with the image features of the respective targets to obtain the similarity between the respective targets and the first target, and, according to that similarity, determine the target among the various targets that is the same real target as the first target.
- the determination module 402 is used to arrange and display the various targets according to their similarity to the first target, with greater similarity ranked higher; when a target confirmation event is detected, the target corresponding to the target confirmation event is determined to be the same real target as the first target, where the target confirmation event is used to select, from the various targets, a target that is the same real target as the first target.
- the determination module 402 is configured to determine, according to the similarity between the various targets and the first target, a target whose similarity to the first target is greater than a preset threshold as the target that is the same real target as the first target.
- in the embodiments of the present disclosure, the motion trajectory information of each target appearing in the video is acquired. Since the motion trajectory information of a target may include the position information and size information of the target in the multi-frame video images, after the first target that needs to be blocked is determined, the area where the first target is located in the multi-frame video images can be blocked according to the motion trajectory information of the first target. Because the area where the first target is located can differ from frame to frame, the target that needs to be blocked is occluded accurately and effectively.
- it should be noted that the video image occlusion device provided in the above embodiments is illustrated only with the above division of functional modules as an example.
- in practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
- the video image occlusion device and the video image occlusion method embodiment provided in the above embodiments belong to the same concept. For the specific implementation process, see the method embodiments, and details are not described here.
- the electronic device 800 may vary considerably depending on configuration or performance, and may include one or more processors (Central Processing Unit, CPU) 801 and one or more memories 802, where at least one instruction is stored in the memory 802, and the at least one instruction is loaded and executed by the processor 801 to implement the video image occlusion method provided by the foregoing method embodiments.
- the electronic device 800 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface.
- the electronic device 800 may also include other components for implementing device functions, which will not be repeated here.
- a computer-readable storage medium storing at least one instruction is also provided, for example a memory storing at least one instruction, where the at least one instruction, when executed by a processor, implements the video image occlusion method in the above embodiments.
- the computer-readable storage medium may be read-only memory (Read-Only Memory, ROM), random-access memory (Random Access Memory, RAM), read-only compact disc (Compact Disc Read-Only Memory, CD-ROM), Magnetic tapes, floppy disks, optical data storage devices, etc.
- the above program may be stored in a computer-readable storage medium.
- the storage medium can be read-only memory, magnetic disk or optical disk.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (24)
- A video image occlusion method, characterized in that the method comprises: based on multiple frames of video images of a video, acquiring motion trajectory information of each target appearing in the video, the motion trajectory information of each target comprising position information and size information of each target in the multiple frames of video images; determining, among the targets, a first target that needs to be occluded; and occluding, according to the motion trajectory information of the first target, an area where the first target is located in the multiple frames of video images, the area where the first target is located being an area corresponding to the position information and size information of the first target.
- The method according to claim 1, characterized in that acquiring, based on the multiple frames of video images of the video, the motion trajectory information of each target appearing in the video comprises: for each frame of the multiple frames of video images, performing target detection on the video image to determine multiple targets in the video image; and acquiring position information and size information of the multiple targets in the video image.
- The method according to claim 2, characterized in that, after performing target detection on the video image to determine the multiple targets in the video image, the method further comprises: when the video image is the first frame of the multiple frames of video images, generating a unique identifier for each of the multiple targets; and after acquiring the position information and size information of the multiple targets in the video image, the method further comprises: storing the position information and size information of each of the multiple targets in the video image in correspondence with the unique identifier of that target.
- The method according to claim 2, characterized in that, after performing target detection on the video image to determine the multiple targets in the video image, the method further comprises: when the video image is a video image other than the first frame among the multiple frames of video images, determining known targets and unknown targets among the multiple targets, a known target being a target contained in the previous frame of the video image, and an unknown target being a target not contained in the previous frame; and generating a unique identifier for the unknown target; and after acquiring the position information and size information of the multiple targets in the video image, the method further comprises: storing the position information and size information of the known target in the video image in correspondence with the unique identifier of the known target; and storing the position information and size information of the unknown target in the video image in correspondence with the unique identifier of the unknown target.
- The method according to claim 3 or 4, characterized in that, after performing target detection on the video image to determine the multiple targets in the video image, the method further comprises: extracting image features of the multiple targets in the video image; and storing the image features of the multiple targets in correspondence with the unique identifiers of the multiple targets.
- The method according to claim 5, characterized in that extracting the image features of the multiple targets in the video image comprises: acquiring evaluation information of the multiple targets in the video image, the evaluation information comprising at least one of posture, size, imaging conditions, occlusion, and shooting angle; selecting, from the multiple targets, targets whose evaluation information meets preset evaluation conditions; and extracting image features of the selected targets.
- The method according to claim 1, characterized in that determining, among the targets, the first target that needs to be occluded comprises: displaying each target; when a first selection event is detected, determining the target corresponding to the first selection event as the first target, the first selection event being used to select a target that needs to be occluded from the targets; and when a second selection event is detected, determining the targets other than the target corresponding to the second selection event as the first target, the second selection event being used to select a target that does not need to be occluded from the targets.
- The method according to claim 1 or 7, characterized in that occluding, according to the motion trajectory information of the first target, the area where the first target is located in the multiple frames of video images comprises: determining, among the targets, a target that is the same real target as the first target; and occluding, according to the position information and size information of that target and of the first target in the multiple frames of video images, the areas where that target and the first target are located in the multiple frames of video images.
- The method according to claim 8, characterized in that determining, among the targets, the target that is the same real target as the first target comprises: comparing the image features of the first target with the image features of the targets to obtain the similarity between each target and the first target; and determining, according to the similarity between each target and the first target, the target among the targets that is the same real target as the first target.
- The method according to claim 9, characterized in that determining, according to the similarity between each target and the first target, the target among the targets that is the same real target as the first target comprises: arranging and displaying the targets according to their similarity to the first target, with greater similarity ranked higher; and when a target confirmation event is detected, determining the target corresponding to the target confirmation event as the target that is the same real target as the first target, the target confirmation event being used to select, from the targets, a target that is the same real target as the first target.
- The method according to claim 9, characterized in that determining, according to the similarity between each target and the first target, the target among the targets that is the same real target as the first target comprises: determining, according to the similarity between each target and the first target, a target whose similarity to the first target is greater than a preset threshold as the target that is the same real target as the first target.
- A video image occlusion device, characterized in that the device comprises: an obtaining module, configured to acquire, based on multiple frames of video images of a video, motion trajectory information of each target appearing in the video, the motion trajectory information of each target comprising position information and size information of each target in the multiple frames of video images; a determining module, configured to determine, among the targets, a first target that needs to be occluded; and an occlusion module, configured to occlude, according to the motion trajectory information of the first target, an area where the first target is located in the multiple frames of video images, the area where the first target is located being an area corresponding to the position information and size information of the first target.
- The device according to claim 12, characterized in that the obtaining module is configured to: for each frame of the multiple frames of video images, perform target detection on the video image to determine multiple targets in the video image; and acquire position information and size information of the multiple targets in the video image.
- The device according to claim 13, characterized in that the device further comprises: a first generating module, configured to generate, when the video image is the first frame of the multiple frames of video images, a unique identifier for each of the multiple targets; and a first storage module, configured to store the position information and size information of each of the multiple targets in the video image in correspondence with the unique identifier of that target.
- The device according to claim 13, characterized in that: the determining module is further configured to determine, when the video image is a video image other than the first frame among the multiple frames of video images, known targets and unknown targets among the multiple targets, a known target being a target contained in the previous frame of the video image, and an unknown target being a target not contained in the previous frame; and the device further comprises: a second generating module, configured to generate a unique identifier for the unknown target; and a second storage module, configured to store the position information and size information of the known target in the video image in correspondence with the unique identifier of the known target, and to store the position information and size information of the unknown target in the video image in correspondence with the unique identifier of the unknown target.
- The device according to claim 14 or 15, characterized in that the device further comprises: an extraction module, configured to extract image features of the multiple targets in the video image; and a third storage module, configured to store the image features of the multiple targets in correspondence with the unique identifiers of the multiple targets.
- The device according to claim 16, characterized in that the extraction module is configured to: acquire evaluation information of the multiple targets in the video image, the evaluation information comprising at least one of posture, size, imaging conditions, occlusion, and shooting angle; select, from the multiple targets, targets whose evaluation information meets preset evaluation conditions; and extract image features of the selected targets.
- The device according to claim 12, characterized in that the determining module is configured to: display each target; when a first selection event is detected, determine the target corresponding to the first selection event as the first target, the first selection event being used to select a target that needs to be occluded from the targets; and when a second selection event is detected, determine the targets other than the target corresponding to the second selection event as the first target, the second selection event being used to select a target that does not need to be occluded from the targets.
- The device according to claim 12 or 18, characterized in that the occlusion module is configured to: determine, among the targets, a target that is the same real target as the first target; and occlude, according to the position information and size information of that target and of the first target in the multiple frames of video images, the areas where that target and the first target are located in the multiple frames of video images.
- The device according to claim 19, characterized in that the determining module is configured to: compare the image features of the first target with the image features of the targets to obtain the similarity between each target and the first target; and determine, according to the similarity between each target and the first target, the target among the targets that is the same real target as the first target.
- The device according to claim 20, characterized in that the determining module is configured to: arrange and display the targets according to their similarity to the first target, with greater similarity ranked higher; and when a target confirmation event is detected, determine the target corresponding to the target confirmation event as the target that is the same real target as the first target, the target confirmation event being used to select, from the targets, a target that is the same real target as the first target.
- The device according to claim 20, characterized in that the determining module is configured to determine, according to the similarity between each target and the first target, a target whose similarity to the first target is greater than a preset threshold as the target that is the same real target as the first target.
- An electronic device, characterized by comprising a processor and a memory; the memory is configured to store at least one instruction; and the processor is configured to execute the at least one instruction stored in the memory to: acquire, based on multiple frames of video images of a video, motion trajectory information of each target appearing in the video, the motion trajectory information of each target comprising position information and size information of each target in the multiple frames of video images; determine, among the targets, a first target that needs to be occluded; and occlude, according to the motion trajectory information of the first target, an area where the first target is located in the multiple frames of video images, the area where the first target is located being an area corresponding to the position information and size information of the first target.
- A computer-readable storage medium, characterized in that at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is executed by a processor to: acquire, based on multiple frames of video images of a video, motion trajectory information of each target appearing in the video, the motion trajectory information of each target comprising position information and size information of each target in the multiple frames of video images; determine, among the targets, a first target that needs to be occluded; and occlude, according to the motion trajectory information of the first target, an area where the first target is located in the multiple frames of video images, the area where the first target is located being an area corresponding to the position information and size information of the first target.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811435929.1A CN111241872B (zh) | 2018-11-28 | 2018-11-28 | Video image occlusion method and apparatus |
CN201811435929.1 | 2018-11-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020108573A1 true WO2020108573A1 (zh) | 2020-06-04 |
Family
ID=70854339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/121644 WO2020108573A1 (zh) | 2019-11-28 | Video image occlusion method, apparatus, device, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111241872B (zh) |
WO (1) | WO2020108573A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113807445A (zh) * | 2021-09-23 | 2021-12-17 | 城云科技(中国)有限公司 | File review method and apparatus, electronic apparatus, and computer program product |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111654700B (zh) * | 2020-06-19 | 2022-12-06 | 杭州海康威视数字技术股份有限公司 | Privacy masking processing method and apparatus, electronic device, and monitoring system |
CN111985419B (zh) * | 2020-08-25 | 2022-10-14 | 腾讯科技(深圳)有限公司 | Video processing method and related device |
CN112188058A (zh) * | 2020-09-29 | 2021-01-05 | 努比亚技术有限公司 | Video shooting method, mobile terminal, and computer storage medium |
CN115546900B (zh) * | 2022-11-25 | 2023-03-31 | 浙江莲荷科技有限公司 | Risk identification method, apparatus, device, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060158527A1 (en) * | 2004-12-15 | 2006-07-20 | Kang Kyun H | Method and apparatus for controlling privacy mask display |
CN105007395A (zh) * | 2015-07-22 | 2015-10-28 | 深圳市万姓宗祠网络科技股份有限公司 | Privacy processing method for continuously recorded videos and images |
CN105957001A (zh) * | 2016-04-18 | 2016-09-21 | 深圳感官密码科技有限公司 | Privacy protection method and apparatus |
CN107820041A (zh) * | 2016-09-13 | 2018-03-20 | 华为数字技术(苏州)有限公司 | Privacy occlusion method and apparatus |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6336872B2 (ja) * | 2014-09-29 | 2018-06-06 | Kddi株式会社 | Object tracking method, apparatus, and program |
JP6474126B2 (ja) * | 2015-02-27 | 2019-02-27 | Kddi株式会社 | Object tracking method, apparatus, and program |
EP3378227A4 (en) * | 2015-11-18 | 2019-07-03 | Jorg Tilkin | PRIVACY PROTECTION IN VIDEO SURVEILLANCE SYSTEMS |
CN106162091A (zh) * | 2016-07-28 | 2016-11-23 | 乐视控股(北京)有限公司 | Video surveillance method and apparatus |
CN106358069A (zh) * | 2016-10-31 | 2017-01-25 | 维沃移动通信有限公司 | Video data processing method and mobile terminal |
CN107240120B (zh) * | 2017-04-18 | 2019-12-17 | 上海体育学院 | Method and apparatus for tracking moving targets in video |
CN107564034A (zh) * | 2017-07-27 | 2018-01-09 | 华南理工大学 | Pedestrian detection and tracking method for multiple targets in surveillance video |
-
2018
- 2018-11-28 CN CN201811435929.1A patent/CN111241872B/zh active Active
-
2019
- 2019-11-28 WO PCT/CN2019/121644 patent/WO2020108573A1/zh active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113807445A (zh) * | 2021-09-23 | 2021-12-17 | 城云科技(中国)有限公司 | File review method and apparatus, electronic apparatus, and computer program product |
CN113807445B (zh) * | 2021-09-23 | 2024-04-16 | 城云科技(中国)有限公司 | File review method and apparatus, electronic apparatus, and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111241872A (zh) | 2020-06-05 |
CN111241872B (zh) | 2023-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020108573A1 (zh) | Video image occlusion method, apparatus, device, and storage medium | |
US11594029B2 (en) | Methods and systems for determining ball shot attempt location on ball court | |
US11810321B2 (en) | Methods and systems for multiplayer tagging using artificial intelligence | |
US9600760B2 (en) | System and method for utilizing motion fields to predict evolution in dynamic scenes | |
US11263446B2 (en) | Method for person re-identification in closed place, system, and terminal device | |
US20180225852A1 (en) | Apparatus and method for generating best-view image centered on object of interest in multiple camera images | |
US20130335635A1 (en) | Video Analysis Based on Sparse Registration and Multiple Domain Tracking | |
AU2017272325A1 (en) | System and method of generating a composite frame | |
CN111274928A (zh) | Liveness detection method and apparatus, electronic device, and storage medium | |
US9384400B2 (en) | Method and apparatus for identifying salient events by analyzing salient video segments identified by sensor information | |
Chakraborty et al. | A real-time trajectory-based ball detection-and-tracking framework for basketball video | |
JP2016163328A (ja) | Information processing apparatus, information processing method, and program | |
WO2023045183A1 (zh) | Image processing | |
Pidaparthy et al. | Keep your eye on the puck: Automatic hockey videography | |
CN112954443A (zh) | Panoramic video playback method and apparatus, computer device, and storage medium | |
CN112511859A (zh) | Video processing method and apparatus, and storage medium | |
González et al. | Single object long-term tracker for smart control of a ptz camera | |
US10372994B2 (en) | Method, system and apparatus for selecting a video frame | |
CN114037923A (zh) | Method, system, device, and storage medium for drawing a target activity heat map | |
Xu et al. | Fast and accurate object detection using image cropping/resizing in multi-view 4K sports videos | |
Nieto et al. | An automatic system for sports analytics in multi-camera tennis videos | |
AU2015258346A1 (en) | Method and system of transitioning between images | |
Stefański et al. | The problem of detecting boxers in the boxing ring | |
Chen et al. | Multi-sensored vision for autonomous production of personalized video summaries | |
CN115334241B (zh) | Focus control method and apparatus, storage medium, and imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19889653 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19889653 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.12.2021) |
|