CN113051437B - Target duplicate removal method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN113051437B (application CN201911382709.1A)
Authority: CN (China)
Prior art keywords: target, calibration, slice, video, time period
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113051437A
Inventors: Zhang Pengguo (张鹏国), Li Wenbin (李文斌)
Assignee: Zhejiang Uniview Technologies Co Ltd
Application filed by Zhejiang Uniview Technologies Co Ltd; priority to CN201911382709.1A; published as CN113051437A; application granted and published as CN113051437B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783: Retrieval using metadata automatically derived from the content
    • G06F 16/10: File systems; File servers
    • G06F 16/16: File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F 16/162: Delete operations

Abstract

The embodiment of the invention discloses a target deduplication method and apparatus, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring any two temporally adjacent slice videos, namely a first slice video and a second slice video, wherein a pre-marked first calibration time period exists at the end of the first slice video, a pre-marked second calibration time period exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time; performing target detection and target tracking on the sub-slice video within the first calibration time period and the sub-slice video within the second calibration time period respectively, and performing target calibration processing according to the target detection results and tracking results to obtain a first calibration result and a second calibration result; and performing target deduplication processing on the first slice video and the second slice video according to the first calibration result and the second calibration result. In this way, targets are deduplicated across different slice videos, and the finally obtained targets are guaranteed to be optimal.

Description

Target duplicate removal method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of monitoring, in particular to a target duplicate removal method and device, electronic equipment and a storage medium.
Background
Existing video stream analysis mainly falls into three types: live video streams, recorded video files, and offline video files. When recorded video files and offline video files are intelligently analyzed, in order to improve the analysis speed, the files are first sliced and the slices are then dispatched to different computing units for distributed computing.
At present, the video is usually sliced by time, and each slice video is then analyzed separately. However, this approach has a clear disadvantage: repeated targets exist in the analysis results of different slice videos, and the smaller the slice granularity is, the more repeated targets the analysis produces.
Disclosure of Invention
The embodiment of the invention provides a target duplicate removal method, a target duplicate removal device, electronic equipment and a storage medium, and aims to solve the technical problem that duplicate targets exist in analysis results of different slice videos in the prior art.
In a first aspect, an embodiment of the present invention provides a target deduplication method, where the method includes:
acquiring any two temporally adjacent slice videos, namely a first slice video and a second slice video, wherein a pre-marked first calibration time period exists at the end of the first slice video, a pre-marked second calibration time period exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time;
respectively carrying out target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and carrying out target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result;
and performing target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result.
In a second aspect, an embodiment of the present invention further provides a target deduplication apparatus, where the apparatus includes:
the first acquisition module is used for acquiring any two temporally adjacent slice videos, namely a first slice video and a second slice video, wherein a pre-marked first calibration time period exists at the end of the first slice video, a pre-marked second calibration time period exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time;
the calibration processing module is used for respectively carrying out target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and carrying out target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result;
and the duplication removal processing module is used for carrying out target duplication removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the target deduplication method described in any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the target deduplication method according to any one of the embodiments of the present invention.
In the embodiment of the invention, for two temporally adjacent slice videos, the end of the first slice video and the beginning of the second slice video overlap in time, and the overlapping time is marked as the first calibration time period and the second calibration time period respectively; that is, the sub-slice videos within the first and second calibration time periods have the same content. Target calibration processing is performed on the sub-slice video within the first calibration time period and the sub-slice video within the second calibration time period respectively, and whether the targets in these sub-slice videos are discarded is determined according to the calibration results. The goal of deduplicating targets across different slice videos is thereby achieved, and the finally obtained targets are guaranteed to be optimal.
Drawings
FIG. 1 is a schematic flow chart illustrating a target deduplication method according to a first embodiment of the present invention;
FIG. 2a is a flowchart illustrating a target deduplication method according to a second embodiment of the present invention;
FIG. 2b is a comparison between before and after video slicing according to a second embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a target deduplication apparatus according to a third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a target deduplication method according to the first embodiment of the present invention. The method is applicable to analyzing a video stream, for example querying valuable video segments from a surveillance video, and may be executed by a target deduplication apparatus, which may be implemented in software and/or hardware and integrated on an electronic device, for example a server or a computer device.
As shown in fig. 1, the target deduplication method specifically includes:
s101, acquiring a first slice video and a second slice video which are adjacent in any two time sequences.
A pre-marked first calibration time period exists at the end of the first slice video, a pre-marked second calibration time period exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time. Exemplarily, if the first slice video covers 10:00-10:10 and the second slice video covers 10:09:30-10:20, the first calibration time period is 10:09:30-10:10:00 at the end of the first slice video, and the second calibration time period is the same interval, 10:09:30-10:10:00, at the beginning of the second slice video. It should be noted here that the reason duplicate targets exist in the analysis results of different slice videos is that the last targets of each slice video cannot be completely tracked and may exist in the adjacent slice video at the same time. Therefore, performing target deduplication only on the videos of the two calibration time periods is sufficient to ensure that no repeated target exists in the analysis results of different slice videos.
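The relationship between a temporally adjacent slice pair and its two calibration windows can be sketched as follows. This is a minimal illustration, not part of the patent: times are plain seconds since midnight, and the function name is invented for the sketch.

```python
def calibration_windows(first_slice, second_slice, overlap_seconds):
    """Given two temporally adjacent slices as (begin, end) pairs in
    seconds, return the first calibration period (tail of the first
    slice) and the second calibration period (head of the second slice).
    For correctly cut slices both windows cover the same wall-clock
    interval, which is what makes cross-slice deduplication possible."""
    _, first_end = first_slice
    second_begin, _ = second_slice
    first_cal = (first_end - overlap_seconds, first_end)
    second_cal = (second_begin, second_begin + overlap_seconds)
    return first_cal, second_cal


# Slices cut with a 30-second overlap: 10:00-10:10 and 10:09:30-10:20,
# expressed as seconds since midnight.
f_cal, s_cal = calibration_windows((36000, 36600), (36570, 37200), 30)
assert f_cal == s_cal == (36570, 36600)
```

The assertion holds precisely because the second slice was cut to begin one overlap duration before the first slice ends.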
S102, respectively carrying out target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and carrying out target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result.
In the embodiment of the application, when the sub-slice video is subjected to target detection, optionally, the target detection is performed through a pre-trained target detection network model, wherein the target detection network model can be one of an R-CNN model, a Fast R-CNN model and a Faster R-CNN model. When the detected target is tracked, optionally, a pre-trained target tracking model may be used, and the detected target and the sub-slice video are used as input to complete the tracking of the target, wherein the target tracking model may be a Recurrent Neural Network (RNN). Therefore, for the video contents in the first slice video except the first calibration time period and the video contents in the second slice video except the second calibration time period, the target detection and the target tracking can be respectively carried out through the two models, and then the optimal target is obtained through analysis. And for the sub-slice videos in the first calibration time period and the second calibration time period, the target calibration needs to be completed in the target detection and target tracking processes, which is specifically as follows:
the operation of performing target detection and target tracking on the sub-slice video within the first calibration time period and performing target calibration processing according to a target detection result and a target tracking result comprises:
and S01, carrying out target detection processing on the sub-slice video to obtain at least one target.
Illustratively, the target detection network model is used for carrying out target detection on the sub-slice video in the first calibration time period, and at least one target is obtained according to the output of the model.
And S02, judging whether the target appears for the first time, if so, marking as a new target, otherwise, marking as a historical target.
The new target refers to a target that appears for the first time within the first calibration time period, and the historical target refers to a target that was already detected before the first calibration time period. It should be noted that, within the first calibration time period, if a new target M disappears and then reappears, the reappeared target needs to be marked as another new target, for example as new target M', for distinction.
And S03, respectively carrying out target tracking processing on the new target and the historical target, and judging whether the new target and the historical target disappear at the end time of the first calibration time period.
For example, the new target and the historical target are respectively subjected to target tracking processing based on a target tracking model, and at the end time of the first calibration time period, it is determined whether the new target and the historical target disappear, that is, whether the new target and the historical target still exist in a video frame picture corresponding to the end time of the first calibration time period is detected.
And S04, for each of the new targets and historical targets, if it has disappeared, marking it as tracking complete; otherwise, marking it as tracking incomplete.
The first calibration result is obtained through S01-S04. Exemplarily, the first calibration result includes four classes. Class A: "new target", "tracking complete"; Class B: "historical target", "tracking complete"; Class A': "new target", "tracking incomplete"; Class B': "historical target", "tracking incomplete".
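Steps S01-S04 amount to a small classification routine, which can be sketched as follows. The field names and function name are assumptions made for this sketch; in practice the two flags would come from the detection and tracking models discussed above.

```python
def calibrate_first_period(targets):
    """Assign each target detected in the first calibration period one of
    the four classes A / B / A' / B' (steps S01-S04):
      - new vs. historical: did the target first appear inside the
        calibration period (S02)?
      - complete vs. incomplete: has the target disappeared by the end
        of the calibration period (S03-S04)?"""
    labels = {}
    for name, info in targets.items():
        is_new = info["first_seen_in_period"]
        done = info["gone_at_period_end"]
        if is_new:
            labels[name] = "A" if done else "A'"
        else:
            labels[name] = "B" if done else "B'"
    return labels
```

For instance, a pedestrian that entered the picture before the calibration period and is still visible at its end would be labelled B'.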
The operation of performing target detection and target tracking on the sub-slice video in the second calibration time period and performing target calibration processing according to a target detection result and a target tracking result comprises the following steps:
and S11, carrying out target detection processing on the sub-slice video to obtain at least one target, and marking each target as a new target.
For example, the target detection network model performs target detection on the sub-slice video in the second calibration period, and obtains at least one target according to the output of the model, and since the sub-slice video in the second calibration period is the beginning part of the second slice video, the targets detected from the sub-slice video are all new targets, and therefore the detected targets need to be marked as "new targets".
And S12, carrying out target tracking processing on the detected new targets, and judging whether each new target disappears at the end time of the second calibration time period.
For example, the target tracking process is performed on the new targets based on the target tracking model, and at the end time of the second calibration time period, it is determined whether each new target disappears, that is, it is determined whether the new target still exists in the video frame corresponding to the end time of the second calibration time period.
And S13, marking the disappeared new targets as tracking complete, and marking the new targets that have not disappeared as tracking incomplete.
The second calibration result is obtained through S11-S13. Exemplarily, the second calibration result includes two classes. Class C: "new target", "tracking complete"; Class C': "new target", "tracking incomplete".
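Since every target at the head of the second slice is a new target, steps S11-S13 reduce to a single-flag classification. The following is again an illustrative sketch with invented names:

```python
def calibrate_second_period(gone_at_period_end):
    """Map each new target detected in the second calibration period to
    class C (disappeared before the period ends, i.e. tracking complete)
    or class C' (still present at the period's end, i.e. tracking
    incomplete), per steps S11-S13."""
    return {name: ("C" if gone else "C'")
            for name, gone in gone_at_period_end.items()}
```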
S103, performing target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result.
Since the sub-slice video corresponding to the first calibration time period and the sub-slice video corresponding to the second calibration time period record the same time interval, the target relationship obtained by analysis according to the first calibration result and the second calibration result is: C = A + B; C' = A' + B'.
According to the video stream analysis process, after a certain target is detected and tracked, the computing unit searches the tracking process for the optimal target corresponding to that target and outputs it as the analysis result. The optimal target is optionally determined according to the position of the target in the video picture; for example, the appearance of the target at the center of the picture is taken as its optimal target. Since class A and class B targets (i.e., the class C targets in the second slice video) have already been completely tracked, their optimal targets are necessarily extracted from the first slice video. Therefore, the class C targets detected in the second calibration time period are discarded; that is, the new targets detected in the sub-slice video corresponding to the second calibration time period and marked as tracking complete are discarded, while the class A and class B targets detected in the first calibration time period are retained, that is, the new targets and historical targets detected in the sub-slice video corresponding to the first calibration time period and marked as tracking complete are retained.
Since the tracking of class A' and class B' targets is not completed within the first calibration time period, their optimal targets cannot be guaranteed to be extracted from the first slice video. Because the content of the sub-slice video in the second calibration time period of the second slice video is identical to that of the sub-slice video in the first calibration time period of the first slice video, tracking of the incompletely tracked class A' and class B' targets (i.e., the class C' targets in the second slice video) continues in the second slice video until it is complete, so their optimal targets are necessarily extracted from the second slice video. The class A' and class B' targets can therefore be discarded directly; that is, the new targets and historical targets detected in the sub-slice video corresponding to the first calibration time period and marked as tracking incomplete are discarded. Meanwhile, the new targets detected in the sub-slice video corresponding to the second calibration time period and marked as tracking incomplete, i.e., the class C' targets, are retained. Through these discarding operations, the repeated targets in the analysis results of the first slice video and the second slice video are deleted, achieving target deduplication. Moreover, after deduplication, every target in every slice video can be completely tracked, so the optimal target corresponding to any target in any slice video can be found during its tracking process, ensuring that the targets finally obtained by video analysis are optimal.
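The discard rules above can be summarized in code. This is a sketch only: the class labels follow the A/B/A'/B' and C/C' notation of the text, and the function name is an assumption.

```python
def deduplicate(first_calibration, second_calibration):
    """Decide which calibration-period targets each slice keeps:
      - keep classes A and B in the first slice (tracking completed
        there, so the optimal target is extracted from the first slice);
      - discard their duplicates, class C, in the second slice;
      - discard classes A' and B' in the first slice (tracking
        incomplete there);
      - keep class C' in the second slice, where tracking continues to
        completion."""
    keep_in_first = [t for t, c in first_calibration.items() if c in ("A", "B")]
    keep_in_second = [t for t, c in second_calibration.items() if c == "C'"]
    return keep_in_first, keep_in_second
```

Note that every target in the overlap is kept exactly once: either in the first slice (A, B) or in the second (C'), never in both.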
In this embodiment, for two temporally adjacent slice videos, the end of the first slice video and the beginning of the second slice video overlap in time, and the overlapping time is marked as the first calibration time period and the second calibration time period respectively; that is, the sub-slice videos within the two calibration time periods have the same content. Target calibration processing is performed on each of these sub-slice videos, and whether the targets in them are discarded is determined according to the calibration results. The purpose of deduplicating targets across different slice videos is thereby achieved, and the targets finally obtained are guaranteed to be optimal.
Example two
Fig. 2a is a schematic flowchart of a target deduplication method according to the second embodiment of the present invention, which is optimized on the basis of the foregoing embodiment. Referring to fig. 2a, the method specifically includes:
s201, obtaining the video to be split.
Illustratively, a surveillance video (e.g., one covering 10:00-11:00) is obtained as the video to be sliced. According to the picture monitored at the video site, the time needed to normally track one target is set as n seconds, and the slices are overlapped by n seconds; that is, after segmentation, the overlapping duration of any two temporally adjacent slice videos is guaranteed to be n seconds, where the value of n can be adjusted for different monitoring sites. It should be noted that setting the time in which one target is normally tracked as the overlapping duration ensures that a target can be completely tracked within the overlapping duration.
S202, determining the number of the slice videos, the time length of each slice video and the overlapping time length of any two adjacent slice videos in time sequence.
Illustratively, the time length of each slice video is set to 10 minutes, the number of slice videos is 6, and the overlapping duration n = 30 seconds.
S203, according to the duration of the first slice video, the video to be sliced is sliced, and the slice video with the time sequence arranged at the first position is obtained.
For example, the slice video ranked first in time sequence can be segmented directly according to the 10-minute slice duration, yielding the slice video covering 10:00-10:10.
And S204, according to the time length and the overlapping time length of the slice videos, performing slicing processing on the to-be-sliced videos to obtain other slice videos after the slice video with the time sequence arranged at the first position.
For example, when obtaining any of the other slice videos, the sum of the set slice video duration and the set overlapping duration is used as the actual length of the slice video; that is, each of the other slices is cut with an additional segment of the overlapping duration extending forward in time sequence. Exemplarily, the slice video ranked second in time sequence covers 10:09:30-10:20:00. Specifically, see table 1, which shows the start and end times of each slice video. Optionally, refer to fig. 2b, which shows a schematic comparison before and after video slicing, where the hatched area circled by the dashed frame is the overlapping area of two adjacent slice videos.
Further, after the slice videos are obtained, calibration of the overlapping time period of each slice video is required, which may specifically be performed as follows:
s21, obtaining a sub-slice video with a preset length before the end of the slice video with the time sequence arranged at the first position, and marking the time period corresponding to the sub-slice video as a first calibration time period.
And S22, acquiring a sub-slice video with a preset length after the slice video with the time sequence arranged at the last bit starts, and marking the time period corresponding to the sub-slice video as a second calibration time period.
S23, for each of the other slice videos, performing the following operations: acquiring the sub-slice video of a preset length after the slice video starts, and marking the corresponding time period as a second calibration time period; acquiring the sub-slice video of a preset length before the slice video ends, and marking the corresponding time period as a first calibration time period; wherein the preset length is equal to the overlapping duration, exemplarily 30 seconds.
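The slicing and calibration steps (S202-S204 together with S21-S23) can be sketched as follows. This is an illustrative implementation under the assumption that times are `datetime` values; the function names are invented for the sketch.

```python
from datetime import datetime, timedelta

def cut_slices(start, total_minutes, slice_minutes, overlap_seconds):
    """Cut the video into slices of slice_minutes each; every slice
    after the first additionally extends overlap_seconds backwards in
    time, so adjacent slices share an overlapping segment (S202-S204)."""
    slices = []
    t = start
    end_of_video = start + timedelta(minutes=total_minutes)
    while t < end_of_video:
        nominal_end = t + timedelta(minutes=slice_minutes)
        begin = t if not slices else t - timedelta(seconds=overlap_seconds)
        slices.append((begin, nominal_end))
        t = nominal_end
    return slices

def mark_calibration_periods(slices, overlap_seconds):
    """Mark calibration periods (S21-S23): the last overlap_seconds of
    every slice except the last is a first calibration period; the first
    overlap_seconds of every slice except the first is a second one."""
    d = timedelta(seconds=overlap_seconds)
    marks = []
    for i, (begin, end) in enumerate(slices):
        marks.append({
            "second": (begin, begin + d) if i > 0 else None,
            "first": (end - d, end) if i < len(slices) - 1 else None,
        })
    return marks
```

With a one-hour video, 10-minute slices and a 30-second overlap, this produces six slices whose adjacent first/second calibration periods coincide exactly.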
Specifically, the time period calibration results can be seen in table 1.
TABLE 1 slice video start and end times and calibration time periods

Slice   Start      End        Second calibration period   First calibration period
1       10:00:00   10:10:00   (none)                      10:09:30-10:10:00
2       10:09:30   10:20:00   10:09:30-10:10:00           10:19:30-10:20:00
3       10:19:30   10:30:00   10:19:30-10:20:00           10:29:30-10:30:00
4       10:29:30   10:40:00   10:29:30-10:30:00           10:39:30-10:40:00
5       10:39:30   10:50:00   10:39:30-10:40:00           10:49:30-10:50:00
6       10:49:30   11:00:00   10:49:30-10:50:00           (none)
S205, acquiring any two temporally adjacent slice videos, namely a first slice video and a second slice video, wherein a pre-marked first calibration time period exists at the end of the first slice video, a pre-marked second calibration time period exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time.
S206, respectively carrying out target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and carrying out target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result.
And S207, performing target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result.
In this embodiment, when the video is sliced, an overlapping part between any two adjacent slice videos is guaranteed and the duration of the overlapping part is calibrated, which provides the basis for subsequent target deduplication and thereby ensures that subsequent targets can be deduplicated.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a target deduplication apparatus according to a third embodiment of the present invention, as shown in fig. 3, the apparatus includes:
a first obtaining module 301, configured to obtain any two temporally adjacent slice videos, namely a first slice video and a second slice video, where a pre-marked first calibration time period exists at the end of the first slice video, a pre-marked second calibration time period exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time;
a calibration processing module 302, configured to perform target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, respectively, and perform target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result;
a duplicate removal processing module 303, configured to perform target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring the video to be sliced;
the determining module is used for determining the number of slice videos, the time length of each slice video and the overlapping duration of any two temporally adjacent slice videos;
the first segmentation module is used for slicing the video to be sliced according to the duration of the first slice video to obtain the slice video ranked first in time sequence;
and the second segmentation module is used for slicing the video to be sliced according to the slice video duration and the overlapping duration to obtain the other slice videos ranked after the first slice video in time sequence.
Optionally, the apparatus further comprises:
the first calibration module is used for acquiring a sub-slice video with a preset length before the end of the slice video with the time sequence arranged at the first position, and marking the time period corresponding to the sub-slice video as a first calibration time period;
the second calibration module is used for acquiring a sub-slice video with a preset length after the slice video with the time sequence arranged at the last bit starts, and marking the time period corresponding to the sub-slice video as a second calibration time period;
the third calibration module is used for executing the following operations aiming at each other slice video: acquiring a sub-slice video with a preset length after the slice video starts, and marking a time period corresponding to the sub-slice video as a second calibration time period; acquiring a sub-slice video with a preset length before the slice video is finished, and marking a time period corresponding to the sub-slice video as a first calibration time period;
wherein the preset length is equal to the overlap duration.
Optionally, the calibration processing module includes:
the first detection unit is used for carrying out target detection processing on the sub-slice video to obtain at least one target;
the first marking unit is used for judging whether the target appears for the first time, if so, marking the target as a new target, and otherwise, marking the target as a historical target;
the first tracking unit is used for respectively tracking the new target and the historical target and judging whether the new target and the historical target disappear or not at the end time of a first calibration time period;
and the second calibration unit is used for marking the new targets and historical targets that have disappeared as tracking complete, and otherwise marking them as tracking incomplete.
Optionally, the calibration processing module includes:
the second detection unit is used for carrying out target detection processing on the sub-slice video to obtain at least one target and marking each target as a new target;
the second tracking unit is used for carrying out target tracking processing on the detected new targets and judging whether each new target disappears at the end time of the second calibration time period;
and the second marking unit is used for marking each new target that has disappeared as tracking complete, and marking each new target that has not disappeared as tracking incomplete.
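The second-calibration logic is simpler, since every target in this period counts as new; a sketch under the same assumed data model:

```python
def calibrate_second_period(last_seen, period_end):
    """Calibrate targets detected in a second calibration time period.

    Every target detected here is marked 'new'. A new target that has
    disappeared by the period's end time is tracking-complete; one still
    visible at the end time is tracking-incomplete.
    """
    return {tid: ('new', 'complete' if t < period_end else 'incomplete')
            for tid, t in last_seen.items()}
```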
Optionally, the deduplication processing module includes:
the first duplicate removal unit is used for discarding new targets and historical targets marked as tracking incomplete that are detected from the sub-slice video corresponding to the first calibration time period;
and the second duplicate removal unit is used for discarding new targets marked as tracking complete that are detected from the sub-slice video corresponding to the second calibration time period.
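The two discard rules can be condensed into one decision function (an illustrative sketch, not the patent's literal implementation):

```python
def keep_target(period, kind, status):
    """Decide whether a target detected in a calibration period is kept.

    period -- 'first' or 'second' calibration time period
    kind   -- 'new' or 'historical'
    status -- 'complete' or 'incomplete' tracking mark
    """
    if period == 'first':
        # discard new and historical targets marked tracking-incomplete
        return status == 'complete'
    if period == 'second':
        # discard new targets marked tracking-complete
        return not (kind == 'new' and status == 'complete')
    raise ValueError(f"unknown period: {period}")
```

Note how the rules complement each other: a target cut off by the slice boundary is tracking-incomplete in the first calibration period (dropped there) but also incomplete, hence kept, in the next slice's second calibration period; a target wholly inside the overlap is complete in both periods and is kept only by the earlier slice.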
The target duplicate removal device provided by the embodiment of the present invention can execute the target duplicate removal method provided by any embodiment of the present invention, and has the functional modules corresponding to that method, together with its beneficial effects.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. Fig. 4 illustrates a block diagram of an exemplary electronic device 12 suitable for implementing embodiments of the present invention. The electronic device 12 shown in Fig. 4 is only an example and should not impose any limitation on the functionality or the scope of use of the embodiments of the present invention.
As shown in fig. 4, electronic device 12 is in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, commonly referred to as a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, to implement a target deduplication method provided by an embodiment of the present invention, the method including:
acquiring any two slice videos that are adjacent in time sequence as a first slice video and a second slice video, wherein a first calibration time period marked in advance exists at the tail of the first slice video, a second calibration time period marked in advance exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time;
respectively carrying out target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and carrying out target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result;
and performing target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result.
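Putting the three steps together, the following self-contained sketch (a toy interval model; all names are assumptions) illustrates that the scheme counts each target exactly once across overlapping slices, provided no target outlives an entire slice:

```python
def count_detections(targets, num_slices, slice_len, overlap):
    """Count how often each target survives deduplication.

    `targets` maps id -> (appear, disappear) times. A target seen in a
    slice's second calibration period is discarded if it disappears
    inside that period (it was already counted by the previous slice);
    one seen in a first calibration period is discarded if it is still
    present at the slice end (the next slice will count it).
    """
    counts = {tid: 0 for tid in targets}
    for i in range(num_slices):
        s = i * (slice_len - overlap)
        e = s + slice_len
        for tid, (a, d) in targets.items():
            if d <= s or a >= e:
                continue  # not visible in this slice
            if i > 0 and a < s + overlap:
                if d <= s + overlap:
                    continue  # complete in second calibration period
            elif i < num_slices - 1 and d > e - overlap:
                if d > e:
                    continue  # incomplete in first calibration period
            counts[tid] += 1
    return counts


# Three 10 s slices overlapping by 2 s: [0,10), [8,18), [16,26).
demo = {'inside':   (1.0, 5.0),    # wholly inside slice 0
        'overlap':  (8.5, 9.5),    # wholly inside the first overlap
        'boundary': (9.0, 12.0),   # spans the first slice boundary
        'middle':   (11.0, 15.0)}  # wholly inside slice 1
```

Each of the four demo targets, including the two that fall in the overlap region, is counted exactly once.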
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a target deduplication method provided in an embodiment of the present invention, where the method includes:
acquiring any two slice videos that are adjacent in time sequence as a first slice video and a second slice video, wherein a first calibration time period marked in advance exists at the tail of the first slice video, a second calibration time period marked in advance exists at the head of the second slice video, and the first calibration time period and the second calibration time period overlap in time;
respectively performing target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and performing target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result;
and performing target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is merely a description of preferred embodiments of the present invention and of the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit, the scope of the present invention being determined by the scope of the appended claims.

Claims (7)

1. A method of object deduplication, the method comprising:
acquiring any two slice videos that are adjacent in time sequence as a first slice video and a second slice video, wherein a first calibration time period marked in advance exists at the tail of the first slice video, a second calibration time period marked in advance exists at the beginning of the second slice video, and the first calibration time period and the second calibration time period overlap in time;
respectively performing target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and performing target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result;
performing target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result;
the target detection and target tracking are carried out on the sub-slice video in the first calibration time period, and target calibration processing is carried out according to a target detection result and a target tracking result, and the method comprises the following steps:
carrying out target detection processing on the sub-slice video to obtain at least one target;
judging whether the target appears for the first time, if so, marking as a new target, and otherwise, marking as a historical target;
respectively carrying out target tracking processing on the new target and the historical target, and judging whether the new target and the historical target disappear at the end time of a first calibration time period;
if the new target and the historical target have disappeared, marking them as tracking complete, otherwise marking them as tracking incomplete;
the target detection and target tracking are carried out on the sub-slice video in the second calibration time period, and target calibration processing is carried out according to a target detection result and a tracking result, and the method comprises the following steps:
carrying out target detection processing on the sub-slice video to obtain at least one target, and marking each target as a new target;
carrying out target tracking processing on the detected new targets, and judging whether each new target disappears or not at the end time of the second calibration time period;
marking each new target that has disappeared as tracking complete, and marking each new target that has not disappeared as tracking incomplete;
the performing, according to the first calibration result and the second calibration result, target deduplication processing on the first slice video and the second slice video includes:
discarding new targets and historical targets marked as tracking incomplete that are detected from the sub-slice video corresponding to the first calibration time period;
and discarding new targets marked as tracking complete that are detected from the sub-slice video corresponding to the second calibration time period.
2. The method of claim 1, wherein prior to acquiring any two slice videos that are adjacent in time sequence as the first slice video and the second slice video, the method further comprises:
acquiring a video to be divided;
determining the number of the slice videos, the time length of each slice video and the overlapping time length of any two adjacent slice videos in time sequence;
according to the duration of the first slice video, performing segmentation processing on the video to be segmented to obtain a slice video with a time sequence arranged at the first position;
and performing segmentation processing on the video to be segmented according to the time length and the overlapping time length of the slice videos to obtain the other slice videos arranged after the first slice video in the time sequence.
3. The method of claim 2, wherein after obtaining the other slice videos subsequent in time sequence to the first slice video, the method further comprises:
acquiring a sub-slice video with a preset length before the end of the slice video with the time sequence arranged at the first position, and marking a time period corresponding to the sub-slice video as a first calibration time period;
acquiring a sub-slice video with a preset length after the slice video with the time sequence arranged at the last bit starts, and marking a time period corresponding to the sub-slice video as a second calibration time period;
for each of the other slice videos, the following operations are performed: acquiring a sub-slice video with a preset length after the slice video starts, and marking a time period corresponding to the sub-slice video as a second calibration time period; acquiring a sub-slice video with a preset length before the slice video is finished, and marking a time period corresponding to the sub-slice video as a first calibration time period;
wherein the preset length is equal to the overlap duration.
4. An object deduplication apparatus, the apparatus comprising:
the first acquisition module is used for acquiring any two slice videos that are adjacent in time sequence as a first slice video and a second slice video, wherein a first calibration time period marked in advance exists at the tail of the first slice video, a second calibration time period marked in advance exists at the head of the second slice video, and the first calibration time period and the second calibration time period overlap in time;
the calibration processing module is used for respectively carrying out target detection and target tracking on the sub-slice video in the first calibration time period and the sub-slice video in the second calibration time period, and carrying out target calibration processing according to a target detection result and a target tracking result to obtain a first calibration result and a second calibration result;
the duplicate removal processing module is used for carrying out target duplicate removal processing on the first slice video and the second slice video according to the first calibration result and the second calibration result;
the calibration processing module comprises:
the first detection unit is used for carrying out target detection processing on the sub-slice video to obtain at least one target;
the first marking unit is used for judging whether the target appears for the first time, if so, marking the target as a new target, and otherwise, marking the target as a historical target;
the first tracking unit is used for respectively carrying out target tracking processing on the new target and the historical target and judging whether the new target and the historical target disappear at the end time of a first calibration time period;
the second calibration unit is used for marking the new target and the historical target as tracking complete if they have disappeared, and otherwise marking them as tracking incomplete;
the calibration processing module comprises:
the second detection unit is used for carrying out target detection processing on the sub-slice video to obtain at least one target and marking each target as a new target;
the second tracking unit is used for carrying out target tracking processing on the detected new targets and judging whether each new target disappears at the end time of the second calibration time period;
the second marking unit is used for marking each new target that has disappeared as tracking complete, and marking each new target that has not disappeared as tracking incomplete;
the deduplication processing module comprises:
the first duplicate removal unit is used for discarding new targets and historical targets marked as tracking incomplete that are detected from the sub-slice video corresponding to the first calibration time period;
and the second duplicate removal unit is used for discarding new targets marked as tracking complete that are detected from the sub-slice video corresponding to the second calibration time period.
5. The apparatus of claim 4, further comprising:
the second acquisition module is used for acquiring the video to be divided;
the determining module is used for determining the number of the slice videos, the time length of each slice video and the overlapping time length of any two adjacent slice videos in time sequence;
the first segmentation module is used for segmenting the video to be segmented according to the duration of the first slice video to obtain the slice video with the time sequence arranged at the first position;
and the second segmentation module is used for performing segmentation processing on the video to be segmented according to the time length and the overlapping time length of the slice videos to obtain the other slice videos arranged after the first slice video in the time sequence.
6. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the object deduplication method as recited in any of claims 1-3.
7. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the object deduplication method as recited in any one of claims 1-3.
CN201911382709.1A 2019-12-28 2019-12-28 Target duplicate removal method and device, electronic equipment and storage medium Active CN113051437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911382709.1A CN113051437B (en) 2019-12-28 2019-12-28 Target duplicate removal method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113051437A CN113051437A (en) 2021-06-29
CN113051437B true CN113051437B (en) 2022-12-13

Family

ID=76507394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911382709.1A Active CN113051437B (en) 2019-12-28 2019-12-28 Target duplicate removal method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113051437B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985221A (en) * 2018-07-12 2018-12-11 广州视源电子科技股份有限公司 Video clip detection method, device, equipment and storage medium
CN109214238A (en) * 2017-06-30 2019-01-15 百度在线网络技术(北京)有限公司 Multi-object tracking method, device, equipment and storage medium
CN109543641A (en) * 2018-11-30 2019-03-29 厦门市美亚柏科信息股份有限公司 A kind of multiple target De-weight method, terminal device and the storage medium of real-time video
CN109582640A (en) * 2018-11-15 2019-04-05 深圳市酷开网络科技有限公司 A kind of data deduplication storage method, device and storage medium based on sliding window

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2008222933A1 (en) * 2007-03-02 2008-09-12 Organic Motion System and method for tracking three dimensional objects


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Moving Target Detection and Tracking; Wei Hongfeng et al.; Journal of Bohai University (Natural Science Edition); 2017-12-15 (Issue 04); full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant