CN110781733B - Image deduplication method, storage medium, network device, and intelligent monitoring system (Google Patents)

Info

Publication number: CN110781733B (application CN201910878172.1A)
Authority: CN (China)
Legal status: Active (granted)
Other versions: CN110781733A (Chinese, zh)
Prior art keywords: similarity, target, human body, target image
Inventors: 潘华东, 孙鹤, 罗时现
Assignee: Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd; published as CN110781733A, granted as CN110781733B.

Classifications

    • G06V20/30 - Scenes; scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • G06F18/22 - Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06V10/757 - Image or video pattern matching; matching configurations of points or features
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/50 - Maintenance of biometric data or enrolment thereof


Abstract

The application discloses an image deduplication method, a storage medium, a network device, and an intelligent monitoring system. The image deduplication method comprises the following steps: acquiring a first target image to be stored in a database and a second target image already stored in the database; extracting global features and at least one local feature of a first human target in the first target image and of a second human target in the second target image; performing similarity matching between the global features and between the corresponding local features to obtain at least two candidate similarities; generating a first similarity from the at least two candidate similarities; and determining, based on the first similarity, whether the first target image meets the warehousing standard. Because the first similarity between the first human target and the second human target is obtained from both global and local features, images repeatedly captured of the same human target due to factors such as occlusion and changes in body posture can be effectively removed.

Description

Image deduplication method, storage medium, network device, and intelligent monitoring system
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image deduplication method, a storage medium, a network device, and an intelligent monitoring system.
Background
With the development of artificial intelligence technology, intelligent monitoring devices, typified by smart cameras, have been rapidly developed and popularized. Such devices can monitor and capture targets in real time. For example, an existing video-structuring smart camera can track and capture a human target and store the captured target picture in a base library. However, due to occlusion, changes in pedestrian posture, and other factors, the camera may capture the same target repeatedly, i.e., the same target is assigned multiple IDs (identifiers).
Ideally, the same target should have a single ID, but repeated capture caused by factors such as occlusion gives the same target different IDs. In practical applications, repeated capture causes information redundancy: a large number of duplicate targets are stored in the base library, which burdens storage and increases the image-maintenance workload.
Disclosure of Invention
The application mainly provides an image deduplication method, a storage medium, a network device, and an intelligent monitoring system, aiming to solve the problem that images captured of the same human target are stored repeatedly due to factors such as occlusion and changes in pedestrian posture.
To solve the technical problem, one technical solution adopted by the application is to provide an image deduplication method. The image deduplication method comprises the following steps: acquiring a first target image to be stored in a database and a second target image already stored in the database; extracting global features and at least one local feature of a first human target in the first target image and of a second human target in the second target image; performing similarity matching between the global features and between the corresponding local features to obtain at least two candidate similarities; generating a first similarity from the at least two candidate similarities; and determining, based on the first similarity, whether the first target image meets the warehousing standard.
To solve the above technical problem, another technical solution adopted by the application is to provide a storage medium. The storage medium stores program data which, when executed by a processor, performs the steps of the image deduplication method described above.
To solve the above technical problem, another technical solution adopted by the application is to provide a network device. The network device comprises a processor and a memory connected to each other; the memory stores a computer program, and the processor executes the computer program to implement the steps of the image deduplication method described above.
To solve the technical problem, a further technical solution adopted by the application is to provide an intelligent monitoring system. The intelligent monitoring system comprises a network camera and the network device described above, the network camera being communicatively connected to the network device and configured to capture images of human bodies in a monitored area.
The beneficial effects of the application are as follows. Unlike the prior art, the application discloses an image deduplication method, a storage medium, a network device, and an intelligent monitoring system. Whether the first human target and the second human target are the same human target, and hence whether the first target image needs to be deduplicated, is judged from the similarity between the first target image to be stored in the database and the second target image already stored there. Global features and at least one local feature of the first human target in the first target image and of the second human target in the second target image are extracted, similarity matching is performed between the global features and between the corresponding local features to obtain at least two candidate similarities, and the first similarity is generated from these candidate similarities. Even if the human target is locally affected by occlusion, a change in body posture, or similar factors, the candidate similarities obtained from the unaffected local features are not disturbed, so the influence of occlusion and posture change on the first similarity is largely eliminated. Whether the first target image meets the warehousing standard is then determined from the first similarity: when the first and second human targets are judged to be the same human target, the first target image does not meet the warehousing standard, belongs to a repeated capture, and is discarded; when they are judged to be different human targets, the first target image meets the warehousing standard and is stored in the database. In this way the application can effectively remove images repeatedly captured of the same human target due to factors such as occlusion and changes in body posture.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort, wherein:
FIG. 1 is a schematic flowchart of an embodiment of an image deduplication method provided in the present application;
FIG. 2 is a schematic flow chart of step S15 of the image deduplication method of FIG. 1;
FIG. 3 is a schematic structural diagram of an embodiment of a storage medium provided herein;
FIG. 4 is a schematic block diagram of an embodiment of a network device provided herein;
FIG. 5 is a schematic structural diagram of an embodiment of the intelligent monitoring system provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and "third" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first", "second", or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly and specifically limited otherwise. Furthermore, the terms "include" and "have", as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of an image deduplication method provided in the present application.
Specifically, the image deduplication method comprises the following steps:
S11: acquire a first target image to be stored in the database and a second target image already stored in the database.
A first target image is acquired by an image-capture device such as a network camera. For example, a network camera installed at the entrance of a residential area monitors and captures images of people entering and leaving the area.
Multiple second target images are stored in the database; each is an image previously captured and stored by the network camera. The first target image contains a first human target and the second target image contains a second human target. By comparing the first human target with the second human target, it is decided whether the first target image should be discarded, thereby achieving deduplication.
It will be appreciated that while a person passes through the monitoring area, the network camera captures the human body several times, and the multiple first target images formed by repeatedly capturing the same human body within a certain period are duplicates, which burdens database storage and increases the image-maintenance workload. Through deduplication, the database stores only one first target image for the journey, and likewise processes and stores only one first target image when the same human body passes through again.
Within the same period, several different human bodies pass through the monitoring area. Images of some of them have already been captured and stored in the database as second target images; when those bodies are captured again in subsequent journey segments, the new captures are deduplicated rather than stored. Human bodies that have not yet been captured are captured, deduplicated, and stored in the database as second target images.
The number of selected second target images can be reduced to lower the deduplication workload.
Optionally, second target images whose second human target moves in the same direction as the first human target are selected from the database. Second target images whose motion direction differs from that of the first human target then do not participate in deduplication, which effectively reduces the number of selected second target images and improves deduplication efficiency.
Because the human body is captured while target tracking is in progress, each captured human target has corresponding motion-trajectory information, from which its motion direction can be determined. Excluding second target images whose motion direction differs from that of the first human target reduces the number of interfering candidates.
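As a rough sketch of this optional screening step, the motion direction can be estimated from the endpoints of each target's trajectory and candidates moving in a different direction excluded. The function names, the endpoint-based direction estimate, and the angular tolerance below are illustrative assumptions, not details from the patent:

```python
import math

def direction_of(track):
    """Coarse motion direction (radians) from the first to the last
    trajectory point; `track` is a list of (x, y) coordinates."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    return math.atan2(y1 - y0, x1 - x0)

def same_direction(track_a, track_b, tol=math.pi / 4):
    """True if the two trajectories point within `tol` radians of each other."""
    diff = abs(direction_of(track_a) - direction_of(track_b))
    diff = min(diff, 2 * math.pi - diff)  # wrap around the circle
    return diff <= tol

# Candidates moving the opposite way are excluded from deduplication.
first_track = [(0, 0), (5, 1)]           # roughly rightward
candidates = {
    "img_a": [(10, 0), (4, 0)],          # leftward  -> filtered out
    "img_b": [(2, 2), (8, 3)],           # rightward -> kept
}
kept = [name for name, t in candidates.items() if same_direction(first_track, t)]
```

A real system would use the full 50-frame trajectory rather than just its endpoints, but the filtering effect is the same: only same-direction candidates proceed to similarity matching.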
For example, if the network camera records 50 frames during a period of tracking a human body, one of the 50 frames is selected as the first target image, and all 50 frames are used to obtain the motion-trajectory information of the human target and hence its motion direction.
Since the human body is captured several times along its path through the monitoring area, these captures are spaced apart. For example, after one capture yields a first target image from one set of 50 frames, a subsequent set of 50 frames yields another first target image from a new capture; the multiple first target images obtained from multiple captures must be deduplicated to eliminate images repeatedly captured of the same human target.
Optionally, second target images in which the coordinates of the second human target coincide with the motion trajectory of the first human target are selected from the database. Second target images whose second-human-target coordinates do not coincide with that trajectory then do not participate in deduplication.
Optionally, second target images whose second human target has the same attribute information as the first human target are selected from the database. The attribute information is, for example, the person's gender, clothing color, and the like. Second target images whose attribute information differs from that of the first human target then do not participate in deduplication.
S12: extract global features and at least one local feature of the first human target in the first target image and of the second human target in the second target image.
The first human target is extracted from the first target image, i.e., the region occupied by the first human body is obtained from the first target image; likewise, the second human target is extracted from the second target image, i.e., the region occupied by the second human body is obtained from it. Global features and at least one local feature of the first and second human targets are then extracted respectively.
The global feature is the whole-body feature of the human body, and the local features include the upper-body and lower-body features. That is, the whole-body feature and the upper-body feature and/or lower-body feature of the first and second human targets need to be extracted.
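The split into whole-body, upper-body, and lower-body regions might be sketched as follows; in practice each crop would be passed to a feature-extraction network, and the 50/50 split point is an assumption for illustration:

```python
def body_regions(box):
    """Split a human bounding box (x, y, w, h) into whole-body, upper-body,
    and lower-body sub-regions (the 50/50 split height is an assumption)."""
    x, y, w, h = box
    half = h // 2
    return {
        "whole": (x, y, w, h),
        "upper": (x, y, w, half),
        "lower": (x, y + half, w, h - half),
    }

# A 40x100 detection split into the three regions that feed feature extraction.
regions = body_regions((10, 20, 40, 100))
```

Each region then yields one feature vector, so that a single target contributes a global feature plus up to two local features.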
Obstacles in the environment, such as cars, guardrails, and billboards, can occlude the human body, and body postures change, e.g., kicking or stooping. Affected by these factors, repeated captures of the same human target may be mistaken for captures of several different human targets.
The application eliminates the influence of factors such as occlusion and posture change by obtaining both global and local features of the human target. For example, if the lower body is occluded by a car, the extracted upper-body feature is unaffected by the occlusion, so the similarity subsequently computed between the upper-body features of the first and second human targets is also unaffected.
S13: perform similarity matching between the global features and between the corresponding local features to obtain at least two candidate similarities.
That is, the whole-body feature of the first human target is matched against the whole-body feature of the second human target and their similarity is computed; the upper-body features are matched and their similarity is computed; and/or the lower-body features are matched and their similarity is computed. At least two candidate similarities are thus obtained.
For example, if the lower body is occluded in the first human target but not in the second, the candidate similarity computed between the whole-body features is low, the candidate similarity between the upper-body features is high, and the candidate similarity between the lower-body features is low.
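The patent does not fix a similarity metric, so as one plausible choice, cosine similarity between per-region feature vectors can serve as the candidate similarity; the vectors below are toy stand-ins for real extracted features:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def candidate_similarities(feats_a, feats_b):
    """One candidate similarity per region shared by both targets."""
    return {k: cosine_similarity(feats_a[k], feats_b[k])
            for k in feats_a.keys() & feats_b.keys()}

# Toy features: the lower body of target A is occluded, so its 'lower'
# vector diverges while the 'upper' match stays high, as described above.
a = {"whole": [1.0, 0.2, 0.1], "upper": [0.9, 0.1, 0.0], "lower": [0.0, 1.0, 0.0]}
b = {"whole": [1.0, 0.3, 0.1], "upper": [0.9, 0.1, 0.1], "lower": [1.0, 0.0, 0.1]}
sims = candidate_similarities(a, b)
```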
S14: generate a first similarity from the at least two candidate similarities.
The above step yields at least two candidate similarities, from which a first similarity between the first human target and the second human target is generated.
In some embodiments, the highest of the at least two candidate similarities is selected as the first similarity. This eliminates the influence of factors such as occlusion and posture change on the similarity between the two human targets and helps improve the accuracy of target-image deduplication.
In this embodiment, three candidate similarities corresponding to the whole-body, upper-body, and lower-body features are obtained, and the highest of the three is taken as the first similarity.
In some other embodiments, the at least two candidate similarities are weighted and summed to obtain the first similarity, which weakens the influence of occlusion, posture change, and the like on the similarity between the two human targets.
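The two fusion strategies described above (maximum and weighted sum) can be sketched as plain functions; the weights in the weighted-sum variant are illustrative, since the patent does not specify them:

```python
def fuse_max(candidates):
    """First similarity as the highest candidate (occlusion-robust choice)."""
    return max(candidates.values())

def fuse_weighted(candidates, weights):
    """First similarity as a weighted sum of the candidates."""
    return sum(weights[k] * v for k, v in candidates.items())

# Lower body occluded: the 'lower' candidate drops, but fuse_max ignores it,
# while fuse_weighted merely dilutes its influence.
cands = {"whole": 0.55, "upper": 0.92, "lower": 0.30}
s_max = fuse_max(cands)
s_sum = fuse_weighted(cands, {"whole": 0.5, "upper": 0.25, "lower": 0.25})
```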
S15: determine, based on the first similarity, whether the first target image meets the warehousing standard.
Since several second target images may be screened from the database, several first similarities between the first target image and those second target images are obtained.
Optionally, the maximum of these first similarities is compared with a preset similarity threshold. If the maximum is greater than or equal to the threshold, the first and second human targets are regarded as the same human target; the first target image does not meet the warehousing standard, belongs to a repeated capture, and is discarded, achieving deduplication. Otherwise, the first target image meets the warehousing standard and is stored in the database as a new second target image.
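A minimal sketch of this optional decision rule, assuming a 0-100 similarity scale and an illustrative threshold of 80:

```python
def meets_warehousing_standard(first_similarities, threshold=80):
    """True -> store the new image; False -> discard it as a repeat capture.
    Scores are on a 0-100 scale; the threshold value 80 is an assumption."""
    return max(first_similarities) < threshold

# One first similarity per screened database image.
new_target = meets_warehousing_standard([42, 61, 79])  # no close match: store
repeat = meets_warehousing_standard([42, 85, 12])      # close match: discard
```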
In this embodiment, the first similarities are first corrected, and it is then determined whether the first target image meets the warehousing standard. Specifically, the following steps are used:
S151: correct the first similarity according to the attribute information of the first and second human targets to obtain a second similarity.
Specifically, the auxiliary similarity is acquired according to the attribute information.
The attribute information includes at least one of the integrity, gender, and clothing color of the first and second human targets. It may also include the hair style, height, and other characteristics of the human target; any of these helps correct the first similarity so that the resulting second similarities are better differentiated. Correcting the first similarity with several kinds of attribute information at once makes the second similarities still easier to tell apart; for example, correcting by both clothing color and gender yields second similarities that are more distinct from one another.
If the second human target in one of the second target images is in fact the same human target as the first human target, its second similarity will necessarily be higher than the others.
Specifically, when the attribute information of the first and second human targets meets a preset condition, the auxiliary similarity is set to a first value; otherwise it is set to a second value, the first value being larger than the second.
For example, the similarity between the first and second human targets is scored from 0 to 100, where 0 means the targets are entirely dissimilar and 100 means they are identical. The first value may then be set to 100 and the second value to 0.
For example, if the attribute information is the integrity of the human target: when both the first and second human targets are complete, the preset condition is judged to be met and the auxiliary similarity is set to the first value; otherwise it is set to the second value. That is, if either human target appears only as a half-body view, the auxiliary similarity is set to the second value.
Alternatively, if the attribute information is gender or clothing color, the preset condition is judged to be met when the gender of the first human target is the same as that of the second human target, or when the clothing color of the first human target is the same as that of the second human target; it is judged not to be met when their genders or clothing colors differ.
Alternatively, if the attribute information comprises two or three of integrity, gender, and clothing color, then for each attribute whose values agree, the preset condition is judged to be met and the corresponding auxiliary similarity is set to the first value; for each attribute whose values differ, the condition is judged not to be met and the corresponding auxiliary similarity is set to the second value.
For example, if the attribute information includes gender and clothing color, the genders of the two targets are the same, and their clothing colors differ, then the auxiliary similarity corresponding to gender is set to the first value and the auxiliary similarity corresponding to clothing color is set to the second value.
Further, the auxiliary similarity and the first similarity are weighted and summed to obtain the second similarity, with the first similarity assigned a larger weight than the auxiliary similarity.
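The correction of S151 might be sketched as follows, assuming (illustratively) first/second auxiliary values of 100/0, equal averaging across attributes, and a 0.7 weight on the first similarity; none of these constants come from the patent:

```python
def auxiliary_similarity(attr_a, attr_b, first_value=100, second_value=0):
    """First value when the attribute matches, second value otherwise."""
    return first_value if attr_a == attr_b else second_value

def second_similarity(first_sim, attrs_a, attrs_b, w_first=0.7):
    """Blend the first similarity with the mean per-attribute auxiliary
    similarity; the first similarity keeps the larger weight."""
    aux = [auxiliary_similarity(attrs_a[k], attrs_b[k])
           for k in attrs_a.keys() & attrs_b.keys()]
    aux_mean = sum(aux) / len(aux)
    return w_first * first_sim + (1 - w_first) * aux_mean

a = {"gender": "F", "clothing_color": "red"}
b = {"gender": "F", "clothing_color": "blue"}  # one attribute differs
s2 = second_similarity(90, a, b)               # 0.7 * 90 + 0.3 * 50 = 78.0
```

When all attributes match, the correction pushes the score up (here, 93.0 for identical attributes), which is how the true duplicate is pulled apart from near-misses.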
In this way, a second similarity is obtained for each of the different first similarities, and the second similarities are more distinguishable from one another.
S152: determine, based on the second similarity, whether the first target image meets the warehousing standard.
Specifically, the second similarities computed between the first target image and the several second target images are sorted by score, and the largest is selected.
The largest second similarity is compared with a preset similarity threshold. If it is below the threshold, the first and second human targets are not the same human target; the first target image meets the warehousing standard and is stored in the database as a new second target image. Otherwise, i.e., when the largest second similarity is greater than or equal to the threshold, the first and second human targets are the same human target; the first target image does not meet the warehousing standard, belongs to a repeated capture, and is discarded, achieving deduplication.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of a storage medium provided in the present application.
The storage medium 20 stores program data 21, and the program data 21, when executed by a processor, implements the image deduplication method as described in fig. 1 to 2.
The program data 21 may be stored in a storage medium 20 in the form of a software product, and includes several instructions for causing a network device (which may be a router, a personal computer, a server, or other network device) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
The storage medium 20 is a medium used in computer equipment to store data. The aforementioned storage medium 20 with a storage function includes: a USB flash drive, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing the code of the program data 21.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a network device provided in the present application.
The network device 30 comprises a processor 32 and a memory 31 connected to each other; the memory 31 stores a computer program, and the processor 32, when executing the computer program, implements the image deduplication method described in fig. 1 to 2.
The network device 30 may be a codec. The processor 32 may also be referred to as a CPU (Central Processing Unit). The processor 32 may be an integrated circuit chip having signal processing capabilities. The processor 32 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of the intelligent monitoring system provided in the present application.
The intelligent monitoring system 40 comprises a network camera 41 and the network device 30 as described above, with the network camera 41 in communication connection with the network device 30. The network camera 41 is used for capturing images of human bodies in a monitored area, and the network device 30 runs a computer program to determine whether a captured image needs to be deduplicated, so as to save storage space.
Different from the prior art, the present application discloses an image deduplication method, a storage medium, a network device, and an intelligent monitoring system. The similarity between a first target image to be stored in a database and a second target image already stored in the database is obtained in order to judge whether the first human body target and the second human body target are the same human body target, and thus whether the first target image needs to be deduplicated. Global features and at least one local feature are extracted from the first human body target in the first target image and from the second human body target in the second target image; similarity matching is performed between the global features and between the local features respectively to obtain at least two candidate similarities, and a first similarity is then generated from the at least two candidate similarities. In this way, even if the human body target is locally affected by factors such as occlusion or a change in human body posture, the candidate similarities obtained from the unaffected local features are not influenced, so the influence of occlusion, posture change, and the like on the first similarity is largely eliminated. Whether the first target image meets the warehousing standard is then determined based on the first similarity: when the first human body target and the second human body target are judged to be the same human body target, the first target image does not meet the warehousing standard, belongs to a repeated snapshot, and is discarded; when they are judged to be different human body targets, the first target image meets the warehousing standard and is stored in the database. Images repeatedly snapped of the same human body target due to factors such as occlusion and human body posture change can thereby be effectively removed.
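The global/local candidate-similarity scheme above can be sketched as follows. The description does not fix a similarity metric or region names, so cosine similarity and the `whole_body`/`upper_body`/`lower_body` keys here are assumptions for illustration; the max-or-weighted-sum fusion follows claim 2.

```python
import math


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def first_similarity(feats1, feats2, use_max=True, weights=None):
    """Fuse candidate similarities from global and local features.

    feats1/feats2: dicts mapping a region name to its feature vector, e.g.
    {'whole_body': [...], 'upper_body': [...], 'lower_body': [...]}.
    Each shared region yields one candidate similarity; the first similarity
    is either their maximum or their weighted sum.
    """
    regions = sorted(set(feats1) & set(feats2))
    candidates = [cosine(feats1[r], feats2[r]) for r in regions]
    if use_max:
        return max(candidates)
    weights = weights or [1.0 / len(candidates)] * len(candidates)
    return sum(w * c for w, c in zip(weights, candidates))
```

Taking the maximum means an occluded region (whose candidate similarity collapses) cannot drag down a strong match found on an unoccluded region, which is exactly the robustness argued for above.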
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application.
The above description is only an example of the present application, and is not intended to limit the scope of the present application, and all equivalent structures or equivalent processes performed by the present application and the contents of the attached drawings, which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (9)

1. An image deduplication method, comprising:
acquiring a first target image to be stored in a database and a second target image stored in the database, wherein the second target image, in which the motion direction of a second human body target is the same as the motion direction of a first human body target in the first target image, is screened out from the database;
extracting global features and at least one local feature of the first human body target in the first target image and of the second human body target in the second target image;
performing similarity matching between the global features and between the local features respectively to obtain at least two candidate similarities;
generating a first similarity according to the at least two candidate similarities; and
determining whether the first target image meets a warehousing standard based on the first similarity;
wherein the step of determining whether the first target image meets the warehousing standard based on the first similarity comprises:
acquiring an auxiliary similarity according to attribute information of the first human body target and the second human body target;
performing weighted summation on the auxiliary similarity and the first similarity to obtain a second similarity, wherein the weight assigned to the first similarity is greater than the weight assigned to the auxiliary similarity; and
determining whether the first target image meets the warehousing standard based on the second similarity;
wherein when the attribute information meets a preset condition, the auxiliary similarity is set to a first numerical value; otherwise, the auxiliary similarity is set to a second numerical value, the first numerical value being greater than the second numerical value.
2. The method according to claim 1, wherein the step of generating the first similarity according to the at least two candidate similarities comprises:
selecting, from the at least two candidate similarities, the candidate similarity with the highest score as the first similarity; or
performing weighted summation on the at least two candidate similarities to obtain the first similarity.
3. The method of claim 1, wherein the global feature is a whole-body feature and the local features include an upper-body feature and a lower-body feature.
4. The method of claim 1, wherein the attribute information includes at least one of integrity, gender, and clothing color of the first and second human targets.
5. The method according to claim 4, wherein the preset condition is met when both the first human body target and the second human body target are intact, or
when the gender of the first human body target is the same as that of the second human body target and/or their clothing colors are the same.
6. The method of claim 1, wherein the step of determining whether the first target image meets the warehousing standard based on the second similarity comprises:
sorting a plurality of second similarities calculated according to the first target image and a plurality of second target images, and selecting the largest second similarity;
comparing the largest second similarity with a preset similarity threshold;
if the largest second similarity is smaller than the similarity threshold, the first target image meets the warehousing standard;
otherwise, the first target image does not meet the warehousing standard.
7. A storage medium having program data stored thereon, wherein the program data, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
8. A network device, comprising a processor and a memory connected to each other, wherein the memory stores a computer program which, when executed by the processor, implements the steps of the method according to any one of claims 1 to 6.
9. An intelligent monitoring system, comprising a network camera and the network device according to claim 8, wherein the network camera is in communication connection with the network device, and the network camera is used for capturing a human body in a monitored area.
CN201910878172.1A 2019-09-17 2019-09-17 Image duplicate removal method, storage medium, network equipment and intelligent monitoring system Active CN110781733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910878172.1A CN110781733B (en) 2019-09-17 2019-09-17 Image duplicate removal method, storage medium, network equipment and intelligent monitoring system


Publications (2)

Publication Number Publication Date
CN110781733A CN110781733A (en) 2020-02-11
CN110781733B true CN110781733B (en) 2022-12-06

Family

ID=69383543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910878172.1A Active CN110781733B (en) 2019-09-17 2019-09-17 Image duplicate removal method, storage medium, network equipment and intelligent monitoring system

Country Status (1)

Country Link
CN (1) CN110781733B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898467B (en) * 2020-07-08 2023-02-28 浙江大华技术股份有限公司 Attribute identification method and device, storage medium and electronic device
CN113408496B (en) * 2021-07-30 2023-06-16 浙江大华技术股份有限公司 Image determining method and device, storage medium and electronic equipment
CN114253281A (en) * 2021-11-09 2022-03-29 深圳鹏行智能研究有限公司 Four-legged robot motion control method, related device and storage medium
CN114298992A (en) * 2021-12-21 2022-04-08 北京百度网讯科技有限公司 Video frame duplication removing method and device, electronic equipment and storage medium
CN115016344B (en) * 2022-06-07 2023-09-19 智迪机器人技术(盐城)有限公司 Automatic automobile part installation control system and method based on robot
CN116935305A (en) * 2023-06-20 2023-10-24 联城科技(河北)股份有限公司 Intelligent security monitoring method, system, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103430214A (en) * 2011-03-28 2013-12-04 日本电气株式会社 Person tracking device, person tracking method, and non-temporary computer-readable medium storing person tracking program
CN106250870A (en) * 2016-08-16 2016-12-21 电子科技大学 A kind of pedestrian's recognition methods again combining local and overall situation similarity measurement study
CN107330359A (en) * 2017-05-23 2017-11-07 深圳市深网视界科技有限公司 A kind of method and apparatus of face contrast
CN109800664A (en) * 2018-12-28 2019-05-24 上海依图网络科技有限公司 A kind of method and device of determining passerby track
CN109902550A (en) * 2018-11-08 2019-06-18 阿里巴巴集团控股有限公司 The recognition methods of pedestrian's attribute and device


Also Published As

Publication number Publication date
CN110781733A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
CN110781733B (en) Image duplicate removal method, storage medium, network equipment and intelligent monitoring system
Chan et al. Privacy preserving crowd monitoring: Counting people without people models or tracking
Avgerinakis et al. Recognition of activities of daily living for smart home environments
US11527000B2 (en) System and method for re-identifying target object based on location information of CCTV and movement information of object
CN110427905A (en) Pedestrian tracting method, device and terminal
Kim et al. Human action recognition using ordinal measure of accumulated motion
Seo et al. Effective and efficient human action recognition using dynamic frame skipping and trajectory rejection
CN111091025B (en) Image processing method, device and equipment
Avgerinakis et al. Activity detection and recognition of daily living events
Bedagkar-Gala et al. Gait-assisted person re-identification in wide area surveillance
CN111723773A (en) Remnant detection method, device, electronic equipment and readable storage medium
CN109902550A (en) The recognition methods of pedestrian's attribute and device
CN113657434A (en) Human face and human body association method and system and computer readable storage medium
Aoun et al. Graph modeling based video event detection
Ma et al. Motion texture: A new motion based video representation
Ehsan et al. Violence detection in indoor surveillance cameras using motion trajectory and differential histogram of optical flow
Iazzi et al. Fall detection based on posture analysis and support vector machine
CN114783037A (en) Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium
Vashistha et al. An architecture to identify violence in video surveillance system using ViF and LBP
JP7195892B2 (en) Coordinate transformation matrix estimation method and computer program
Miah et al. An empirical analysis of visual features for multiple object tracking in urban scenes
Solichin et al. Movement direction estimation on video using optical flow analysis on multiple frames
Ribnick et al. Detection of thrown objects in indoor and outdoor scenes
Biswas et al. Short local trajectory based moving anomaly detection
Yin et al. Global anomaly crowd behavior detection using crowd behavior feature vector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant