CN111539254A - Target detection method, target detection device, electronic equipment and computer-readable storage medium - Google Patents

Target detection method, target detection device, electronic equipment and computer-readable storage medium Download PDF

Info

Publication number
CN111539254A
CN111539254A CN202010225203.6A
Authority
CN
China
Prior art keywords
target
monitoring
image
monitoring image
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010225203.6A
Other languages
Chinese (zh)
Inventor
孟怀鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202010225203.6A priority Critical patent/CN111539254A/en
Publication of CN111539254A publication Critical patent/CN111539254A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a target detection method, a target detection device, an electronic device, and a computer-readable storage medium. The target detection method includes the following steps: acquiring a first monitoring image; performing target analysis on the first monitoring image to determine the positional relationship between a monitoring target and its associated target; and determining the association result of the monitoring target and its associated target based on the positional relationship. With this scheme, association detection can be performed on the monitoring target.

Description

Target detection method, target detection device, electronic equipment and computer-readable storage medium
Technical Field
The present application relates to the field of security monitoring technologies, and in particular to a target detection method, a target detection device, an electronic device, and a computer-readable storage medium.
Background
With population growth and rapid economic development, densely populated places such as large commercial centers in big cities, subways, railway stations, and airports have become increasingly crowded. These places are equipped with cameras for security monitoring, and the monitoring systems provide great help in responding rapidly to emergencies and in tracing back images after a problem occurs.
However, existing monitoring systems make no advance judgment on the problem that vulnerable groups, such as infants, disabled people, and people with senile dementia, are prone to losing contact with family or relatives in crowded public places. Only after a loss event occurs and the party concerned raises an alarm do public security officers or security staff at the place of loss search for the lost person through video playback.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a target detection method, a target detection device, an electronic device, and a computer-readable storage medium that can perform association detection on a monitored target.
In order to solve the above problem, a first aspect of the present application provides a target detection method, including: acquiring a first monitoring image; performing target analysis on the first monitoring image to determine the position relation between a monitoring target and a related target thereof; and determining the association result of the monitoring target and the associated target thereof based on the position relation.
Therefore, by acquiring the first monitoring image and performing target analysis on it, the positional relationship between the monitoring target and its associated target can be determined, and the association result of the two can be determined from that positional relationship. This realizes association detection for the monitoring target, enables early warning of a loss-of-association event through the association result, and allows a faster response to such an event.
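The three-step flow just described (acquire a monitoring image, analyze the positional relationship, decide the association result) can be sketched as a minimal pipeline. This is an illustrative outline only: the function names and the presence-based placeholder analysis are assumptions, not identifiers or logic from the patent.

```python
def target_analysis(image, monitor_id, associated_ids):
    """Step 2 placeholder: report, per associated target, whether the
    preset positional relationship holds in this image. A real system
    would run detection and feature matching; this stub checks presence."""
    present = set(image["targets"])
    return {aid: aid in present for aid in associated_ids}

def association_result(relation):
    """Step 3: preliminarily out of association if at least one
    associated target lacks the preset positional relationship."""
    return "associated" if all(relation.values()) else "possibly lost"

image = {"targets": ["parent", "child"]}        # step 1: acquired first monitoring image
relation = target_analysis(image, "parent", ["child"])
print(association_result(relation))             # prints "associated"
```

The decision itself is deliberately simple; the substance of the method lies in how the positional relationship is established, which later sections refine.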
Wherein, the target analysis of the first monitoring image to determine the position relationship between the monitoring target and the associated target thereof comprises: analyzing the first monitoring image to obtain characteristic information of a plurality of first peripheral targets in the first monitoring image, wherein the first peripheral targets and the monitoring target have a preset position relation; judging whether the feature information of the plurality of first peripheral targets contains the feature information of the associated target; if yes, determining that a preset position relation exists between the monitoring target and the associated target; the determining the association result of the monitoring target and the associated target based on the position relationship comprises: and if the preset position relation does not exist between the monitoring target and at least one associated target, preliminarily determining that the monitoring target is not associated with the associated target.
Therefore, the feature information of the plurality of first surrounding targets having the preset positional relationship with the monitoring target in the first monitoring image is obtained through analysis, and whether this feature information includes the feature information of the associated target is judged, so that whether the preset positional relationship exists between the monitoring target and its associated target can be determined; when it is detected that the preset positional relationship does not exist between the monitoring target and at least one associated target, it can be preliminarily determined that the monitoring target has lost association with its associated target.
Wherein, prior to the acquiring the first monitoring image, the method further comprises: acquiring a plurality of frames of second monitoring images acquired at different moments; and carrying out target analysis on each frame of the second monitoring image to determine a related target of the monitoring target.
Therefore, before the first monitoring image is obtained, a plurality of frames of second monitoring images are acquired at different moments, and target analysis is performed on each frame of second monitoring image, so that the associated target of the monitoring target can be determined.
The target analysis is performed on each frame of the second monitoring image, and the determination of the associated target of the monitoring target includes: analyzing each frame of the second monitoring image to obtain characteristic information of a second surrounding target in each frame of the second monitoring image, wherein the second surrounding target and the monitoring target have a preset position relation; comparing the characteristic information of the second surrounding target among a plurality of frames of the second monitoring image to obtain a comparison result; and determining the second surrounding target existing in the second monitoring image in a continuous preset number of frames as a related target of the monitoring target based on the comparison result.
Therefore, by analyzing each frame of the second monitoring image, the feature information of the second surrounding target having a preset position relationship with the monitoring target in each frame of the second monitoring image can be obtained, the feature information of the second surrounding target among a plurality of frames of the second monitoring image is compared, the second surrounding targets existing in the second monitoring images of a continuous preset number of frames are determined as the associated targets of the monitoring target, and the associated targets of the monitoring target can be accurately determined.
Wherein the determining, based on the comparison result, the second surrounding target existing in the second monitoring image for a preset number of consecutive frames as the associated target of the monitoring target includes: based on the comparison result, finding out the second surrounding target existing in the second monitoring image of continuous preset number frames; judging whether the age of the found second surrounding target meets a preset age range or not; if yes, determining the found second surrounding target as a related target of the monitoring target.
Therefore, by finding the second surrounding targets existing in a continuous preset number of frames of the second monitoring images, the found second surrounding targets can be determined as associated targets of the monitoring target. In addition, the age of a found second surrounding target is further required to fall within a preset age range before it is determined as an associated target, so that loss of association between the monitoring target and second surrounding targets within the preset age range can be monitored, helping prevent accidents involving those targets.
Wherein the preset positional relationship comprises: the distance between the target and the monitoring target is smaller than a preset distance value, and/or the target and the monitoring target are located in the same monitoring image.
Therefore, by setting the preset positional relationship to "the distance to the monitoring target is smaller than a preset distance value" and/or "located in the same monitoring image as the monitoring target", the associated target of the monitoring target can be accurately judged, whether the monitoring target has lost association with its associated target can be accurately determined, early-warning information can be generated in time, and a quick response to the loss-of-association event can be achieved.
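A minimal sketch of the preset positional relationship just described, assuming 2D image-plane coordinates and an illustrative distance threshold (the patent specifies only "distance smaller than a preset value and/or located in the same monitoring image"):

```python
import math

def has_preset_relation(monitor_pos, target_pos, max_dist, same_image=True):
    """True when the target appears in the same monitoring image as the
    monitoring target and lies within max_dist of it. Coordinates and
    units are illustrative; a None position means the target was not
    detected in the image at all."""
    if not same_image or target_pos is None:
        return False
    return math.dist(monitor_pos, target_pos) < max_dist

print(has_preset_relation((0, 0), (3, 4), max_dist=6.0))  # True (distance is 5.0)
print(has_preset_relation((0, 0), (3, 4), max_dist=4.0))  # False
```

Whether the two conditions are combined with "and" or "or" is left open by the text ("and/or"); the sketch uses "and", which is the stricter reading.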
Wherein, after the step of preliminarily determining that the monitoring target loses association with its associated target, the method further comprises: acquiring a third monitoring image after a preset time interval; performing target analysis on the third monitoring image to determine the position relationship between the monitoring target and the associated target; and finally determining that the monitoring target loses the association with the associated target thereof based on the position relation determined by analyzing the third monitoring image, and/or generating early warning information that the monitoring target loses the association with the associated target thereof.
Therefore, after it is preliminarily determined that the monitoring target has lost association with its associated target, a third monitoring image can be acquired after a preset time interval and target analysis performed on it, so that the positional relationship between the monitoring target and its associated target is determined again. On that basis, it can be finally determined whether the monitoring target has actually lost association with its associated target, and/or early-warning information indicating the loss of association can be generated. This avoids false early-warning information caused by the monitoring target and its associated target temporarily losing the preset positional relationship in the first monitoring image, and, because the early-warning information is generated only after the loss of association is finally confirmed, it also reduces the space occupied on the storage device.
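The deferred re-check described above, which suppresses false warnings caused by a temporary separation, can be sketched as a simple decision function (the names and return values are illustrative, not from the patent):

```python
def confirm_loss(relation_in_first_image, relation_in_third_image):
    """Debounce the loss-of-association decision: a warning is emitted
    only when the preset positional relationship is absent both in the
    first monitoring image and in the third monitoring image taken
    after the preset time interval. Arguments are booleans:
    True = relation present in that image."""
    if relation_in_first_image:
        return "associated"
    if relation_in_third_image:
        return "temporary separation"   # false alarm suppressed, nothing stored
    return "lost: emit warning"         # final determination, warning generated

print(confirm_loss(False, True))   # prints "temporary separation"
```

Generating the warning only on the final branch is what keeps transient occlusions or brief separations from filling the storage device with spurious warning records.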
In order to solve the above problem, a second aspect of the present application provides an object detection apparatus comprising: the image acquisition module is used for acquiring a first monitoring image; the analysis module is used for carrying out target analysis on the first monitoring image so as to determine the position relation between the monitoring target and the related target thereof; and the determining module is used for determining the association result of the monitoring target and the associated target thereof based on the position relation.
In order to solve the above problem, a third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the object detection method in the first aspect.
In order to solve the above-mentioned problems, a fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the object detection method in the first aspect described above.
According to the above scheme, after the first monitoring image is obtained, the positional relationship between the monitoring target and its associated target can be determined by performing target analysis on the first monitoring image, and the association result of the two can be determined from the obtained positional relationship. Association detection for the monitoring target is thereby realized, and early warning of a loss-of-association event can be issued based on the association result, so that a quick response can be made when such an event occurs and the associated target can be recovered more quickly.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a target detection method of the present application;
FIG. 2 is a flowchart illustrating an embodiment of step S12 in FIG. 1;
FIG. 3 is a schematic flow chart diagram illustrating another embodiment of a target detection method of the present application;
FIG. 4 is a flowchart illustrating an embodiment of step S32 in FIG. 3;
FIG. 5 is a flowchart illustrating an embodiment of step S323 in FIG. 4;
FIG. 6 is a schematic flow chart diagram illustrating a further embodiment of a target detection method of the present application;
FIG. 7 is a block diagram of an embodiment of an object detection device according to the present application;
FIG. 8 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 9 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a target detection method according to the present application. Specifically, the method may include the steps of:
step S11: a first monitoring image is acquired.
In one implementation scenario, in order to monitor the monitoring target and its associated target, when the monitoring target enters a monitoring area, the camera is controlled to focus on the monitoring target entering the camera's shooting picture, so that the head or face of the monitoring target is located at the center of the picture, which makes it convenient to obtain a complete picture of the monitoring target.
In one implementation scenario, in order to improve the efficiency of acquiring images of the monitoring target and its associated target, the first monitoring image may be not a single image but a group of images, where the number of images in the group may be plural, for example 2, 3, or 4, without limitation.
Step S12: and carrying out target analysis on the first monitoring image to determine the position relation between the monitoring target and the related target thereof.
In one implementation scenario, after the first monitoring image is acquired, target analysis can be performed on it. For example, the monitoring target is extracted from the first monitoring image and its facial features are obtained; the similarity between the obtained facial features and preset facial information of the monitoring target is then judged through a convolutional neural network model. If the similarity is larger than a preset threshold, the obtained facial features can be determined to be correct, and the positional relationship between the monitoring target and its associated target can then be analyzed to determine whether a preset positional relationship exists between them.
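The threshold comparison in this scenario can be sketched as follows. The patent only requires that a similarity exceed a preset threshold; the use of cosine similarity over feature vectors and the 0.8 threshold are assumptions for illustration, and in practice the embeddings would come from the convolutional neural network model mentioned above.

```python
def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (assumed non-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def is_monitor_target(face_embedding, enrolled_embedding, threshold=0.8):
    """Accept the extracted facial features as the monitoring target's
    when similarity exceeds the preset threshold; both the metric and
    the threshold value are illustrative choices."""
    return cosine_similarity(face_embedding, enrolled_embedding) > threshold

print(is_monitor_target([1.0, 0.0, 0.0], [0.9, 0.1, 0.0]))  # True
```

Any other similarity measure (for example Euclidean distance with an inverted threshold) would fit the same step equally well.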
Step S13: and determining the association result of the monitoring target and the associated target thereof based on the position relation.
It can be understood that, according to the positional relationship between the monitoring target and its associated target, it can be determined whether the two have lost association, that is, the association result can be determined. When the association result indicates that the monitoring target may have lost association with its associated target, early warning can be issued for the loss event in order to prevent serious consequences, and a faster response to the loss event can then be made.
In one implementation scenario, the relevant person may be any person that can help the associated target of the monitoring target to return to the monitoring target, such as the monitoring target itself, the family of the associated target, the staff or security personnel in the public place, a nearby police, and so on.
In one implementation scenario, the early-warning information may be information that draws the attention of relevant persons on site, such as an alarm or a broadcast, or information that relevant persons can receive through a smart device such as a mobile phone or computer, for example by short message, telephone call, or mail.
In one implementation scenario, before the monitoring target and the related target enter the monitoring area or the public place, the identity information and the contact manner of the monitoring target and the related target can be pre-entered, so as to facilitate subsequent monitoring and timely contact the monitoring target and the related target when a loss of related event occurs.
According to the above scheme, after the first monitoring image is obtained, the positional relationship between the monitoring target and its associated target can be determined by performing target analysis on the first monitoring image, and the association result can be determined from the obtained positional relationship, thereby realizing association detection for the monitoring target. When the association result is that the monitoring target has lost association with its associated target, early warning of the loss event can further be issued, enabling a quicker response and nipping the problem in the bud.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an embodiment of step S12 in fig. 1. In this embodiment, the step S12 may specifically include the following steps:
step S121: analyzing the first monitoring image to obtain characteristic information of a plurality of first peripheral targets in the first monitoring image, wherein the first peripheral targets and the monitoring targets have preset position relations.
Step S122: and judging whether the characteristic information of the plurality of first peripheral targets comprises the characteristic information of the associated target. If yes, step S123 is executed, and if no, step S124 is executed.
Step S123: and determining that a preset position relation exists between the monitoring target and the associated target.
Step S124: and determining that no preset position relation exists between the monitoring target and the associated target.
It can be understood that, after the monitoring target is determined from the first monitoring image, the first monitoring image may be further analyzed to obtain feature information of all first peripheral targets in the first monitoring image, where the first peripheral targets in the present application are targets having a preset position relationship with the monitoring target, and therefore, by determining whether the feature information of all first peripheral targets includes feature information of an associated target, it may be determined whether a preset position relationship exists between the monitoring target and the associated target. When the feature information of all the first peripheral targets comprises the feature information of the associated targets, the fact that a preset position relation exists between the monitoring target and the associated targets can be determined; and when the feature information of all the first peripheral targets does not include the feature information of the associated target, it can be determined that no preset position relationship exists between the monitoring target and the associated target.
In this case, the step S13 may specifically include: and if the preset position relation does not exist between the monitoring target and at least one associated target, preliminarily determining that the monitoring target is not associated with the associated target. In the application, when a preset position relationship exists between the monitoring target and the associated target thereof, it indicates that the monitoring target and the associated target thereof do not lose the association, and when the preset position relationship does not exist between the monitoring target and the associated target thereof, it indicates that the monitoring target and the associated target thereof risk losing the association.
Different from the foregoing embodiment, the feature information of all first surrounding targets having a preset position relationship with the monitoring target in the first monitoring image is obtained through analysis, and it is determined whether the feature information of all first surrounding targets includes the feature information of the associated target, so that it can be determined whether the preset position relationship exists between the monitoring target and the associated target thereof, and when it is detected that the preset position relationship does not exist between the monitoring target and at least one associated target, it can be preliminarily determined that the monitoring target loses the association with the associated target thereof.
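Steps S121 to S124, together with the preliminary determination of step S13, can be sketched as a membership test over the surrounding targets' feature information. The exact-equality comparator below is a placeholder; a real system would compare feature embeddings against a similarity threshold.

```python
def relation_exists(surrounding_features, associated_feature, match):
    """Steps S121-S124: the preset positional relationship exists iff
    any first surrounding target's feature information matches the
    associated target's. `match` is the feature comparator."""
    return any(match(f, associated_feature) for f in surrounding_features)

def preliminary_result(surrounding_features, associated_features, match):
    """Step S13: preliminarily lost if at least one associated target
    has no matching first surrounding target."""
    ok = all(relation_exists(surrounding_features, a, match)
             for a in associated_features)
    return "associated" if ok else "preliminarily lost"

surrounding = ["feat_A", "feat_B", "feat_C"]   # first surrounding targets' features
equal = lambda a, b: a == b
print(preliminary_result(surrounding, ["feat_B"], equal))   # prints "associated"
print(preliminary_result(surrounding, ["feat_Z"], equal))   # prints "preliminarily lost"
```

Keeping the comparator as a parameter mirrors the patent's structure: the positional criterion and the feature-matching criterion are independent pieces.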
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a target detection method according to another embodiment of the present application. Specifically, the method may include the steps of:
step S31: and acquiring a plurality of frames of second monitoring images acquired at different moments.
Step S32: and carrying out target analysis on each frame of second monitoring image to determine a related target of the monitoring target.
In one implementation scenario, in order to determine the associated target of the monitoring target, images of the monitoring target may be collected at different times to obtain a plurality of frames of second monitoring images. Since each frame of second monitoring image is formed at a different time, performing target analysis on each frame reveals whether it reflects an association between the monitoring target and a certain target, and the associated target of the monitoring target can thereby be determined.
Step S33: a first monitoring image is acquired.
Step S34: and carrying out target analysis on the first monitoring image to determine the position relation between the monitoring target and the related target thereof.
Step S35: and determining the association result of the monitoring target and the associated target thereof based on the position relation.
In this embodiment, steps S33-S35 are substantially similar to steps S11-S13 of the above embodiments of the present application, and are not repeated herein.
Different from the foregoing embodiment, before the first monitoring image is obtained, a plurality of frames of second monitoring images are obtained by collecting at different times, and target analysis is performed on each frame of second monitoring image, so that the associated target of the monitoring target can be determined, and the identity information of the monitoring target and the associated target thereof does not need to be collected in advance, so that the implementation of the target detection method of the present application is very convenient.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of step S32 in fig. 3. In this embodiment, the step S32 may specifically include the following steps:
step S321: analyzing each frame of the second monitoring image to obtain the characteristic information of a second surrounding target in each frame of the second monitoring image, wherein the second surrounding target and the monitoring target have a preset position relation.
Step S322: comparing the characteristic information of the second surrounding target among a plurality of frames of the second monitoring image to obtain a comparison result.
Step S323: and determining the second surrounding target existing in the second monitoring image in a continuous preset number of frames as a related target of the monitoring target based on the comparison result.
It can be understood that each frame of the second monitoring image includes the feature information of the monitoring target as well as a second surrounding target having the preset positional relationship with the monitoring target, and the feature information of the second surrounding target in each frame can be obtained by analyzing that frame. By comparing the feature information of the second surrounding targets between the frames of the second monitoring images, the second surrounding target or targets existing in a continuous preset number of frames can be found and then determined as associated targets of the monitoring target.
Different from the foregoing embodiment, by analyzing each frame of the second monitoring image, the feature information of the second surrounding target having the preset position relationship with the monitoring target in each frame of the second monitoring image can be obtained, the feature information of the second surrounding target between a plurality of frames of the second monitoring image is compared, the second surrounding targets existing in the second monitoring images of the consecutive preset number of frames are determined as the associated targets of the monitoring target, and the associated targets of the monitoring target can be determined more accurately.
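The consecutive-frame criterion of steps S321 to S323 can be sketched with a streak counter over per-frame sets of nearby target IDs. Representing each second monitoring image by the set of target IDs found near the monitoring target is an illustrative simplification of the feature-comparison step.

```python
def persistent_targets(frames, min_consecutive):
    """A second surrounding target becomes an associated target when it
    appears in at least `min_consecutive` consecutive frames. `frames`
    is a list of sets of target IDs detected near the monitoring target
    in each second monitoring image, in chronological order."""
    streak = {}
    result = set()
    for frame in frames:
        for tid in frame:
            streak[tid] = streak.get(tid, 0) + 1
            if streak[tid] >= min_consecutive:
                result.add(tid)
        # a target missing from this frame loses its streak
        for tid in list(streak):
            if tid not in frame:
                streak[tid] = 0
    return result

frames = [{"a", "b"}, {"a"}, {"a", "c"}]
print(persistent_targets(frames, 3))   # {'a'}: only 'a' spans 3 consecutive frames
```

Resetting the streak on a missed frame is what makes the criterion "continuous" rather than merely "frequent"; a more forgiving variant could tolerate short gaps.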
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of step S323 in fig. 4. In this embodiment, the step S323 may specifically include the following steps:
step S3231: and finding out the second surrounding target existing in the second monitoring image in a continuous preset number of frames based on the comparison result.
Step S3232: judging whether the age of the found second surrounding target meets a preset age range. If so, step S3233 is executed; otherwise, the age of the found second surrounding target does not meet the requirement, and it is not necessary to monitor whether it is in a loss-of-association state.
Step S3233: and determining the found second surrounding target as a correlation target of the monitoring target.
It can be understood that, after comparing the feature information of the second surrounding targets among the plurality of frames of second monitoring images, some second surrounding targets are found to exist in a continuous preset number of frames, and these may be associated targets of the monitoring target. In this embodiment, after such second surrounding targets are found, it can be judged whether their ages meet the preset age range; if so, the age requirement is satisfied, it is necessary to monitor whether the target is in a loss-of-association state, and the found second surrounding target can be determined as an associated target of the monitoring target. The preset age range can be set to, for example, under 12 years old and/or over 60 years old, since people within this range are more prone to getting lost and may need help from others to return to their companions.
Different from the foregoing embodiment, by finding out the second surrounding targets that exist in a preset number of consecutive frames of second monitoring images, the found second surrounding targets can be determined as associated targets of the monitoring target. In addition, by further requiring that a found second surrounding target be determined as an associated target only when its age meets the preset age range, it becomes possible to monitor whether the second surrounding targets meeting the preset age range are in a loss-of-association state, so that accidents involving such second surrounding targets can be avoided.
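The consecutive-frame filtering and age check of steps S3231-S3233 can be sketched as follows. This is a minimal illustration only: the use of string IDs in place of feature information, the function and parameter names, and the default window size and age bounds are assumptions, not part of the patent:

```python
from typing import Dict, List, Set

def associated_targets(frames: List[Set[str]],
                       ages: Dict[str, int],
                       consecutive: int = 3,
                       min_age: int = 12,
                       max_age: int = 60) -> Set[str]:
    """Return the surrounding targets present in `consecutive` successive
    frames whose estimated age is under `min_age` or over `max_age`."""
    result = set()
    for i in range(len(frames) - consecutive + 1):
        # Targets present in every frame of this consecutive window.
        window = set.intersection(*frames[i:i + consecutive])
        for tid in window:
            age = ages.get(tid)
            if age is not None and (age < min_age or age > max_age):
                result.add(tid)
    return result
```

For example, a surrounding target detected in three consecutive frames with an estimated age of 8 would be kept, while one seen in only two frames, or one aged 30, would be dropped.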
It is to be understood that, in the present application, cameras may be disposed at various positions or areas within the monitoring environment, so that the first monitoring image, the second monitoring image and the third monitoring image may be obtained by shooting with the same camera or by shooting with different cameras. The monitoring environment of the present application may be a crowded place, such as a large mall, bus stop, train station, airport, venue for a major event, and the like.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a target detection method according to another embodiment of the present application. Specifically, the method may include the steps of:
step S61: a first monitoring image is acquired.
Step S62: analyze the first monitoring image to obtain feature information of a plurality of first peripheral targets in the first monitoring image, wherein the first peripheral targets have the preset position relationship with the monitoring target; judge whether the feature information of the plurality of first peripheral targets contains the feature information of the associated target; if not, determine that the preset position relationship does not exist between the monitoring target and the associated target.
Step S63: if the preset position relationship does not exist between the monitoring target and at least one of its associated targets, preliminarily determine that the monitoring target has lost association with that associated target.
In this embodiment, steps S61-S63 are substantially similar to steps S11-S13 of the above embodiments of the present application, and are not repeated herein.
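The membership check of step S62 — judging whether the feature information of the first peripheral targets contains that of the associated target — can be sketched as a feature-similarity comparison. Cosine similarity over face feature vectors and the 0.8 threshold are illustrative assumptions; the patent does not specify a matching method:

```python
import math
from typing import List, Sequence

def cosine_sim(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two face feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def relation_exists(peripheral_feats: List[Sequence[float]],
                    assoc_feat: Sequence[float],
                    threshold: float = 0.8) -> bool:
    """True when any first peripheral target's features match the
    associated target's features at or above the similarity threshold."""
    return any(cosine_sim(f, assoc_feat) >= threshold for f in peripheral_feats)
```

When `relation_exists` returns False, the preset position relationship is deemed absent, which feeds the preliminary determination of step S63.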
Step S64: and acquiring a third monitoring image after the preset time interval.
Step S65: and carrying out target analysis on the third monitoring image to determine the position relation between the monitoring target and the associated target.
Step S66: based on the position relationship determined by analyzing the third monitoring image, finally determine that the monitoring target has lost association with its associated target, and/or generate early warning information indicating that the monitoring target has lost association with its associated target.
In an implementation scenario, in order to confirm whether the preliminary result that the monitoring target has lost association with its associated target is accurate, a third monitoring image containing the monitoring target is acquired again after a preset time interval, and by performing target analysis on the third monitoring image, whether the preset position relationship exists between the monitoring target and the associated target after the preset time interval can be determined again. If the preset position relationship is found to exist again after the preset time interval, the preliminary conclusion that the monitoring target has lost association with the associated target is considered to be wrong, or the preset position relationship has been restored by human intervention, so that a false early warning can be avoided. If the preset position relationship still does not exist after the preset time interval, it can be finally determined that the monitoring target has indeed lost association with the associated target, and early warning information indicating the loss of association can be generated, thereby realizing early warning of the loss-of-association event.
Different from the foregoing embodiment, after it is preliminarily determined that the monitoring target has lost association with its associated target, the third monitoring image is acquired and analyzed after a preset time interval, so that the position relationship between the monitoring target and its associated target can be determined again. In this way, whether the monitoring target and its associated target have really lost association can be finally determined, and/or early warning information indicating the loss of association can be generated. This avoids generating false early warning information when the monitoring target only temporarily loses the preset position relationship with its associated target in the first monitoring image, and since early warning information is generated only after the loss of association is finally determined, the space occupied on the storage device can also be reduced.
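The two-stage confirmation described above — preliminary determination, a preset wait, then a final re-check — can be sketched as follows. `check_relation` is a hypothetical callable standing in for target analysis of a freshly acquired monitoring image; the interval value is an assumption:

```python
import time

def confirm_loss(check_relation, interval_s: float = 5.0) -> bool:
    """Re-check the position relationship after a preset interval and
    only confirm the loss of association if the relationship is still
    absent. `check_relation()` returns True when the preset position
    relationship currently holds."""
    if check_relation():
        return False              # no preliminary loss at all
    time.sleep(interval_s)        # preset time interval before re-checking
    if check_relation():
        return False              # relationship restored: false alarm avoided
    return True                   # finally determined: generate early warning
```

Only the second absence of the relationship triggers the early warning, so a target that merely leaves the frame for a moment produces no alarm.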
In an embodiment, the preset position relationship includes: the distance between a target and the monitoring target is smaller than a preset distance value, and/or the target and the monitoring target are located in the same monitoring image. For example, in the first monitoring image or the third monitoring image, if it is determined that the distance between the associated target and the monitoring target is not smaller than the preset distance value, the monitoring target has lost association with the associated target, and there may be a risk that the two have lost contact; early warning information indicating the loss of association can then be generated, realizing a quick response to the loss-of-contact event. For another example, among a plurality of frames of second monitoring images acquired at different times, if a second surrounding target exists in a preset number of consecutive frames of second monitoring images, then, since the monitoring target also exists in those second monitoring images, the second surrounding target and the monitoring target are located in the same monitoring image, and the second surrounding target can therefore be determined as an associated target of the monitoring target.
According to the above scheme, the preset position relationship is set as: the distance to the monitoring target is smaller than a preset distance value, and/or being located in the same monitoring image as the monitoring target. In this way, the associated targets of the monitoring target can be determined accurately, whether the monitoring target has lost association with an associated target can be determined accurately, and early warning information indicating the loss of association can be generated in time, realizing a quick response to the loss-of-contact event.
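The preset position relationship can be sketched as a simple predicate. Pixel coordinates and the 50-unit default threshold are assumptions for illustration, not values from the patent:

```python
import math

def has_preset_relation(pos_a, pos_b, max_dist: float = 50.0,
                        same_image: bool = True) -> bool:
    """True when two targets satisfy the preset position relationship:
    present in the same monitoring image and closer than a preset
    distance value. Positions are (x, y) pixel coordinates, or None
    when the target is absent from the image."""
    if pos_a is None or pos_b is None or not same_image:
        return False  # at least one target absent from the monitoring image
    return math.dist(pos_a, pos_b) < max_dist
```

A False result here corresponds to the condition that preliminarily triggers the loss-of-association determination.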
In one embodiment, the monitoring target and the associated target are humans or animals, and the feature information of a target is facial feature information. Since the monitoring target and the associated target can be humans or animals, the target detection method can give early warning of loss events between people, or between people and animals. When it is found through the first monitoring image that the distance between the monitoring target and the associated target is not smaller than the preset distance value, or that the two do not exist in the same monitoring image, a loss event between the monitoring target and the associated target is considered to have occurred, and early warning of the loss event is performed by generating early warning information indicating that the monitoring target has lost association with the associated target. It can be understood that, when the position relationship between the monitoring target and the associated target is determined through the first monitoring image, each target can be judged accurately by mainly analyzing the facial feature information in the monitoring image, thereby avoiding misjudgment of targets.
Of course, in other embodiments, the monitoring target may be a person and the associated target may be an article. In this case, the target detection method of the present application can give early warning of loss events between a person and an article: when it is found through the first monitoring image that the distance between the monitoring target and the associated target is not smaller than the preset distance value, or that the two do not exist in the same monitoring image, a loss event is considered to have occurred, and early warning information indicating that the monitoring target has lost association with the associated target is generated, so that the monitoring target can retrieve the associated target.
According to the above scheme, early warning of loss events involving humans or animals can be performed by generating early warning information indicating that the human or animal has lost association, which facilitates the search. Since the feature information of a target is facial feature information, each target can be judged accurately by analyzing the facial feature information in the monitoring image, thereby avoiding misjudgment of targets.
In a specific implementation scenario, when each person enters the monitoring environment, an image is captured by a camera, the faces of the people around that person are extracted from the image, and a facial feature table of the surrounding people is formed for each person. This process is repeated after a certain time interval, so that if two or more people are traveling together, each of them appears in the others' facial feature tables. The whole monitoring area can then be analyzed in real time by a background monitoring system: after a monitoring image is acquired, target analysis is performed on all people traveling together in the monitoring image, and the results are compared with the previously formed facial feature tables. If it is found in a certain monitoring image that a person recorded in a group's facial feature table no longer appears, it can be judged that someone in the group has become lost, and the background can send an alarm to report the loss event. Of course, if there is no difference between the feature tables before and after the comparison, no one has been lost; in that case, after a preset time interval, the background system continues to acquire monitoring images, extracts the facial feature table of the people around each person, and compares it with the previous facial feature table, and monitoring continues in this reciprocating manner. The preset time interval is set to be short, so that when a loss is found in a monitoring image acquired at a certain moment, the interval between the time the person became lost and the time the loss is detected is minimized, improving the timeliness of handling the loss event.
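The reciprocating comparison of facial feature tables described in this scenario can be sketched as follows, with face IDs standing in for facial feature vectors (the data shapes and names are assumptions for illustration):

```python
from typing import Dict, Set

def detect_missing(tables: Dict[str, Set[str]],
                   current_groups: Dict[str, Set[str]]) -> Dict[str, Set[str]]:
    """Compare each person's previously recorded companion table with
    the companions seen around them now, and report the companions that
    have disappeared. An empty result means no one has been lost."""
    missing = {}
    for person, companions in tables.items():
        seen = current_groups.get(person, set())
        lost = companions - seen
        if lost:
            missing[person] = lost   # trigger a background alarm for these
    return missing
```

In use, the background system would rebuild `current_groups` from each newly acquired monitoring image and call `detect_missing` against the stored tables at every preset interval.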
Referring to fig. 7, fig. 7 is a schematic frame diagram of an embodiment of an object detection device according to the present application. The object detection device 70 includes: an image acquisition module 71, an analysis module 72 and a determination module 73. The image acquisition module 71 is configured to acquire a first monitoring image; the analysis module 72 is configured to perform target analysis on the first monitoring image to determine the position relationship between the monitoring target and its associated target; and the determination module 73 is configured to determine the association result of the monitoring target and its associated target based on the position relationship.
According to the above scheme, after the image acquisition module 71 acquires the first monitoring image, the analysis module 72 performs target analysis on the first monitoring image to determine the position relationship between the monitoring target and its associated target, and the determination module 73 then determines the association result of the monitoring target and its associated target according to the obtained position relationship. Early warning of a loss-of-contact event can thus be realized through the association result, enabling a faster response to the event.
In some embodiments, the analysis module 72 includes a feature information analysis sub-module and a positional relationship analysis sub-module; the characteristic information analysis submodule is used for analyzing the first monitoring image to obtain characteristic information of a plurality of first peripheral targets in the first monitoring image, wherein the first peripheral targets and the monitoring targets have preset position relations; the position relation analysis submodule is used for judging whether the characteristic information of the first peripheral targets contains the characteristic information of the associated targets, and if so, determining that a preset position relation exists between the monitoring target and the associated targets. At this time, the determining module 73 is specifically configured to: and if the preset position relation does not exist between the monitoring target and at least one associated target, preliminarily determining that the monitoring target is not associated with the associated target.
In some embodiments, before acquiring the first monitoring image, the image acquiring module 71 is further configured to acquire a plurality of frames of second monitoring images acquired at different times; the analysis module 72 is configured to perform target analysis on each frame of the second monitoring image, and determine a related target of the monitoring target.
In some embodiments, the analysis module 72 includes a feature information analysis submodule and an associated target analysis submodule; the feature information analysis submodule is configured to analyze each frame of the second monitoring image to obtain feature information of second surrounding targets in each frame of the second monitoring image, wherein the second surrounding targets have the preset position relationship with the monitoring target; and the associated target analysis submodule is configured to compare the feature information of the second surrounding targets among the plurality of frames of second monitoring images to obtain a comparison result, and to determine, based on the comparison result, the second surrounding targets existing in a preset number of consecutive frames of second monitoring images as associated targets of the monitoring target.
In some embodiments, the associated target analysis submodule is specifically configured to compare the feature information of the second surrounding targets among the frames of second monitoring images to obtain a comparison result, find, based on the comparison result, the second surrounding targets existing in a preset number of consecutive frames of second monitoring images, judge whether the age of a found second surrounding target meets a preset age range, and if so, determine the found second surrounding target as an associated target of the monitoring target.
In some embodiments, after the determination module 73 preliminarily determines that the monitoring target has lost association with its associated target, the image acquisition module 71 is further configured to acquire a third monitoring image after a preset time interval; the analysis module 72 is configured to perform target analysis on the third monitoring image to determine the position relationship between the monitoring target and its associated target; and the determination module 73 is further configured to finally determine, based on the position relationship determined by analyzing the third monitoring image, that the monitoring target has lost association with its associated target. In addition, the object detection device 70 may further include an early warning module 74 configured to generate early warning information indicating that the monitoring target has lost association with its associated target.
Referring to fig. 8, fig. 8 is a schematic frame diagram of an embodiment of an electronic device according to the present application. The electronic device 80 comprises a memory 81 and a processor 82 coupled to each other, the processor 82 being configured to execute program instructions stored in the memory 81 to implement the steps of any of the above-described embodiments of the object detection method. In one particular implementation scenario, the electronic device 80 may include, but is not limited to: a microcomputer, a server, and the electronic device 80 may also include a mobile device such as a notebook computer, a tablet computer, and the like, which is not limited herein.
Specifically, the processor 82 is configured to control itself and the memory 81 to implement the steps in any of the above-described embodiments of the object detection method. The processor 82 may also be referred to as a CPU (Central Processing Unit). The processor 82 may be an integrated circuit chip having signal processing capabilities. The processor 82 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 82 may be implemented jointly by a plurality of integrated circuit chips.
According to the above scheme, after the first monitoring image is acquired, the position relationship between the monitoring target and its associated target can be determined by performing target analysis on the first monitoring image, and the association result of the monitoring target and its associated target can be determined according to the obtained position relationship, thereby realizing association detection for the monitoring target. Early warning of a loss-of-contact event can then be realized through the association result, so that a quick response can be made when such an event occurs and the associated target can be retrieved more quickly.
Referring to fig. 9, fig. 9 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 90 stores program instructions 91 executable by the processor, the program instructions 91 for implementing the steps in any of the above-described object detection method embodiments.
According to the above scheme, after the first monitoring image is acquired, whether the preset position relationship exists between the monitoring target and its associated target can be determined through target analysis of the first monitoring image, so that whether the monitoring target has lost association with the associated target can be judged in advance. After it is determined that the monitoring target has lost association with the associated target, early warning information indicating the loss of association can be generated, so that a quick response can be made when a loss-of-contact event occurs.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A method of object detection, the method comprising:
acquiring a first monitoring image;
performing target analysis on the first monitoring image to determine the position relation between a monitoring target and a related target thereof;
and determining the association result of the monitoring target and the associated target thereof based on the position relation.
2. The target detection method of claim 1, wherein the performing target analysis on the first monitoring image to determine a position relationship between a monitoring target and its associated target comprises:
analyzing the first monitoring image to obtain characteristic information of a plurality of first peripheral targets in the first monitoring image, wherein the first peripheral targets and the monitoring target have a preset position relation;
judging whether the feature information of the plurality of first peripheral targets contains the feature information of the associated target;
if yes, determining that a preset position relation exists between the monitoring target and the associated target;
the determining the association result of the monitoring target and the associated target based on the position relationship comprises:
and if the preset position relation does not exist between the monitoring target and at least one associated target, preliminarily determining that the monitoring target is not associated with the associated target.
3. The object detection method according to claim 1 or 2, characterized in that, before said acquiring the first monitoring image, the method further comprises:
acquiring a plurality of frames of second monitoring images acquired at different moments;
and carrying out target analysis on each frame of the second monitoring image to determine a related target of the monitoring target.
4. The object detection method of claim 3, wherein the performing the object analysis on each frame of the second monitoring image to determine the associated object of the monitoring object comprises:
analyzing each frame of the second monitoring image to obtain characteristic information of a second surrounding target in each frame of the second monitoring image, wherein the second surrounding target and the monitoring target have a preset position relation;
comparing the characteristic information of the second surrounding target among a plurality of frames of the second monitoring image to obtain a comparison result;
and determining the second surrounding target existing in the second monitoring image in a continuous preset number of frames as a related target of the monitoring target based on the comparison result.
5. The target detection method according to claim 4, wherein the determining, based on the comparison result, the second surrounding target present in the second monitored image for a preset number of consecutive frames as the associated target of the monitored target comprises:
based on the comparison result, finding out the second surrounding target existing in the second monitoring image of continuous preset number frames;
judging whether the age of the found second surrounding target meets a preset age range or not;
if yes, determining the found second surrounding target as a related target of the monitoring target.
6. The object detection method according to any one of claims 2 to 5, wherein the preset positional relationship includes: the distance between the target and the monitoring target is smaller than a preset distance value, and/or the target and the monitoring target are located in the same monitoring image.
7. The object detection method of claim 2, wherein after the step of preliminarily determining that the monitored object is out of association with its associated object, the method further comprises:
acquiring a third monitoring image after a preset time interval;
performing target analysis on the third monitoring image to determine the position relationship between the monitoring target and the associated target;
and finally determining that the monitoring target loses the association with the associated target thereof based on the position relation determined by analyzing the third monitoring image, and/or generating early warning information that the monitoring target loses the association with the associated target thereof.
8. An object detection device, comprising:
the image acquisition module is used for acquiring a first monitoring image;
the analysis module is used for carrying out target analysis on the first monitoring image so as to determine the position relation between the monitoring target and the related target thereof;
and the determining module is used for determining the association result of the monitoring target and the associated target thereof based on the position relation.
9. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the object detection method of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the object detection method of any one of claims 1 to 7.
CN202010225203.6A 2020-03-26 2020-03-26 Target detection method, target detection device, electronic equipment and computer-readable storage medium Pending CN111539254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010225203.6A CN111539254A (en) 2020-03-26 2020-03-26 Target detection method, target detection device, electronic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111539254A true CN111539254A (en) 2020-08-14

Family

ID=71978415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010225203.6A Pending CN111539254A (en) 2020-03-26 2020-03-26 Target detection method, target detection device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111539254A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949538A (en) * 2021-03-16 2021-06-11 杭州海康威视数字技术股份有限公司 Target association method and device, electronic equipment and machine-readable storage medium
CN115311608A (en) * 2022-10-11 2022-11-08 之江实验室 Method and device for multi-task multi-target association tracking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239744A (en) * 2017-05-15 2017-10-10 深圳奥比中光科技有限公司 Monitoring method, system and the storage device of human body incidence relation
CN109686049A (en) * 2019-01-03 2019-04-26 深圳壹账通智能科技有限公司 Children fall single based reminding method, device, medium and electronic equipment in public place
CN109887234A (en) * 2019-03-07 2019-06-14 百度在线网络技术(北京)有限公司 A kind of children loss prevention method, apparatus, electronic equipment and storage medium
CN110298293A (en) * 2019-06-25 2019-10-01 重庆紫光华山智安科技有限公司 One kind anti-wander away method, apparatus, readable storage medium storing program for executing and electric terminal


Similar Documents

Publication Publication Date Title
US7542588B2 (en) System and method for assuring high resolution imaging of distinctive characteristics of a moving object
US7535353B2 (en) Surveillance system and surveillance method
CN106203458B (en) Crowd video analysis method and system
CN110933955B (en) Improved generation of alarm events based on detection of objects from camera images
US20040240542A1 (en) Method and apparatus for video frame sequence-based object tracking
US11625936B2 (en) High definition camera and image recognition system for criminal identification
US11615620B2 (en) Systems and methods of enforcing distancing rules
US20040161133A1 (en) System and method for video content analysis-based detection, surveillance and alarm management
WO2017117879A1 (en) Personal identification processing method, apparatus and system
CN110796819B (en) Detection method and system for personnel crossing the platform yellow line
JP2018160219A (en) Moving route prediction device and method for predicting moving route
CN109089160A (en) Video analysis system and method for detecting unlawful food-handling practices in university dining halls
JP2008040781A (en) Subject verification apparatus and subject verification method
US20220301317A1 (en) Method and device for constructing object motion trajectory, and computer storage medium
CN111539254A (en) Target detection method, target detection device, electronic equipment and computer-readable storage medium
CN110717357A (en) Early warning method and device, electronic equipment and storage medium
JP2001266131A (en) Monitoring system
EP3910539A1 (en) Systems and methods of identifying persons-of-interest
US10783365B2 (en) Image processing device and image processing system
CN112102623A (en) Traffic violation identification method and device and intelligent wearable device
CN111079524A (en) Target identity recognition method and system based on operator base station
US11854266B2 (en) Automated surveillance system and method therefor
CN210983509U (en) Surveillance and control system for key groups at entry points
CN112241671B (en) Personnel identity recognition method, device and system
Giuliano et al. Integration of video and radio technologies for social distancing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination