CN112883809B - Target detection method, device, equipment and medium - Google Patents

Target detection method, device, equipment and medium

Info

Publication number
CN112883809B
CN112883809B (application CN202110095548.9A)
Authority
CN
China
Prior art keywords
user
radar
distance
face image
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110095548.9A
Other languages
Chinese (zh)
Other versions
CN112883809A (en)
Inventor
唐晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110095548.9A
Publication of CN112883809A
Application granted
Publication of CN112883809B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/168: Feature extraction; Face representation
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application relates to the field of computer technology, and in particular to security monitoring. It provides a target detection method, device, equipment and medium for solving the problem of low accuracy in detecting suspected infected persons. The method is applied to a target detection system that comprises a radar and a camera, and includes the following steps: first position information of a first user relative to the radar is determined from a real-time image of the first user; second position information, relative to the radar, of a second user within a preset distance range of the first user is detected; by combining the first position information and the second position information, the distance between the first user and the second user can be determined accurately; and if the distance is smaller than a distance threshold, the second user is marked as a suspected user and a face image of the second user is acquired. This addresses the problem of low accuracy in detecting suspected infected persons.

Description

Target detection method, device, equipment and medium
Technical Field
The application relates to the field of computer technology, in particular to security monitoring, and provides a target detection method, device, equipment and medium.
Background
Maintaining a safe distance between users is an important way to ensure public health safety. Specifically, for some infectious diseases, if the distance between a user and an infected person is smaller than the safe distance, there is a high probability that the user becomes infected. In crowded places such as stations or airports, detecting suspected infected persons whose distance from an infected person is smaller than the safe distance, and acquiring information about those suspected infected persons, is of great significance for follow-up tracking.
At present, images are mainly acquired through a camera, and the distance between users is detected from those images. However, owing to interference from various external factors, the captured images may be of poor quality, which reduces the accuracy of detecting suspected infected persons.
Disclosure of Invention
The embodiment of the application provides a target detection method, device, equipment and medium, which are used for solving the problem of low accuracy in detecting suspected infected persons.
In a first aspect, an embodiment of the present invention provides a target detection method, which is applied to a target detection system, where the target detection system includes a radar and a camera, and the method includes:
if a face image of a first user matching a target face image exists in the current acquisition picture, acquiring a real-time image of the first user, wherein the user corresponding to the target face image is an abnormal user marked as having abnormal signs;
determining first position information of the first user relative to the radar according to the real-time image;
if a second user exists in the preset distance range of the first user, detecting second position information of the second user relative to the radar through the radar;
determining the distance between the first user and the second user according to the first position information and the second position information;
if the distance is smaller than the distance threshold, marking the second user as a suspected user and acquiring a face image of the second user through the camera, wherein the suspected user is a user whose signs may be abnormal.
According to the method and device of the embodiments of the application, the first position information of an abnormal user (a user with abnormal signs) relative to the radar is determined from the abnormal user's real-time image, and the radar detects the second position information, relative to the radar, of a second user within the preset distance range of the abnormal user; this reduces interference from external factors, so the distance between the abnormal user and the second user can be obtained accurately, and the accuracy of the distance detection result is improved. In addition, in the embodiment of the application, after the distance between the abnormal user and the second user is determined to be smaller than the distance threshold, the second user is marked as a suspected user whose signs may be abnormal. Suspected users whose distance is smaller than the distance threshold are thus detected accurately, their face images are acquired, and other information about each suspected user can be obtained quickly from the face image, which facilitates continued tracking of the suspected user later on.
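A minimal sketch of the marking logic described above, in Python; the safe-distance threshold, coordinate conventions and function names are assumptions for illustration and are not specified by the patent:

```python
import math

DISTANCE_THRESHOLD_M = 1.0  # hypothetical safe distance; the patent fixes no value

def spherical_to_cartesian(r, elevation, azimuth):
    # Radar-centric spherical coordinates (range, elevation, azimuth) -> (x, y, z).
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)

def mark_suspected(first_pos, second_positions, threshold=DISTANCE_THRESHOLD_M):
    """Return the indices of second users closer to the first (abnormal) user
    than the threshold; positions are (range, elevation, azimuth) triples
    measured relative to the radar, angles in radians."""
    p1 = spherical_to_cartesian(*first_pos)
    suspected = []
    for i, pos in enumerate(second_positions):
        p2 = spherical_to_cartesian(*pos)
        if math.dist(p1, p2) < threshold:
            suspected.append(i)
    return suspected
```

For example, with the first user at range 3 m straight ahead, a second user at 3.5 m along the same bearing is 0.5 m away and would be marked, while one at the same range but 90 degrees away in azimuth would not.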
In one possible embodiment, when there is a face image of the first user matching the target face image in the current acquisition frame, before acquiring the real-time image of the first user, the method further includes:
a target face image is acquired from a recognition device, wherein the target face image is transmitted by the recognition device after determining a user with abnormal signs.
In the embodiment of the application, the target face image of a user with abnormal signs can be obtained directly from the identification device, which makes it convenient to quickly match the target face image against the face images in the current acquisition picture and thereby quickly lock onto the abnormal user in that picture.
In one possible embodiment, determining first location information of the first user relative to the radar from the real-time image includes:
determining a first position point of a face area of the first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking the camera as a reference point;
and obtaining a second position point of the first position point in a second coordinate system according to the coordinate conversion relation between the first coordinate system and the second coordinate system so as to determine first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
According to the embodiment of the application, the first position point of the first user's face area in the first coordinate system is converted, via the coordinate conversion relation between the first coordinate system and the second coordinate system, into a second position point in the second coordinate system established with the radar as reference point. This makes it convenient for the radar subsequently to detect, accurately and on the basis of the second position point, both the first user and any second user within the preset distance range of the first user.
In one possible embodiment, the second location point comprises a first distance, a first elevation angle, and a first azimuth angle between the first user and the radar; detecting, by the radar, second location information of the second user relative to the radar, comprising:
detecting, by the radar, a second distance, a second elevation angle, and a second azimuth angle between the second user and the radar;
determining a distance between the first user and the second user according to the first position information and the second position information comprises:
obtaining an included angle between the second position point and a third position point corresponding to a second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in the second coordinate system;
and determining the distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle and the included angle.
In the embodiment of the application, the distance, elevation angle and azimuth angle between the second user and the radar can be determined accurately by the radar, and from these the distance between the first user and the second user can be calculated accurately. In addition, this method of calculating the distance between the two users is simple and computationally light, so the distance can be obtained quickly.
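A possible closed-form version of this calculation, using only the two ranges, the two elevation angles, and the included (azimuth-difference) angle, is sketched below. It applies the spherical law of cosines; the patent does not state an explicit formula, so this is an illustration under that assumption:

```python
import math

def user_distance(r1, e1, r2, e2, included_angle):
    """Distance between the two users from their radar ranges r1 and r2,
    elevation angles e1 and e2, and the azimuth difference (included angle),
    via the spherical law of cosines (angles in radians)."""
    cos_gamma = (math.cos(e1) * math.cos(e2) * math.cos(included_angle)
                 + math.sin(e1) * math.sin(e2))
    # Guard against tiny negative values caused by floating-point rounding.
    return math.sqrt(max(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * cos_gamma, 0.0))
```

For two users at the same elevation whose azimuths differ by 90 degrees, this reduces to the Pythagorean case: ranges of 3 m and 4 m give a separation of 5 m.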
In one possible embodiment, when the second user includes a plurality of users, if it is determined that the distance is less than a distance threshold, the second user is marked as a suspected user, including:
and if the distance corresponding to at least one second user among the plurality of second users is smaller than the distance threshold, marking each second user in that at least one second user as a suspected user.
In the embodiment of the present application, when the distance corresponding to one or more of the plurality of second users is smaller than the distance threshold, each such second user is marked as a suspected user, so that no suspected user is missed and suspected users are determined accurately.
In one possible embodiment, if the distance is determined to be less than the distance threshold, after marking the second user as a suspected user, the method further includes:
and if the sign of the second user is determined to be abnormal, marking the second user as an abnormal user.
In this embodiment, after the second user is marked as a suspected user, whether the second user is an abnormal user may be further determined according to the second user's signs, so as to follow up on the second user and, in turn, detect other suspected users whose distance from the second user is smaller than the preset distance.
In a second aspect, there is provided an object detection apparatus provided in an object detection system including a radar and a camera, the object detection apparatus comprising:
the acquisition module is used for acquiring a real-time image of a first user if a face image of the first user matched with a target face image exists in a current acquisition picture, wherein the user corresponding to the target face image is an abnormal user marked as sign abnormality;
a determining module, configured to determine, according to the real-time image, first location information of the first user with respect to the radar;
The detection module is used for detecting second position information of a second user relative to the radar through the radar if the second user exists in a preset distance range of the first user;
the determining module is further configured to determine a distance between the first user and the second user according to the first location information and the second location information;
the marking module is used for marking the second user as a suspected user if the distance is determined to be smaller than a distance threshold;
the acquisition module is further configured to acquire a face image of the second user, where the suspected user is a user whose signs may be abnormal.
In a possible embodiment, the acquisition module is further configured to:
when a face image of a first user matched with a target face image exists in a current acquisition picture, acquiring the target face image from an identification device before acquiring a real-time image of the first user, wherein the target face image is sent by the identification device after determining a user with abnormal sign.
In a possible embodiment, the determining module is specifically configured to:
determining a first position point of a face area of the first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking the camera as a reference point;
and obtaining a second position point of the first position point in a second coordinate system according to the coordinate conversion relation between the first coordinate system and the second coordinate system so as to determine first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
In a possible embodiment, the second location point comprises a first distance between the first user and the radar, a first elevation angle and a first azimuth angle, and the detection module is specifically configured to:
a second distance, a second elevation angle, and a second azimuth angle between the second user and the radar are detected by the radar.
In a possible embodiment, the determining module is specifically configured to:
obtaining an included angle between the second position point and a third position point corresponding to a second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in the second coordinate system;
and determining the distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle and the included angle.
In a possible embodiment, the marking module is specifically configured to:
and when a plurality of second users are included, if the distance corresponding to at least one of the plurality of second users is smaller than a distance threshold, marking each such second user as a suspected user.
In a possible embodiment, the marking module is further configured to:
and, after the second user has been marked as a suspected user because the distance was determined to be smaller than the distance threshold, marking the second user as an abnormal user if the second user's signs are abnormal.
In a third aspect, there is provided an object detection apparatus comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of the first aspects by executing the instructions stored by the memory.
In a fourth aspect, a computer readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the method of any of the first aspects.
Drawings
Fig. 1 is an application scenario diagram of a target detection method provided in an embodiment of the present application;
fig. 2 is a flowchart of a target detection method according to an embodiment of the present application;
FIG. 3A is a schematic diagram of a second position point in a second coordinate system according to an embodiment of the present disclosure;
FIG. 3B is a schematic view of projection points of a second location point and a third location point on a horizontal plane according to an embodiment of the present disclosure;
FIG. 3C is a schematic diagram of providing a second location point and a third location point in a second coordinate system according to an embodiment of the present application;
fig. 4 is a block diagram of an object detection device according to an embodiment of the present application;
fig. 5 is a block diagram of an object detection device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions provided by the embodiments of the present application, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
At present, the distance between users is mainly detected using images acquired by a camera. However, when the ambient light is poor or users move quickly, the acquired images are blurred, so the accuracy of detecting suspected infected persons is low; alternatively, because of height differences among people, the shooting angle and similar factors, the acquired images may not contain the suspected infected persons' information at all, which likewise lowers detection accuracy.
In view of this, embodiments of the present application provide a target detection method that may be performed by a target detection system and implemented by a controller in that system. The controller may be a central processing unit (Central Processing Unit, CPU), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits configured to implement embodiments of the present application, for example one or more digital signal processors (Digital Signal Processor, DSP) or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA). A deployment of the target detection system is described below.
Referring to fig. 1, a schematic deployment diagram of a target detection system may be understood as an application scenario diagram of a target detection method provided in an embodiment of the present application, where the application scenario includes a target detection system 110 and an identification device 120, and the target detection system 110 includes a radar 130 and a camera 140.
Radar 130 and camera 140 may be two separate devices: radar 130 may be deployed in any location and camera 140 within a first predetermined distance of radar 130, in which case the controller in object detection system 110 controls radar 130 and camera 140 separately. Alternatively, radar 130 may be coupled with or integrated into camera 140, for example as a camera-radar device, in which case the controller in object detection system 110 controls radar 130 and camera 140 together. Radar 130 may be, for example, a millimeter-wave radar.
Alternatively, in another case, the identification device 120 may be implemented directly by the camera 140.
It should be noted that, in fig. 1, the object detection system 110 includes one radar 130 and one camera 140 as an example, and the number of radars 130 and cameras 140 is not limited in practice. In fig. 1, one identification device 120 is taken as an example, and the number of identification devices 120 is not limited in practice.
The target detection method according to the embodiment of the present application may be applied to various places such as stations, airports, communities, schools, etc., and a plurality of radars 130 and a plurality of cameras 140 may be disposed at each corner of the place to cover the place. The respective functions of the object detection system 110 and the identification device 120 are briefly described below:
the identification device 120 may collect the sign of the user and if it is determined that the sign of the user is abnormal, mark the user as an abnormal user. Further, the recognition device 120 may capture a face image of the user and send the face image of the user as a target face image to the target detection system 110.
The target detection system 110 monitors in real time. If it determines that a face image of a first user matching the target face image exists in the current acquisition frame, it may acquire a real-time image of the first user, determine from that image the first position information of the first user relative to the radar 130, and detect the second position information, relative to the radar 130, of a second user within a preset distance range of the first user; combining the first position information and the second position information, the distance between the first user and the second user is determined accurately. Further, the target detection system 110 marks any second user whose distance is smaller than the distance threshold as a suspected user, acquires the second user's face image, and can obtain from that face image other information about the suspected user, whose signs may be abnormal. The method of determining the distance between the first user and the second user is described below.
Based on the application scenario discussed in fig. 1, the following description will take an example of the target detection system 110 in fig. 1 performing the target detection method. Referring to fig. 2, a flow chart of a target detection method is shown, and the method includes:
s210, if a face image of a first user matched with the target face image exists in the current acquisition picture, acquiring a real-time image of the first user.
The target detection system 110 may pre-store one or more target face images, where the user corresponding to a target face image is an abnormal user marked as having abnormal signs. A pre-stored target face image may be obtained from the recognition device 120; specifically, it may be a face image of the abnormal user that the recognition device 120 transmits after determining that the user's signs are abnormal.
For example, the identification device 120 may collect a user's signs, i.e., indications of the body's condition, such as whether the body temperature is too high or whether the user is coughing. A user's signs are abnormal if they do not meet a normal condition; for example, if the normal condition is that body temperature lies within a preset temperature range and a user's body temperature falls outside that range, that user's signs are abnormal. Further, the recognition device 120 marks the user whose signs are abnormal as an abnormal user, captures a face image of the abnormal user, and transmits that face image to the target detection system 110 as a target face image.
Further, the identifying device 120 may also collect identity information of the abnormal user, and associate the face image of the abnormal user with the identity information of the abnormal user. The identity information is used to characterize the identity of the user, such as name, identification card number, etc.
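The sign-abnormality determination described above can be sketched as follows; the normal body-temperature range and the cough flag are hypothetical values, since the patent names body temperature and coughing only as examples of signs:

```python
NORMAL_TEMP_RANGE_C = (36.0, 37.3)  # hypothetical normal range, degrees Celsius

def is_sign_abnormal(body_temp_c, has_cough=False, temp_range=NORMAL_TEMP_RANGE_C):
    # A user's signs are abnormal if any sign fails its normal condition,
    # e.g. body temperature outside the preset range or coughing present.
    low, high = temp_range
    return has_cough or not (low <= body_temp_c <= high)
```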
Since there may be multiple users in the current acquisition frame of the target detection system 110, which may include an abnormal user and a normal user, in order to accurately lock the abnormal user, in the embodiment of the present application, after the target detection system 110 acquires the target face image from the recognition device 120, the target detection system 110 may match the target face image of the abnormal user with the face image of each user in the current acquisition frame.
If there is a face image of the first user matching the target face image in the current acquisition frame, it indicates that the first user may be an abnormal user, so the target detection system 110 may acquire a real-time image of the first user, where the real-time image may be a frame currently monitored by the camera 140, and the real-time image includes at least a face area of the first user and may further include other areas except the face area of the first user.
The following description refers to an example of how the target detection system 110 determines that a face image of a first user matching the target face image exists in the current acquisition frame:
After receiving the target face image, the target detection system 110 may extract and store its face features. After extracting the face features of the first user's face image in the current acquisition picture, it calculates the similarity between the face features of the target face image and those of the first user's face image, for example as the Euclidean distance or the cosine similarity. If the similarity is greater than or equal to a preset threshold, the target face image is determined to match the face image of the first user; if the similarity is less than the preset threshold, the two are determined not to match.
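The matching step can be illustrated with cosine similarity, one of the two measures named above; the feature vectors and the 0.8 threshold below are assumptions for illustration, not values from the patent:

```python
import math

MATCH_THRESHOLD = 0.8  # hypothetical preset similarity threshold

def cosine_similarity(a, b):
    # Cosine of the angle between two face-feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_match(target_features, candidate_features, threshold=MATCH_THRESHOLD):
    # Declare a match when the similarity reaches the preset threshold.
    return cosine_similarity(target_features, candidate_features) >= threshold
```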
It should be noted that the recognition device 120 may determine one or more abnormal users and collect one or more corresponding target face images. When a plurality of target face images are collected and, correspondingly, the target detection system 110 receives the plurality of target face images sent by the recognition device 120, the target detection system 110 may match each target face image against the face image of each user in the current acquisition frame. If face images of a plurality of users in the current acquisition frame match target face images, there are a plurality of abnormal users in the frame, and the target detection system 110 may acquire the face images of these abnormal users respectively.
S220, determining first position information of the first user relative to the radar according to the real-time image.
After the real-time image of the first user is obtained, radar 130 still does not know the specific location of the first user. Therefore, in embodiments of the present application, object detection system 110 may determine the first location information of the first user relative to radar 130 based on the real-time image.
The manner in which the object detection system 110 determines the first location information is described below:
Since the real-time image is acquired by the object detection system 110, a first position point of the first user in the first coordinate system, which takes the camera 140 as reference point, can be determined; the first position point is then converted into a second position point in the second coordinate system, which takes the radar 130 as reference point, and the second position point serves as the first position information.
The object detection system 110 may determine the first location point of the first user in the first coordinate system in a variety of ways, as described below:
first, the target detection system 110 may determine a first location point of the first user in the first coordinate system according to a preset position of the camera 140 and an area ratio of the rectangular frame where the face area is located to the real-time image, where the preset position refers to a location point set by the camera 140 in advance.
Specifically, after the real-time image of the first user is collected, the target detection system 110 may detect a face area corresponding to the first user in the real-time image, where the face area is a part of the real-time image, and specifically may determine, for example, a position of the face area of the first user in the real-time image through a pre-trained face detection model. In order to facilitate positioning of the face region, the face region may be marked with a rectangular frame, and the position of the face region in the real-time image may be any point of the face region.
Second, the target detection system 110 may further establish a third coordinate system with the real-time image as a reference point, specifically, for example, establish the third coordinate system with a center point of the real-time image as an origin, and the third coordinate system, for example, is a plane rectangular coordinate system, so as to accurately determine a position of the face region of the first user in the real-time image. The object detection system 110 may further convert the position of the face region of the first user in the real-time image into a first position point in the first coordinate system according to the conversion relationship between the third coordinate system and the first coordinate system.
The first location point is, for example, (x1, y1, z1), where x1 is the abscissa, y1 the ordinate and z1 the vertical coordinate of the first position point. The object detection system 110 may pre-store the conversion relationship between the third coordinate system and the first coordinate system. Alternatively, the target detection system 110 may derive this conversion relationship from the pinhole imaging model through the triangle similarity relationship, so as to determine the first position point of the first user's face region in the first coordinate system. The pinhole imaging model is established from the real-time image and the optical center (i.e., the pinhole) of the lens of the camera 140 that collects the real-time image.
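The triangle-similarity imaging relationship mentioned above can be sketched as the standard pinhole back-projection below; the focal lengths and principal point are hypothetical calibration parameters, not values from the patent:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project an image point (u, v) with a known depth into the
    camera-centred first coordinate system using the pinhole model's
    similar-triangle relations: x/z = (u - cx)/fx, y/z = (v - cy)/fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A point at the principal point maps onto the optical axis, i.e., zero lateral offset at any depth.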
Further, after determining the first location point of the first user in the first coordinate system, the target detection system 110 may convert the first location point into a second location point in the second coordinate system so that the radar 130 can conveniently detect the location of the first user. The conversion process is described below:
the target detection system 110 may establish the first coordinate system using the camera 140 as a reference point, for example, a Cartesian coordinate system with the position of the camera 140 as the coordinate origin. The target detection system 110 may establish the second coordinate system with the radar 130 as a reference point, for example, a spherical coordinate system with the position of the radar 130 as the coordinate origin.
In order to later obtain a more accurate first coordinate system and second coordinate system, in this embodiment of the present application, the target detection system 110 may deploy the radar 130 and the camera 140 according to a preset relative position, or adjust their relative positions; for example, the radar 130 is mounted on an adjustable device, and the position of the radar 130 or the camera 140 is adjusted by the adjustable device. Each coordinate in the first coordinate system and the second coordinate system is then calibrated according to the relative position, so as to establish a coordinate conversion relationship between the first coordinate system and the second coordinate system based on the coordinates in the two systems, where the coordinate conversion relationship may be represented by a coordinate conversion matrix.
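A minimal sketch of applying such a coordinate conversion between the camera-centred and radar-centred frames, assuming (as an illustration, not from the patent) that the calibration yields a rigid transform expressed as a rotation matrix R and a translation vector t:

```python
import numpy as np

def camera_to_radar(p_cam, R, t):
    """Map a point from the camera-centred first coordinate system into
    the radar-centred second coordinate system.  R (3x3 rotation) and
    t (3-vector translation) encode the preset relative position of
    radar 130 and camera 140; together they play the role of the
    coordinate conversion matrix mentioned in the text."""
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t, dtype=float)
```

For example, if the radar sits 1 m to the side of the camera with aligned axes, the transform reduces to a pure translation of the point.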
Further, the target detection system 110 determines the location point in the second coordinate system corresponding to the first location point according to the conversion relationship between the first coordinate system and the second coordinate system; for convenience of description, this location point may be referred to as the second location point. The second location point includes a first distance, a first elevation angle, and a first azimuth angle between the first user and radar 130.
The second location point is, for example, (R1, β1, α1), where R1 represents the first distance, i.e., the straight-line distance between the first user and radar 130; β1 represents the first elevation angle, i.e., the angle between the first distance R1 and its projection on the horizontal plane; and α1 represents the first azimuth angle, i.e., the angle between the projection of R1 on the horizontal plane and a certain initial direction in the horizontal plane.
Referring to fig. 3A, a schematic diagram of the second location point in the second coordinate system is provided in an embodiment of the present application. Here O represents radar 130, A1 represents the second location point of the first user, A2 represents the projection point of the second location point A1 on the horizontal plane, R1 represents the first distance, β1 represents the first elevation angle, α1 represents the first azimuth angle, and r1 represents the projection length of the first distance R1 on the horizontal plane, i.e., the distance from projection point A2 to radar 130.
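The decomposition shown in fig. 3A can be sketched as a Cartesian-to-spherical conversion. The axis convention (horizontal x-y plane, vertical z axis, azimuth measured from the +x axis) is an assumption for illustration; the patent does not fix a convention:

```python
import math

def to_radar_spherical(x, y, z):
    """Convert a radar-frame Cartesian point into the spherical form
    (R1, beta1, alpha1) used for the second location point:
      R1     - straight-line distance to the radar (first distance),
      beta1  - elevation angle between R1 and its horizontal projection,
      alpha1 - azimuth of that projection from the reference direction."""
    r1 = math.hypot(x, y)       # projection length on the horizontal plane
    R1 = math.hypot(r1, z)      # first distance
    beta1 = math.atan2(z, r1)   # first elevation angle
    alpha1 = math.atan2(y, x)   # first azimuth angle
    return R1, beta1, alpha1
```

A point lying in the horizontal plane thus has elevation 0, and its first distance equals the projection length r1.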
S230, if the second user exists in the preset distance range of the first user, detecting second position information of the second user relative to the radar 130.
Specifically, the foregoing steps amount to determining the location information of the first user. The target detection system 110 may further determine, according to the first location information of the first user, whether a second user exists within the preset distance range of the first user. The manner in which the target detection system 110 makes this determination is described below:
For example, the target detection system 110 transmits a probe beam toward the first user according to the first location information of the first user, and determines whether a second user exists within the preset distance range of the first user according to the intensity variation of the received probe beam: if the intensity variation of the received probe beam does not satisfy the preset value, it is determined that a second user exists within the preset distance range of the first user.
Further, after determining that a second user exists within the preset distance range, the target detection system 110 may detect second position information of the second user relative to the radar 130. The second position information may be expressed with reference to the second coordinate system: the position point of the second user in the second coordinate system is referred to as the third position point, and the third position point serves as the second position information. The third location point includes a second distance, a second elevation angle, and a second azimuth angle between the second user and radar 130. The manner in which the target detection system 110 detects the second location information of the second user is described by example below:
for example, radar 130 in the target detection system 110 may transmit three different types of beams, each of which may detect a different parameter. For example, radar 130 transmits a first type of beam to detect the second distance between radar 130 and the second user, transmits a second type of beam, such as a narrow elevation beam, to detect the second elevation angle between radar 130 and the second user, and transmits a third type of beam, such as a narrow azimuth beam, to detect the second azimuth angle between radar 130 and the second user.
Or, for example, radar 130 in the target detection system 110 includes a distance tracking component for transmitting a first type of beam to detect the second distance between radar 130 and the second user, an elevation tracking component for transmitting a second type of beam to detect the second elevation angle between radar 130 and the second user, and an azimuth tracking component for transmitting a third type of beam to detect the second azimuth angle between radar 130 and the second user.
It should be noted that, if the target detection system 110 determines that there are a plurality of second users within the preset distance range of the first user, the target detection system 110 may detect the position information of each second user with respect to the radar 130.
S240, determining the distance between the first user and the second user according to the first position information and the second position information.
Specifically, the distance between the first user and the second user may be referred to as the third distance. The target detection system 110 obtains, from the first azimuth angle, the second azimuth angle, and their angular relationship, the included angle between the second location point and the third location point, where the included angle is the angle between the projection of the first distance on the horizontal plane and the projection of the second distance on the horizontal plane. From the first distance and the first elevation angle, it obtains the projection length of the first distance on the horizontal plane and a fourth distance from the second position point to the horizontal plane; from the second distance and the second elevation angle, it obtains the projection length of the second distance on the horizontal plane and a fifth distance from the third position point to the horizontal plane. Using the cosine theorem with the two projection lengths and the included angle between the second position point and the third position point, it calculates a sixth distance between the projection point of the second position point on the horizontal plane and the projection point of the third position point on the horizontal plane. Further, the target detection system 110 calculates the height difference between the second location point and the third location point from the fourth distance and the fifth distance, and then calculates the third distance between the first user and the second user from the height difference and the sixth distance using the Pythagorean theorem.
For example, the formula for calculating the included angle between the second position point and the third position point is as follows:

α3 = α2 − α1

where α1 is the first azimuth angle, α2 is the second azimuth angle, and α3 is the included angle between the second position point and the third position point. The formulas for calculating the third distance are as follows:

H1 = R1 sin β1
H2 = R2 sin β2
r1 = R1 cos β1
r2 = R2 cos β2
r3 = √(r1² + r2² − 2 r1 r2 cos α3)
r4 = H1 − H2
R3 = √(r3² + r4²)

where R1 is the first distance, R2 the second distance, β1 the first elevation angle, β2 the second elevation angle, H1 the fourth distance, H2 the fifth distance, r1 the projection length of the first distance on the horizontal plane, r2 the projection length of the second distance on the horizontal plane, r3 the sixth distance, r4 the height difference between the second position point and the third position point, and R3 the third distance.
Referring to fig. 3B, a schematic view of the projection points of the second position point and the third position point on the horizontal plane is provided in the embodiment of the present application. Here A2 represents the projection point of the second location point A1 on the horizontal plane, B2 represents the projection point of the third position point B1 on the horizontal plane, α1 represents the first azimuth angle, α2 represents the second azimuth angle, α3 represents the included angle between the second location point A1 and the third position point B1, r1 (i.e., OA2) corresponds to the projection length of the first distance R1 on the horizontal plane, r2 (i.e., OB2) corresponds to the projection length of the second distance R2 on the horizontal plane, and r3 is the sixth distance. Since r1 and r2 are two sides of triangle OA2B2 and α3 is the angle between r1 and r2, the third side r3 of triangle OA2B2 can be calculated using the cosine theorem.
Referring to fig. 3C, a schematic diagram of the second position point and the third position point in the second coordinate system is provided in the embodiment of the present application. Here r4 corresponds to the height difference between the second position point and the third position point, r3 corresponds to the sixth distance, and R3 corresponds to the third distance. Since r3 and r4 are the two legs of a right triangle, the hypotenuse R3 of that right triangle can be calculated using the Pythagorean theorem.
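The computation described above (cosine theorem on the horizontal projections, then the Pythagorean theorem with the height difference) can be sketched as follows; the function name is illustrative:

```python
import math

def third_distance(R1, beta1, alpha1, R2, beta2, alpha2):
    """Distance between the first and second users from the two radar
    measurements (distance, elevation, azimuth), following the steps
    in the text."""
    alpha3 = alpha2 - alpha1             # included angle between projections
    r1 = R1 * math.cos(beta1)            # projection lengths on the
    r2 = R2 * math.cos(beta2)            #   horizontal plane
    H1 = R1 * math.sin(beta1)            # fourth distance (height of A1)
    H2 = R2 * math.sin(beta2)            # fifth distance (height of B1)
    # sixth distance via the cosine theorem on triangle O-A2-B2
    r3 = math.sqrt(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * math.cos(alpha3))
    r4 = H1 - H2                         # height difference
    # third distance via the Pythagorean theorem
    return math.sqrt(r3 * r3 + r4 * r4)
```

For example, two users at 1 m from the radar, both at zero elevation and on opposite azimuths, are 2 m apart.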
S250, if the distance is smaller than the distance threshold, marking the second user as a suspected user, and collecting face images of the second user.
After determining the third distance, i.e., the distance between the first user and the second user, the target detection system 110 may compare the third distance with a distance threshold to determine whether the second user is in close contact with the first user. The distance threshold is a preset maximum safe distance, for example 1 meter. If the target detection system 110 determines that the third distance is less than the distance threshold, this indicates that the second user may well be an abnormal user; it marks the second user as a suspected user, where a suspected user is a user whose sign may be abnormal, and acquires a face image of the second user. If the target detection system 110 determines that the third distance is greater than or equal to the distance threshold, this indicates that the second user is not an abnormal user, and no action is performed.
In one possible embodiment, considering that there may be a plurality of second users within the preset distance range of the first user, in this embodiment of the present application, if a third distance corresponding to at least one second user of the plurality of second users is smaller than the distance threshold, the target detection system 110 marks each second user of the at least one second user as a suspected user.
In one possible embodiment, in order to quickly acquire identity information of a suspected user, the target detection system 110 may also pre-store identity information of one or more users and face images corresponding to the identity information. After the target detection system 110 collects the face image of the second user, the face image of the second user may be directly matched with the face image pre-stored in the system, or the face image of the second user may be sent to an external database with rights to be matched with the face image pre-stored in the external database, so as to obtain the identity information of the second user.
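A minimal sketch of such a lookup, assuming for illustration that face images are compared via unit-norm embedding vectors and a cosine-similarity threshold; the patent does not specify the matching technique, so the representation, names, and threshold here are all assumptions:

```python
import numpy as np

def match_identity(query_embedding, stored, threshold=0.6):
    """Return the identity whose pre-stored face embedding is most
    similar to the query embedding, or None if no match clears the
    threshold.  `stored` maps identity -> unit-norm embedding vector."""
    best_id, best_sim = None, -1.0
    for identity, emb in stored.items():
        sim = float(np.dot(query_embedding, emb))  # cosine similarity for unit vectors
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id if best_sim >= threshold else None
```

The same interface would apply whether the pre-stored embeddings live in the system itself or in an authorized external database.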
In one possible embodiment, after the target detection system 110 marks the second user as a suspected user, the face image of the second user may be sent to the identifying device 120. The identifying device 120 may collect the sign of the second user; if it determines that the sign of the second user is abnormal, it marks the second user as an abnormal user and sends the face image of the second user as a target face image to the target detection system 110. The target detection system 110 then continues to detect whether there are other suspected users within the preset distance range of the second user whose distance from the second user is smaller than the distance threshold; for the distance detection method, refer to the target detection method discussed above, which will not be repeated herein. If the identifying device 120 determines that the sign of the second user is normal, it may send the detection result of the normal sign to the target detection system 110, and the target detection system 110 may remove the suspected-user mark of the second user according to the detection result.
In this way, when the sign of the second user is determined to be abnormal, the second user is marked as an abnormal user so that the second user can be tracked continuously and other suspected users whose distance from the second user is smaller than the distance threshold can be further detected; when the sign of the second user is determined to be normal, the suspected-user mark of the second user is removed, which avoids unnecessary marking of the second user and reduces the storage load of the system.
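The marking rule of step S250 reduces to a simple threshold filter over the computed third distances; a minimal sketch with the 1-meter threshold from the example above (names are illustrative):

```python
def mark_suspected(third_distances, threshold=1.0):
    """Return the indices of second users whose third distance to the
    first user is below the safe-distance threshold (1 metre in the
    text's example); these users are marked as suspected users."""
    return [i for i, d in enumerate(third_distances) if d < threshold]
```

When several second users are within the preset distance range, every one whose distance falls below the threshold is marked, matching the multiple-second-user embodiment.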
Based on the same inventive concept, the embodiments of the present application provide a target detection device, which corresponds to the target detection system 110 discussed above; referring to fig. 4, the target detection device includes:
the acquisition module 401 is configured to acquire a real-time image of a first user if a face image of the first user matching with a target face image exists in a current acquisition picture, where a user corresponding to the target face image is an abnormal user marked as sign abnormality;
a determining module 402, configured to determine, according to the real-time image, first location information of the first user with respect to the radar;
the detection module 403 is configured to detect, by the radar, second position information of the second user relative to the radar if the second user exists within a preset distance range of the first user;
the determining module 402 is further configured to determine the distance between the first user and the second user according to the first location information and the second location information;
a marking module 404, configured to mark the second user as a suspected user if the distance is less than the distance threshold;
the acquisition module 401 is further configured to acquire a face image of a second user, where the suspected user is a user whose sign may be abnormal.
In one possible embodiment, the acquisition module 401 is further configured to:
when a face image of a first user matched with the target face image exists in the current acquisition picture, acquiring the target face image from the identification equipment before acquiring the real-time image of the first user, wherein the target face image is sent by the identification equipment after determining the abnormal sign user.
In one possible embodiment, the determining module 402 is specifically configured to:
determining a first position point of a face area of a first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking a camera as a reference point;
and obtaining a second position point of the first position point in the second coordinate system according to the coordinate conversion relation between the first coordinate system and the second coordinate system so as to determine the first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
In one possible embodiment, the second location point includes a first distance, a first elevation angle, and a first azimuth angle between the first user and the radar, and the detection module 403 is specifically configured to:
a second distance, a second elevation angle, and a second azimuth angle between a second user and the radar are detected by the radar.
In one possible embodiment, the determining module 402 is specifically configured to:
obtaining an included angle between a second position point and a third position point corresponding to a second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in a second coordinate system;
and determining the distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle and the included angle.
In one possible embodiment, the marking module 404 is specifically configured to:
and when the second users comprise a plurality of second users, marking each second user in the at least one second user as a suspected user if the distance corresponding to the at least one second user in the plurality of second users is smaller than the distance threshold.
In one possible embodiment, the marking module 404 is further configured to:
after the second user is marked as a suspected user when the distance is determined to be smaller than the distance threshold, if the sign of the second user is determined to be abnormal, mark the second user as an abnormal user.
Based on the same inventive concept, an embodiment of the present application provides an object detection apparatus, referring to fig. 5, which corresponds to the object detection system 110 discussed above, and includes:
at least one processor 501, and
a memory 502 communicatively coupled to the at least one processor 501;
wherein the memory 502 stores instructions executable by the at least one processor 501, the at least one processor 501 implementing the object detection method as previously discussed by executing the instructions stored by the memory 502.
The processor 501 may be a central processing unit (CPU), a digital processing unit, or a combination of one or more of an image processor, etc. The memory 502 may be a volatile memory, such as a random-access memory (RAM); the memory 502 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 502 may also be a combination of the above.
As an example, the processor 501 in fig. 5 may implement the target detection method discussed above, and the processor 501 may also implement the functions of the target detection apparatus discussed above in fig. 4.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform an object detection method as discussed above.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A method of target detection, for use in a target detection system, the target detection system comprising a radar and a camera, the method comprising:
if a face image of a first user matched with a target face image exists in the current acquisition picture, acquiring a real-time image of the first user, wherein the user corresponding to the target face image is an abnormal user marked as sign abnormality;
determining first position information of the first user relative to the radar according to the real-time image;
if a second user exists in the preset distance range of the first user, detecting second position information of the second user relative to the radar through the radar, wherein the existence of the second user in the preset distance range of the first user comprises: the intensity change of the detection beam received by the radar does not meet a preset value, wherein the intensity change of the detection beam received by the radar refers to the intensity change of the detection beam sent to the first user by the radar according to the first position information;
determining a distance between the first user and the second user according to the first position information and the second position information;
If the distance is smaller than the distance threshold, marking the second user as a suspected user, and acquiring a face image of the second user through the camera, wherein the suspected user is a user with possibly abnormal sign.
2. The method of claim 1, wherein, when there is a face image of a first user matching a target face image in a current acquisition frame, before acquiring a real-time image of the first user, further comprising:
a target face image is acquired from a recognition device, wherein the target face image is transmitted by the recognition device after determining a user with abnormal signs.
3. The method of claim 1 or 2, wherein determining first location information of the first user relative to the radar from the real-time image comprises:
determining a first position point of a face area of the first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking the camera as a reference point;
and obtaining a second position point of the first position point in a second coordinate system according to the coordinate conversion relation between the first coordinate system and the second coordinate system so as to determine first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
4. The method of claim 3, wherein the second location point comprises a first distance, a first elevation angle, and a first azimuth angle between the first user and the radar; detecting, by the radar, second location information of the second user relative to the radar, comprising:
detecting, by the radar, a second distance, a second elevation angle, and a second azimuth angle between the second user and the radar;
determining a distance between the first user and the second user according to the first position information and the second position information comprises:
obtaining an included angle between the second position point and a third position point corresponding to a second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in the second coordinate system;
and determining the distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle and the included angle.
5. The method of claim 1, wherein when the second user comprises a plurality, if the distance is determined to be less than a distance threshold, marking the second user as a suspected user comprises:
And if the distance corresponding to at least one second user in the plurality of second users is smaller than the distance threshold value, marking each second user in the at least one second user as a suspected user.
6. The method of claim 1 or 2 or 5, further comprising, after marking the second user as a suspected user if the distance is determined to be less than a distance threshold:
and if the sign of the second user is determined to be abnormal, marking the second user as an abnormal user.
7. An object detection device, wherein the object detection device is disposed in an object detection system, the object detection system including a radar and a camera, the object detection device comprising:
the acquisition module is used for acquiring a real-time image of a first user if a face image of the first user matched with a target face image exists in a current acquisition picture, wherein the user corresponding to the target face image is an abnormal user marked as sign abnormality;
a determining module, configured to determine, according to the real-time image, first location information of the first user with respect to the radar;
the detection module is used for detecting second position information of the second user relative to the radar through the radar if the second user exists in the preset distance range of the first user, wherein the presence of the second user within the preset distance range of the first user includes: the intensity change of the detection beam received by the radar does not meet a preset value, wherein the intensity change of the detection beam received by the radar refers to the intensity change of the detection beam sent to the first user by the radar according to the first position information;
the determining module is further configured to determine a distance between the first user and the second user according to the first location information and the second location information;
the marking module is used for marking the second user as a suspected user if the distance is determined to be smaller than a distance threshold;
the acquisition module is further configured to acquire a face image of the second user, where the suspected user is a user whose sign may be abnormal.
8. The apparatus of claim 7, wherein the acquisition module is further to:
when a face image of a first user matched with a target face image exists in a current acquisition picture, acquiring the target face image from an identification device before acquiring a real-time image of the first user, wherein the target face image is sent by the identification device after determining a user with abnormal sign.
9. A computer device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any of claims 1-6 by executing the memory stored instructions.
10. A computer readable storage medium storing computer instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-6.
CN202110095548.9A 2021-01-25 2021-01-25 Target detection method, device, equipment and medium Active CN112883809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110095548.9A CN112883809B (en) 2021-01-25 2021-01-25 Target detection method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110095548.9A CN112883809B (en) 2021-01-25 2021-01-25 Target detection method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN112883809A CN112883809A (en) 2021-06-01
CN112883809B true CN112883809B (en) 2024-02-23

Family

ID=76050839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110095548.9A Active CN112883809B (en) 2021-01-25 2021-01-25 Target detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112883809B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327497B (en) * 2022-08-12 2023-10-10 南京慧尔视软件科技有限公司 Radar detection range determining method, radar detection range determining device, electronic equipment and readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318217A (en) * 2014-10-28 2015-01-28 吴建忠 Face recognition information analysis method and system based on distributed cloud computing
KR20170114045A (en) * 2016-03-31 2017-10-13 주식회사 아이유플러스 Apparatus and method for tracking trajectory of target using image sensor and radar sensor
CN110726974A (en) * 2019-10-17 2020-01-24 北京邮电大学 Radar detection method and device based on radar communication integration
CN111239728A (en) * 2020-02-26 2020-06-05 深圳雷研技术有限公司 Passenger counting method and system based on millimeter wave radar
CN111612814A (en) * 2020-02-04 2020-09-01 北京旷视科技有限公司 Method, device and electronic system for identifying and tracking persons with elevated body temperature
CN111638496A (en) * 2020-06-08 2020-09-08 上海眼控科技股份有限公司 Radar echo data processing method, computer device, and medium
CN111680583A (en) * 2020-05-25 2020-09-18 浙江大华技术股份有限公司 Method, system, computer device and readable storage medium for crowd marking
CN111913177A (en) * 2020-08-11 2020-11-10 中国原子能科学研究院 Method and device for detecting target object and storage medium


Similar Documents

Publication Publication Date Title
JP7318691B2 (en) Image processing device, image processing method, face authentication system and program
CN109977770B (en) Automatic tracking shooting method, device, system and storage medium
CN111291585B (en) GPS-based target tracking system, method and device and ball machine
JP6448223B2 (en) Image recognition system, image recognition apparatus, image recognition method, and computer program
US20170132458A1 (en) Method of apparatus for cross-modal face matching using polarimetric image data
US11205276B2 (en) Object tracking method, object tracking device, electronic device and storage medium
CN105335726B (en) Face recognition confidence level acquisition method and system
CN111626125A (en) Face temperature detection method, system and device and computer equipment
CN111445531B (en) Multi-view camera navigation method, device, equipment and storage medium
TW201727537A (en) Face recognition system and face recognition method
WO2016070300A1 (en) System and method for detecting genuine user
US10915737B2 (en) 3D polarimetric face recognition system
WO2019076187A1 (en) Video blocking region selection method and apparatus, electronic device, and system
JP2019117579A5 (en)
CN111724496A (en) Attendance checking method, attendance checking device and computer readable storage medium
CN106997447A (en) Face identification system and face identification method
CN111307331A (en) Temperature calibration method, device, equipment and storage medium
CN111488775A (en) Device and method for judging degree of fixation
JP2018061114A (en) Monitoring device and monitoring method
CN111738132A (en) Method and device for measuring human body temperature, electronic equipment and readable storage medium
CN112883809B (en) Target detection method, device, equipment and medium
CN107580180B (en) Display method, device and equipment of view frame and computer readable storage medium
CN111583333A (en) Temperature measurement method and device based on visual guidance, electronic equipment and storage medium
KR101107120B1 (en) Device for sound source tracing and object recognition and method for sound source tracing and object recognition
CN111062313A (en) Image identification method, image identification device, monitoring system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant