CN112883809A - Target detection method, device, equipment and medium - Google Patents

Target detection method, device, equipment and medium

Info

Publication number: CN112883809A (granted as CN112883809B)
Application number: CN202110095548.9A
Authority: CN (China)
Prior art keywords: user, distance, radar, face image, coordinate system
Legal status: Granted; currently active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 唐晨
Applicant and current assignee: Zhejiang Dahua Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)

Classifications

    • G06V40/161 — Human faces, e.g. facial parts, sketches or expressions: detection; localisation; normalisation
    • G06F18/22 — Pattern recognition; analysing: matching criteria, e.g. proximity measures
    • G06V40/168 — Human faces: feature extraction; face representation
    • G06V2201/07 — Indexing scheme relating to image or video recognition or understanding: target detection


Abstract

The application relates to the technical field of computers, and in particular to the field of security monitoring. It provides a target detection method, apparatus, device and medium intended to solve the problem of low accuracy in detecting suspected infected persons. The method is applied to a target detection system comprising a radar and a camera, and includes: determining first position information of a first user relative to the radar according to a real-time image of the first user; detecting, by the radar, second position information, relative to the radar, of a second user within a preset distance range of the first user; combining the first position information and the second position information to accurately determine the distance between the first user and the second user; and, if the distance is smaller than a distance threshold, marking the second user as a suspected user and collecting a face image of the second user. This solves the problem of low accuracy in detecting suspected infected persons.

Description

Target detection method, device, equipment and medium
Technical Field
The application relates to the technical field of computers, and in particular to the field of security monitoring, and provides a target detection method, apparatus, device and medium.
Background
Maintaining a safe distance between users is an important way to ensure public health safety. Conversely, for some infectious diseases, if the distance between a user and an infected person is less than the safe distance, that user is highly likely to be infected. In high-density crowd scenarios such as stations or airports, detecting suspected infected persons, i.e. persons whose distance from an infected person is less than the safe distance, and obtaining their information is significant for the follow-up tracking of those persons.
At present, the distance between users is mainly detected from images acquired by a camera. However, owing to interference from various external factors, the quality of the captured images may be poor, which in turn affects the accuracy of the suspected-infected-person detection result.
Disclosure of Invention
The embodiment of the application provides a target detection method, apparatus, device and medium, which are used to solve the problem of low accuracy in detecting suspected infected persons.
In a first aspect, an embodiment of the present invention provides a target detection method, which is applied to a target detection system, where the target detection system includes a radar and a camera, and the method includes:
if a face image of a first user matching a target face image exists in the currently captured picture, acquiring a real-time image of the first user, wherein the user corresponding to the target face image is an abnormal user marked as having abnormal physical signs;
determining first position information of the first user relative to the radar according to the real-time image;
if a second user exists in the preset distance range of the first user, detecting second position information of the second user relative to the radar through the radar;
determining the distance between the first user and the second user according to the first position information and the second position information;
if the distance is smaller than the distance threshold value, the second user is marked as a suspected user, and a face image of the second user is acquired through the camera, wherein the suspected user is a user with possibly abnormal physical signs.
According to the embodiment of the application, the first position information, relative to the radar, of the abnormal user with abnormal physical signs is determined from a real-time image of that user, and the second position information, relative to the radar, of a second user within the preset distance range of the abnormal user is then detected by the radar. This reduces interference from external factors, so the distance between the abnormal user and the second user can be obtained accurately, improving the accuracy of the distance detection result. In addition, after the distance between the abnormal user and the second user is determined to be smaller than the distance threshold, the second user is marked as a suspected user whose physical signs may be abnormal. A suspected user whose distance to the abnormal user is smaller than the distance threshold is thereby accurately detected, and a face image of the suspected user is collected, so that other information about the suspected user can be obtained quickly from that face image, which facilitates continued tracking of the suspected user later on.
In a possible embodiment, when a face image of a first user matching a target face image exists in a current acquisition picture, before acquiring a real-time image of the first user, the method further includes:
acquiring a target face image from a recognition device, wherein the target face image is transmitted by the recognition device after determining a user with abnormal physical signs.
According to this embodiment, the target face image of the abnormal user with abnormal physical signs can be obtained directly from the recognition device, which makes it convenient to quickly match the target face image against the face images in the currently captured picture and thus quickly lock onto the abnormal user in that picture.
In one possible embodiment, determining first position information of the first user relative to the radar from the real-time image comprises:
determining a first position point of the face area of the first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking the camera as a reference point;
and according to a coordinate conversion relation between the first coordinate system and a second coordinate system, obtaining a second position point of the first position point in the second coordinate system to determine first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
In the embodiment of the application, according to the coordinate conversion relation between the first coordinate system and the second coordinate system, the first position point of the face area of the first user in the first coordinate system is converted into the second position point in the second coordinate system established by taking the radar as the reference point, so that the subsequent radar can accurately detect the first user and the second user in the preset distance range of the first user according to the second position point conveniently.
In one possible embodiment, the second location point comprises a first distance, a first elevation angle, and a first azimuth angle between the first user and the radar; detecting, by the radar, second location information of the second user relative to the radar, including:
detecting, by the radar, a second distance, a second elevation angle, and a second azimuth angle between the second user and the radar;
determining a distance between the first user and the second user according to the first location information and the second location information, including:
obtaining an included angle between the second position point and a third position point corresponding to a second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in the second coordinate system;
determining a distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle, and the included angle.
In the embodiment of the application, the distance, the elevation angle and the azimuth angle between the second user and the radar can be accurately determined through the radar, and then the distance between the first user and the second user is accurately calculated. In addition, the method for calculating the distance between the first user and the second user is simple, the calculation amount is small, and the distance between the first user and the second user can be quickly obtained.
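The distance computation described above can be sketched as follows. The patent does not give explicit formulas, so this is a reconstruction under the common assumption that each target is projected onto the horizontal plane through the radar, the law of cosines is applied to the projections with the azimuth difference as the included angle, and the height difference is then added back; the function name and radian-based angles are illustrative:

```python
import math

def user_distance(r1, elev1, azim1, r2, elev2, azim2):
    """Distance between two targets, each given as (range, elevation,
    azimuth) in the radar's spherical coordinate system (angles in radians)."""
    # Project each target onto the horizontal plane through the radar:
    # horizontal range d and height h above the radar.
    d1, h1 = r1 * math.cos(elev1), r1 * math.sin(elev1)
    d2, h2 = r2 * math.cos(elev2), r2 * math.sin(elev2)
    # The included angle between the two projections is the azimuth difference.
    included = azim2 - azim1
    # Law of cosines on the horizontal plane, then add the height difference.
    horiz_sq = d1 * d1 + d2 * d2 - 2.0 * d1 * d2 * math.cos(included)
    return math.sqrt(max(horiz_sq, 0.0) + (h1 - h2) ** 2)
```

With both targets at zero elevation, ranges of 3 m and 4 m, and a 90° azimuth difference, this yields the expected 5 m.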
In a possible embodiment, when the second user includes a plurality of users, if it is determined that the distance is smaller than the distance threshold, the marking the second user as a suspected user includes:
and if the distance corresponding to at least one second user among the plurality of second users is smaller than the distance threshold, marking each second user among the at least one second user as a suspected user.
In the embodiment of the application, when the distance corresponding to at least one of the plurality of second users is smaller than the distance threshold, each such second user is marked as a suspected user, so that no suspected user is missed and the suspected users are determined accurately.
In a possible embodiment, if it is determined that the distance is smaller than the distance threshold, after marking the second user as a suspected user, the method further includes:
and if the physical sign of the second user is determined to be abnormal, marking the second user as an abnormal user.
In the embodiment of the application, after the second user is marked as a suspected user, whether the second user is an abnormal user or not can be further determined according to the physical sign of the second user, so that the second user can be tracked later, and other suspected users whose distance to the second user is smaller than the preset distance can be further detected.
In a second aspect, there is provided a target detection apparatus disposed in a target detection system, the target detection system including a radar and a camera, the target detection apparatus including:
an acquisition module, configured to acquire a real-time image of a first user if a face image of the first user matching a target face image exists in the currently captured picture, wherein the user corresponding to the target face image is an abnormal user marked as having abnormal physical signs;
a determining module for determining first position information of the first user relative to the radar according to the real-time image;
the detection module is used for detecting second position information of a second user relative to the radar through the radar if the second user exists in the preset distance range of the first user;
the determining module is further configured to determine a distance between the first user and the second user according to the first location information and the second location information;
a marking module, configured to mark the second user as a suspected user if it is determined that the distance is smaller than a distance threshold;
the acquisition module is further configured to acquire a face image of the second user, where the suspected user is a user whose physical signs may be abnormal.
In a possible embodiment, the acquisition module is further configured to:
when a face image of a first user matched with a target face image exists in a current acquisition picture, acquiring the target face image from identification equipment before acquiring a real-time image of the first user, wherein the target face image is sent by the identification equipment after determining a user with abnormal physical signs.
In a possible embodiment, the determining module is specifically configured to:
determining a first position point of the face area of the first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking the camera as a reference point;
and according to a coordinate conversion relation between the first coordinate system and a second coordinate system, obtaining a second position point of the first position point in the second coordinate system to determine first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
In a possible embodiment, the second location point comprises a first distance, a first elevation angle and a first azimuth angle between the first user and the radar, and the detection module is specifically configured to:
detecting, by the radar, a second distance, a second elevation angle, and a second azimuth angle between the second user and the radar.
In a possible embodiment, the determining module is specifically configured to:
obtaining an included angle between the second position point and a third position point corresponding to a second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in the second coordinate system;
determining a distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle, and the included angle.
In a possible embodiment, the marking module is specifically configured to:
when the second user comprises a plurality of second users, if the distance corresponding to at least one second user among the plurality of second users is smaller than the distance threshold, marking each second user among the at least one second user as a suspected user.
In a possible embodiment, the marking module is further configured to:
after marking the second user as a suspected user when the distance is determined to be smaller than the distance threshold, marking the second user as an abnormal user if the physical sign of the second user is determined to be abnormal.
In a third aspect, there is provided an object detection apparatus comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of the first aspect by executing the instructions stored by the memory.
In a fourth aspect, a computer readable storage medium stores computer instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects.
Drawings
Fig. 1 is an application scenario diagram of a target detection method according to an embodiment of the present application;
fig. 2 is a flowchart of a target detection method according to an embodiment of the present application;
FIG. 3A is a schematic diagram of a second location point in a second coordinate system according to an embodiment of the present disclosure;
fig. 3B is a schematic diagram of projected points of a second location point and a third location point on a horizontal plane according to an embodiment of the present disclosure;
FIG. 3C is a schematic diagram of the second location point and the third location point in the second coordinate system according to the embodiment of the present application;
fig. 4 is a structural diagram of an object detection apparatus according to an embodiment of the present application;
fig. 5 is a structural diagram of an object detection device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the drawings and specific embodiments.
At present, the distance between users is mainly detected from images acquired by a camera. However, when the ambient light is poor or a user moves quickly, the acquired image is blurred, so the accuracy of detecting a suspected infected person is low; likewise, because of differences in people's heights, the shooting angle, and other factors, the acquired image may not contain the information of the suspected infected person at all, which also makes the detection accuracy low.
In view of this, an embodiment of the present application provides a target detection method, which may be executed by a target detection system, and specifically by a controller in the target detection system. The controller may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiment of the present application, for example one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs). A schematic deployment of the object detection system is described below.
Referring to fig. 1, a schematic diagram of a deployment of an object detection system, or an application scenario that can be understood as the object detection method provided in the embodiment of the present application, includes an object detection system 110 and an identification device 120, where the object detection system 110 includes a radar 130 and a camera 140.
Radar 130 and camera 140 may be two separate devices: radar 130 may be deployed at any location, camera 140 may be deployed within a first preset distance range of radar 130, and in this case the controller in target detection system 110 controls radar 130 and camera 140 separately. Alternatively, radar 130 may be coupled to or integrated with camera 140, for example as a combined camera-radar device, in which case the controller in target detection system 110 controls radar 130 and camera 140 together. The radar 130 may be, for example, a millimeter-wave radar.
Alternatively, in another case, the recognition device 120 may be implemented directly by the camera 140.
It should be noted that fig. 1 illustrates the target detection system 110 including one radar 130 and one camera 140, and the number of radars 130 and cameras 140 is not limited in practice. In fig. 1, one identification device 120 is taken as an example, and the number of identification devices 120 is not limited in practice.
The object detection method according to the embodiment of the present application may be applied to various locations, such as stations, airports, residential communities, and schools; a plurality of radars 130 and cameras 140 may be deployed at the corners of a location so as to cover it. The respective functions of the object detection system 110 and the identification device 120 are briefly described below:
the recognition device 120 may collect the user's signs, and if it is determined that the user's signs are abnormal, mark the user as an abnormal user. Further, the recognition device 120 may capture a facial image of the user and send the facial image of the user as a target facial image to the target detection system 110.
The target detection system 110 monitors in real time. If it determines that a face image of a first user matching the target face image exists in the currently captured picture, it can acquire a real-time image of the first user, determine first position information of the first user relative to the radar 130 according to the real-time image, detect second position information, relative to the radar 130, of a second user within a preset distance range of the first user, and combine the first position information with the second position information to accurately determine the distance between the first user and the second user. Further, the target detection system 110 marks a second user whose distance is smaller than the distance threshold as a suspected user and collects a face image of that second user; other information about the suspected user, whose physical signs may be abnormal, can subsequently be obtained from this face image. A method of determining the distance between the first user and the second user is described below.
Based on the application scenario discussed in fig. 1, the following description will take the target detection system 110 in fig. 1 to perform the target detection method as an example. Referring to fig. 2, a schematic flow chart of a target detection method is shown, the method including:
s210, if the face image of the first user matched with the target face image exists in the current acquisition picture, acquiring a real-time image of the first user.
The target detection system 110 may pre-store one or more target face images, where a user corresponding to the target face image is an abnormal user marked as abnormal physical sign. The pre-stored target face image may be obtained from the recognition device 120, and specifically, may be a face image of an abnormal user sent by the recognition device 120 after determining the user with abnormal physical signs.
For example, the recognition device 120 may collect a physical sign of the user, where a physical sign is an indicator of the condition of the human body, such as whether the body temperature exceeds a threshold or whether the user is coughing. An abnormal physical sign is an indicator that does not meet the normal condition; for example, if the normal condition is that the body temperature lies within a preset temperature range and a user's body temperature is not within that range, the user's physical sign is determined to be abnormal. Further, the recognition device 120 marks the user with the abnormal physical sign as an abnormal user, captures a face image of the abnormal user, and sends that face image to the target detection system 110 as a target face image.
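A minimal sketch of this sign check, taking body temperature as the sign; the 36.0–37.3 °C normal range is an illustrative assumption, since the patent only speaks of a preset temperature range:

```python
def is_sign_abnormal(body_temp_c, normal_range=(36.0, 37.3)):
    """Return True if the measured body temperature (Celsius) falls
    outside the preset normal range (range bounds are assumed values)."""
    low, high = normal_range
    return not (low <= body_temp_c <= high)
```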
Further, the identification device 120 may also collect identity information of the abnormal user, and associate the face image of the abnormal user with the identity information of the abnormal user. The identity information is used to characterize the identity of the user, such as name, identification number, etc.
Since there may be multiple users in the picture currently captured by the target detection system 110, possibly including both abnormal and normal users, the target detection system 110, after acquiring the target face image from the recognition device 120, may match the target face image of the abnormal user against the face image of each user in the currently captured picture in order to accurately lock onto the abnormal user.
If the face image of the first user matching the target face image exists in the currently acquired picture, which indicates that the first user may be an abnormal user, the target detection system 110 may acquire a real-time image of the first user, where the real-time image may be a picture currently monitored by the camera 140, and the real-time image at least includes a face area of the first user and may include other areas except the face area of the first user.
In relation to how the target detection system 110 determines that the face image of the first user matching the target face image exists in the currently captured picture, the following example describes:
the target detection system 110 may extract and store the face sign of the target face image after receiving the target face image, calculate a similarity between the face feature of the target face image and the face feature of the face image of the first user in the current acquisition picture, for example, calculate a euclidean distance or a cosine similarity, determine that the target face image matches the face image of the first user if the similarity is greater than or equal to a preset threshold, and determine that the target face image does not match the face image of the first user if the similarity is less than the preset threshold.
It should be noted that the recognition device 120 may determine one or more abnormal users, so the acquired target face images may likewise number one or more. When the recognition device 120 determines multiple abnormal users, the target detection system 110 correspondingly receives multiple target face images from the recognition device 120 and may match each target face image against the face image of each user in the currently captured picture. If face images of multiple users matching target face images exist in the currently captured picture, indicating that multiple abnormal users are present, the target detection system 110 may acquire the face images of these abnormal users respectively.
S220, determining first position information of the first user relative to the radar according to the real-time image.
After the target detection system 110 obtains the real-time image of the first user, the radar 130 still has not determined the specific location of the first user; therefore, in this embodiment, the target detection system 110 may determine the first location information of the first user relative to the radar 130 according to the real-time image.
The manner in which the object detection system 110 determines the first location information is described below:
the target detection system 110 may determine a first position point of the first user in a first coordinate system with the camera 140 as a reference point, convert the first position point into a second position point in a second coordinate system with the radar 130 as a reference point, and use the second position point as the first position information because the real-time image is acquired.
The object detection system 110 may determine the first location point of the first user in the first coordinate system in a variety of ways, as described below:
first, the target detection system 110 may determine a first position point of the first user in the first coordinate system according to a preset position of the camera 140 and an area ratio of a rectangular frame where the face region is located to the real-time image, where the preset position indicates a position point set in advance by the camera 140.
Specifically, after the target detection system 110 acquires the real-time image of the first user, the face region corresponding to the first user in the real-time image may be detected, where the face region is a part of the real-time image, and specifically, for example, the position of the face region of the first user in the real-time image may be determined by a pre-trained face detection model. In order to position the face region, the face region may be marked with a rectangular frame, and the position of the face region in the real-time image may be any point of the face region.
Secondly, the target detection system 110 may further establish a third coordinate system with the real-time image as a reference point, specifically, for example, with a central point of the real-time image as an origin, and the third coordinate system is, for example, a planar rectangular coordinate system, so as to accurately determine and obtain the position of the face region of the first user in the real-time image. Further, the target detection system 110 may convert the position of the face region of the first user in the real-time image into a first position point in the first coordinate system according to the conversion relationship between the third coordinate system and the first coordinate system.
Wherein the first position point is, for example, (x1, y1, z1), where x1 is the abscissa, y1 is the ordinate, and z1 is the vertical coordinate of the first position point. The target detection system 110 may pre-store the transformation relationship between the third coordinate system and the first coordinate system. Alternatively, the target detection system 110 may obtain the transformation relationship between the third coordinate system and the first coordinate system through triangle similarity in a pinhole imaging model, so as to determine the first position point of the face region of the first user in the first coordinate system. The pinhole imaging model is formed by the real-time image and the lens optical center of the camera 140 that collects the real-time image, the lens optical center serving as the pinhole.
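As an illustration of the triangle-similarity step, the following Python sketch converts a face point from image coordinates (origin at the image center, as in the third coordinate system) to a 3-D point in the camera-referenced first coordinate system. The focal length in pixels, the pixel width of the face frame, and the assumed average real face width are hypothetical parameters that the text does not fix:

```python
def image_to_camera_point(u, v, face_px_width, focal_px,
                          real_face_width=0.16):
    """Map an image-plane face point (u, v) to camera coordinates.

    Depth is estimated by triangle similarity in the pinhole model:
    real_face_width / face_px_width == depth / focal_px.
    `real_face_width` (meters) is an assumed average face width, not a
    value from the patent.
    """
    z = focal_px * real_face_width / face_px_width  # depth along optical axis
    x = u * z / focal_px                            # similar triangles for x
    y = v * z / focal_px                            # similar triangles for y
    return (x, y, z)
```

With a 1000-pixel focal length and a face frame 100 pixels wide, a face assumed 0.16 m wide would be placed 1.6 m in front of the camera.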
Further, after determining the first position point of the first user in the first coordinate system, the object detection system 110 may convert the first position point into a second position point in a second coordinate system in order to facilitate the radar 130 to detect the position of the first user, and the following describes the conversion process:
the object detection system 110 may establish a first coordinate system with the camera 140 as a reference point, such as a cartesian coordinate system established with the position of the camera 140 as an origin of coordinates. The target detection system 110 may establish a second coordinate system with the radar 130 as a reference point, for example, a spherical coordinate system established with the position of the radar 130 as a coordinate origin.
In order to obtain the first coordinate system and the second coordinate system more accurately, in the embodiment of the present application, the radar 130 and the camera 140 may be deployed according to preset relative positions, or their relative positions may be adjusted; for example, the radar 130 is mounted on an adjustable device, and the position of the radar 130 or the camera 140 is adjusted by the adjustable device. The coordinates in the first coordinate system and the second coordinate system are then calibrated according to the relative position, and based on these coordinates a coordinate transformation relationship between the first coordinate system and the second coordinate system is established, which may be expressed as a coordinate transformation matrix.
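The calibrated transformation between the two coordinate systems can be sketched as a rigid transform. Here the rotation matrix R and translation vector t stand in for the coordinate transformation matrix obtained from calibration; both are assumed inputs, not values given in the text:

```python
import numpy as np

def camera_to_radar(point_cam, R, t):
    """Map a point from the camera-referenced first coordinate system
    to the radar-referenced second coordinate system (Cartesian step).

    R is the calibrated 3x3 rotation and t the 3-vector translation
    between the two frames; both come from the calibration described
    in the text and are assumed known here.
    """
    return R @ np.asarray(point_cam, dtype=float) + np.asarray(t, dtype=float)
```

For a radar offset one meter along the camera's x-axis with no rotation, a point two meters in front of the camera simply shifts by that offset.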
Further, the target detection system 110 determines the position point of the first position point in the second coordinate system according to the transformation relationship between the first coordinate system and the second coordinate system, and for convenience of description, the position point of the first position point in the second coordinate system may be referred to as the second position point. The second location point includes a first distance, a first elevation angle, and a first azimuth angle between the first user and the radar 130.
The second position point is, for example, (R1, β1, α1), where R1 is the first distance, i.e. the straight-line distance between the first user and the radar 130; β1 is the first elevation angle, i.e. the angle between the first distance R1 and its projection on the horizontal plane; and α1 is the first azimuth angle, i.e. the angle between the projection of the first distance R1 on the horizontal plane and a certain starting direction in the horizontal plane.
Referring to fig. 3A, a schematic diagram of the second position point in the second coordinate system is provided for the embodiment of the present application, where O denotes the radar 130, A1 denotes the second position point of the first user, A2 denotes the projection point of the second position point A1 on the horizontal plane, R1 denotes the first distance, β1 denotes the first elevation angle, α1 denotes the first azimuth angle, and r1 denotes the length of the projection of the first distance R1 on the horizontal plane, i.e. the distance between the projection point A2 and the radar 130.
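The relationship in fig. 3A between a radar-frame Cartesian point and its (distance, elevation, azimuth) representation can be sketched as follows. The axis convention (x and y spanning the horizontal plane, z vertical, azimuth measured from the x-axis) is an assumption, since the text leaves the starting direction open:

```python
import math

def to_spherical(x, y, z):
    """Express a radar-frame Cartesian point as (R, beta, alpha):
    straight-line distance, elevation above the horizontal plane, and
    azimuth of the horizontal projection.  Axis convention assumed:
    x and y horizontal, z vertical, azimuth measured from the x-axis.
    """
    r_proj = math.hypot(x, y)      # length of projection on horizontal plane
    R = math.hypot(r_proj, z)      # straight-line distance to the radar
    beta = math.atan2(z, r_proj)   # elevation angle above the plane
    alpha = math.atan2(y, x)       # azimuth angle in the plane
    return R, beta, alpha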
S230, if a second user exists within the preset distance range of the first user, detecting second position information of the second user relative to the radar 130.
Specifically, the target detection system 110 may determine, following the foregoing steps, whether a second user exists within the preset distance range of the first user. An example manner in which the target detection system 110 makes this determination is described below; it parallels the manner in which the target detection system 110 determines the position information of the first user:
for example, the target detection system 110 sends a probe beam toward the first user according to the first position information of the first user, and determines whether a second user exists within the preset distance range of the first user according to the change in intensity of the received probe beam: if the change in intensity does not satisfy a preset value, the target detection system 110 determines that a second user exists within the preset distance range of the first user.
Further, after determining that the second user exists within the preset distance range, the target detection system 110 may detect second position information of the second user relative to the radar 130. The second position information may be expressed in the second coordinate system: the position point of the second user in the second coordinate system is referred to as a third position point, and the third position point serves as the second position information. The third position point includes a second distance, a second elevation angle, and a second azimuth angle between the second user and the radar 130. The following describes example manners in which the target detection system 110 detects the second position information of the second user:
for example, the radar 130 in the object detection system 110 may transmit three different types of beams, which may each detect different parameters. For example, radar 130 transmits a first type of beam to detect a second distance between radar 130 and a second user, transmits a second type of beam, for example, a narrow elevation beam, to detect a second elevation angle between radar 130 and the second user, and transmits a third type of beam, for example, a sharp azimuth beam, to detect a second azimuth angle between radar 130 and the second user.
Or, for example, the radar 130 in the target detection system 110 includes a range tracking component that transmits the first type of beam to detect the second distance between the radar 130 and the second user, an elevation tracking component that transmits the second type of beam to detect the second elevation angle between the radar 130 and the second user, and an azimuth tracking component that transmits the third type of beam to detect the second azimuth angle between the radar 130 and the second user.
It should be noted that if the target detection system 110 determines that a plurality of second users exist within the preset distance range of the first user, the target detection system 110 may detect the position information of each second user with respect to the radar 130, respectively.
And S240, determining the distance between the first user and the second user according to the first position information and the second position information.
Specifically, the distance between the first user and the second user may be referred to as a third distance. The target detection system 110 obtains the included angle between the second position point and the third position point from the first azimuth angle and the second azimuth angle; this included angle lies between the projection of the first distance on the horizontal plane and the projection of the second distance on the horizontal plane. From the first distance and the first elevation angle, the system obtains the length of the projection of the first distance on the horizontal plane and the fourth distance from the second position point to the horizontal plane; from the second distance and the second elevation angle, it obtains the length of the projection of the second distance on the horizontal plane and the fifth distance from the third position point to the horizontal plane. Using the two projection lengths and the included angle, the system calculates, by the law of cosines, the sixth distance between the projection point of the second position point on the horizontal plane and the projection point of the third position point on the horizontal plane. Further, the target detection system 110 calculates the height difference between the second position point and the third position point from the fourth distance and the fifth distance, and then calculates the third distance between the first user and the second user from the height difference and the sixth distance by the Pythagorean theorem.
For example, the formula for calculating the included angle between the second position point and the third position point is as follows:

α3 = α2 - α1

wherein α1 is the first azimuth angle, α2 is the second azimuth angle, and α3 is the included angle between the second position point and the third position point. The formulas for calculating the third distance are as follows:

H1 = R1 sin β1

H2 = R2 sin β2

r1 = R1 cos β1

r2 = R2 cos β2

r3 = sqrt(r1² + r2² - 2 r1 r2 cos α3)

R3 = sqrt(r3² + (H1 - H2)²)

wherein R1 is the first distance, R2 is the second distance, β1 is the first elevation angle, β2 is the second elevation angle, H1 is the fourth distance, H2 is the fifth distance, r1 is the length of the projection of the first distance on the horizontal plane, r2 is the length of the projection of the second distance on the horizontal plane, r3 is the sixth distance, and R3 is the third distance.
Referring to fig. 3B, a schematic diagram of the projection points of the second position point and the third position point on the horizontal plane is provided for the embodiment of the present application, where A2 denotes the projection point of the second position point A1 on the horizontal plane, B2 denotes the projection point of the third position point B1 on the horizontal plane, α1 denotes the first azimuth angle, α2 denotes the second azimuth angle, and α3 denotes the included angle between the second position point A1 and the third position point B1. r1, i.e. OA2, corresponds to the length of the projection of the first distance R1 on the horizontal plane; r2, i.e. OB2, corresponds to the length of the projection of the second distance R2 on the horizontal plane; and r3 is the sixth distance. r1 and r2 are equivalent to two sides of the triangle OA2B2, and α3 is the angle between r1 and r2, so the third side r3 of the triangle OA2B2 can be calculated by the law of cosines.
Referring to fig. 3C, a schematic diagram of the second position point and the third position point in the second coordinate system is provided for the embodiment of the present application, where r4 corresponds to the height difference between the second position point and the third position point, r3 corresponds to the sixth distance, and R3 corresponds to the third distance. As can be seen, r3 and r4 are equivalent to the two legs of a right triangle, and the hypotenuse R3 of that right triangle can be calculated by the Pythagorean theorem.
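The computation described by the formulas above and illustrated in figs. 3B and 3C can be sketched directly; this is a minimal rendering of those equations, assuming angles in radians:

```python
import math

def third_distance(R1, beta1, alpha1, R2, beta2, alpha2):
    """Distance between two users given each one's radar-frame
    (distance, elevation, azimuth): law of cosines in the horizontal
    plane, then the Pythagorean theorem with the height difference."""
    H1 = R1 * math.sin(beta1)          # fourth distance (height of A1)
    H2 = R2 * math.sin(beta2)          # fifth distance (height of B1)
    r1 = R1 * math.cos(beta1)          # projection length of R1
    r2 = R2 * math.cos(beta2)          # projection length of R2
    alpha3 = alpha2 - alpha1           # included angle between projections
    r3 = math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(alpha3))  # sixth distance
    r4 = abs(H1 - H2)                  # height difference
    return math.sqrt(r3**2 + r4**2)    # third distance R3
```

For two users at the same height, 3 m and 4 m from the radar with azimuths 90 degrees apart, the triangle OA2B2 is right-angled and the third distance is 5 m.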
And S250, if the distance is determined to be smaller than the distance threshold, marking the second user as a suspected user, and acquiring a face image of the second user.
After determining the distance between the first user and the second user, i.e. the third distance, the target detection system 110 may compare the third distance with a distance threshold to determine whether the second user has been in close contact with the first user. The distance threshold is a preset maximum safe distance, for example 1 meter. If the target detection system 110 determines that the third distance is smaller than the distance threshold, which indicates that the second user may well be an abnormal user, it marks the second user as a suspected user, i.e. a user whose physical signs may be abnormal, and acquires a face image of the second user. If the target detection system 110 determines that the third distance is greater than or equal to the distance threshold, this indicates that the second user is not considered an abnormal user, and no operation is performed.
In a possible embodiment, considering that there may be a plurality of second users within a preset distance range of the first user, in this embodiment of the present application, if a third distance corresponding to at least one second user of the plurality of second users is smaller than a distance threshold, the target detection system 110 marks each second user of the at least one second user as a suspected user.
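The marking rule for multiple second users can be sketched minimally as follows; the 1-meter default is the example threshold from the text, and the function name is illustrative:

```python
def mark_suspected(third_distances, threshold=1.0):
    """Given the third distance of each second user to the first user,
    return the indices of second users to mark as suspected, i.e.
    those whose distance is strictly below the threshold (1 m is the
    example maximum safe distance in the text)."""
    return [i for i, d in enumerate(third_distances) if d < threshold]
```

Note that a user exactly at the threshold is not marked, matching the "smaller than the distance threshold" condition.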
In a possible embodiment, in order to quickly obtain the identity information of the suspected user, the target detection system 110 may further pre-store the identity information of one or more users and a face image corresponding to the identity information. After the target detection system 110 collects the face image of the second user, the face image of the second user may be directly matched with the face image pre-stored in the system, or the face image of the second user may be sent to other external databases with authority, and matched with the face image pre-stored in the external databases, so as to obtain the identity information of the second user.
In a possible embodiment, after the target detection system 110 marks the second user as a suspected user, the face image of the second user may be sent to the identification device 120. The identification device 120 may collect the physical signs of the second user; if it determines that the signs of the second user are abnormal, it marks the second user as an abnormal user and sends the face image of the second user to the target detection system 110 as a target face image, and the target detection system 110 continues to detect whether, within the preset distance range of the second user, there are other suspected users whose distance to the second user is smaller than the distance threshold; for the distance detection method, refer to the target detection method discussed above, and details are not repeated here. If the identification device 120 determines that the signs of the second user are normal, it may send the detection result to the target detection system 110, and the target detection system 110 may remove the second user's suspected-user mark according to the detection result.
According to the embodiment of the present application, when the signs of the second user are determined to be abnormal, the second user is marked as an abnormal user, so that the second user can continue to be tracked and other suspected users whose distance to the second user is smaller than the distance threshold can be further detected; when the signs of the second user are determined to be normal, the suspected-user mark is removed, which avoids unnecessary marks on the second user and reduces the storage burden of the system.
Based on the same inventive concept, the present application provides an object detection apparatus, which is disposed in the object detection system 110 discussed above, and referring to fig. 4, the object detection apparatus includes:
the acquisition module 401 is configured to acquire a real-time image of a first user if a face image of the first user matching a target face image exists in a current acquisition picture, where the user corresponding to the target face image is a user marked as having abnormal physical signs;
a determining module 402, configured to determine first position information of the first user relative to the radar according to the real-time image;
a detecting module 403, configured to detect, if a second user exists within a preset distance range of the first user, second position information of the second user relative to the radar through the radar;
a determining module 402, configured to determine a distance between the first user and the second user according to the first location information and the second location information;
a marking module 404, configured to mark the second user as a suspected user if the distance is smaller than the distance threshold;
the acquisition module 401 is further configured to acquire a face image of a second user, where a suspected user is a user whose physical sign may be abnormal.
In a possible embodiment, the acquisition module 401 is further configured to:
when the face image of the first user matched with the target face image exists in the current acquisition picture, acquiring the target face image from the identification equipment before acquiring the real-time image of the first user, wherein the target face image is sent by the identification equipment after the user with abnormal physical signs is determined.
In a possible embodiment, the determining module 402 is specifically configured to:
determining a first position point of a face area of a first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking a camera as a reference point;
and obtaining a second position point of the first position point in a second coordinate system according to a coordinate conversion relation between the first coordinate system and the second coordinate system so as to determine first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
In a possible embodiment, the second location point comprises a first distance, a first elevation angle and a first azimuth angle between the first user and the radar, and the detection module 403 is specifically configured to:
a second distance, a second elevation angle, and a second azimuth angle between a second user and the radar are detected by the radar.
In a possible embodiment, the determining module 402 is specifically configured to:
obtaining an included angle between the second position point and a third position point corresponding to the second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in a second coordinate system;
and determining the distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle and the included angle.
In a possible embodiment, the marking module 404 is specifically configured to:
when the second users include a plurality of second users, if the distance corresponding to at least one second user in the plurality of second users is smaller than the distance threshold, each second user in the at least one second user is marked as a suspected user.
In one possible embodiment, the tagging module 404 is further configured to:
after the second user is marked as a suspected user because the distance is smaller than the distance threshold, mark the second user as an abnormal user if the physical signs of the second user are determined to be abnormal.
Based on the same inventive concept, an object detection apparatus is provided in the embodiments of the present application, referring to fig. 5, the apparatus is equivalent to the object detection system 110 discussed above, and the apparatus includes:
at least one processor 501, and
a memory 502 communicatively coupled to the at least one processor 501;
wherein the memory 502 stores instructions executable by the at least one processor 501, the at least one processor 501 implementing the object detection method as discussed above by executing the instructions stored by the memory 502.
The processor 501 may be a Central Processing Unit (CPU), or a combination of one or more of a digital processing unit, an image processor, and the like. The memory 502 may be a volatile memory, such as a random-access memory (RAM); the memory 502 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 502 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 502 may also be a combination of the above memories.
As an example, the processor 501 in fig. 5 may implement the object detection method discussed above, and the processor 501 may also implement the function of the object detection apparatus discussed above in fig. 4.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the object detection method as discussed above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An object detection method is applied to an object detection system, wherein the object detection system comprises a radar and a camera, and the method comprises the following steps:
if a face image of a first user matched with a target face image exists in a current acquisition picture, acquiring a real-time image of the first user, wherein the user corresponding to the target face image is a user marked as having abnormal physical signs;
determining first position information of the first user relative to the radar according to the real-time image;
if a second user exists in the preset distance range of the first user, detecting second position information of the second user relative to the radar through the radar;
determining a distance between the first user and the second user according to the first position information and the second position information;
if the distance is smaller than the distance threshold value, the second user is marked as a suspected user, and a face image of the second user is acquired through the camera, wherein the suspected user is a user with possibly abnormal physical signs.
2. The method of claim 1, wherein when the face image of the first user matching the target face image exists in the current captured picture, before capturing the real-time image of the first user, further comprising:
acquiring a target face image from a recognition device, wherein the target face image is transmitted by the recognition device after determining a user with abnormal physical signs.
3. The method of claim 1 or 2, wherein determining first position information of the first user relative to the radar from the real-time image comprises:
determining a first position point of the face area of the first user in a first coordinate system, wherein the first coordinate system is a coordinate system established by taking the camera as a reference point;
and according to a coordinate conversion relation between the first coordinate system and a second coordinate system, obtaining a second position point of the first position point in the second coordinate system to determine first position information of the first user relative to the radar, wherein the second coordinate system is a coordinate system established by taking the radar as a reference point.
4. The method of claim 3, wherein the second location point comprises a first distance, a first elevation angle, and a first azimuth angle between the first user and the radar; detecting, by the radar, second location information of the second user relative to the radar, including:
detecting, by the radar, a second distance, a second elevation angle, and a second azimuth angle between the second user and the radar;
determining a distance between the first user and the second user according to the first location information and the second location information, including:
obtaining an included angle between the second position point and a third position point corresponding to a second user according to the first azimuth angle and the second azimuth angle, wherein the third position point is a position point of the second user in the second coordinate system;
determining a distance between the first user and the second user according to the first distance, the second distance, the first elevation angle, the second elevation angle, and the included angle.
5. The method of claim 1, wherein when the second user includes a plurality of users, if the distance is determined to be less than a distance threshold, then marking the second user as a suspected user comprises:
and if the distance corresponding to at least one second user in the plurality of second users is smaller than the distance threshold, marking each second user in the at least one second user as a suspected user.
6. The method of claim 1, 2 or 5, wherein after marking the second user as a suspected user if the distance is determined to be less than a distance threshold, further comprising:
and if the physical sign of the second user is determined to be abnormal, marking the second user as an abnormal user.
7. An object detection device, wherein the object detection device is disposed in an object detection system, the object detection system comprises a radar and a camera, and the object detection device comprises:
an acquisition module, used for acquiring a real-time image of a first user if a face image of the first user matched with a target face image exists in a current acquisition picture, wherein the user corresponding to the target face image is a user marked as having abnormal physical signs;
a determining module for determining first position information of the first user relative to the radar according to the real-time image;
the detection module is used for detecting second position information of a second user relative to the radar through the radar if the second user exists in the preset distance range of the first user;
the determining module is further configured to determine a distance between the first user and the second user according to the first location information and the second location information;
a marking module, configured to mark the second user as a suspected user if it is determined that the distance is smaller than a distance threshold;
the acquisition module is further configured to acquire a face image of the second user, where the suspected user is a user whose physical signs may be abnormal.
8. The apparatus of claim 7, wherein the acquisition module is further to:
when a face image of a first user matched with a target face image exists in a current acquisition picture, acquiring the target face image from identification equipment before acquiring a real-time image of the first user, wherein the target face image is sent by the identification equipment after determining a user with abnormal physical signs.
9. A computer device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any one of claims 1-6 by executing the instructions stored by the memory.
10. A computer-readable storage medium having stored thereon computer instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-6.
CN202110095548.9A 2021-01-25 2021-01-25 Target detection method, device, equipment and medium Active CN112883809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110095548.9A CN112883809B (en) 2021-01-25 2021-01-25 Target detection method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN112883809A true CN112883809A (en) 2021-06-01
CN112883809B CN112883809B (en) 2024-02-23

Family

ID=76050839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110095548.9A Active CN112883809B (en) 2021-01-25 2021-01-25 Target detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112883809B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327497A (en) * 2022-08-12 2022-11-11 南京慧尔视软件科技有限公司 Radar detection range determining method and device, electronic equipment and readable medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318217A (en) * 2014-10-28 2015-01-28 吴建忠 Face recognition information analysis method and system based on distributed cloud computing
KR20170114045A (en) * 2016-03-31 2017-10-13 주식회사 아이유플러스 Apparatus and method for tracking trajectory of target using image sensor and radar sensor
CN110726974A (en) * 2019-10-17 2020-01-24 北京邮电大学 Radar detection method and device based on radar communication integration
CN111239728A (en) * 2020-02-26 2020-06-05 深圳雷研技术有限公司 Passenger counting method and system based on millimeter wave radar
CN111612814A (en) * 2020-02-04 2020-09-01 北京旷视科技有限公司 Method, device and electronic system for identifying and tracking heat-generating personnel
CN111638496A (en) * 2020-06-08 2020-09-08 上海眼控科技股份有限公司 Radar echo data processing method, computer device, and medium
CN111680583A (en) * 2020-05-25 2020-09-18 浙江大华技术股份有限公司 Method, system, computer device and readable storage medium for crowd marking
CN111913177A (en) * 2020-08-11 2020-11-10 中国原子能科学研究院 Method and device for detecting target object and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327497A (en) * 2022-08-12 2022-11-11 南京慧尔视软件科技有限公司 Radar detection range determining method and device, electronic equipment and readable medium
CN115327497B (en) * 2022-08-12 2023-10-10 南京慧尔视软件科技有限公司 Radar detection range determining method, radar detection range determining device, electronic equipment and readable medium

Also Published As

Publication number Publication date
CN112883809B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN109977770B (en) Automatic tracking shooting method, device, system and storage medium
CN108111818B (en) Moving target actively perceive method and apparatus based on multiple-camera collaboration
JP6448223B2 (en) Image recognition system, image recognition apparatus, image recognition method, and computer program
US11024052B2 (en) Stereo camera and height acquisition method thereof and height acquisition system
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
TW201727537A (en) Face recognition system and face recognition method
WO2017215351A1 (en) Method and apparatus for adjusting recognition range of photographing apparatus
CN109905641B (en) Target monitoring method, device, equipment and system
WO2016070300A1 (en) System and method for detecting genuine user
CN108875507B (en) Pedestrian tracking method, apparatus, system, and computer-readable storage medium
US10915737B2 (en) 3D polarimetric face recognition system
CN111598865B (en) Hand-foot-mouth disease detection method, device and system based on thermal infrared and RGB double-shooting
JP7484985B2 (en) Authentication system, authentication method, and program
CN111670456B (en) Information processing apparatus, tracking method, and recording medium
JP2017049676A (en) Posture discrimination device and object detection device
US11015929B2 (en) Positioning method and apparatus
US9258491B2 (en) Imaging method and imaging device
CN106960027B (en) The UAV Video big data multidate association analysis method of spatial information auxiliary
CN112883809B (en) Target detection method, device, equipment and medium
CN111062313A (en) Image identification method, image identification device, monitoring system and storage medium
JP5047658B2 (en) Camera device
CN114463663A (en) Method and device for calculating height of person, electronic equipment and storage medium
CN115880643B (en) Social distance monitoring method and device based on target detection algorithm
Hanna et al. A System for Non-Intrusive Human Iris Acquisition and Identification.
CN110956054B (en) Iris image acquisition method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant