WO2021179624A1 - Monitoring method and system, electronic device, and storage medium - Google Patents

Monitoring method and system, electronic device, and storage medium

Info

Publication number
WO2021179624A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
living body
target object
body detection
detection result
Prior art date
Application number
PCT/CN2020/124151
Other languages
French (fr)
Chinese (zh)
Inventor
方志军
李若岱
罗彬
田士民
Original Assignee
深圳市商汤科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市商汤科技有限公司
Priority to JP2021536787A (published as JP2022526207A)
Priority to SG11202106842P (published as SG11202106842PA)
Publication of WO2021179624A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01K MEASURING TEMPERATURE; MEASURING QUANTITY OF HEAT; THERMALLY-SENSITIVE ELEMENTS NOT OTHERWISE PROVIDED FOR
    • G01K 13/00 Thermometers specially adapted for specific purposes
    • G01K 13/20 Clinical contact thermometers for use with humans or animals
    • G01K 13/223 Infrared clinical thermometers, e.g. tympanic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • The present disclosure relates to the field of computer vision technology, and in particular to a monitoring method and system, an electronic device, and a storage medium.
  • Early detection refers to identifying suspected cases as early as possible by means of medical diagnosis and testing, which may include temperature detection.
  • Early isolation refers to reducing the contact between virus-carrying patients and others, and reducing the risk of cross-infection, which can include non-contact temperature testing, wearing masks and other measures.
  • Body temperature can be detected by means of thermometers, thermal imaging, etc., but the examination and detection process is cumbersome, it is difficult to adapt to places with a large flow of people, and it is difficult to verify the identity information of a suspected case when one with fever symptoms is detected.
  • the embodiments of the present disclosure propose a monitoring method and system, electronic equipment, and storage medium.
  • The embodiments of the present disclosure provide a monitoring method, which includes: performing target area recognition on a visible light image of a monitoring area to obtain a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located; performing living body detection on the target object according to size information of the first area and a third area corresponding to the first area in an infrared image of the monitoring area, to obtain a living body detection result; in a case where the living body detection result is a living body, performing identity recognition processing on the first area to obtain identity information of the target object, and performing temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain temperature information of the target object; and in a case where the temperature information is greater than or equal to a preset temperature threshold, generating first warning information according to the identity information of the target object.
  • In this way, living body detection and temperature detection can be performed based on infrared images, so the body temperature of the target object in the image can be detected quickly, the efficiency of body temperature detection is improved, and the method is suitable for places with a large flow of people.
  • The identity information of the target object can be recognized from the visible light image, which helps determine the identity of a suspected case and improves the efficiency of epidemic prevention.
  • Performing living body detection on the target object to obtain the living body detection result includes: determining the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first area; determining the living body detection strategy according to the distance; determining the position of the third area in the infrared image according to the position of the first area in the visible light image; and performing living body detection processing on the third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
  • the living body detection strategy can be determined by the size of the first region, and the living body detection can be performed based on the living body detection strategy, and isothermal analysis can be used to improve the living body detection accuracy in the far-infrared image.
  • Determining the living body detection strategy according to the distance includes one of the following: when the distance is greater than or equal to a first distance threshold, determining the living body detection strategy as a living body detection strategy based on human body shape; when the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determining the living body detection strategy as a living body detection strategy based on head and neck shape; and when the distance is less than the second distance threshold, determining the living body detection strategy as a living body detection strategy based on facial morphology.
  • Performing living body detection processing on the third area according to the living body detection strategy to obtain the living body detection result includes: performing morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result; in a case where the morphology detection result is a living body shape, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determining a first weight of the morphology detection result and a second weight of the isothermal analysis result according to the living body detection strategy; and determining the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
  • The method further includes: in a case where the living body detection result is a living body, detecting whether a preset target is included in the first area to obtain a first detection result, wherein the preset target includes an object that occludes a partial area of the face.
  • Performing identity recognition processing on the first area to obtain the identity information of the target object includes: in a case where the living body detection result is a living body, performing identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
  • the identification method can be selected according to whether there is a preset target, and the accuracy of identification can be improved.
  • Detecting whether a preset target is included in the first area to obtain the first detection result includes: performing detection processing on the face area in the first area to determine a feature missing result of the face area; and in a case where the feature missing result is a preset feature missing, detecting whether the face area includes a preset target to obtain the first detection result.
  • Performing identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object includes one of the following: in a case where the first detection result is that the preset target does not exist, performing first identity recognition processing on the face area in the first area to obtain the identity information of the target object; in a case where the first detection result is that the preset target exists, performing second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded area of the face in the second identity recognition processing is greater than the weight of the features of the corresponding area in the first identity recognition processing.
  • the method further includes: in a case where the first detection result is that the preset target does not exist, generating second early warning information.
  • the method further includes: acquiring gender information of the target object; and determining the preset temperature threshold according to the gender information.
  • The method further includes: superimposing the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
  • The embodiments of the present disclosure also provide a monitoring system, including a visible light image acquisition part, an infrared image acquisition part, and a processing part, the processing part being configured to: perform target area recognition on the visible light image acquired by the visible light image acquisition part to obtain the first area where the target object is located in the visible light image and the second area where the preset part of the target object is located; perform living body detection on the target object according to the size information of the first area and the third area corresponding to the first area in the infrared image acquired by the infrared image acquisition part, to obtain the living body detection result; in a case where the living body detection result is a living body, perform identity recognition processing on the first area to obtain the identity information of the target object, and perform temperature detection processing on the fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object; and in a case where the temperature information is greater than or equal to the preset temperature threshold, generate the first warning information according to the identity information of the target object.
  • The processing part is further configured to: determine the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first area; determine the living body detection strategy according to the distance; determine the position of the third area in the infrared image according to the position of the first area in the visible light image; and perform living body detection processing on the third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
  • The processing part is further configured to: in a case where the distance is greater than or equal to the first distance threshold, determine the living body detection strategy as a living body detection strategy based on human body shape; or in a case where the distance is greater than or equal to the second distance threshold and less than the first distance threshold, determine the living body detection strategy as a living body detection strategy based on head and neck shape; or in a case where the distance is less than the second distance threshold, determine the living body detection strategy as a living body detection strategy based on facial morphology.
  • The processing part is further configured to: perform morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result; in a case where the morphology detection result is a living body shape, perform isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determine the first weight of the morphology detection result and the second weight of the isothermal analysis result according to the living body detection strategy; and determine the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
  • The processing part is further configured to: in a case where the living body detection result is a living body, detect whether a preset target is included in the first area to obtain the first detection result, wherein the preset target includes an object that occludes a partial area of the face; and in a case where the living body detection result is a living body, perform identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
  • The processing part is further configured to: perform detection processing on the face area in the first area to determine the feature missing result of the face area; and in a case where the feature missing result is a preset feature missing, detect whether a preset target is included in the face area to obtain the first detection result.
  • The processing part is further configured to: in a case where the first detection result is that the preset target does not exist, perform first identity recognition processing on the face area in the first area to obtain the identity information of the target object; or in a case where the first detection result is that the preset target exists, perform second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded area of the face in the second identity recognition processing is greater than the weight of the features of the corresponding area in the first identity recognition processing.
  • The processing part is further configured to: superimpose the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
  • the processing part is further configured to: obtain gender information of the target object; and determine the preset temperature threshold according to the gender information.
  • the processing part is further configured to generate second warning information when the first detection result is that the preset target does not exist.
  • an electronic device including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to execute the above-mentioned monitoring method.
  • a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the foregoing monitoring method is implemented.
  • The embodiments of the present disclosure also provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the above-mentioned monitoring method.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of living body detection according to an embodiment of the present disclosure
  • Fig. 3 shows a schematic diagram of identity recognition processing according to an embodiment of the present disclosure
  • FIGS. 4A and 4B show schematic diagrams of application of a monitoring method according to an embodiment of the present disclosure
  • Figure 5 shows a block diagram of a monitoring system according to an embodiment of the present disclosure
  • FIG. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure
  • FIG. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
  • Step S11: perform target area recognition on the visible light image of the monitoring area to obtain the first area where the target object is located in the visible light image and the second area where the preset part of the target object is located in the visible light image;
  • Step S12: perform living body detection on the target object according to the size information of the first area and the third area corresponding to the first area in the infrared image of the monitoring area, to obtain a living body detection result;
  • Step S13: in a case where the living body detection result is a living body, perform identity recognition processing on the first area to obtain the identity information of the target object, and perform temperature detection processing on the fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object;
  • Step S14: in a case where the temperature information is greater than or equal to a preset temperature threshold, generate first warning information according to the identity information of the target object.
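  • As a rough illustration only (not the claimed implementation), the Python sketch below shows how steps S11 to S14 might be chained for one frame pair; all detector callables (detect_regions, map_to_infrared, is_live, identify, read_temperature) and the default threshold of 37.3 are hypothetical placeholders.

```python
# Minimal sketch of the S11-S14 flow; the detector callables are hypothetical
# stand-ins for the detection models described in the disclosure.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixels

@dataclass
class Detection:
    person_box: Box    # first area: where the target object is in the visible image
    forehead_box: Box  # second area: preset part (e.g. forehead) in the visible image

def monitor_frame(
    visible_img,
    infrared_img,
    detect_regions: Callable,    # S11: visible image -> List[Detection]
    map_to_infrared: Callable,   # visible-image box -> infrared-image box
    is_live: Callable,           # infrared image + third area + first-area box -> bool
    identify: Callable,          # visible image + first area -> identity string
    read_temperature: Callable,  # infrared image + fourth area -> temperature in deg C
    temp_threshold: float = 37.3,
) -> List[Tuple[str, float]]:
    """Return (identity, temperature) warnings for one visible/infrared frame pair."""
    warnings = []
    for det in detect_regions(visible_img):                        # S11
        third_area = map_to_infrared(det.person_box)
        if not is_live(infrared_img, third_area, det.person_box):  # S12
            continue                                               # prosthesis or non-human: skip
        identity = identify(visible_img, det.person_box)           # S13: identity
        fourth_area = map_to_infrared(det.forehead_box)
        temperature = read_temperature(infrared_img, fourth_area)  # S13: temperature
        if temperature >= temp_threshold:                          # S14
            warnings.append((identity, temperature))
    return warnings
```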
  • In this way, living body detection and temperature detection can be performed based on infrared images, so the body temperature of the target object in the image can be detected quickly, the efficiency of body temperature detection is improved, and the method is suitable for places with a large flow of people.
  • The identity information of the target object can be recognized from the visible light image, which helps determine the identity of a suspected case and improves the efficiency of epidemic prevention.
  • The monitoring method may be executed by a terminal device or other processing equipment, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • Other processing equipment can be servers or cloud servers.
  • the monitoring method can be implemented by a processor invoking computer-readable instructions stored in the memory.
  • the monitoring method can be used in front-end monitoring equipment.
  • the front-end monitoring equipment may be a device that integrates a processor, a camera, and other parts into a whole.
  • The processor can control the camera and other parts to acquire images of the monitoring area, and the monitoring method is executed by the processor.
  • The monitoring method can also be used in a server (for example, the processor that executes the monitoring method is located in the server, and the processor and the camera are not packaged as a whole but are distributed between the monitoring area and the back end to form the monitoring system); the server can receive images captured by the camera in the monitoring area and execute the monitoring method on those images.
  • the processor may be a System on Chip (SoC)
  • the camera may include an infrared camera for acquiring infrared images and a camera for acquiring visible light images.
  • Special sensors, such as vanadium oxide uncooled infrared focal plane detectors, can be used to capture spatial light signals; such sensors can be used in far-infrared cameras to capture the light of life and perform temperature monitoring.
  • the camera can capture a video of the monitored area, and the video frames of the video are the visible light image and the infrared image.
  • the infrared camera and the visible light camera can be arranged in close positions, for example, the infrared camera and the visible light camera can be arranged adjacently, side by side, etc., or the two cameras can be arranged as a whole, etc.
  • The embodiment of the present disclosure does not limit the arrangement. In the images captured by the two cameras at the same time, the position of each target object is therefore similar, that is, the position deviation of the same target object between the visible light image and the corresponding infrared image is small, and this deviation can be corrected based on the positional relationship between the two cameras.
  • In this way, the position of the third area where the target object is located in the infrared image can be determined according to the position of the first area in the visible light image (that is, the area where the target object is located).
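  • One possible way to realize this mapping, sketched below under the assumption that the deviation between the two cameras can be described by a fixed homography calibrated in advance, is to transform the corners of the first area into infrared-image coordinates; the matrix H used here is a made-up placeholder.

```python
import numpy as np

# Hypothetical 3x3 homography mapping visible-image pixel coordinates to
# infrared-image pixel coordinates; in practice it would be calibrated once
# from the fixed positional relationship between the two cameras.
H = np.array([[0.98, 0.00, 12.0],
              [0.00, 0.98, -8.0],
              [0.00, 0.00,  1.0]])

def visible_box_to_infrared(box, H=H):
    """Map an axis-aligned box (x, y, w, h) from the visible image to the infrared image."""
    x, y, w, h = box
    corners = np.array([[x, y, 1], [x + w, y, 1],
                        [x, y + h, 1], [x + w, y + h, 1]], dtype=float)
    mapped = corners @ H.T
    mapped = mapped[:, :2] / mapped[:, 2:3]   # divide by the homogeneous coordinate
    x0, y0 = mapped.min(axis=0)
    x1, y1 = mapped.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)

# Example: the third area corresponding to a 120x160 face box at (300, 200).
print(visible_box_to_infrared((300, 200, 120, 160)))
```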
  • The image captured by the camera may have poor image quality; for example, the image may be blurred or out of focus, or the camera may be contaminated or blocked.
  • the image quality can be checked first.
  • the image quality of the visible light image can be detected, for example, the texture and the definition of the boundary of the visible light image can be detected.
  • If the sharpness is greater than or equal to a sharpness threshold, the image quality can be considered good, and the subsequent position detection and other processing can be performed. If the sharpness is less than the sharpness threshold, the image quality can be considered poor, and the visible light image and the infrared image corresponding to it can both be deleted.
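  • The disclosure does not specify how sharpness is measured; a common heuristic, shown below as an assumption rather than the claimed method, is the variance of the Laplacian of the grayscale image compared against a tuned threshold.

```python
import cv2

SHARPNESS_THRESHOLD = 100.0  # assumed value; tune for the actual camera and scene

def frame_pair_is_usable(visible_bgr, sharpness_threshold=SHARPNESS_THRESHOLD):
    """Return True if the visible frame looks sharp enough to keep (with its infrared frame)."""
    gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance suggests blur or a blocked lens
    return sharpness >= sharpness_threshold
```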
  • the visible light image can be identified by the target area to obtain the first area where the target object is located.
  • The first area may be the face area and/or the human body area of the target object. This processing can obtain the position coordinates of the first area where the target object is located, or the position coordinates of a detection frame containing the first area.
  • the temperature of the forehead and other areas can be measured.
  • The temperature of the forehead area can be determined from the pixel values of the infrared image, and the body temperature of the target object can then be obtained. Therefore, the second area where the preset part (for example, the forehead) is located can be detected in the visible light image, the fourth area of the preset part in the infrared image can then be determined, and the correspondence between the pixel values of the infrared image and temperature can be established in advance.
  • the body temperature of the target object can be determined according to the pixel value in the fourth area.
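  • As an illustrative sketch of this pre-established pixel-value-to-temperature correspondence, the snippet below assumes a simple two-point linear calibration and reads a robust statistic over the fourth area; the calibration constants are invented placeholders.

```python
import numpy as np

# Hypothetical two-point calibration: raw pixel values P1/P2 observed for
# reference sources at known temperatures T1/T2.
P1, T1 = 7800, 30.0   # placeholder calibration pair
P2, T2 = 9200, 40.0   # placeholder calibration pair

def pixel_to_celsius(pixel_value: float) -> float:
    """Linear map from a raw infrared pixel value to degrees Celsius."""
    return T1 + (pixel_value - P1) * (T2 - T1) / (P2 - P1)

def forehead_temperature(infrared_img: np.ndarray, fourth_area) -> float:
    """Estimate body temperature from the fourth area (forehead) of the infrared image."""
    x, y, w, h = fourth_area
    patch = infrared_img[y:y + h, x:x + w].astype(float)
    # Use a high percentile rather than the max to resist hot-pixel noise.
    return pixel_to_celsius(np.percentile(patch, 95))
```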
  • a living body is a biologically active, real biological modality exhibited by a human body with vital characteristics.
  • the prosthesis is a model that is similar to the living body (for example, photos, masks, headgear, etc.) made by imitating the biological characteristics of the living body.
  • a near-infrared image of a human face is used for living body detection. It is mainly based on the difference in imaging characteristics between the living body and the prosthesis under near-infrared light, such as the difference in optical flow, texture, color and other characteristics, so as to realize the distinction between the living body and the prosthesis.
  • The infrared image obtained by the far-infrared camera is a far-infrared image (i.e., an image based on the light of life), not the commonly used near-infrared image. Therefore, the living body detection strategy can be determined according to the size of the first area where the target object is located, which improves the accuracy of living body detection.
  • Step S12 may include: determining the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first area; determining the living body detection strategy according to the distance; determining the position of the third area in the infrared image according to the position of the first area in the visible light image; and performing living body detection processing on the third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
  • the first area is the area where the target object in the visible light image is located.
  • The first area may be, for example, the face area of the target object. The size of the first area is negatively correlated with the distance between the target object and the image acquisition device, that is, the larger the area, the smaller the distance, and the smaller the area, the greater the distance.
  • a living body is characterized in an infrared image with unique vital characteristics (for example, a far-infrared image based on the light of life).
  • The blood vessels of the human body are distributed over the whole body (for example, the torso, the face, the neck and shoulders, etc.). When the target object is close to the camera, only part of the body appears in the visible light image, for example only the face or the head and neck, but the facial features are clearer; the farther the distance, the fewer the facial features and the more the human body features. Combining different living body detection strategies for different distances improves the accuracy of living body detection and makes the method suitable for use at different distances.
  • A correspondence between a reference size in the image and a reference distance can be established in advance, and the distance between the target object and the camera can be determined from the reference distance and the proportional relationship between the size of the area where the target object is located and the reference size.
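  • A minimal sketch of this size-to-distance estimation, assuming a simple inverse proportionality between apparent width and distance; the reference width and reference distance are invented placeholders.

```python
# Assumed inverse proportionality between apparent size and distance: a face of
# reference width REF_WIDTH_PX is observed when the subject stands at REF_DISTANCE_M.
REF_WIDTH_PX = 160.0   # placeholder reference face width in pixels
REF_DISTANCE_M = 1.0   # placeholder distance at which that width is observed

def estimate_distance(first_area_width_px: float) -> float:
    """Estimate subject-to-camera distance from the width of the first area."""
    return REF_DISTANCE_M * REF_WIDTH_PX / first_area_width_px

# Example: a face half the reference width is roughly twice as far away.
print(estimate_distance(80.0))  # -> 2.0
```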
  • Fig. 2 shows a schematic diagram of living body detection according to an embodiment of the present disclosure.
  • the distance between the target object and the camera can be determined according to the size information of the first area, and the living body detection strategy can be selected according to the distance.
  • the corresponding third area in the infrared image can be determined according to the first area in the visible light image.
  • The first area is the face area of target object A in the visible light image (for example, target object A is relatively close to the camera, and only the face area of target object A can be captured), and the third area is the face area of target object A in the infrared image.
  • The first area is the head and neck area of target object B in the visible light image (for example, the face, shoulder and neck areas of target object B, or the upper body area, can be photographed), and the third area is the head and neck area of target object B in the infrared image.
  • the first area is the human body area of the target object C in the visible light image (for example, the target object C is far away from the camera, and the human body area of the target object C can be photographed), and the third area is the human body area of the target object C in the infrared image .
  • Determining the living body detection strategy according to the distance of the target object includes one of the following: in a case where the distance is greater than or equal to a first distance threshold, determining the living body detection strategy as a living body detection strategy based on human body shape; in a case where the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determining the living body detection strategy as a living body detection strategy based on head and neck shape; in a case where the distance is less than the second distance threshold, determining the living body detection strategy as a living body detection strategy based on facial morphology.
  • The first distance threshold and the second distance threshold may be determined according to parameters such as the range of the camera's detection area and its focal length. For example, when the distance between the target object and the camera is greater than or equal to the first distance threshold, the picture captured by the camera can include the whole body, or most of the body, of the target object, and a living body detection strategy based on human body shape can be selected.
  • When the distance is greater than or equal to the second distance threshold and less than the first distance threshold, the picture captured by the camera cannot include the whole body of the target object but may include the face, shoulder and neck areas, or the upper body area, and a living body detection strategy based on head and neck shape can be selected.
  • When the distance is less than the second distance threshold, the picture captured by the camera may only include the face area of the target object, and a living body detection strategy based on facial morphology can be selected.
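  • The threshold logic described above might look like the following sketch; the two threshold values are placeholders, since the disclosure ties them to the camera's detection range and focal length.

```python
from enum import Enum, auto

class LivenessStrategy(Enum):
    HUMAN_BODY = auto()  # whole body visible: morphology of the full figure
    HEAD_NECK = auto()   # upper body visible: head-and-neck morphology
    FACE = auto()        # only the face visible: facial morphology plus isotherms

# Placeholder thresholds; in the disclosure they depend on the camera's field of
# view and focal length.
FIRST_DISTANCE_THRESHOLD_M = 3.0
SECOND_DISTANCE_THRESHOLD_M = 1.0

def choose_strategy(distance_m: float) -> LivenessStrategy:
    """Pick a liveness detection strategy from the estimated subject distance."""
    if distance_m >= FIRST_DISTANCE_THRESHOLD_M:
        return LivenessStrategy.HUMAN_BODY
    if distance_m >= SECOND_DISTANCE_THRESHOLD_M:
        return LivenessStrategy.HEAD_NECK
    return LivenessStrategy.FACE
```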
  • the position of the third area is determined in the infrared image according to the position of the first area in the visible light image.
  • The visible light image may include one or more target objects, that is, one or more first areas. The position of the third area in the infrared image can be determined according to the positional relationship between the infrared camera and the visible light camera and the position of the first area.
  • The infrared camera acquires far-infrared images used to detect the light of life rather than near-infrared images used to detect image features, so directly detecting the third area of the target object in the far-infrared image may yield a low accuracy rate.
  • the position of the third area corresponding to the first area can be obtained according to the position of the first area in the visible light image and the position relationship between the two cameras.
  • the position of the corresponding fourth area in the infrared image can be obtained through the position of the second area in the visible light image.
  • the selected living body detection strategy can be used to perform the living body detection processing on the third region in the infrared image.
  • performing living body detection processing on the third area to obtain the living body detection result includes: performing morphological detection processing on the third area according to the living body detection strategy to obtain a morphological detection result;
  • In a case where the morphological detection result is a living body shape, isothermal analysis processing is performed on the face area in the third area to obtain the isothermal analysis result;
  • the first weight of the morphological detection result and the second weight of the isothermal analysis result are determined according to the living body detection strategy; and the living body detection result is determined according to the first weight, the second weight, the morphological detection result, and the isothermal analysis result.
  • the morphological detection of the third region may be performed based on the living body detection strategy.
  • In the living body detection strategy based on human body shape, the third area may include the face area and the human body area of the target object. Feature extraction processing can obtain human body feature quantities for shape detection processing, to determine whether the human body shape of the target object is a living body shape (for example, this can be judged according to the posture, movement, and shape characteristics of the target object).
  • In the living body detection strategy based on head and neck shape, the third area may include the face, shoulder and neck areas of the target object, or the upper body area, and feature extraction processing may be performed through neural networks or other methods to obtain feature quantities for shape detection.
  • In the living body detection strategy based on facial morphology, the third area may include the face area of the target object, with fewer human body features and more facial features. Feature extraction processing can be performed through neural networks or other methods to obtain feature quantities of the face area for shape detection processing, to determine whether the facial shape of the target object is a living body shape (for example, this can be judged based on the facial expression, texture, and other characteristics of the target object).
  • the shape detection result of the above-mentioned shape detection processing may be a result in the form of a score.
  • For example, if the score is greater than or equal to a shape score threshold, the morphology detection result can be considered a living body shape.
  • the embodiment of the present disclosure does not limit the form of the morphological detection result.
  • isothermal analysis processing is performed on the face region in the third region to obtain an isothermal analysis result.
  • isothermal analysis can be used to determine whether the face area is a living body.
  • The real face of the target object has a uniform distribution of blood vessels, whereas a mask, headgear, or similar prosthesis worn by the target object has no blood vessel distribution, so its isotherms differ from those of a real face.
  • the isotherm can be used to determine whether the facial area is alive or not.
  • the result of the isothermal analysis may be a result in the form of a score.
  • For example, if the score is greater than or equal to an isothermal score threshold, the isothermal analysis result can be considered to indicate a living body.
  • the embodiment of the present disclosure does not limit the form of the isothermal analysis result.
  • In different living body detection strategies, the proportion of face-area features differs (for example, in the living body detection strategy based on human body shape, the third area includes the face area and the human body area, so the proportion of the face area is small, whereas in the living body detection strategy based on facial morphology, the third area only includes the face area, so the proportion of the face area is larger).
  • the first weight of the morphological detection result and the second weight of the isothermal analysis result can be determined according to the living body detection strategy.
  • In the living body detection strategy based on human body shape, the second weight of the isothermal analysis result of the face area may be smaller, and the first weight of the morphology detection result may be larger.
  • In the living body detection strategy based on facial morphology, the second weight of the isothermal analysis result of the face area may be larger, and the first weight of the morphology detection result may be smaller.
  • In the living body detection strategy based on head and neck shape, the second weight and the first weight may be close or equal. The embodiments of the present disclosure do not limit the values of the first weight and the second weight.
  • the morphological detection result and the isothermal analysis result may be weighted and summed according to the first weight and the second weight, respectively, to obtain the living body detection result.
  • the live body detection result can be a result in the form of a score.
  • If the living body detection result is greater than or equal to a living body score threshold, the target object can be considered a living body; if the living body detection result is less than the living body score threshold, the target object can be considered a prosthesis or a non-human body, and no subsequent processing is required.
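  • A minimal sketch of the weighted fusion described above; the per-strategy weights and the living body score threshold are assumed values that only follow the stated tendencies (morphology weighted more at long range, isothermal analysis weighted more at close range).

```python
# Placeholder weights: (first weight for the morphology score, second weight for
# the isothermal score). The closer the subject, the more the face isotherms count.
STRATEGY_WEIGHTS = {
    "human_body": (0.7, 0.3),
    "head_neck":  (0.5, 0.5),
    "face":       (0.3, 0.7),
}
LIVE_SCORE_THRESHOLD = 0.6  # assumed threshold

def fuse_liveness(strategy: str, morphology_score: float, isothermal_score: float) -> bool:
    """Weighted sum of the two scores; True means the target is treated as a living body."""
    w1, w2 = STRATEGY_WEIGHTS[strategy]
    live_score = w1 * morphology_score + w2 * isothermal_score
    return live_score >= LIVE_SCORE_THRESHOLD
```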
  • the living body detection strategy can be determined by the size of the first region, and the living body detection can be performed based on the living body detection strategy, and isothermal analysis can be used to improve the living body detection accuracy in the far-infrared image.
  • In step S13, in a case where the target object is a living body, identity recognition processing may be performed on the target object.
  • In medical and health places such as hospitals and operating rooms, protective equipment such as masks and goggles causes a certain degree of occlusion of the facial area. Therefore, it is necessary to detect whether the target object is wearing a mask or another preset item, and to determine the identity recognition method according to the detection result.
  • Fig. 3 shows a schematic diagram of identity recognition processing according to an embodiment of the present disclosure.
  • The method further includes: in a case where the living body detection result is a living body, detecting whether a preset target is included in the first area to obtain the first detection result, wherein the preset target includes an object that occludes a partial area of the face.
  • The first detection result indicates whether the target object is wearing a mask or another object capable of shielding the facial area.
  • identity recognition may be performed according to the first detection result
  • step S13 may include: in the case that the living body detection result is a living body, according to the first detection result, performing identity recognition processing on the first area to obtain the The identity information of the target object.
  • To detect whether the first area includes a preset target (for example, an item such as a mask), the face area in the first area can be detected and processed through a neural network.
  • A feature map of the face area can be obtained through a convolutional neural network. If the target object wears a mask or another item that can block the face, there will be missing features in the feature map of the face area; for example, if the target object wears a mask, the features of the nose and mouth cannot be detected, that is, the nose and mouth features are missing.
  • If the target object wears sunglasses, the eye features cannot be detected, that is, the eye features are missing.
  • In the absence of preset features (for example, nose and mouth features or eye features), the target object may be wearing a mask, sunglasses, or another item, and further detection may be performed.
  • For example, the face of the target object can be detected by convolutional neural networks or other methods to determine whether a preset target such as a mask is present.
  • The shape and texture of a mask can be detected to identify masks and similar items, and cases where the target object covers the face by hand or is occluded by other people can be excluded.
  • the method further includes: generating second warning information when the first detection result is that the preset target does not exist. Further, a second warning message can also be output through a warning device or a display to remind the target object to wear a mask.
  • the warning device may include an audio device, a warning lamp, etc., and may output the second warning information in a manner of sound or light, or may display information that the target object is not wearing a mask on the display.
  • the embodiment of the present disclosure does not limit the output mode of the second early warning information.
  • the method of identity recognition may also be determined according to the first detection result (that is, whether the target object wears an object that can cover the face).
  • Performing identity recognition processing on the first area to obtain the identity information of the target object includes one of the following: in a case where the first detection result is that the preset target does not exist, performing first identity recognition processing on the face area in the first area to obtain the identity information of the target object; in a case where the first detection result is that the preset target exists, performing second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded area of the face in the second identity recognition processing is greater than the weight of the features of the corresponding area in the first identity recognition processing.
  • a neural network can be used for identity recognition processing, where the neural network used for the first identity recognition processing is different from the neural network used for the second identity recognition processing.
  • In the second identity recognition processing, the features of the unoccluded area have a larger weight, so that the attention mechanism of the neural network is focused on the features of the unoccluded area, which is conducive to using those features for identity recognition.
  • In the first identity recognition processing, all features of the face area can be used, and the neural network can perform identity recognition based on all the features of the face area.
  • In the second identity recognition processing, for example when the target object wears a mask, the neural network can only obtain the features of the eyes and eyebrows (the nose and mouth features are missing); the weights of the eye and eyebrow features can be increased during training so that identification relies on the eye and eyebrow features.
  • In the second identity recognition processing, the neural network can determine the similarity between the eye and eyebrow features and the reference features in the database (reference features of the eyes and eyebrows), and the identity information corresponding to a reference feature whose similarity is greater than or equal to the similarity threshold is determined as the identity information of the target object.
  • In the first identity recognition processing, the neural network can determine the similarity between the facial features and the reference features in the database (face reference features), and determine the identity information corresponding to a reference feature whose similarity is greater than or equal to the similarity threshold as the identity information of the target object.
  • If no matching reference feature is found, the features of the target object can be stored in the database, and an identity (for example, an identity code) can be assigned to the target object.
  • the identification method can be selected according to whether there is a preset target, and the accuracy of identification can be improved.
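  • As an illustration of similarity-based matching with heavier weights on unoccluded regions, the sketch below uses a weighted cosine similarity against reference features; the similarity threshold, the per-dimension weight vector, and the fallback behaviour are assumptions, not the claimed recognition networks.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed threshold per reference feature

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(face_feature: np.ndarray, database: dict, region_weights: np.ndarray) -> str:
    """Match a face feature against reference features, emphasising unoccluded regions.

    database maps identity -> reference feature vector; region_weights is a
    per-dimension weight vector (larger for eye/eyebrow dimensions when a mask is worn).
    """
    weighted = face_feature * region_weights
    best_id, best_sim = "unknown", 0.0
    for identity, ref in database.items():
        sim = cosine(weighted, ref * region_weights)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    if best_sim >= SIMILARITY_THRESHOLD:
        return best_id
    return "unknown"  # per the disclosure, a new identity code could be registered instead
```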
  • the gender of the target object can also be identified.
  • the preset temperature threshold can be set differently according to gender.
  • For example, the average body temperature of women is 0.3 °C higher than that of men, so gender can be used to set the preset temperature threshold, such as 37 °C for men and 37.3 °C for women.
  • the method further includes: acquiring gender information of the target object; and determining the preset temperature threshold according to the gender information.
  • the gender information of the target object can be determined based on the facial features of the target object. For example, when the target object is not wearing a mask, the gender of the target object can be determined based on all the features of the target object’s face. When the target object wears a mask, the gender of the target object can be determined according to the characteristics of the eyes and eyebrows of the target object. Alternatively, in the case where the facial features or eye and eyebrow features of the target object match the reference features in the database, the gender of the target object can be determined according to the gender recorded in the identity information corresponding to the reference feature in the database.
  • the embodiment of the present disclosure does not limit the method for determining gender.
  • the preset temperature threshold may be determined according to gender information. For example, if the average body temperature of women is higher than the average body temperature of men, the preset temperature threshold for women may be higher than the preset temperature threshold for men.
  • the embodiment of the present disclosure does not limit the method for determining the preset temperature threshold.
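  • A minimal sketch of a gender-dependent threshold lookup, using the example values quoted above; the fallback for an unrecognised gender is an assumption.

```python
# Example thresholds from the text: 37 C for men, 37.3 C for women.
TEMPERATURE_THRESHOLDS_C = {"male": 37.0, "female": 37.3}
DEFAULT_THRESHOLD_C = 37.3  # assumed fallback when gender cannot be determined

def preset_threshold(gender: str) -> float:
    """Return the preset temperature threshold for the recognised gender."""
    return TEMPERATURE_THRESHOLDS_C.get(gender, DEFAULT_THRESHOLD_C)
```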
  • Temperature detection processing is also performed on the fourth area in the infrared image (corresponding to the second area where the preset part, for example the forehead, is located in the visible light image) to determine the temperature information (for example, the body temperature) of the target object.
  • the pixel value of the pixel in the infrared image may represent the color temperature value, and the color temperature value may be used to determine the temperature of the forehead area of the target object.
  • the embodiment of the present disclosure does not limit the manner of determining the temperature information.
  • In step S14, when the temperature information is greater than or equal to the preset temperature threshold, first warning information is generated according to the identity information of the target object. For example, during an epidemic, fever is a suspected symptom, and target objects with such symptoms can be monitored with particular attention.
  • the first warning information can be generated according to the identity information of the target object.
  • For example, the identity information and body temperature of the target object can be displayed on the display, so as to focus monitoring on the target object.
  • the first warning message can be issued through audio and other equipment to remind the epidemic prevention personnel to focus on the target object, or to isolate the target object.
  • the embodiment of the present disclosure does not limit the output mode of the first early warning information.
  • The method further includes: superimposing the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object with the visible light image and/or the infrared image to obtain a detection image.
  • the visible light image and/or infrared image can be displayed through the display, and the identity information and/or temperature information of the target object can be displayed at the location of the target object in the visible light image and/or infrared image.
  • the identity information and/or temperature information of the target object may be superimposed on the first area where the target object in the visible light image is located, that is, the identity information and/or temperature information of the target object may be displayed in the first area.
  • the identity information and/or temperature information of the target object can be displayed in the third area of the infrared image.
  • the embodiment of the present disclosure does not limit the display mode.
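  • One way to produce such a detection image, sketched with OpenCV drawing calls; the box format, colour, and font choices are arbitrary illustrative values.

```python
import cv2

def annotate(image, box, identity: str, temperature_c: float):
    """Draw the detection box with identity and temperature onto a copy of the image."""
    x, y, w, h = box
    out = image.copy()
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
    label = f"{identity}  {temperature_c:.1f}C"
    cv2.putText(out, label, (x, max(0, y - 8)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return out
```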
  • In this way, the living body detection strategy can be determined according to the size of the first area, and living body detection can be performed based on that strategy.
  • Isothermal analysis can be used to improve the accuracy of living body detection in far-infrared images, and the identity recognition method can be selected according to whether a preset target is present, improving the accuracy of identification.
  • temperature detection can be performed on preset parts, which can quickly detect the body temperature of the target object in the image, improve the efficiency of body temperature detection, and is suitable for places with a large flow of people.
  • The identity information of the target object can be recognized from the visible light image, which helps determine the identity of a suspected case and improves the efficiency of epidemic prevention.
  • the monitoring method can be used in a system for monitoring the monitoring area.
  • The system may include an infrared camera 1 for obtaining infrared images, a visible light camera 2 for obtaining visible light images, and a processor 3 (for example, a system-on-chip, SoC).
  • The system may also include a supplementary light unit 4 used in environments with poor lighting conditions, a display 5 for displaying visible light images, infrared images and/or warning information, a communication unit 6 for transmitting warning information, temperature information and other information, a warning unit 7 for outputting early warning information, and an interactive unit 8 for inputting instructions, etc.
  • the infrared camera 1 and the visible light camera 2 can capture video frames of the monitored area, for example, visible light image AF (AF1-AF25) and infrared image BF (BF1-BF25).
  • the position correction of the visible light image AF and the infrared image BF can be performed, that is, the position of the third area in the infrared image BF can be determined according to the position of the first area in the visible light image AF.
  • The processor 3 can perform image quality detection on the visible light image AF, for example by detecting the texture and the boundary sharpness of the visible light image, and delete visible light images with poor image quality together with the corresponding infrared images. Further, when there is no target object in the visible light image, the visible light image and the corresponding infrared image are deleted; for example, if there is no person in the monitoring area, the visible light image and the corresponding infrared image can also be deleted.
  • the processor 3 may perform position detection processing on the visible light image AF to obtain the first area where the target object is located, and may also obtain the second area where the forehead of the target object is located. According to the positions of the first area and the second area in the visible light image, the position of the third area where the target object is located in the infrared image and the position of the fourth area where the forehead of the target object is located in the infrared image can be determined.
  • the processor 3 may determine the distance between the target object and the camera according to the size information of the first area, and determine the living body detection strategy based on the distance.
  • When the distance is greater than or equal to the first distance threshold, the living body detection strategy is determined as the living body detection strategy based on human body shape.
  • Shape detection can be performed on the third area in the infrared image to obtain a shape detection score.
  • Isothermal analysis can be performed on the face area in the infrared image to obtain an isothermal analysis score.
  • A larger first weight can be assigned to the shape detection score, and a smaller second weight can be assigned to the isothermal analysis score; the living body detection result is obtained after the weighted summation.
  • When the distance is greater than or equal to the second distance threshold and less than the first distance threshold, the living body detection strategy is determined as the living body detection strategy based on head and neck shape.
  • Shape detection can be performed on the third area in the infrared image to obtain a shape detection score.
  • Isothermal analysis can be performed on the face area in the infrared image to obtain an isothermal analysis score.
  • Similar weights can be assigned to the shape detection score and the isothermal analysis score; the living body detection result is obtained after the weighted summation.
  • When the distance is less than the second distance threshold, the living body detection strategy is determined as the living body detection strategy based on facial morphology.
  • Shape detection can be performed on the third area in the infrared image to obtain a shape detection score.
  • Isothermal analysis can be performed on the face area in the infrared image to obtain an isothermal analysis score.
  • A smaller first weight can be assigned to the shape detection score, and a larger second weight can be assigned to the isothermal analysis score; the living body detection result is obtained after the weighted summation.
  • The processor 3 can determine the identity recognition method according to whether the target object is wearing a mask. For example, if the target object is not wearing a mask, all the facial features of the target object can be compared with the reference features in the database to determine the identity information of the target object. When the target object wears a mask, the eye and eyebrow features of the target object can be compared with the reference features in the database to determine the identity information. Further, the second early warning information may also include the identity information of the target object; for example, the display 5 may show that XXX (name) is not wearing a mask.
  • the processor 3 may recognize the gender of the target object, for example, may recognize the gender of the target object according to the facial features of the target object, or obtain the gender of the target object according to the identity information of the target object.
  • the preset temperature threshold of the male target and the preset temperature threshold of the female target can be determined respectively according to the gender of the target object.
  • the preset temperature threshold may also be input through the interaction unit 8. The embodiment of the present disclosure does not limit the setting method of the preset temperature threshold.
  • the temperature of the fourth area may be monitored to obtain the body temperature of the target object.
  • If the body temperature of the target object is greater than or equal to the preset temperature threshold, the processor 3 may generate the first warning information.
  • For example, the first warning information can be displayed through the display 5, or played through the warning unit 7.
  • The identity information and body temperature of the target object, together with the position information of the camera (i.e., the location where the feverish target object appears), can be transmitted to a background database or server through the communication unit 6, so as to facilitate the tracking of suspected cases and the determination of their trajectories.
  • The identity information and body temperature of the target object can be superimposed on the first area in the visible light image or the third area in the infrared image and displayed on the display 5, so that the temperature information and identity information of each target object in the monitoring area can be observed visually; the superimposed image can also be stored.
  • the monitoring method can use visible light images and infrared images to perform temperature monitoring, which can reduce the hardware cost of the camera and can also reduce the pressure of data processing. It can also monitor whether each target object wears a mask. It can be used in fields such as monitoring of health and medical places or monitoring of epidemic prevention.
  • the embodiments of the present disclosure do not limit the application field of the monitoring method.
  • Fig. 5 shows a block diagram of a monitoring system according to an embodiment of the present disclosure. As shown in Fig. 5, the system includes: an infrared image acquisition part 11, a visible light image acquisition part 12, and a processing part 13,
  • the processing part 13 is configured to:
  • the result of the living body detection is a living body, perform identity recognition processing on the first area to obtain the identity information of the target object, and perform temperature control on the fourth area corresponding to the second area in the infrared image Detection processing to obtain temperature information of the target object;
  • in the case that the temperature information is greater than or equal to a preset temperature threshold, generate first warning information according to the identity information of the target object.
  • the processing part 13 may include a processor (for example, the processor 3 in FIG. 4A), the infrared image acquisition part 11 may include an infrared camera (for example, the infrared camera 1 in FIG. 4A), and the visible light image acquisition part 12 may include a visible light camera (for example, the visible light camera 2 in FIG. 4A).
  • the processing part is further configured to: determine the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first area; determine a living body detection strategy according to the distance; determine the position of the third area in the infrared image according to the position of the first area in the visible light image; and perform living body detection processing on the third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
  • the processing part is further configured to: in the case that the distance is greater than or equal to a first distance threshold, determine the living body detection strategy as a living body detection strategy based on human body shape; in the case that the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determine the living body detection strategy as a living body detection strategy based on head and neck shape; or in the case that the distance is less than the second distance threshold, determine the living body detection strategy as a living body detection strategy based on facial morphology.
  • the processing part is further configured to: perform morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result; in the case that the morphology detection result is a living body shape, perform isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determine a first weight of the morphology detection result and a second weight of the isothermal analysis result according to the living body detection strategy; and determine the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
  • the processing part is further configured to: in the case that the living body detection result is a living body, detect whether a preset target is included in the first area to obtain a first detection result, wherein the preset target includes an object that occludes a partial area of the face; and, in the case that the living body detection result is a living body, perform identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
  • the processing part is further configured to: perform detection processing on the face area in the first area to determine a feature missing result of the face area; and, in the case that the feature missing result is that a preset feature is missing, detect whether the face area includes the preset target to obtain the first detection result.
  • the processing part is further configured to: in the case that the first detection result is that the preset target does not exist, perform first identity recognition processing on the face area in the first area to obtain the identity information of the target object; or, in the case that the first detection result is that the preset target exists, perform second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded area of the face in the second identity recognition processing is greater than the weight of the features of the corresponding area in the first identity recognition processing.
  • the processing part is further configured to: superimpose the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object onto the visible light image and/or the infrared image to obtain a detection image.
  • the processing part is further configured to: obtain gender information of the target object, and determine the preset temperature threshold according to the gender information; and the processing part is further configured to: generate second early warning information in the case that the first detection result is that the preset target does not exist.
  • the embodiments of the present disclosure also provide monitoring systems, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any monitoring method provided by the embodiments of the present disclosure. For the corresponding technical solutions and descriptions, refer to the method section; they will not be repeated here.
  • the functions or parts included in the apparatus provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments.
  • the embodiment of the present disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, and the above method is implemented when the computer program instructions are executed by a processor.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory configured to store executable instructions of the processor; wherein the processor is configured to execute the aforementioned method.
  • the embodiment of the present disclosure also provides a computer program product, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the monitoring method provided by any of the above embodiments.
  • the embodiments of the present disclosure also provide another computer program product for storing computer-readable instructions, which when executed, cause the computer to perform the operation of the monitoring method provided in any of the foregoing embodiments.
  • the electronic device can be provided as a terminal, server or other form of device.
  • Fig. 6 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more sub-parts to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia sub-part to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method to operate on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power for various components of the electronic device 800.
  • the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (Liquid Crystal Display, LCD) and a touch panel (TP).
  • the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data.
  • Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC).
  • the microphone is configured to receive external audio signals.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker configured to output audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface part.
  • the above-mentioned peripheral interface part may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
  • the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, for example, the display and the keypad of the electronic device 800. The sensor component 814 can also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and the temperature change of the electronic device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) image sensor, which is configured to be used in imaging applications.
  • a light sensor such as a (Complementary Metal Oxide Semiconductor) or CCD (Charge-coupled Device) image sensor, which is configured to be used in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access wireless networks based on communication standards, such as Wi-Fi, 2G (second-generation mobile communication technology) or 3G (third-generation mobile communication technology), or a combination of them.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a Near Field Communication (NFC) part to facilitate short-range communication.
  • the NFC part can be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, which are used to implement the above method.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
  • Fig. 7 is a block diagram showing an electronic device 1900 according to an exemplary embodiment.
  • the electronic device 1900 may be provided as a server.
  • the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs.
  • the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
  • the embodiments of the present disclosure may be systems, methods, and/or computer program products.
  • the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the embodiments of the present disclosure.
  • the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Examples of computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the foregoing.
  • the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
  • the computer program instructions used to perform the operations of the embodiments of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, via the Internet using an Internet service provider).
  • an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement various aspects of the embodiments of the present disclosure.
  • These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, thereby producing a machine such that, when these instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks in the flowcharts and/or block diagrams is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks in the flowcharts and/or block diagrams.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in an order different from the order marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the computer program product can be implemented by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium.
  • the computer program product is embodied as a software product, such as a software development kit (SDK), and so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Alarm Systems (AREA)
  • Image Processing (AREA)
  • Radiation Pyrometers (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

A monitoring method and system, an electronic device, and a storage medium. The method comprises: carrying out target region recognition on a visible light image of a monitoring region, and obtaining a first region where a target object is located and a second region where a preset part of the target object is located in the visible light image (S11); performing living body detection on the target object according to size information of the first region, and a third region, in an infrared image of the monitoring region, corresponding to the first region, to obtain a living body detection result (S12); when the living body detection result shows a living body, carrying out identity recognition on the first region to obtain identity information of the target object, and carrying out temperature measurement on a fourth region, in the infrared image, corresponding to the second region, to obtain temperature information of the target object (S13); and generating first early warning information according to the target object and the identity information thereof under the condition that the temperature information is greater than or equal to a preset temperature threshold (S14). According to the present method, living body detection and temperature measurement can be carried out on the basis of an infrared image, the body temperature of a target object in the image can be quickly measured, the body temperature measurement efficiency is improved, and the monitoring method is suitable for places having large pedestrian volume.

Description

Monitoring method and system, electronic equipment and storage medium
Cross-reference to related applications
This disclosure is based on a Chinese patent application with application number 202010177537.0 and an application date of March 13, 2020, and claims the priority of that Chinese patent application, the entire content of which is hereby incorporated into this disclosure by reference.
技术领域Technical field
本公开涉及计算机视觉技术领域,涉及一种监测方法及系统、电子设备和存储介质。The present disclosure relates to the field of computer vision technology, and relates to a monitoring method and system, electronic equipment, and storage medium.
背景技术Background technique
在呼吸道类病毒疫情防控中,早发现,早隔离是最原始且极为有效措施。早发现代指采用医学诊测手段尽早发现可疑人员,其中可包括体温检测。早隔离代指减小病毒携带患者与他人接触,减少交叉感染的风险,其中可包括无接触式体温检测,带口罩等措施。In the prevention and control of respiratory viruses, early detection and early isolation are the most primitive and extremely effective measures. Early detection refers to the early detection of suspicious persons by means of medical diagnosis and testing, which may include temperature detection. Early isolation refers to reducing the contact between virus-carrying patients and others, and reducing the risk of cross-infection, which can include non-contact temperature testing, wearing masks and other measures.
在相关技术中,可通过体温计、热成像图等方式进行体温检测,但体检检测过程繁琐,难以适应人流量较大的场所,且检测到具有发热症状的疑似病例时,难以核实疑似病例的身份信息。In related technologies, body temperature can be detected by means of thermometers, thermal imaging, etc., but the physical examination and detection process is cumbersome, it is difficult to adapt to places with a large flow of people, and it is difficult to verify the identity of the suspected case when a suspected case with fever symptoms is detected information.
Summary of the invention
The embodiments of the present disclosure propose a monitoring method and system, electronic equipment, and a storage medium.
According to an aspect of the embodiments of the present disclosure, a monitoring method is provided, which includes: performing target area recognition on a visible light image of a monitoring area to obtain a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located; performing living body detection on the target object according to the size information of the first area and a third area corresponding to the first area in an infrared image of the monitoring area, to obtain a living body detection result; in the case that the living body detection result is a living body, performing identity recognition processing on the first area to obtain the identity information of the target object, and performing temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object; and in the case that the temperature information is greater than or equal to a preset temperature threshold, generating first warning information according to the identity information of the target object.
According to the monitoring method of the embodiments of the present disclosure, living body detection and temperature detection can be performed based on infrared images, the body temperature of the target object in the image can be quickly detected, and the efficiency of body temperature detection can be improved, so the method is suitable for places with a large flow of people. In addition, the identity information of the target object can be recognized from the visible light image, which helps to determine the identity of a suspected case and improves the efficiency of epidemic prevention.
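For orientation only, the following is a minimal Python sketch of how the four stages described above could be chained in a single per-frame routine. The `detectors` bundle, its method names, and the 37.3 °C threshold are placeholders introduced for illustration and are not specified by the disclosure.

```python
def monitor_frame(visible_img, infrared_img, detectors, temp_threshold=37.3):
    """Sketch of steps S11-S14; `detectors` bundles the concrete models."""
    # S11: first area (target object) and second area (preset part, e.g. forehead)
    first_area, second_area = detectors.detect_regions(visible_img)

    # S12: third area in the infrared image + liveness with a distance-based strategy
    third_area = detectors.map_region(first_area, visible_img, infrared_img)
    if not detectors.is_live(infrared_img, third_area, first_area):
        return None  # prosthesis such as a photo or mask: stop processing

    # S13: identity from the visible image, temperature from the fourth area in infrared
    identity = detectors.identify(visible_img, first_area)
    fourth_area = detectors.map_region(second_area, visible_img, infrared_img)
    temperature = detectors.temperature(infrared_img, fourth_area)

    # S14: generate the first warning information when the threshold is reached
    warning = "first_warning" if temperature >= temp_threshold else None
    return {"identity": identity, "temperature": temperature, "warning": warning}
```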
In a possible implementation manner, performing living body detection on the target object according to the size information of the first area and the third area corresponding to the first area in the infrared image to obtain the living body detection result includes: determining the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first area; determining a living body detection strategy according to the distance; determining the position of the third area in the infrared image according to the position of the first area in the visible light image; and performing living body detection processing on the third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
In this way, the living body detection strategy can be determined from the size of the first area, living body detection can be performed based on that strategy, and isothermal analysis can be used to improve the accuracy of living body detection in the far-infrared image.
In a possible implementation manner, determining the living body detection strategy according to the distance includes one of the following: when the distance is greater than or equal to a first distance threshold, determining the living body detection strategy as a living body detection strategy based on human body shape; when the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determining the living body detection strategy as a living body detection strategy based on head and neck shape; and when the distance is less than the second distance threshold, determining the living body detection strategy as a living body detection strategy based on facial morphology.
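As a non-limiting illustration, the selection of a detection strategy by distance could be expressed as follows; the two distance thresholds are assumed example values, since the disclosure does not fix concrete numbers.

```python
def choose_liveness_strategy(distance_m, first_threshold=3.0, second_threshold=1.0):
    """Select the liveness-detection strategy from the estimated distance (meters)."""
    if distance_m >= first_threshold:
        return "human_body_shape"      # far away: whole-body morphology
    if distance_m >= second_threshold:
        return "head_and_neck_shape"   # medium range: head and neck morphology
    return "facial_morphology"         # close range: facial morphology
```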
In a possible implementation manner, performing living body detection processing on the third area according to the living body detection strategy to obtain the living body detection result includes: performing morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result; in the case that the morphology detection result is a living body shape, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determining a first weight of the morphology detection result and a second weight of the isothermal analysis result according to the living body detection strategy; and determining the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
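A minimal sketch of the weighted combination described in this implementation is shown below; the per-strategy weights and the 0.5 decision threshold are assumptions for illustration, as the disclosure only states that the weights depend on the living body detection strategy.

```python
def fuse_liveness_scores(morphology_score, isothermal_score, strategy):
    """Combine the morphology result and the isothermal-analysis result (both in [0, 1])."""
    weights = {
        "facial_morphology":   (0.4, 0.6),  # close range: rely more on isothermal analysis
        "head_and_neck_shape": (0.5, 0.5),
        "human_body_shape":    (0.7, 0.3),  # far away: face is small, rely more on shape
    }
    w1, w2 = weights[strategy]
    score = w1 * morphology_score + w2 * isothermal_score
    return score >= 0.5, score  # (is_live, fused score)
```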
In a possible implementation manner, the method further includes: in the case that the living body detection result is a living body, detecting whether a preset target is included in the first area to obtain a first detection result, wherein the preset target includes an object that occludes a partial area of the face. Performing identity recognition processing on the first area to obtain the identity information of the target object in the case that the living body detection result is a living body includes: in the case that the living body detection result is a living body, performing identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
In this way, the identity recognition method can be selected according to whether the preset target exists, and the accuracy of identity recognition can be improved.
In a possible implementation manner, detecting whether a preset target is included in the first area to obtain the first detection result includes: performing detection processing on the face area in the first area to determine a feature missing result of the face area; and in the case that the feature missing result is that a preset feature is missing, detecting whether the face area includes the preset target to obtain the first detection result.
In a possible implementation manner, performing identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object includes one of the following: in the case that the first detection result is that the preset target does not exist, performing first identity recognition processing on the face area in the first area to obtain the identity information of the target object; and in the case that the first detection result is that the preset target exists, performing second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded area of the face in the second identity recognition processing is greater than the weight of the features of the corresponding area in the first identity recognition processing.
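As one possible illustration of the two identity recognition processings, the sketch below compares per-region face features against a reference database with region-dependent weights; the region names, the cosine-similarity matcher, and the example weights are assumptions, the point being only that the unoccluded regions receive a higher weight in the second processing than in the first.

```python
import numpy as np

def match_identity(face_features, reference_db, region_weights):
    """Weighted comparison of per-region face feature vectors against a database.

    `face_features` and each database entry map region names to feature vectors;
    `region_weights` controls how much each region contributes to the match.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    best_id, best_score = None, -1.0
    for identity, ref in reference_db.items():
        score = sum(w * cosine(face_features[r], ref[r])
                    for r, w in region_weights.items())
        score /= sum(region_weights.values())
        if score > best_score:
            best_id, best_score = identity, score
    return best_id, best_score

# First identity recognition processing: roughly uniform weights over the face.
WEIGHTS_NO_MASK = {"eyes": 1.0, "eyebrows": 1.0, "nose": 1.0, "mouth": 1.0}
# Second identity recognition processing: unoccluded regions weighted higher than
# in the first processing; regions hidden by a mask are down-weighted.
WEIGHTS_MASK = {"eyes": 2.0, "eyebrows": 2.0, "nose": 0.2, "mouth": 0.2}
```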
In a possible implementation manner, the method further includes: in the case that the first detection result is that the preset target does not exist, generating second early warning information.
In a possible implementation manner, the method further includes: acquiring gender information of the target object, and determining the preset temperature threshold according to the gender information.
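A minimal sketch of a gender-dependent threshold lookup is given below; the example values are illustrative only and, as noted elsewhere in the disclosure, the threshold could instead be entered through an interaction unit.

```python
def preset_temperature_threshold(gender, thresholds=None):
    """Return the preset temperature threshold (degrees Celsius) for a recognized gender."""
    thresholds = thresholds or {"female": 37.5, "male": 37.3}  # illustrative values
    return thresholds.get(gender, 37.3)  # fall back to a default threshold
```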
In a possible implementation manner, the method further includes: superimposing the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object onto the visible light image and/or the infrared image to obtain a detection image.
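For illustration, the superposition could be implemented with OpenCV drawing primitives as sketched below; the colors, font, and the 37.3 °C highlight threshold are arbitrary choices, not values fixed by the disclosure.

```python
import cv2

def draw_detection_image(image, box, identity, temperature):
    """Superimpose the detection box, identity and temperature on a copy of the image.

    `box` is (x, y, w, h) in pixel coordinates.
    """
    annotated = image.copy()
    x, y, w, h = box
    color = (0, 0, 255) if temperature >= 37.3 else (0, 255, 0)  # red if feverish
    cv2.rectangle(annotated, (x, y), (x + w, y + h), color, 2)
    label = f"{identity}  {temperature:.1f}C"
    cv2.putText(annotated, label, (x, max(y - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return annotated
```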
According to an aspect of the embodiments of the present disclosure, a monitoring system is provided, including: a visible light image acquisition part, an infrared image acquisition part, and a processing part, the processing part being configured to: perform target area recognition on a visible light image of the monitoring area acquired by the visible light image acquisition part to obtain a first area where a target object is located in the visible light image and a second area where a preset part of the target object is located; perform living body detection on the target object according to the size information of the first area and a third area corresponding to the first area in an infrared image acquired by the infrared image acquisition part, to obtain a living body detection result; in the case that the living body detection result is a living body, perform identity recognition processing on the first area to obtain the identity information of the target object, and perform temperature detection processing on a fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object; and in the case that the temperature information is greater than or equal to a preset temperature threshold, generate first warning information according to the identity information of the target object.
In a possible implementation manner, the processing part is further configured to: determine the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first area; determine a living body detection strategy according to the distance; determine the position of the third area in the infrared image according to the position of the first area in the visible light image; and perform living body detection processing on the third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
In a possible implementation manner, the processing part is further configured to: in the case that the distance is greater than or equal to a first distance threshold, determine the living body detection strategy as a living body detection strategy based on human body shape; or in the case that the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determine the living body detection strategy as a living body detection strategy based on head and neck shape; or in the case that the distance is less than the second distance threshold, determine the living body detection strategy as a living body detection strategy based on facial morphology.
In a possible implementation manner, the processing part is further configured to: perform morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result; in the case that the morphology detection result is a living body shape, perform isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determine a first weight of the morphology detection result and a second weight of the isothermal analysis result according to the living body detection strategy; and determine the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
In a possible implementation manner, the processing part is further configured to: in the case that the living body detection result is a living body, detect whether a preset target is included in the first area to obtain a first detection result, wherein the preset target includes an object that occludes a partial area of the face; and the processing part is further configured to: in the case that the living body detection result is a living body, perform identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
In a possible implementation manner, the processing part is further configured to: perform detection processing on the face area in the first area to determine a feature missing result of the face area; and in the case that the feature missing result is that a preset feature is missing, detect whether the face area includes the preset target to obtain the first detection result.
In a possible implementation manner, the processing part is further configured to: in the case that the first detection result is that the preset target does not exist, perform first identity recognition processing on the face area in the first area to obtain the identity information of the target object; or in the case that the first detection result is that the preset target exists, perform second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded area of the face in the second identity recognition processing is greater than the weight of the features of the corresponding area in the first identity recognition processing.
In a possible implementation manner, the processing part is further configured to: superimpose the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object onto the visible light image and/or the infrared image to obtain a detection image.
In a possible implementation manner, the processing part is further configured to: obtain gender information of the target object, and determine the preset temperature threshold according to the gender information.
In a possible implementation manner, the processing part is further configured to: generate second warning information in the case that the first detection result is that the preset target does not exist.
According to an aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above monitoring method.
According to an aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above monitoring method is implemented.
According to an aspect of the present disclosure, a computer program is provided, including computer-readable code, and when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above monitoring method.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the embodiments of the present disclosure.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the drawings
The drawings here are incorporated into the specification and constitute a part of the specification. These drawings illustrate embodiments that conform to the embodiments of the present disclosure, and are used together with the specification to describe the technical solutions of the embodiments of the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 2 shows a schematic diagram of living body detection according to an embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of identity recognition processing according to an embodiment of the present disclosure;
Fig. 4A and Fig. 4B show schematic diagrams of the application of a monitoring method according to an embodiment of the present disclosure;
Fig. 5 shows a block diagram of a monitoring system according to an embodiment of the present disclosure;
Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure;
Fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the drawings. The same reference numerals in the drawings indicate elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, unless otherwise noted, the drawings are not necessarily drawn to scale.
The word "exemplary" here means "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" need not be construed as superior to or better than other embodiments.
The term "and/or" in this document merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the three cases that A exists alone, A and B exist at the same time, and B exists alone. In addition, the term "at least one" in this document means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set formed by A, B, and C.
In addition, in order to better illustrate the embodiments of the present disclosure, numerous details are given in the following embodiments. Those skilled in the art should understand that the embodiments of the present disclosure can also be implemented without certain details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the embodiments of the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
In step S11, target area recognition is performed on the visible light image of the monitoring area to obtain the first area where the target object is located in the visible light image and the second area where the preset part of the target object is located;
In step S12, living body detection is performed on the target object according to the size information of the first area and the third area corresponding to the first area in the infrared image of the monitoring area, to obtain a living body detection result;
In step S13, in the case that the living body detection result is a living body, identity recognition processing is performed on the first area to obtain the identity information of the target object, and temperature detection processing is performed on the fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object;
In step S14, in the case that the temperature information is greater than or equal to a preset temperature threshold, first warning information is generated according to the identity information of the target object.
According to the monitoring method of the embodiments of the present disclosure, living body detection and temperature detection can be performed based on infrared images, the body temperature of the target object in the image can be quickly detected, and the efficiency of body temperature detection can be improved, so the method is suitable for places with a large flow of people. In addition, the identity information of the target object can be recognized from the visible light image, which helps to determine the identity of a suspected case and improves the efficiency of epidemic prevention.
In a possible implementation manner, the monitoring method may be executed by a terminal device or other processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the other processing device may be a server, a cloud server, or the like. In some possible implementations, the monitoring method can be implemented by a processor invoking computer-readable instructions stored in a memory.
In a possible implementation manner, the monitoring method can be used in front-end monitoring equipment. For example, the front-end monitoring equipment may be a device that integrates a processor, cameras, and other parts into a whole; the processor can control the cameras and other parts to obtain images of the monitoring area, and the processor executes the monitoring method. In a possible implementation manner, the monitoring method can also be used in a server (for example, the processor that executes the monitoring method is located in the server; the processor and the cameras are not packaged as a whole, but are distributed between the monitoring area and the back end to form a monitoring system), and the server can receive the images taken by the cameras in the monitoring area and execute the monitoring method on the images.
In a possible implementation manner, the processor may be a system on a chip (SoC), and the cameras may include an infrared camera for acquiring infrared images and a camera for acquiring visible light images. In a possible implementation manner, a special sensor such as a vanadium oxide uncooled infrared focal plane detector can be used to capture spatial light signals; such a sensor can be used in a far-infrared camera to capture life light for temperature monitoring.
In a possible implementation manner, the cameras can capture videos of the monitored area, and the video frames of the videos are the visible light images and the infrared images. The infrared camera and the visible light camera can be arranged at close positions; for example, they can be arranged adjacently or side by side, or the two cameras can be integrated into one, etc. The embodiments of the present disclosure do not limit the arrangement. Therefore, in the images captured by the two cameras at the same time, the positions of each target object are similar, that is, the position deviation of the same target object between the visible light image and the corresponding infrared image is small, and the deviation can be corrected according to the positional relationship between the two cameras. For example, the position of the third area of the target object in the infrared image can be determined according to the position of the first area (that is, the area where the target object is located) in the visible light image.
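As a sketch of how the small, fixed deviation between the two cameras could be corrected, the mapping below applies a calibrated offset and scale to a bounding box; the calibration values are placeholders, and a full homography could be used instead of this simple model.

```python
def map_visible_to_infrared(box, offset=(0, 0), scale=(1.0, 1.0)):
    """Map a region (x, y, w, h) from the visible light image into the infrared image.

    `offset` and `scale` would come from a one-off calibration of the two cameras.
    """
    x, y, w, h = box
    dx, dy = offset
    sx, sy = scale
    return (int((x + dx) * sx), int((y + dy) * sy), int(w * sx), int(h * sy))
```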
In a possible implementation manner, the images captured by the cameras may have poor image quality, for example, blurred imaging, focal length errors, or the camera being contaminated or blocked. The image quality can therefore be checked first. For example, image quality detection can be performed on the visible light image, such as detecting the sharpness of the texture and boundaries of the visible light image. When the sharpness is greater than or equal to a sharpness threshold, the image quality can be considered good, and the next processing such as position detection can be performed. When the sharpness is less than the sharpness threshold, the image quality can be considered poor, and the visible light image and the infrared image corresponding to the visible light image can be deleted at the same time.
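One possible way to implement such a quality check, assuming the variance of the Laplacian as the sharpness measure and an illustrative threshold, is sketched below; the disclosure does not prescribe a particular sharpness metric.

```python
import cv2

def is_sharp_enough(visible_img, threshold=100.0):
    """Simple sharpness check before further processing.

    Uses the variance of the Laplacian as a focus measure; the threshold is an
    assumed value that would need tuning per camera.
    """
    gray = cv2.cvtColor(visible_img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold
```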
In a possible implementation manner, in step S11, target area recognition can be performed on the visible light image to obtain the first area where the target object is located. The first area may be the face area and/or the human body area of the target object. This processing can obtain the position coordinates of the first area where the target object is located, or the position coordinates of a detection frame containing the first area.
In a possible implementation manner, in the process of temperature detection, the temperature of areas such as the forehead can be measured. For example, the temperature of the forehead area can be determined from the pixel values of the infrared image, and the body temperature of the target object can then be obtained. Therefore, the second area where the preset part (for example, the forehead area) is located can be detected in the visible light image, and the fourth area of the preset part in the infrared image can then be determined. The correspondence between the pixel values of the infrared image and temperature can be established in advance, so that the body temperature of the target object can be determined according to the pixel values in the fourth area.
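For illustration, assuming a pre-calibrated linear mapping between infrared pixel values and temperature, the body temperature could be estimated from the fourth area as sketched below; the coefficients and the percentile statistic are assumptions, and real devices typically rely on a calibration table or a blackbody reference rather than this simplified model.

```python
import numpy as np

def estimate_forehead_temperature(infrared_img, fourth_area, a=0.02, b=20.0):
    """Estimate body temperature from the fourth area (x, y, w, h) of the infrared image.

    Assumes temperature = a * pixel_value + b in degrees Celsius.
    """
    x, y, w, h = fourth_area
    region = infrared_img[y:y + h, x:x + w].astype(np.float64)
    # Use a high percentile rather than the mean so hair or mask edges in the
    # crop do not drag the estimate down.
    pixel = np.percentile(region, 90)
    return a * pixel + b
```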
In a possible implementation manner, a living body is a biologically active, real biological modality exhibited by a human body with vital characteristics, while a prosthesis is a model similar to a living body (for example, a photo, a mask, a headgear, etc.) made by imitating the biological characteristics of a living body. Generally, in the process of living body detection, a near-infrared image of a human face is used, which relies mainly on the differences in imaging characteristics between a living body and a prosthesis under near-infrared light, such as differences in optical flow, texture, and color, to distinguish a living body from a prosthesis. However, in the process of temperature monitoring, the infrared image obtained by the far-infrared camera is a far-infrared image (that is, an image based on life light), not the commonly used near-infrared image. Therefore, the living body detection strategy can be determined according to the size of the first area where the target object is located, to improve the accuracy of living body detection.
在一种可能的实现方式中,步骤S12可包括:根据所述第一区域的尺寸信息,确定所述目标对象与获取所述可见光图像的图像获取装置之间的距离;根据所述距离,确定活体检测策略;根据所述第一区域在可见光图像中的位置,在所述红外图像中确定所述第三区域的位置;根据所述活体检测策略,对红外图像中的所述第三区域进行活体检测处理,获得所述活体检测结果。In a possible implementation manner, step S12 may include: determining the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first region; A living body detection strategy; according to the position of the first region in the visible light image, the position of the third region in the infrared image is determined; according to the living body detection strategy, the third region in the infrared image is performed The living body detection processing obtains the living body detection result.
在一种可能的实现方式中,第一区域为可见光图像中的目标对象所在区域,例如,可以是目标对象的人脸区域,该区域的尺寸可与目标对象与获取所述可见光图像的图像获取装置(可见光摄像头)之间的距离负相关,即,该区域尺寸越大,所述距离越小,该区域尺寸越小,所述距离越大。进一步地,活体以特有的生命特征表征于红外图像中(例如,基于生命光线的远红外图像)。人体血管遍布肌体(例如,人体、人脸、颈肩等部位),但响应于目标对象与在可见光图像中的位置较近的情况,使其在可见光图像中仅呈现出部分肌体,例如,仅呈现面部、头颈等,但面部特征更清晰。距离越远,其面部特征量越少,人体特征量越多。基于不同的距离,组合不同的活体检测策略,提高活体检测准确率,适用于不同距离使用。In a possible implementation, the first area is the area where the target object in the visible light image is located. For example, it may be the face area of the target object. The distance between the devices (visible light cameras) is negatively correlated, that is, the larger the area size, the smaller the distance, and the smaller the area size, the greater the distance. Further, a living body is characterized in an infrared image with unique vital characteristics (for example, a far-infrared image based on the light of life). The blood vessels of the human body are all over the body (for example, the human body, the face, the neck and shoulders, etc.), but in response to the close position of the target object in the visible light image, only part of the body is shown in the visible light image, for example, only It presents the face, head and neck, etc., but the facial features are clearer. The farther the distance, the fewer facial features and the more human features. Based on different distances, different live detection strategies are combined to improve the accuracy of live detection, which is suitable for use at different distances.
可以预先建立图像中的基准尺寸与基准距离之间的对应关系,根据目标对象所在区域的尺寸与基准尺寸之间的比例关系,以及基准距离,来确定目标对象与摄像头之间的距离。The corresponding relationship between the reference size and the reference distance in the image can be established in advance, and the distance between the target object and the camera can be determined according to the proportional relationship between the size of the area where the target object is located and the reference size, and the reference distance.
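A sketch of this size-to-distance conversion, assuming an inverse proportionality between the height of the detected box and the distance; the reference values are purely illustrative:

```python
REF_FACE_HEIGHT_PX = 200.0  # box height observed at the reference distance
REF_DISTANCE_M = 1.0        # reference distance in metres


def estimate_distance(first_region):
    """Estimate camera-to-subject distance from the first region's size.

    first_region is (x, y, w, h); distance scales inversely with box height.
    """
    _, _, _, h = first_region
    if h <= 0:
        raise ValueError("empty region")
    return REF_DISTANCE_M * REF_FACE_HEIGHT_PX / h
```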
Fig. 2 shows a schematic diagram of living body detection according to an embodiment of the present disclosure. As shown in Fig. 2, the distance between the target object and the camera may be determined from the size information of the first area, and the living body detection strategy selected according to the distance. The corresponding third area in the infrared image may be determined from the first area in the visible light image. For example, if the first area is the face area of target object A in the visible light image (for example, target object A is close to the camera, so only its face area can be captured), the third area is the face area of target object A in the infrared image. If the first area is the head-and-neck area of target object B in the visible light image (for example, the face and shoulder-neck area, or the upper body, of target object B can be captured), the third area is the head-and-neck area of target object B in the infrared image. If the first area is the human body area of target object C in the visible light image (for example, target object C is far from the camera, so its full body can be captured), the third area is the human body area of target object C in the infrared image.
In a possible implementation, determining the living body detection strategy according to the distance of the target object includes one of the following: when the distance is greater than or equal to a first distance threshold, determining the living body detection strategy as a strategy based on human body morphology; when the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determining the living body detection strategy as a strategy based on head-and-neck morphology; and when the distance is less than the second distance threshold, determining the living body detection strategy as a strategy based on facial morphology.
In a possible implementation, the first distance threshold and the second distance threshold may be determined from parameters such as the range of the camera's detection area and its focal length. For example, when the distance between the target object and the camera is greater than or equal to the first distance threshold, the captured picture may include the whole body, or most of the body, of the target object, so the strategy based on human body morphology may be selected. When the distance is less than the first distance threshold and greater than or equal to the second distance threshold, the captured picture cannot include the whole body but may include the face and shoulder-neck area, or the upper body, of the target object, so the strategy based on head-and-neck morphology may be selected. When the distance is less than the second distance threshold, the captured picture includes only the face area of the target object, so the strategy based on facial morphology may be selected.
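A minimal sketch of this threshold-based strategy selection; the threshold values are assumptions and would in practice follow from the camera's field of view and focal length:

```python
FIRST_DISTANCE_THRESHOLD_M = 3.0   # illustrative
SECOND_DISTANCE_THRESHOLD_M = 1.0  # illustrative


def select_detection_strategy(distance_m):
    """Map the estimated distance to one of the three detection strategies."""
    if distance_m >= FIRST_DISTANCE_THRESHOLD_M:
        return "body_morphology"       # whole body (or most of it) is visible
    if distance_m >= SECOND_DISTANCE_THRESHOLD_M:
        return "head_neck_morphology"  # face plus shoulder/neck or upper body
    return "face_morphology"           # only the face area is visible
```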
In a possible implementation, the position of the third area is determined in the infrared image according to the position of the first area in the visible light image. In an example, the visible light image may include one or more target objects, that is, one or more first areas, and the position of each third area in the infrared image may be determined from the positional relationship between the infrared camera and the visible light camera together with the position of the corresponding first area.
In a possible implementation, because the infrared camera acquires a far-infrared image used to detect life rays rather than a near-infrared image used to detect image features, directly detecting the third area where the target object is located in the far-infrared image may have low accuracy. Moreover, when there are multiple target objects, if the third areas are detected directly in the infrared image, the correspondence between each third area and each first area in the visible light image is difficult to establish. Therefore, the position of the third area corresponding to a first area may be obtained from the position of the first area in the visible light image and the positional relationship between the two cameras. Similarly, the position of the corresponding fourth area in the infrared image may be obtained from the position of the second area in the visible light image.
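One way such a cross-camera mapping could be realized is with a homography between the two image planes calibrated offline; the disclosure does not specify this, so the 3x3 matrix and the corner-transfer approach below are assumptions of the sketch:

```python
import numpy as np
import cv2


def map_region_to_infrared(region, homography):
    """Map an axis-aligned box from visible-image to infrared-image coordinates.

    region is (x, y, w, h); homography is a 3x3 matrix calibrated offline
    from matched points between the two cameras.
    """
    x, y, w, h = region
    corners = np.array([[[x, y]], [[x + w, y]], [[x, y + h]], [[x + w, y + h]]],
                       dtype=np.float32)
    mapped = cv2.perspectiveTransform(corners, homography).reshape(-1, 2)
    x0, y0 = mapped.min(axis=0)
    x1, y1 = mapped.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)
```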
In a possible implementation, after the living body detection strategy is determined, it may be used to perform living body detection processing on the third area in the infrared image. Performing living body detection processing on the third area according to the living body detection strategy to obtain the living body detection result includes: performing morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result; when the morphology detection result is a living body morphology, performing isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result; determining, according to the living body detection strategy, a first weight for the morphology detection result and a second weight for the isothermal analysis result; and determining the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
In a possible implementation, morphology detection may be performed on the third area based on the living body detection strategy.
In a possible implementation, under the living body detection strategy based on human body morphology, the third area may include both the face area and the human body area of the target object, providing a large number of body features. Feature extraction processing may be performed, for example by a neural network, to obtain the body features for morphology detection and to determine whether the body morphology of the target object is a living body morphology (for example, based on features such as the target object's posture, motion, and shape).
In a possible implementation, under the living body detection strategy based on head-and-neck morphology, the third area may include the face and shoulder-neck area, or the upper body area, of the target object. Feature extraction processing may be performed, for example by a neural network, to obtain features of the face and shoulder-neck area for morphology detection and to determine whether the head-and-neck morphology of the target object is a living body morphology (for example, based on features such as posture, motion, and shape).
In a possible implementation, under the living body detection strategy based on facial morphology, the third area may include only the face area of the target object, with few body features but many facial features. Feature extraction processing may be performed, for example by a neural network, to obtain the facial features for morphology detection and to determine whether the facial morphology of the target object is a living body morphology (for example, based on features such as facial expression and texture).
In a possible implementation, the morphology detection result of the above processing may take the form of a score. For example, when the morphology detection score is greater than or equal to a first score threshold, the morphology detection result may be regarded as a living body morphology. The embodiments of the present disclosure do not limit the form of the morphology detection result.
In a possible implementation, when the morphology detection result is a living body morphology, isothermal analysis processing is performed on the face area in the third area to obtain an isothermal analysis result.
In a possible implementation, isothermal analysis may be used to determine whether the face area belongs to a living body. For example, the real face of a target object has a uniform distribution of blood vessels, whereas a prosthesis such as a mask or head cover worn by the target object has no blood vessels, so its isotherms differ from those of a real face. The face area can therefore be judged living or not according to the isothermal analysis.
In a possible implementation, the isothermal analysis result may take the form of a score. For example, when the isothermal analysis score is greater than or equal to a second score threshold, the isothermal analysis result may be regarded as a living body. The embodiments of the present disclosure do not limit the form of the isothermal analysis result.
In a possible implementation, because the proportion of facial features differs among the living body detection strategies (for example, under the strategy based on human body morphology the third area includes both the face area and the human body area, so the face area accounts for a small proportion, whereas under the strategy based on facial morphology the third area includes only the face area, so it accounts for a large proportion), the first weight of the morphology detection result and the second weight of the isothermal analysis result may be determined according to the living body detection strategy.
For example, under the living body detection strategy based on human body morphology, the face area accounts for a small proportion, so the second weight of the face-based isothermal analysis result may be smaller and the first weight of the morphology detection result larger. Under the strategy based on facial morphology, the face area accounts for a large proportion, so the second weight of the isothermal analysis result may be larger and the first weight of the morphology detection result smaller. Under the strategy based on head-and-neck morphology, the first and second weights may be close or equal. The embodiments of the present disclosure do not limit the values of the first weight and the second weight.
In a possible implementation, the morphology detection result and the isothermal analysis result may be weighted and summed according to the first weight and the second weight, respectively, to obtain the living body detection result. In a possible implementation, the living body detection result may take the form of a score: when it is greater than or equal to a living body score threshold, the target object may be regarded as a living body; when it is less than the threshold, the target object may be regarded as a prosthesis or a non-human object, and no subsequent processing is required.
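A sketch of this weighted fusion, reusing the strategy labels from the earlier sketch; the per-strategy weights and the score threshold are illustrative assumptions, since the disclosure leaves their values open:

```python
# Illustrative (first_weight, second_weight) pairs per strategy.
STRATEGY_WEIGHTS = {
    "body_morphology":      (0.7, 0.3),
    "head_neck_morphology": (0.5, 0.5),
    "face_morphology":      (0.3, 0.7),
}
LIVING_BODY_SCORE_THRESHOLD = 0.6  # illustrative


def fuse_detection_scores(strategy, morphology_score, isothermal_score):
    """Weighted sum of the two scores; returns (fused_score, is_living_body)."""
    w1, w2 = STRATEGY_WEIGHTS[strategy]
    fused = w1 * morphology_score + w2 * isothermal_score
    return fused, fused >= LIVING_BODY_SCORE_THRESHOLD
```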
In this way, the living body detection strategy can be determined from the size of the first area, living body detection can be performed based on that strategy, and isothermal analysis can be used to improve the accuracy of living body detection in far-infrared images.
In a possible implementation, in step S13, identity recognition processing may be performed on the target object when the target object is a living body. In a possible implementation, in medical and health venues (for example, hospitals and operating rooms) or in public places during an epidemic, protective equipment (for example, masks and goggles) usually needs to be worn, which partially occludes the face. It is therefore necessary to monitor whether the target object is wearing a preset item such as a mask and to determine the identity recognition method according to the detection result.
Fig. 3 shows a schematic diagram of identity recognition processing according to an embodiment of the present disclosure. The method further includes: when the living body detection result is a living body, detecting whether a preset target is included in the first area to obtain the first detection result, where the preset target includes an item that occludes part of the face; the first detection result therefore indicates whether the target object is wearing a mask or another item capable of occluding the face area. Further, identity recognition may be performed according to the first detection result, and step S13 may include: when the living body detection result is a living body, performing identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
In a possible implementation, it may first be determined whether a preset target (for example, an item such as a mask) is present on the face of the target object. Detecting whether the preset target is included in the first area to obtain the first detection result includes: performing detection processing on the face area in the first area to determine a feature-missing result for the face area; and, when the feature-missing result indicates that a preset feature is missing, detecting whether the preset target is included in the face area to obtain the first detection result.
In a possible implementation, the face area in the first area may be detected by a neural network. For example, a feature map of the face area may be obtained by a convolutional neural network. When the target object wears an item capable of occluding the face, such as a mask (the preset target), features are missing from the feature map of the face area: if the target object wears a mask, the features of the nose and mouth cannot be detected, that is, the nose and mouth features are missing; if the target object wears sunglasses, the eye features cannot be detected, that is, the eye features are missing.
In a possible implementation, when a preset feature (for example, a nose-and-mouth feature or an eye feature) is missing, the target object may be wearing an item such as a mask or sunglasses, and further detection may be performed. For example, a convolutional neural network may be used to check whether a preset target such as a mask is present on the face of the target object, for example by detecting the shape and texture of the mask, so as to recognize such items and to exclude cases where the target object covers the face with a hand or is occluded by another person.
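A control-flow sketch of this two-stage check (feature completeness first, then explicit mask detection). The inputs stand in for the outputs of the neural networks mentioned above; their names and formats are hypothetical:

```python
def first_detection_result(feature_missing, mask_score, mask_threshold=0.5):
    """Return True if a face-occluding preset target (e.g. a mask) is present.

    feature_missing: dict such as {"nose_mouth": True, "eyes": False}, produced
    by a (hypothetical) facial-feature network; mask_score: output in [0, 1] of
    a (hypothetical) mask classifier.
    """
    if not any(feature_missing.values()):
        return False  # facial features complete: no occluding item present
    # Some preset feature is missing: confirm a worn item (mask, sunglasses)
    # rather than a hand or another person's occlusion.
    return mask_score >= mask_threshold
```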
In a possible implementation, when the preset features (for example, nose-and-mouth features or eye features) are not missing (that is, the facial features are complete), no preset target such as a mask is present. If the monitored area is a medical and health venue such as a hospital, or a public place during an epidemic, not wearing a mask may be a violation. The method further includes: generating second warning information when the first detection result indicates that the preset target is not present. Further, the second warning information may be output through a warning device or a display to remind the target object to wear a mask. For example, the warning device may include a loudspeaker or a warning lamp that outputs the second warning information as sound or light, or information indicating that the target object is not wearing a mask may be shown on the display. The embodiments of the present disclosure do not limit the output mode of the second warning information.
In a possible implementation, the identity recognition method may also be determined according to the first detection result (that is, whether the target object is wearing an item that can occlude the face). Performing identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object includes one of the following: when the first detection result indicates that the preset target is not present, performing first identity recognition processing on the face area in the first area to obtain the identity information of the target object; and, when the first detection result indicates that the preset target is present, performing second identity recognition processing on the face area in the first area to obtain the identity information of the target object, where the weight of features of the unoccluded facial area in the second identity recognition processing is greater than the weight of features of the corresponding area in the first identity recognition processing.
In a possible implementation, a neural network may be used for the identity recognition processing, where the neural network used for the first identity recognition processing is different from that used for the second identity recognition processing. In an example, when the face area of the target object is partially occluded, only the unoccluded portion of the face area is available for identity recognition; during training of the neural network, the features of the unoccluded area are given a larger weight, for example by focusing the attention mechanism of the network on those features, so that identity recognition can rely on them.
In a possible implementation, when the face area of the target object is not occluded, all features of the face area are available for identity recognition processing, and the neural network may perform identity recognition based on all of them.
For example, when the target object wears a mask so that the neural network can obtain only eye and eyebrow features (the nose and mouth features being missing), the network may increase the weight of the eye and eyebrow features during training so as to perform identity recognition from them. When the target object does not wear a mask and the network can obtain all facial features, identity recognition may be based on all features of the face area.
For example, the neural network may determine the similarity between the eye and eyebrow features and reference features in a database (reference eye and eyebrow features), and determine the identity information corresponding to a reference feature whose similarity is greater than or equal to a similarity threshold as the identity information of the target object. Similarly, the network may determine the similarity between the facial features and reference features in the database (reference facial features), and determine the identity information corresponding to a reference feature whose similarity is greater than or equal to the similarity threshold as the identity information of the target object.
For example, when there is no reference feature in the database whose similarity to the features of the target object is greater than or equal to the similarity threshold (that is, the target object does not match any existing identity information in the database), the features of the target object may be stored in the database and an identity identifier (for example, an identity code) assigned to the target object. The embodiments of the present disclosure do not limit the identity recognition method.
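A sketch of this gallery matching and enrollment step, assuming L2-normalized face embeddings compared by cosine similarity; the threshold and the identity-code format are assumptions:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # illustrative


def identify(query_embedding, gallery):
    """Match an embedding against the gallery, or enroll it as a new identity.

    gallery maps identity -> L2-normalized reference embedding (numpy arrays).
    """
    q = query_embedding / np.linalg.norm(query_embedding)
    best_id, best_sim = None, -1.0
    for identity, ref in gallery.items():
        sim = float(np.dot(q, ref))  # cosine similarity for unit vectors
        if sim > best_sim:
            best_id, best_sim = identity, sim
    if best_sim >= SIMILARITY_THRESHOLD:
        return best_id
    # No match: store the feature and assign a new identity code.
    new_id = f"unknown_{len(gallery)}"
    gallery[new_id] = q
    return new_id
```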
In this way, the identity recognition method can be selected according to whether the preset target is present, which improves the accuracy of identity recognition.
In a possible implementation, the gender of the target object may also be recognized. In a possible implementation, because average body temperature differs somewhat between men and women, the preset temperature threshold may be set differently according to gender; for example, if the average body temperature of women is 0.3°C higher than that of men, the threshold may be set to 37°C for men and 37.3°C for women. The method further includes: acquiring gender information of the target object; and determining the preset temperature threshold according to the gender information.
In a possible implementation, the gender information of the target object may be determined from its facial features. For example, when the target object is not wearing a mask, the gender may be determined from all features of the face; when the target object is wearing a mask, the gender may be determined from the eye and eyebrow features. Alternatively, when the facial features, or the eye and eyebrow features, of the target object match reference features in the database, the gender may be determined from the gender recorded in the identity information corresponding to the matched reference features. The embodiments of the present disclosure do not limit the method of determining gender.
In a possible implementation, the preset temperature threshold may be determined according to the gender information; for example, if the average body temperature of women is higher than that of men, the preset temperature threshold for women may be higher than that for men. The embodiments of the present disclosure do not limit the method of determining the preset temperature threshold.
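A trivial sketch of the gender-dependent threshold lookup; the 37°C and 37.3°C values mirror the example above and should be treated as illustrative:

```python
PRESET_THRESHOLDS_C = {"male": 37.0, "female": 37.3}
DEFAULT_THRESHOLD_C = 37.3  # assumed fallback when gender is unknown


def preset_temperature_threshold(gender):
    """Return the fever threshold to compare the measured body temperature against."""
    return PRESET_THRESHOLDS_C.get(gender, DEFAULT_THRESHOLD_C)
```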
In a possible implementation, in step S13, temperature detection processing may also be performed on the fourth area in the infrared image where the preset part (for example, the forehead) is located (corresponding to the second area where the preset part is located in the visible light image) to determine the temperature information (for example, body temperature) of the target object. For example, the pixel values of the infrared image may represent color temperature values, which can be used to determine the temperature of the forehead area of the target object. The embodiments of the present disclosure do not limit the manner of determining the temperature information.
In a possible implementation, in step S14, when the temperature information is greater than or equal to the preset temperature threshold, first warning information is generated according to the identity information of the target object. For example, during an epidemic, fever is a suspected symptom, and target objects showing symptoms can be monitored with priority. When the detected body temperature of the target object is greater than or equal to the preset temperature threshold, the target object is considered to have fever symptoms, and the first warning information may be generated according to its identity information: for example, the identity information and body temperature of the target object may be shown on the display so that the target object can be monitored closely, or the first warning information may be output through a loudspeaker or similar device to prompt epidemic-prevention personnel to pay attention to the target object or to isolate it. The embodiments of the present disclosure do not limit the output mode of the first warning information.
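Putting the temperature comparison and warning generation together, a sketch might look as follows; the record format and field names are assumptions, and the threshold lookup reuses the sketch given earlier:

```python
def check_and_warn(identity, temperature_c, gender):
    """Return a first-warning record when the temperature is at or above threshold."""
    threshold = preset_temperature_threshold(gender)  # from the earlier sketch
    if temperature_c < threshold:
        return None
    return {
        "type": "first_warning",
        "identity": identity,
        "temperature_c": round(temperature_c, 1),
        "threshold_c": threshold,
    }
```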
In a possible implementation, the method further includes: superimposing the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object onto the visible light image and/or the infrared image to obtain a detection image.
In a possible implementation, the visible light image and/or the infrared image may be shown on a display, with the identity information and/or temperature information of the target object displayed at the position of the target object in the image. For example, the identity information and/or temperature information may be superimposed on the first area where the target object is located in the visible light image, that is, displayed within the first area. Similarly, the identity information and/or temperature information of the target object may be displayed within the third area of the infrared image. The embodiments of the present disclosure do not limit the display mode.
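A sketch of the overlay step using OpenCV drawing primitives; the colors, font, and label layout are arbitrary choices for illustration:

```python
import cv2


def overlay_detection(image_bgr, region, identity, temperature_c):
    """Draw the target's box, identity, and temperature onto a copy of the frame."""
    x, y, w, h = region
    out = image_bgr.copy()
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
    label = f"{identity}  {temperature_c:.1f}C"
    cv2.putText(out, label, (x, max(y - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return out
```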
According to the monitoring method of the embodiments of the present disclosure, the living body detection strategy can be determined from the size of the first area, living body detection can be performed based on that strategy, isothermal analysis can be used to improve the accuracy of living body detection in far-infrared images, and the identity recognition method can be selected according to whether the preset target is present, improving the accuracy of identity recognition. Further, temperature detection can be performed on the preset part, so that the body temperature of the target object in the image can be detected quickly, improving the efficiency of temperature measurement and making the method suitable for places with a large flow of people. Moreover, the identity information of the target object can be recognized from the visible light image, which helps determine the identity of suspected cases and improves the efficiency of epidemic prevention.
Fig. 4A and Fig. 4B show schematic diagrams of an application of the monitoring method according to an embodiment of the present disclosure. As shown in Fig. 4A, the monitoring method may be used in a system for monitoring a surveillance area. The monitoring system may include an infrared camera 1 for collecting life rays to obtain infrared images, a visible light camera 2 for obtaining visible light images, and a processor 3 (for example, a system-on-chip, SoC). The system may further include a supplementary light unit 4 for use in poorly lit environments, a display 5 for displaying visible light images, infrared images and/or warning information, a communication unit 6 for transmitting information such as warning information and temperature information, a warning unit 7 for outputting warning information, and an interaction unit 8 for inputting instructions.
In a possible implementation, as shown in Fig. 4B, the infrared camera 1 and the visible light camera 2 may capture video frames of the monitored area, for example visible light images AF (AF1-AF25) and infrared images BF (BF1-BF25). Position rectification may be performed between the visible light images AF and the infrared images BF, that is, the position of the third area in an infrared image BF may be determined from the position of the first area in the corresponding visible light image AF.
In a possible implementation, the processor 3 may perform image quality detection on the visible light images AF, for example by checking the sharpness of their textures and boundaries, and delete visible light images of poor quality together with their corresponding infrared images. Further, when there is no target object in a visible light image, that visible light image and its corresponding infrared image are deleted; for example, when there is no person in the monitored area, the visible light image and the corresponding infrared image may also be deleted.
In a possible implementation, the processor 3 may perform position detection processing on the visible light image AF to obtain the first area where the target object is located and the second area where the forehead of the target object is located. From the positions of the first area and the second area in the visible light image, the position of the third area where the target object is located in the infrared image and the position of the fourth area where the forehead of the target object is located in the infrared image can be determined.
In a possible implementation, the processor 3 may determine the distance between the target object and the camera from the size information of the first area, and determine the living body detection strategy based on the distance.
In a possible implementation, when the distance of the target object is greater than or equal to the first distance threshold, the living body detection strategy is determined as the strategy based on human body morphology. Morphology detection may be performed on the third area in the infrared image to obtain a morphology detection score, and isothermal analysis performed on the face area in the infrared image to obtain an isothermal analysis score. Further, according to the strategy based on human body morphology, a larger first weight may be assigned to the morphology detection score and a smaller second weight to the isothermal analysis score, and the living body detection result obtained after the weighted summation.
In a possible implementation, when the distance of the target object is less than the first distance threshold and greater than or equal to the second distance threshold, the living body detection strategy is determined as the strategy based on head-and-neck morphology. Morphology detection may be performed on the third area in the infrared image to obtain a morphology detection score, and isothermal analysis performed on the face area to obtain an isothermal analysis score. Further, according to the strategy based on head-and-neck morphology, similar weights may be assigned to the morphology detection score and the isothermal analysis score, and the living body detection result obtained after the weighted summation.
In a possible implementation, when the distance of the target object is less than the second distance threshold, the living body detection strategy is determined as the strategy based on facial morphology. Morphology detection may be performed on the third area in the infrared image to obtain a morphology detection score, and isothermal analysis performed on the face area to obtain an isothermal analysis score. Further, according to the strategy based on facial morphology, a smaller first weight may be assigned to the morphology detection score and a larger second weight to the isothermal analysis score, and the living body detection result obtained after the weighted summation.
In a possible implementation, the processor 3 may detect in the first area of the visible light image whether the target object is wearing a mask. When the target object is not wearing a mask, the second warning information may be generated and shown on the display 5 or played through the warning unit 7.
In a possible implementation, the processor 3 may determine the identity recognition method according to whether the target object is wearing a mask. For example, when the target object is not wearing a mask, all facial features of the target object may be compared with reference features in the database to determine its identity information; when the target object is wearing a mask, the eye and eyebrow features of the target object may be compared with reference features in the database to determine its identity information. Further, the second warning information may also include the identity information of the target object; for example, the display 5 may show that XXX (name) is not wearing a mask.
In a possible implementation, the processor 3 may recognize the gender of the target object, for example from the facial features of the target object, or may obtain the gender from the identity information of the target object. Further, the preset temperature threshold for male targets and the preset temperature threshold for female targets may be determined separately according to the gender of the target object. Alternatively, the preset temperature threshold may be input through the interaction unit 8. The embodiments of the present disclosure do not limit the way the preset temperature threshold is set.
In a possible implementation, temperature monitoring may be performed on the fourth area (that is, the forehead area in the infrared image) to obtain the body temperature of the target object. When the body temperature exceeds the preset temperature threshold, the processor 3 may generate the first warning information and show it on the display 5 or play it through the warning unit 7. Further, the identity information and body temperature of the feverish target object and the position information of the camera (that is, the location where the feverish target object appeared) may be transmitted to a back-end database or server through the communication unit 6, facilitating the tracking of suspected cases and the determination of their movement trajectories.
In a possible implementation, the identity information and body temperature of the target object may be superimposed on the first area in the visible light image, or the third area in the infrared image, and shown on the display 5, so that the temperature information and identity information of each target object in the monitored area can be observed intuitively, and the superimposed images may be stored.
In a possible implementation, the monitoring method can perform temperature monitoring using only visible light images and infrared images, which reduces camera hardware cost and the data processing load, and can also monitor whether each target object is wearing a mask. It can be used in fields such as the monitoring of health and medical venues or epidemic-prevention monitoring. The embodiments of the present disclosure do not limit the application field of the monitoring method.
Fig. 5 shows a block diagram of a monitoring system according to an embodiment of the present disclosure. As shown in Fig. 5, the system includes: an infrared image acquisition part 11, a visible light image acquisition part 12, and a processing part 13,
the processing part 13 being configured to:
perform target area recognition on the visible light image of the monitored area acquired by the visible light image acquisition part 12 to obtain the first area where the target object is located in the visible light image and the second area where the preset part of the target object is located;
perform living body detection on the target object according to the size information of the first area and the third area corresponding to the first area in the infrared image acquired by the infrared image acquisition part 11, to obtain a living body detection result;
when the living body detection result is a living body, perform identity recognition processing on the first area to obtain the identity information of the target object, and perform temperature detection processing on the fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object; and
when the temperature information is greater than or equal to a preset temperature threshold, generate first warning information according to the identity information of the target object.
In a possible implementation, the processing part 13 may include a processor (for example, the processor 3 in Fig. 4A), the infrared image acquisition part 11 may include an infrared camera (for example, the infrared camera 1 in Fig. 4A), and the visible light image acquisition part 12 may include a visible light camera (for example, the visible light camera 2 in Fig. 4A).
In a possible implementation, the processing part is further configured to:
determine, according to the size information of the first area, the distance between the target object and the image acquisition device that acquires the visible light image;
determine a living body detection strategy according to the distance;
determine the position of the third area in the infrared image according to the position of the first area in the visible light image; and
perform living body detection processing on the third area in the infrared image according to the living body detection strategy to obtain the living body detection result.
In a possible implementation, the processing part is further configured to:
when the distance is greater than or equal to a first distance threshold, determine the living body detection strategy as a living body detection strategy based on human body morphology; or
when the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determine the living body detection strategy as a living body detection strategy based on head-and-neck morphology; or
when the distance is less than the second distance threshold, determine the living body detection strategy as a living body detection strategy based on facial morphology.
In a possible implementation, the processing part is further configured to:
perform morphology detection processing on the third area according to the living body detection strategy to obtain a morphology detection result;
when the morphology detection result is a living body morphology, perform isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result;
determine, according to the living body detection strategy, a first weight for the morphology detection result and a second weight for the isothermal analysis result; and
determine the living body detection result according to the first weight, the second weight, the morphology detection result, and the isothermal analysis result.
In a possible implementation, the processing part is further configured to:
when the living body detection result is a living body, detect whether a preset target is included in the first area to obtain the first detection result, where the preset target includes an item that occludes part of the face,
the processing part being further configured to:
when the living body detection result is a living body, perform identity recognition processing on the first area according to the first detection result to obtain the identity information of the target object.
In a possible implementation, the processing part is further configured to:
perform detection processing on the face area in the first area to determine a feature-missing result for the face area; and
when the feature-missing result indicates that a preset feature is missing, detect whether a preset target is included in the face area to obtain the first detection result.
In a possible implementation, the processing part is further configured to:
when the first detection result indicates that the preset target is not present, perform first identity recognition processing on the face area in the first area to obtain the identity information of the target object; or
when the first detection result indicates that the preset target is present, perform second identity recognition processing on the face area in the first area to obtain the identity information of the target object, where the weight of features of the unoccluded facial area in the second identity recognition processing is greater than the weight of features of the corresponding area in the first identity recognition processing.
In a possible implementation, the processing part is further configured to:
superimpose the position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object onto the visible light image and/or the infrared image to obtain a detection image.
In a possible implementation, the processing part is further configured to:
acquire gender information of the target object; and
determine the preset temperature threshold according to the gender information.
In a possible implementation, the processing part is further configured to:
when the first detection result indicates that the preset target is not present, generate second warning information.
It can be understood that the method embodiments mentioned in the present disclosure may be combined with one another to form combined embodiments without violating principle or logic; owing to space limitations, this is not elaborated further in the embodiments of the present disclosure.
In addition, the embodiments of the present disclosure also provide a monitoring system, an electronic device, a computer-readable storage medium, and a program, all of which may be used to implement any monitoring method provided by the embodiments of the present disclosure. For the corresponding technical solutions and descriptions, refer to the corresponding records in the method sections, which are not repeated here.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
In some embodiments, the functions or parts of the apparatus provided by the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments; for their implementation, refer to the descriptions of the above method embodiments, which are not repeated here for brevity.
本公开实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行的情况下实现上述方法。计算机可读存储介质可以是非易失性计算机可读存储介质。The embodiment of the present disclosure also proposes a computer-readable storage medium on which computer program instructions are stored, and the above method is implemented when the computer program instructions are executed by a processor. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory configured to store processor-executable instructions; wherein the processor is configured to execute the above method.
本公开实施例还提供了一种计算机程序产品,包括计算机可读代码,当计算机可读代码在设备上运行时,设备中的处理器执行用于实现如上任一实施例提供的监测方法的指令。The embodiment of the present disclosure also provides a computer program product, including computer readable code, when the computer readable code runs on the device, the processor in the device executes instructions for implementing the monitoring method provided by any of the above embodiments .
本公开实施例还提供了另一种计算机程序产品,用于存储计算机可读指令,指令被执行时使得计算机执行上述任一实施例提供的监测方法的操作。The embodiments of the present disclosure also provide another computer program product for storing computer-readable instructions, which when executed, cause the computer to perform the operation of the monitoring method provided in any of the foregoing embodiments.
电子设备可以被提供为终端、服务器或其它形态的设备。The electronic device can be provided as a terminal, server or other form of device.
图6是根据一示例性实施例示出的一种电子设备800的框图。例如,电子设备800可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等终端。Fig. 6 is a block diagram showing an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
Referring to FIG. 6, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
处理组件802通常控制电子设备800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括一个或多个子部分,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体子部分,以方便多媒体组件808和处理组件802之间的交互。The processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method. In addition, the processing component 802 may include one or more sub-parts to facilitate the interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia sub-part to facilitate the interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations on the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
电源组件806为电子设备800的各种组件提供电力。电源组件806可以包括电源管理系统,一个或多个电源,及其他与为电子设备800生成、管理和分配电力相关联的组件。The power supply component 806 provides power for various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
多媒体组件808包括在所述电子设备800和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(Liquid Crystal Display,LCD)和触摸面板(Touch panel,TP)。在屏幕包括触摸面板的情况下,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当电子设备800处于操作模式,如拍摄模式或视频模式的情况下,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (Liquid Crystal Display, LCD) and a touch panel (TP). In the case where the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风 (microphone,MIC),当电子设备800处于操作模式,如呼叫模式、记录模式和语音识别模式的情况下,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,被配置为输出音频信号。The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC). When the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker configured to output audio signals.
I/O接口812为处理组件802和外围接口部分之间提供接口,上述外围接口部分可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。The I/O interface 812 provides an interface between the processing component 802 and a peripheral interface part. The above-mentioned peripheral interface part may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
The sensor component 814 includes one or more sensors configured to provide status assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 may also detect a change in position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as Wi-Fi, 2G (second-generation mobile communication technology), 3G (third-generation mobile communication technology), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) part to facilitate short-range communication. For example, the NFC part may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above method.
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器804,上述计算机程序指令可由电子设备800的处理器820执行以完成上述方法。In an exemplary embodiment, there is also provided a non-volatile computer-readable storage medium, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
图7是根据一示例性实施例示出的一种电子设备1900的框图。例如,电子设备1900可以被提供为一服务器。参照图7,电子设备1900包括处理组件1922,其进一步包括一个或多个处理器,以及由存储器1932所代表的存储器资源,用于存储可由处理组件1922的执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件1922被配置为执行指令,以执行上述方法。Fig. 7 is a block diagram showing an electronic device 1900 according to an exemplary embodiment. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 7, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above-described methods.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由电子设备1900的处理组件1922执行以完成上述方法。In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
本公开实施例可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本公开实施例的各个方面的计算机可读程序指令。The embodiments of the present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the embodiments of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of examples of computer-readable storage media includes: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
Computer program instructions for carrying out operations of the embodiments of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions to implement various aspects of the embodiments of the present disclosure.
这里参照根据本公开实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开实施例的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。Here, various aspects of the embodiments of the present disclosure are described with reference to the flowcharts and/or block diagrams of the methods, apparatuses (systems) and computer program products according to the embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored thereon comprises an article of manufacture including instructions that implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so that a series of operational steps are performed on the computer, the other programmable data processing apparatus, or the other device to produce a computer-implemented process, such that the instructions executed on the computer, the other programmable data processing apparatus, or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
该计算机程序产品可以通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品体现为计算机存储介质,在另一个可选实施例中,计算机程序产品体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。The computer program product can be implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium. In another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK) and so on.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principles of the embodiments, their practical applications, or their technical improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

  1. 一种监测方法,包括:A monitoring method including:
    对监测区域的可见光图像进行目标区域识别,获得所述可见光图像中目标对象所在的第一区域以及所述目标对象的预设部位所在的第二区域;Performing target area recognition on the visible light image of the monitoring area, and obtaining the first area where the target object is located in the visible light image and the second area where the preset part of the target object is located in the visible light image;
    根据所述第一区域的尺寸信息,以及所述第一区域在所述监测区域的红外图像中对应的第三区域,对所述目标对象进行活体检测,获得活体检测结果;Performing live detection on the target object according to the size information of the first area and the third area corresponding to the first area in the infrared image of the monitoring area to obtain a live detection result;
    In the case that the living body detection result is a living body, performing identity recognition processing on the first area to obtain the identity information of the target object, and performing temperature detection processing on the fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object;
    在所述温度信息大于或等于预设温度阈值的情况下,根据所述目标对象的身份信息生成第一预警信息。In the case that the temperature information is greater than or equal to a preset temperature threshold, first warning information is generated according to the identity information of the target object.
  2. The method according to claim 1, wherein said performing living body detection on said target object according to the size information of said first area and the third area corresponding to said first area in said infrared image to obtain a living body detection result comprises:
    根据所述第一区域的尺寸信息,确定所述目标对象与获取所述可见光图像的图像获取装置之间的距离;Determine the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first region;
    根据所述距离,确定活体检测策略;Determine a living body detection strategy according to the distance;
    根据所述第一区域在所述可见光图像中的位置,在所述红外图像中确定所述第三区域的位置;Determining the position of the third area in the infrared image according to the position of the first area in the visible light image;
    根据所述活体检测策略,对所述红外图像中的所述第三区域进行活体检测处理,获得所述活体检测结果。According to the living body detection strategy, the living body detection processing is performed on the third area in the infrared image to obtain the living body detection result.
  3. 根据权利要求2所述的方法,其中,所述根据所述距离,确定活体检测策略,包括以下中的一种:The method according to claim 2, wherein the determining a living body detection strategy according to the distance includes one of the following:
    在所述距离大于或等于第一距离阈值的情况下,将所述活体检测策略确定为基于人体形态的活体检测策略;In a case where the distance is greater than or equal to the first distance threshold, determining the living body detection strategy as a living body detection strategy based on human body shape;
    在所述距离大于或等于第二距离阈值且小于所述第一距离阈值的情况下,将所述活体检测策略确定为基于头颈形态的活体检测策略;In a case where the distance is greater than or equal to a second distance threshold and less than the first distance threshold, determining the living body detection strategy as a living body detection strategy based on head and neck morphology;
    在所述距离小于所述第二距离阈值的情况下,将所述活体检测策略确定为基于面部形态的活体检测策略。In a case where the distance is less than the second distance threshold, the living body detection strategy is determined as a living body detection strategy based on facial morphology.
  4. 根据权利要求2所述的方法,其中,所述根据所述活体检测策略,对所述第三区域进行活体检测处理,获得所述活体检测结果,包括:The method according to claim 2, wherein said performing a living body detection process on said third area according to said living body detection strategy to obtain said living body detection result comprises:
    根据所述活体检测策略,对所述第三区域进行形态检测处理,获得形态检测结果;Performing morphological detection processing on the third region according to the living body detection strategy to obtain a morphological detection result;
    在所述形态检测结果为活体形态的情况下,对所述第三区域中的面部区域进行等温分析处理,获得等温分析结果;In the case that the shape detection result is a living body shape, perform isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result;
    根据所述活体检测策略,确定所述形态检测结果的第一权重和所述等温分析结果的第二权重;Determining the first weight of the morphological detection result and the second weight of the isothermal analysis result according to the living body detection strategy;
    根据所述第一权重、所述第二权重、所述形态检测结果和所述等温分析结果,确定所述活体检测结果。According to the first weight, the second weight, the morphological detection result, and the isothermal analysis result, the living body detection result is determined.
  5. 根据权利要求1所述的方法,其中,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    在所述活体检测结果为活体的情况下,检测所述第一区域内是否包括预设目标,获得所述第一检测结果,其中,所述预设目标包括对面部的部分区域进行遮挡的物品,In the case that the living body detection result is a living body, it is detected whether a preset target is included in the first area, and the first detection result is obtained, wherein the preset target includes an object that occludes a partial area of the face ,
    所述对所述第一区域进行身份识别处理,获得所述目标对象的身份信息,包括:The performing identity recognition processing on the first area to obtain the identity information of the target object includes:
    根据所述第一检测结果,对所述第一区域进行身份识别处理,获得所述目标对象的身份信息。According to the first detection result, perform identity recognition processing on the first area to obtain the identity information of the target object.
  6. 根据权利要求5所述的方法,其中,所述检测所述第一区域内是否包括预设目标,获得所述第一检测结果,包括:The method according to claim 5, wherein said detecting whether a preset target is included in said first area and obtaining said first detection result comprises:
    对所述第一区域中的面部区域进行检测处理,确定所述面部区域的特征缺失结果;Performing detection processing on the facial region in the first region, and determining a feature missing result of the facial region;
    在所述特征缺失结果为预设特征缺失的情况下,检测所述面部区域内是否包括所述预设目标,获得所述第一检测结果。In the case that the feature missing result is a preset feature missing, detecting whether the preset target is included in the face area, and obtaining the first detection result.
  7. 根据权利要求5所述的方法,其中,所述根据所述第一检测结果,对所述第一区域进行身份识别处理,获得所述目标对象的身份信息,包括以下中的一种:The method according to claim 5, wherein said performing identity recognition processing on said first area according to said first detection result to obtain identity information of said target object comprises one of the following:
    在所述第一检测结果为不存在所述预设目标的情况下,对所述第一区域中的面部区域进行第一身份识别处理,获得所述目标对象的身份信息;In the case where the first detection result is that the preset target does not exist, perform first identity recognition processing on the face area in the first area to obtain the identity information of the target object;
    In the case where the first detection result is that the preset target exists, perform second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded regions of the face in the second identity recognition processing is greater than the weight of the features of the corresponding regions in the first identity recognition processing.
  8. 根据权利要求5所述的方法,其中,所述方法还包括:The method according to claim 5, wherein the method further comprises:
    在所述第一检测结果为不存在所述预设目标的情况下,生成第二预警信息。In a case where the first detection result is that the preset target does not exist, second early warning information is generated.
  9. 根据权利要求1至8任一项所述的方法,其中,所述方法还包括:The method according to any one of claims 1 to 8, wherein the method further comprises:
    将所述第一区域或所述第三区域的位置信息、所述目标对象的温度信息和所述目标对象的身份信息,与所述可见光图像和/或所述红外图像进行叠加处理,获得检测图像。The position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object are superimposed with the visible light image and/or the infrared image to obtain detection image.
  10. 一种监测系统,包括:可见光图像获取部分、红外图像获取部分、处理部分,A monitoring system includes: a visible light image acquisition part, an infrared image acquisition part, and a processing part,
    所述处理部分被配置为:The processing part is configured as:
    对可见光图像获取部分获取的监测区域的可见光图像进行目标区域识别,获得所述可见光图像中目标对象所在的第一区域以及所述目标对象的预设部位所在的第二区域;Performing target area recognition on the visible light image of the monitoring area acquired by the visible light image acquisition part, and obtain the first area where the target object is located in the visible light image and the second area where the preset part of the target object is located in the visible light image;
    根据所述第一区域的尺寸信息,以及所述第一区域在红外图像获取部分获取的红外图像中对应的第三区域,对所述目标对象进行活体检测,获得活体检测结果;Performing live detection on the target object according to the size information of the first area and the third area corresponding to the first area in the infrared image obtained by the infrared image obtaining part to obtain a live detection result;
    In the case that the living body detection result is a living body, performing identity recognition processing on the first area to obtain the identity information of the target object, and performing temperature detection processing on the fourth area corresponding to the second area in the infrared image to obtain the temperature information of the target object;
    在所述温度信息大于或等于预设温度阈值的情况下,根据所述目标对象的身份信息生成第一预警信息。In the case that the temperature information is greater than or equal to a preset temperature threshold, first warning information is generated according to the identity information of the target object.
  11. 根据权利要求10所述的系统,其中,所述处理部分还被配置为:The system according to claim 10, wherein the processing part is further configured to:
    根据所述第一区域的尺寸信息,确定所述目标对象与获取所述可见光图像的图像获取装置之间的距离;Determine the distance between the target object and the image acquisition device that acquires the visible light image according to the size information of the first region;
    根据所述距离,确定活体检测策略;Determine a living body detection strategy according to the distance;
    根据所述第一区域在所述可见光图像中的位置,在所述红外图像中确定所述第三区域的位置;Determining the position of the third area in the infrared image according to the position of the first area in the visible light image;
    根据所述活体检测策略,对所述红外图像中的所述第三区域进行活体检测处理,获得所述活体检测结果。According to the living body detection strategy, the living body detection processing is performed on the third area in the infrared image to obtain the living body detection result.
  12. 根据权利要求11所述的系统,其中,所述处理部分还被配置为:The system according to claim 11, wherein the processing part is further configured to:
    在所述距离大于或等于第一距离阈值的情况下,将所述活体检测策略确定为基于人体形态的活体检测策略;或者In the case that the distance is greater than or equal to the first distance threshold, determining the living body detection strategy as a living body detection strategy based on human body shape; or
    在所述距离大于或等于第二距离阈值且小于所述第一距离阈值的情况下,将所述活体检测策略确定为基于头颈形态的活体检测策略;或者In a case where the distance is greater than or equal to the second distance threshold and less than the first distance threshold, determining the living body detection strategy as a living body detection strategy based on the head and neck shape; or
    在所述距离小于所述第二距离阈值的情况下,将所述活体检测策略确定为基于面部形态的活体检测策略。In a case where the distance is less than the second distance threshold, the living body detection strategy is determined as a living body detection strategy based on facial morphology.
  13. 根据权利要求11所述的系统,其中,所述处理部分还被配置为:The system according to claim 11, wherein the processing part is further configured to:
    根据所述活体检测策略,对所述第三区域进行形态检测处理,获得形态检测结果;Performing morphological detection processing on the third region according to the living body detection strategy to obtain a morphological detection result;
    在所述形态检测结果为活体形态的情况下,对所述第三区域中的面部区域进行等温分析处理,获得等温分析结果;In the case that the shape detection result is a living body shape, perform isothermal analysis processing on the face area in the third area to obtain an isothermal analysis result;
    根据所述活体检测策略,确定所述形态检测结果的第一权重和所述等温分析结果的第二权重;Determining the first weight of the morphological detection result and the second weight of the isothermal analysis result according to the living body detection strategy;
    根据所述第一权重、所述第二权重、所述形态检测结果和所述等温分析结果,确定所述活体检测结果。According to the first weight, the second weight, the morphological detection result, and the isothermal analysis result, the living body detection result is determined.
  14. 根据权利要求10所述的系统,其中,所述处理部分还被配置为:The system according to claim 10, wherein the processing part is further configured to:
    在所述活体检测结果为活体的情况下,检测所述第一区域内是否包括预设目标,获得所述第一检测结果,其中,所述预设目标包括对面部的部分区域进行遮挡的物品,In the case that the living body detection result is a living body, it is detected whether a preset target is included in the first area, and the first detection result is obtained, wherein the preset target includes an object that occludes a partial area of the face ,
    所述处理部分被配置为:The processing part is configured as:
    根据所述第一检测结果,对所述第一区域进行身份识别处理,获得所述目标对象的身份信息。According to the first detection result, perform identity recognition processing on the first area to obtain the identity information of the target object.
  15. 根据权利要求14所述的系统,其中,所述处理部分还被配置为:The system according to claim 14, wherein the processing part is further configured to:
    对所述第一区域中的面部区域进行检测处理,确定所述面部区域的特征缺失结果;Performing detection processing on the facial region in the first region, and determining a feature missing result of the facial region;
    在所述特征缺失结果为预设特征缺失的情况下,检测所述面部区域内是否包括所述预设目标,获得所述第一检测结果。In the case that the feature missing result is a preset feature missing, detecting whether the preset target is included in the face area, and obtaining the first detection result.
  16. 根据权利要求14所述的系统,其中,所述处理部分还被配置为:The system according to claim 14, wherein the processing part is further configured to:
    在所述第一检测结果为不存在所述预设目标的情况下,对所述第一区域中的面部区域进行第一身份识别处理,获得所述目标对象的身份信息;或者In the case where the first detection result is that the preset target does not exist, perform first identity recognition processing on the face area in the first area to obtain the identity information of the target object; or
    In the case where the first detection result is that the preset target exists, perform second identity recognition processing on the face area in the first area to obtain the identity information of the target object, wherein the weight of the features of the unoccluded regions of the face in the second identity recognition processing is greater than the weight of the features of the corresponding regions in the first identity recognition processing.
  17. 根据权利要求10至16任一项所述的系统,其中,所述处理部分还被配置为:The system according to any one of claims 10 to 16, wherein the processing part is further configured to:
    将所述第一区域或所述第三区域的位置信息、所述目标对象的温度信息和所述目标对象的身份信息,与所述可见光图像和/或所述红外图像进行叠加处理,获得检测图像。The position information of the first area or the third area, the temperature information of the target object, and the identity information of the target object are superimposed with the visible light image and/or the infrared image to obtain detection image.
  18. 一种电子设备,包括:An electronic device including:
    处理器;processor;
    被配置为存储处理器可执行指令的存储器;A memory configured to store executable instructions of the processor;
    其中,所述处理器被配置为:执行权利要求1至9中任意一项所述的方法。Wherein, the processor is configured to execute the method according to any one of claims 1-9.
  19. 一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现权利要求1至9中任意一项所述的方法。A computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the method according to any one of claims 1 to 9 is realized.
  20. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method according to any one of claims 1 to 9.
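Claims 2 to 4 (and the corresponding system claims 11 to 13) describe selecting a living body detection strategy from the subject's distance and then fusing a shape-detection result with an isothermal-analysis result using strategy-dependent weights. The Python sketch below illustrates that flow under stated assumptions; the distance thresholds, fusion weights, and decision threshold are illustrative placeholders, since the claims do not specify numerical values.

```python
def select_liveness_strategy(distance_m, d1=3.0, d2=1.0):
    """Choose the detection strategy from the subject-to-camera distance
    (thresholds d1 > d2 are placeholders)."""
    if distance_m >= d1:
        return "body_shape"
    if distance_m >= d2:
        return "head_neck_shape"
    return "face_shape"

# Strategy-dependent fusion weights (first weight: shape detection result,
# second weight: isothermal analysis result); values are placeholders.
FUSION_WEIGHTS = {
    "body_shape": (0.7, 0.3),
    "head_neck_shape": (0.5, 0.5),
    "face_shape": (0.3, 0.7),
}

def living_body_result(strategy, shape_score, isotherm_score, decision=0.5):
    """Weighted combination of the two scores (both assumed in [0, 1]);
    returns True when the fused score indicates a living body."""
    w_shape, w_iso = FUSION_WEIGHTS[strategy]
    return w_shape * shape_score + w_iso * isotherm_score >= decision
```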
PCT/CN2020/124151 2020-03-13 2020-10-27 Monitoring method and system, electronic device, and storage medium WO2021179624A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021536787A JP2022526207A (en) 2020-03-13 2020-10-27 Monitoring methods and systems, electronic devices and storage media
SG11202106842P SG11202106842PA (en) 2020-03-13 2020-10-27 Monitoring method and system, electronic equipment, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010177537.0A CN111414831B (en) 2020-03-13 2020-03-13 Monitoring method and system, electronic device and storage medium
CN202010177537.0 2020-03-13

Publications (1)

Publication Number Publication Date
WO2021179624A1 true WO2021179624A1 (en) 2021-09-16

Family

ID=71493042

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124151 WO2021179624A1 (en) 2020-03-13 2020-10-27 Monitoring method and system, electronic device, and storage medium

Country Status (5)

Country Link
JP (1) JP2022526207A (en)
CN (1) CN111414831B (en)
SG (1) SG11202106842PA (en)
TW (1) TWI768641B (en)
WO (1) WO2021179624A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414831B (en) * 2020-03-13 2022-08-12 深圳市商汤科技有限公司 Monitoring method and system, electronic device and storage medium
CN112057074A (en) * 2020-07-21 2020-12-11 北京迈格威科技有限公司 Respiration rate measuring method, respiration rate measuring device, electronic equipment and computer storage medium
CN112131976B (en) * 2020-09-09 2022-09-16 厦门市美亚柏科信息股份有限公司 Self-adaptive portrait temperature matching and mask recognition method and device
CN112215113A (en) * 2020-09-30 2021-01-12 张成林 Face recognition method and device
CN112232186B (en) * 2020-10-14 2024-02-27 盈合(深圳)机器人与自动化科技有限公司 Epidemic prevention monitoring method and system
CN112287798A (en) * 2020-10-23 2021-01-29 深圳市商汤科技有限公司 Temperature measuring method and device, electronic equipment and storage medium
CN112525355A (en) * 2020-12-17 2021-03-19 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN112750278A (en) * 2021-01-18 2021-05-04 上海燊义环保科技有限公司 Full-intelligent network nursing system
CN112883856B (en) * 2021-02-05 2024-03-29 浙江华感科技有限公司 Monitoring method, monitoring device, electronic equipment and storage medium
CN113158877A (en) * 2021-04-16 2021-07-23 上海云从企业发展有限公司 Imaging deviation analysis and biopsy method, imaging deviation analysis and biopsy device, and computer storage medium
CN113496564A (en) * 2021-06-02 2021-10-12 山西三友和智慧信息技术股份有限公司 Park artificial intelligence management and control system
CN113420629B (en) * 2021-06-17 2023-04-28 浙江大华技术股份有限公司 Image processing method, device, equipment and medium
CN113420667B (en) * 2021-06-23 2022-08-02 工银科技有限公司 Face living body detection method, device, equipment and medium
CN113432720A (en) * 2021-06-25 2021-09-24 深圳市迈斯泰克电子有限公司 Temperature detection method and device based on human body recognition and temperature detection instrument
CN113687370A (en) * 2021-08-05 2021-11-23 上海炬佑智能科技有限公司 Detection method, detection device, electronic equipment and storage medium
CN113989695B (en) * 2021-09-18 2022-05-20 北京远度互联科技有限公司 Target tracking method and device, electronic equipment and storage medium
CN114427915A (en) * 2022-01-21 2022-05-03 深圳市商汤科技有限公司 Temperature control method, temperature control device, storage medium and electronic equipment
CN114894337B (en) * 2022-07-11 2022-09-27 深圳市大树人工智能科技有限公司 Temperature measurement method and device for outdoor face recognition
CN116849613A (en) * 2023-07-12 2023-10-10 北京鹰之眼智能健康科技有限公司 Trigeminal nerve functional state monitoring system
CN116628560A (en) * 2023-07-24 2023-08-22 四川互慧软件有限公司 Method and device for identifying snake damage case data based on clustering algorithm and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180085006A1 (en) * 2016-09-23 2018-03-29 International Business Machines Corporation Detecting oral temperature using thermal camera
CN108288028A (en) * 2017-12-29 2018-07-17 佛山市幻云科技有限公司 Campus fever monitoring method, device and server
CN108710841A (en) * 2018-05-11 2018-10-26 杭州软库科技有限公司 A kind of face living body detection device and method based on MEMs infrared sensor arrays
CN110411570A (en) * 2019-06-28 2019-11-05 武汉高德智感科技有限公司 Infrared human body temperature screening method based on human testing and human body tracking technology
CN110647955A (en) * 2018-06-26 2020-01-03 义隆电子股份有限公司 Identity authentication method
CN111414831A (en) * 2020-03-13 2020-07-14 深圳市商汤科技有限公司 Monitoring method and system, electronic device and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005100193A (en) * 2003-09-26 2005-04-14 King Tsushin Kogyo Kk Invasion monitoring device
CN101793562B (en) * 2010-01-29 2013-04-24 中山大学 Face detection and tracking algorithm of infrared thermal image sequence
JP5339476B2 (en) * 2011-05-09 2013-11-13 九州日本電気ソフトウェア株式会社 Image processing system, fever tracking method, image processing apparatus, control method thereof, and control program
CN102622588B (en) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 Dual-certification face anti-counterfeit method and device
CN102855496B (en) * 2012-08-24 2016-05-25 苏州大学 Block face authentication method and system
CN105912908A (en) * 2016-04-14 2016-08-31 苏州优化智能科技有限公司 Infrared-based real person living body identity verification method
CN106372601B (en) * 2016-08-31 2020-12-22 上海依图信息技术有限公司 Living body detection method and device based on infrared visible binocular images
CN107595254B (en) * 2017-10-17 2021-02-26 黄晶 Infrared health monitoring method and system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114166358A (en) * 2021-11-19 2022-03-11 北京图菱视频科技有限公司 Robot inspection system, method, equipment and storage medium for epidemic prevention inspection
CN114166358B (en) * 2021-11-19 2024-04-16 苏州超行星创业投资有限公司 Robot inspection method, system, equipment and storage medium for epidemic prevention inspection
CN115471984A (en) * 2022-07-29 2022-12-13 青岛海尔科技有限公司 Alarm event execution method and device, storage medium and electronic device
CN115471984B (en) * 2022-07-29 2023-09-15 青岛海尔科技有限公司 Method and device for executing alarm event, storage medium and electronic device
CN117297550A (en) * 2023-10-30 2023-12-29 北京鹰之眼智能健康科技有限公司 Information processing system
CN117297550B (en) * 2023-10-30 2024-05-03 北京鹰之眼智能健康科技有限公司 Information processing system

Also Published As

Publication number Publication date
JP2022526207A (en) 2022-05-24
SG11202106842PA (en) 2021-10-28
CN111414831B (en) 2022-08-12
CN111414831A (en) 2020-07-14
TW202134940A (en) 2021-09-16
TWI768641B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
WO2021179624A1 (en) Monitoring method and system, electronic device, and storage medium
US11961620B2 (en) Method and apparatus for determining health status
US9600688B2 (en) Protecting display of potentially sensitive information
KR102296396B1 (en) Apparatus and method for improving accuracy of contactless thermometer module
JP2023171650A (en) Systems and methods for identifying persons and/or identifying and quantifying pain, fatigue, mood and intent with protection of privacy
US20170249500A1 (en) Identification and de-identification within a video sequence
JP2017526079A (en) System and method for identifying eye signals and continuous biometric authentication
CN110956061A (en) Action recognition method and device, and driver state analysis method and device
JP2014036801A (en) Biological state observation system, biological state observation method and program
WO2017000491A1 (en) Iris image acquisition method and apparatus, and iris recognition device
JP2017514186A (en) Display control method and apparatus, electronic device
CN113591701A (en) Respiration detection area determination method and device, storage medium and electronic equipment
JP2017522104A (en) Eye state determination system
JP2021077265A (en) Line-of-sight detection method, line-of-sight detection device, and control program
Healy et al. Detecting demeanor for healthcare with machine learning
WO2020248389A1 (en) Region recognition method and apparatus, computing device, and computer readable storage medium
JP2019508165A (en) Malignant tissue determination based on thermal image contours
WO2020071086A1 (en) Information processing device, control method, and program
WO2021068387A1 (en) Non-contact vital sign detection device and system
JP2013022028A (en) Circulatory system diagnostic system
CN115019940A (en) Prediction method and device of digestive tract diseases based on eye images and electronic equipment
JP2019047234A (en) Information processing device, information processing method, and program
JP2012186821A (en) Face image processing device, face image processing method, electronic still camera, digital image processing device and digital image processing method
CN113066084A (en) Physical condition detection method and device, electronic equipment and storage medium
US11257246B2 (en) Image detection method and image detection device for selecting representative image of user

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021536787

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20924154

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 521422669

Country of ref document: SA

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.01.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20924154

Country of ref document: EP

Kind code of ref document: A1