CN111881729A - Live body flow direction discrimination method, device and equipment based on thermal imaging and storage medium


Info

Publication number
CN111881729A
CN111881729A
Authority
CN
China
Prior art keywords
feature vector
flow direction
image
target
living body
Prior art date
Legal status
Granted
Application number
CN202010550150.5A
Other languages
Chinese (zh)
Other versions
CN111881729B (en)
Inventor
袁方 (Yuan Fang)
Current Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010550150.5A
Publication of CN111881729A
Application granted
Publication of CN111881729B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Abstract

The embodiment of the invention discloses a living body flow direction discrimination method, device, equipment and storage medium based on thermal imaging. The method comprises the following steps: acquiring a plurality of target images, wherein the plurality of target images are living body detection images of a target position detected by a thermal imaging device at a plurality of consecutive moments, and the pixel value of each pixel point of a living body detection image is a temperature value detected by the thermal imaging device; determining a target feature vector according to the plurality of target images; taking the target feature vector as the input of a living body flow direction discrimination model, the living body flow direction discrimination model being used for discriminating the flow direction of a living body; and acquiring a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow direction at the target position over the consecutive moments based on the direction discrimination result value. The method determines the living body flow direction from the plurality of target images, avoids the problem of privacy disclosure, and has strong generalization capability.

Description

Live body flow direction discrimination method, device and equipment based on thermal imaging and storage medium
Technical Field
The invention relates to the technical field of image discrimination, and in particular to a living body flow direction discrimination method, device, equipment and storage medium based on thermal imaging.
Background
Detecting the flow direction of living bodies has wide application in transportation, security, public safety and other fields. However, traditional living body flow direction detection is built on two-dimensional or three-dimensional visual images: it places high demands on the computing performance of the detection device, the device cost is high, and two-dimensional or three-dimensional visual images raise the problem of privacy disclosure.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a device and a storage medium for discriminating a living body flow direction based on thermal imaging.
In a first aspect, the present invention provides a living body flow direction discrimination method based on thermal imaging, the method comprising:
acquiring a plurality of target images, wherein the plurality of target images are living body detection images of a target position detected by a thermal imaging device at a plurality of consecutive moments, and the pixel value of each pixel point of a living body detection image is a temperature value detected by the thermal imaging device;
determining a target feature vector according to the plurality of target images;
taking the target feature vector as the input of a living body flow direction discrimination model, wherein the living body flow direction discrimination model is used for discriminating the flow direction of a living body; and
acquiring a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow direction at the target position over the plurality of consecutive moments based on the direction discrimination result value.
In a second aspect, the present invention further provides a living body flow direction discrimination device based on thermal imaging, the device comprising:
an image acquisition module, configured to acquire a plurality of target images, wherein the plurality of target images are living body detection images of a target position detected by a thermal imaging device at a plurality of consecutive moments, and the pixel value of each pixel point of a living body detection image is a temperature value detected by the thermal imaging device;
a vector extraction module, configured to determine a target feature vector according to the plurality of target images; and
a living body flow direction discrimination module, configured to take the target feature vector as the input of a living body flow direction discrimination model, the living body flow direction discrimination model being used for discriminating the flow direction of a living body, to acquire a direction discrimination result value output by the living body flow direction discrimination model, and to determine the living body flow direction at the target position over the plurality of consecutive moments based on the direction discrimination result value.
In a third aspect, the present invention also provides a storage medium storing computer-readable instructions which, when executed by a processor, cause the processor to perform the steps of the method according to any one of the first aspect.
In a fourth aspect, the present invention also proposes a computer device comprising at least one memory and at least one processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to carry out the steps of the method of any one of the first aspect.
The embodiment of the invention has the following beneficial effects:
the invention provides a live body flow direction discrimination method, a device, equipment and a storage medium based on thermal imaging, which comprises the steps of firstly obtaining a plurality of target images, wherein the plurality of target images are live body detection images of target positions detected by the thermal imaging equipment at a plurality of continuous moments, and the pixel value of each pixel point of the live body detection image is a temperature value detected by the thermal imaging equipment; then determining a target characteristic vector according to the plurality of target images; finally, the target characteristic vector is used as the input of a living body flow direction discrimination model, and the living body flow direction discrimination model is used for discriminating the flow direction of the living body; and acquiring a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow direction of the target position at the continuous multiple moments based on the direction discrimination result value. Since the temperature value in the scene is relatively low when there is no living body, and the temperature value in the area where the living body is present in the scene is relatively high when there is a living body, the target image formed based on the temperature values can determine whether or not the living body is present in the target image, and then the flow direction of the living body in the scene can be determined from a plurality of target images at a plurality of consecutive times. Because the whole living body flow direction screening process is based on a plurality of target images, the pixel points of the target images represent temperature values, the problem of privacy disclosure exists in comparison with the traditional two-dimensional or three-dimensional visual images, and the living body flow direction screening based on the temperature values can avoid the problem of privacy disclosure. And the living body flow direction discrimination model is adopted to discriminate the living body flow direction, so that the method has strong generalization capability. Therefore, the living body flow direction of a plurality of target images is determined, the problem of privacy disclosure is avoided, and the method has strong generalization capability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Wherein:
FIG. 1 is a flow chart of a living body flow direction discrimination method based on thermal imaging in one embodiment;
FIG. 2 is a flow chart of determining the living body flow direction discrimination model in the thermal-imaging-based living body flow direction discrimination method of FIG. 1;
FIG. 3 is a schematic structural diagram of the convolutional neural network of the thermal-imaging-based living body flow direction discrimination method of FIG. 2;
FIG. 4 is a flow chart of determining predicted feature vectors in the thermal-imaging-based living body flow direction discrimination method of FIG. 2;
FIG. 5 is a flow chart of determining predicted feature vectors in the thermal-imaging-based living body flow direction discrimination method of FIG. 4;
FIG. 6 is a flow chart of determining the plurality of target images in the thermal-imaging-based living body flow direction discrimination method of FIG. 1;
FIG. 7 is a block diagram of a living body flow direction discrimination device based on thermal imaging in one embodiment;
FIG. 8 is a block diagram of a computer device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problem of privacy disclosure in traditional living body flow direction discrimination based on two-dimensional or three-dimensional visual images, the invention provides a living body flow direction discrimination method based on thermal imaging, which may be executed by a terminal or a server. Terminals include desktop terminals and mobile terminals: desktop terminals include but are not limited to desktop computers, industrial personal computers and vehicle-mounted computers; mobile terminals include but are not limited to mobile phones, tablet computers, notebook computers, smart watches and other wearable equipment. Servers include high-performance computers and clusters of high-performance computers.
In one embodiment, as shown in fig. 1, the live body flow direction discrimination method based on thermal imaging includes:
s102, acquiring a plurality of target images, wherein the plurality of target images are live detection images of target positions detected by thermal imaging equipment at a plurality of continuous moments, and the pixel value of each pixel point of the live detection images is a temperature value detected by the thermal imaging equipment;
wherein, the plurality of target images are images for discriminating the flow direction of the living body.
The pixel value of each pixel point of a living body detection image is a temperature value detected by the thermal imaging device, so a living body detection image reflects the temperature conditions in the scene. That is to say, a target image also reflects the temperature conditions in the scene; specifically, the pixel value of each of its pixel points is a temperature value. When no living body exists in the scene, the temperature values of the pixel points of the corresponding target image are low and the target image is dark overall; when a living body exists in the scene, the temperature values in the image area corresponding to the living body are high, and that area tends toward light colors or even white. Each pixel point in the target image has a coordinate position, i.e. its position within the target image, which can be expressed as (x, y), where x is the abscissa, y is the ordinate, and the two axes are perpendicular.
The target image may be determined from the output of the thermal imaging device. The thermal imaging device may capture the temperature conditions in the scene as a thermal image (i.e. a living body detection image), from which a target image is determined; it may also capture the temperature conditions as a thermal imaging video, a frame of which is taken as a target image. Thermal imaging devices include, but are not limited to, passive infrared (PIR) devices.
The number of target images is the same as the number of consecutive moments.
A moment refers to a specific time point. The plurality of consecutive moments are the successive detection times at which the thermal imaging device performs detection, each detection time being a specific time point.
Optionally, when the plurality of target images are determined from a thermal imaging video, the number of frames contained in 0.25 seconds of the video is used as the number of target images. This avoids a time window so long that several living body flows occur within it, and also a window so short that the flow direction cannot be determined. For example, when the frame rate of the thermal imaging video is 16 frames/second, the number of frames in 0.25 seconds of video, namely 4, is taken as the number of target images, i.e. the plurality of target images comprises 4 target images; this example is not limiting.
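By way of illustration only (not part of the original disclosure), the frame-count rule above can be sketched in Python, assuming the thermal video has already been decoded into a NumPy array of per-frame temperature matrices; the function and parameter names are illustrative assumptions:

```python
import numpy as np

def select_target_images(frames: np.ndarray, start: int, fps: int = 16,
                         window_seconds: float = 0.25) -> np.ndarray:
    """Select the consecutive frames covering `window_seconds` of video.

    frames: array of shape (num_frames, height, width); every entry is a
    temperature value detected by the thermal imaging device.
    Returns 16 * 0.25 = 4 frames for the example in the text.
    """
    count = int(fps * window_seconds)
    return frames[start:start + count]

# Example: a synthetic 16 fps thermal video of 24 x 32 temperature frames.
video = np.random.uniform(18.0, 37.0, size=(160, 24, 32))
target_images = select_target_images(video, start=0)
assert target_images.shape == (4, 24, 32)
```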
The target position is a scene in which the flow direction of living bodies needs to be detected. It will be appreciated that the target position is generally the junction of two spaces, for example where a door is located.
S104, determining a target characteristic vector according to the plurality of target images;
the image processing method comprises the steps of constructing an image matrix corresponding to each target image according to pixel values of all pixel points of the target image, wherein each vector element of the image matrix corresponding to each target image represents one pixel point in the target image, and the value of the vector element of the image matrix corresponding to each target image represents the pixel value of the pixel point in the target image, namely the value of the vector element of the image matrix corresponding to each target image represents a temperature value. And then, carrying out channel splicing on all image matrixes corresponding to each target image to obtain a target characteristic vector, namely, values of vector elements of the target characteristic vector indicate temperature values. The channel splicing refers to splicing in channel dimensions, the number of channels of the spliced target feature vectors is the same as the number of matrices of all image matrices corresponding to each target image, for example, 4 image matrices corresponding to each target image are subjected to channel splicing, and the number of channels of the obtained target feature vectors is 4. Moreover, the specification of the feature vector of each channel of the target feature vector is the same as the specification of the image matrix corresponding to each target image, that is, the number of dimensions of the first dimension of the target feature vector is the same as the number of dimensions of the first dimension of the image matrix corresponding to each target image, and the number of dimensions of the second dimension of the target feature vector is the same as the number of dimensions of the second dimension of the image matrix corresponding to each target image.
Wherein the first dimension is height and the second dimension is width. For example, if the image matrix corresponding to each target image is 24 × 32 × 1(1 is the number of channels), the first dimension of the image matrix corresponding to each target image is the number of horizontal vector elements in each channel, and the second dimension of the image matrix corresponding to each target image is the number of vertical vector elements in each channel, then the first dimension of the image matrix corresponding to each target image is 24, and the second dimension of the image matrix corresponding to each target image is 32.
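A minimal sketch of this channel splicing, assuming the four target images are available as 24 × 32 NumPy temperature matrices as in the example above; all names are illustrative:

```python
import numpy as np

# Four 24 x 32 temperature matrices, one image matrix per target image.
image_matrices = [np.random.uniform(18.0, 37.0, size=(24, 32)) for _ in range(4)]

# Channel splicing: stack along a new channel axis. The result keeps the
# first dimension (height 24) and second dimension (width 32) of each
# image matrix and has as many channels as there are image matrices.
target_feature_vector = np.stack(image_matrices, axis=-1)
assert target_feature_vector.shape == (24, 32, 4)
```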
S106, taking the target characteristic vector as an input of a living body flow direction discrimination model, wherein the living body flow direction discrimination model is used for discriminating the flow direction of a living body;
the target characteristic vectors are input into a living body flow direction discrimination model to discriminate the flow direction of the living body, the living body flow direction discrimination model discriminates the flow direction discrimination result value, the direction discrimination result value is used for determining the flow direction of the living body in a plurality of target images, and the expression mode of the target characteristic vectors is an expression mode which meets the requirement of the input living body flow direction discrimination model.
Optionally, the living body flow direction discrimination model is obtained by training a convolutional neural network. The convolutional neural network is trained with a plurality of image feature vector samples, one image feature vector sample corresponding to a plurality of image samples at a plurality of consecutive moments at the same position. Each image feature vector sample is generated by the same rule as the target feature vector, so each sample has the same number of vector elements as the target feature vector, and its vector elements have the same meaning. Having the same meaning means that the relative coordinate positions of the scene corresponding to the vector elements are the same and the image order is the same; for example, the vector element in the first row and second column of an image feature vector sample and the vector element in the first row and second column of the target feature vector are both extracted from the same relative coordinate position of the scene (e.g. 0.001 cm, 0.002 cm) and both come from the first image (the first image sample among the image samples, and the first target image among the plurality of target images); this example is not limiting. Because the living body flow direction discrimination model is trained from a convolutional neural network, it fully exploits a machine learning algorithm's adaptability to new samples and therefore has strong generalization capability.
S108, obtaining a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow directions of the target position at the continuous multiple moments based on the direction discrimination result value.
That is, the direction discrimination result value obtained by inputting the target feature vector into the living body flow direction discrimination model for living body flow direction discrimination is acquired.
Optionally, the direction discrimination result value comprises two values: one represents the probability that the living body flow direction is a first direction, and the other represents the probability that it is a second direction. Both probabilities take values from 0 to 1, and the living body flow direction over the plurality of target images can be discriminated from them.
Optionally, when the probability of the first direction is greater than the probability of the second direction, the living body is more likely to be flowing in the first direction, so the flow direction of the living body at the target position over the consecutive moments can be determined to be the first direction; when the probability of the first direction is smaller than the probability of the second direction, the living body is more likely to be flowing in the second direction, so the flow direction can be determined to be the second direction. When both probabilities are 0, no living body is present in the plurality of target images. When the two probabilities are equal and non-zero, the living body flow direction discrimination is abnormal; a discrimination-abnormality reminder signal is then generated to remind the user of the abnormality, so that the user can analyze the problem in time, improving the efficiency of living body flow direction discrimination over the plurality of target images.
Optionally, when the probability of the first direction is greater than both the probability of the second direction and the flow direction probability threshold, it indicates that the living body has a greater possibility of flowing in the first direction, and therefore it may be determined that the flow direction of the living body at the target position at the consecutive multiple times is the first direction; when the probability of the second direction is greater than both the probability of the first direction and the flow direction probability threshold, it indicates that the living body is more likely to flow in the second direction, and thus it can be determined that the flow direction of the living body at the target position at the plurality of consecutive times is the second direction. The flow direction probability threshold is a specific numerical value between 0 and 1, misjudgment can be avoided by setting the flow direction probability threshold, and the accuracy of living body flow direction discrimination is further improved.
Optionally, the flow direction probability threshold is greater than 0.5; for example, it may be 0.55, 0.6, 0.65, 0.7, 0.75 or 0.8, which is not limiting.
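The decision rules above condense into a short sketch; the default threshold value and the function name are illustrative assumptions, not taken from the disclosure:

```python
def decide_flow_direction(p_first: float, p_second: float,
                          threshold: float = 0.6) -> str:
    """Map the two output probabilities to a flow-direction decision,
    following the rules described above."""
    if p_first == 0.0 and p_second == 0.0:
        return "no living body present"
    if p_first == p_second:
        # Equal, non-zero probabilities: discrimination abnormality.
        return "abnormal: emit discrimination-abnormality reminder signal"
    if p_first > p_second and p_first > threshold:
        return "first direction"
    if p_second > p_first and p_second > threshold:
        return "second direction"
    return "undetermined"  # the larger probability did not clear the threshold
```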
When living body flow direction discrimination is performed continuously, the detection times of the target images used by two adjacent discriminations may partially overlap or not overlap; non-overlapping detection times may be adjoining or spaced apart. For example, if the detection times of the target images used for the first discrimination lie in the period 12:00 to 12:05 and those used for the second discrimination lie in 12:03 to 12:08, the period 12:03 to 12:05 is shared, so the two sets of detection times partially overlap; this example is not limiting. As another example, if the first discrimination uses detection times in 12:00 to 12:05 and the second uses 12:05 to 12:10, the two sets of detection times adjoin; if the first uses 12:00 to 12:05 and the second uses 12:08 to 12:13 while the thermal imaging device also detects during 12:05 to 12:08, the two sets of detection times are spaced apart. These examples are not limiting.
In this embodiment, a plurality of target images are first acquired, the target images being living body detection images of a target position detected by a thermal imaging device at a plurality of consecutive moments, where the pixel value of each pixel point of a living body detection image is a temperature value detected by the thermal imaging device; a target feature vector is then determined according to the plurality of target images; finally, the target feature vector is taken as the input of a living body flow direction discrimination model used for discriminating the flow direction of a living body, a direction discrimination result value output by the model is acquired, and the living body flow direction at the target position over the consecutive moments is determined based on that value. Since temperature values in a scene are relatively low when no living body is present and relatively high in the area where a living body is present, a target image formed from temperature values can reveal whether a living body is present, and the flow direction of the living body in the scene can then be determined from a plurality of target images at a plurality of consecutive moments. Because the whole discrimination process is based on target images whose pixel points represent temperature values, it avoids the privacy disclosure problem of traditional two-dimensional or three-dimensional visual images. And because a living body flow direction discrimination model is used for the discrimination, the method has strong generalization capability.
As shown in fig. 2, in one embodiment, the method further comprises:
s202, obtaining a plurality of image feature vector samples, wherein one image feature vector sample corresponds to a plurality of image samples at a plurality of continuous moments at the same position, the image feature vector sample is a multi-channel two-dimensional matrix, and the number of channels of the image feature vector sample is equal to the number of the image samples;
Here, the number of channels of an image feature vector sample being equal to the number of the plurality of image samples means that the sample has one channel per image sample.
The plurality of image feature vector samples are the samples used to train the convolutional neural network. Their number may be 500, 1000, 2000, 3000, 5000 or 6000, which is not limiting.
And the pixel value of each pixel point of the image sample is a temperature value.
Each image feature vector sample carries a living body flow direction calibration value. The calibration value is a vector containing 2 vector elements: one element is the calibration value for the living body flow direction being a first direction, and the other is the calibration value for it being a second direction.
Optionally, when an image sample contains the image area of a living body, the image area of the living body is a continuous region, the pixel values of the pixel points in that area are all greater than a preset pixel value, the number of pixel points in the height direction of the area is greater than a first preset number, and the number of pixel points in the width direction is greater than a second preset number. Constraining the size of the living body's image area in this way removes invalid data and improves the accuracy of the direction discrimination result value output by the living body flow direction discrimination model.
Optionally, the first preset number is less than or equal to the second preset number. For example, the first preset number is 10 pixel points, and the second preset number is 12 pixel points, which is not specifically limited in this example.
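This size constraint can be sketched as follows; as a simplification the sketch checks only the bounding box of the warm region (the continuity check on the region is omitted), and the default preset numbers are the illustrative ones above:

```python
import numpy as np

def living_body_area_is_valid(image: np.ndarray, preset_pixel_value: float,
                              first_preset_number: int = 10,
                              second_preset_number: int = 12) -> bool:
    """Check that the warm region spans more than `first_preset_number`
    pixel points in the height direction and more than
    `second_preset_number` pixel points in the width direction."""
    rows, cols = np.nonzero(image > preset_pixel_value)
    if rows.size == 0:
        return False  # no pixel exceeds the preset pixel value
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return height > first_preset_number and width > second_preset_number
```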
It will be appreciated that the living body flow direction calibration values may be placed in an independent data list and stored separately from the image feature vector samples, or may be carried by and stored together with each image feature vector sample.
S204, inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample, wherein the predicted feature vector is used for indicating the flow direction of a predicted living body corresponding to the image feature vector sample, and the number of vector elements of the predicted feature vector is 2;
the plurality of image feature vector samples are sequentially input into a convolutional neural network for feature extraction and processing, so that a predicted feature vector corresponding to each image feature vector sample is obtained, that is, each image feature vector sample corresponds to one predicted feature vector.
Wherein one vector element of the predicted feature vector is used for indicating the probability that the predicted living body flow direction is a first direction, and the other vector element of the predicted feature vector is used for indicating the probability that the predicted living body flow direction is a second direction.
S206, training the convolutional neural network according to the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to each image feature vector sample to obtain the living body flow direction discrimination model, wherein the living body flow direction calibration value corresponding to the image feature vector sample is used for indicating the real living body flow direction corresponding to the image feature vector sample.
Specifically, a loss is calculated from the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to that sample, the parameters of the convolutional neural network are updated according to the loss, and when training finishes, the trained convolutional neural network is taken as the living body flow direction discrimination model.
Optionally, a machine learning platform may be used to construct the living body flow direction discrimination model based on a convolutional neural network; for example, TensorFlow (an end-to-end open-source machine learning platform) or Keras, which is not limiting.
Keras is an open-source artificial neural network library written in Python (a cross-platform programming language). It can serve as a high-level application programming interface for TensorFlow, Microsoft CNTK and Theano (a Python library for defining, optimizing and evaluating mathematical expression computations), and can be used to design, debug, evaluate, apply and visualize deep learning models.
Because the living body flow direction discrimination model is trained on a convolutional neural network, and convolutional neural networks have strong image-understanding capability, the trained model can accurately discriminate the living body flow direction in a plurality of target images. Because a convolutional neural network needs fewer parameters than a traditional neural network, the model can be trained with a small number of image feature vector samples, improving training efficiency. And thanks to the deep learning capability of the convolutional neural network, low-order image features need not be extracted manually, which simplifies the living body flow direction discrimination procedure and improves its efficiency.
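Collecting the layer parameters spelled out in the embodiments below (64 convolution channels with 5 × 5 kernels, zero padding and ReLU; 3 × 3 non-overlapping max pooling; discarding rates 0.2 and 0.25; compression dimensions 32 and 2), a minimal Keras sketch of such a network could look as follows. The dense-layer activations, the loss and the optimizer are not specified in the description and are assumptions here:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 32, 4)),                     # 4 stacked thermal frames
    layers.Conv2D(64, (5, 5), padding="same",
                  activation="relu"),                      # convolutional layer: 24x32x64
    layers.MaxPooling2D(pool_size=(3, 3), strides=(3, 3)), # pooling layer: 8x10x64
    layers.Dropout(0.2),                                   # first discarding layer
    layers.Flatten(),                                      # compression layer
    layers.Dense(32, activation="relu"),                   # first fully-connected layer
    layers.Dropout(0.25),                                  # second discarding layer
    layers.Dense(2, activation="softmax"),                 # two direction probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```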
As shown in fig. 3, in one embodiment, the convolutional neural network includes a convolutional layer, a pooling layer, and an abstract compression module connected in sequence.
As shown in fig. 4, in an embodiment, the inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample includes:
s402, inputting the plurality of image feature vector samples into a convolution layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample;
the plurality of image feature vector samples are sequentially input into a convolutional layer of the convolutional neural network for channel expansion and abstract feature extraction, so as to obtain a first feature vector corresponding to each image feature vector sample, that is, the number of the first feature vectors corresponding to each image feature vector sample is multiple, and each image feature vector sample corresponds to one first feature vector corresponding to each image feature vector sample.
By extracting the first feature vector, the expansion of the channel and the extraction of the abstract feature are realized at the same time.
The abstract features are the abstract features of the image feature vector sample and are semantics obtained by performing deep layer reasoning on a living object and a background in the image feature vector sample.
The channel expansion means that the number of channels of the extracted first feature vector is greater than the number of channels of the image feature vector sample, that is, the number of convolution channels of the convolution layer of the convolution neural network is greater than the number of channels of the image feature vector sample. For example, when the number of channels of the image feature vector sample is 4, the number of channels of the first feature vector may be any one of 32 channels, 64 channels, and 128 channels by channel expansion of the convolutional layer of the convolutional neural network, which is not limited in this example.
S404, inputting the first feature vector corresponding to each image feature vector sample into a pooling layer of the convolutional neural network for maximum pooling to obtain a second feature vector corresponding to each image feature vector sample;
and sequentially inputting all the first feature vectors corresponding to each image feature vector sample into a pooling layer of the convolutional neural network for maximum pooling so as to remove redundant information in the first feature vectors, thereby further extracting the abstract features of the image feature vector samples and simultaneously realizing dimension reduction.
Optionally, the pooling layer of the convolutional neural network performs maximum pooling on the first feature vector in the first and second dimensions using a pooling matrix, a pooling step length and a non-overlapping pooling mode, so as to obtain the second feature vector corresponding to each image feature vector sample. It is understood that the preset pooling matrix of the pooling layer may be any one of a 2 × 2, 3 × 3, 4 × 4 or 5 × 5 matrix.
The first dimension is height and the second dimension is width. For example, for a first feature vector of 24 × 32 × 64 (64 being the number of channels), the first dimension is 24 and the second dimension is 32; likewise, for an image feature vector sample of 24 × 32 × 4 (4 being the number of channels), the first dimension is 24 and the second dimension is 32.
For example, with a first feature vector of 24 × 32 × 64 (24 the first dimension, 32 the second dimension, 64 the number of channels), the pooling layer performs maximum pooling in the first and second dimensions with a 3 × 3 matrix, a step size of 3 and non-overlapping pooling, obtaining a feature vector of 8 × 10 × 64 (8 the first dimension, 10 the second dimension, 64 the number of channels); that is, the first feature vector corresponding to each image feature vector sample is 24 × 32 × 64 and the second feature vector is 8 × 10 × 64. This example is not limiting.
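The 24 × 32 × 64 → 8 × 10 × 64 arithmetic can be checked with a small pure-NumPy pooling sketch (non-overlapping 3 × 3 windows; remainder rows and columns that do not fill a window are dropped, which is what maps 32 to 10); the names are illustrative:

```python
import numpy as np

def max_pool(x: np.ndarray, k: int = 3) -> np.ndarray:
    """Non-overlapping k x k maximum pooling over the first two
    dimensions of an (H, W, C) feature vector."""
    h, w, c = x.shape
    h_out, w_out = h // k, w // k          # 24 // 3 = 8, 32 // 3 = 10
    x = x[:h_out * k, :w_out * k, :]       # drop the incomplete remainder
    return x.reshape(h_out, k, w_out, k, c).max(axis=(1, 3))

second_feature_vector = max_pool(np.random.rand(24, 32, 64))
assert second_feature_vector.shape == (8, 10, 64)
```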
And S406, inputting the second feature vector corresponding to each image feature vector sample into an abstract compression module of the convolutional neural network for abstract compression processing to obtain a predicted feature vector corresponding to each image feature vector sample.
Abstract compression processing further extracts abstract features from the second feature vector while reducing its dimensions and its number of channels. Dimension reduction lowers the sizes of the first and second dimensions of the second feature vector; channel reduction lowers its number of channels.
In this embodiment, the convolutional layer, the pooling layer and the abstract compression module of the convolutional neural network are used to perform abstract feature extraction, dimension reduction and compression on the image feature vector samples, so as to obtain the predicted feature vector corresponding to each image feature vector sample.
As shown in fig. 3, the abstract compression module includes a first discarding layer, a compression layer, a first fully-connected layer, a second discarding layer and a second fully-connected layer, connected in sequence. A discarding layer is also referred to as a Dropout layer, and the compression layer is also referred to as a Flatten layer.
As shown in fig. 5, in an embodiment, the inputting the second feature vector corresponding to each sample of the image feature vector into an abstract compression module of the convolutional neural network for performing an abstract compression process to obtain a predicted feature vector corresponding to each sample of the image feature vector includes:
s502, inputting the second feature vector corresponding to each image feature vector sample into a first discarding layer of the convolutional neural network for random discarding to obtain a third feature vector corresponding to each image feature vector sample;
and sequentially inputting all the second feature vectors corresponding to each image feature vector sample into a first discarding layer of the convolutional neural network for random discarding, so as to increase the stability and the anti-overfitting capability of the living body flow direction screening model trained based on the convolutional neural network.
Random discarding means randomly selecting part of the data input into the first discarding layer (i.e. the second feature vector corresponding to each image feature vector sample) as discard elements and multiplying those elements by 0.
And a first discarding layer of the convolutional neural network randomly discards the data in the second feature vector corresponding to each image feature vector sample by adopting a first preset discarding rate.
The first preset discarding rate is a preset discarding rate and may take a value from 0.15 to 0.5; for example, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45 or 0.5, which is not limiting. The discarding rate is the proportion of discarded data, i.e. the number of discarded items divided by the total number of items. For example, if the data input to the first discarding layer is 8 × 10 × 64 (8 the first dimension, 10 the second dimension, 64 the number of channels) and discarding is performed in the first and second dimensions, the discarding rate is the total number of discarded items divided by 8 × 10 × 64.
Alternatively, the value of the first preset discarding rate may be determined according to the size of data input to the first discarding layer of the convolutional neural network and the size of data required to be output by the first discarding layer of the convolutional neural network.
Optionally, when the image feature vector samples are 24 × 32 × 4 (24 the first dimension, 32 the second dimension, 4 the number of channels), the first discarding layer of the convolutional neural network randomly discards data in the second feature vector corresponding to each image feature vector sample with a discarding rate of 0.2, which avoids both insufficient and excessive discarding.
S504, inputting the third feature vector corresponding to each image feature vector sample into a compression layer of the convolutional neural network for one-dimensional vector conversion to obtain a fourth feature vector corresponding to each image feature vector sample;
and sequentially inputting all the third feature vectors corresponding to each image feature vector sample into a compression layer of the convolutional neural network for one-dimensional vector conversion, wherein the compression layer of the convolutional neural network converts the third feature vectors corresponding to each image feature vector sample into one-dimensional vectors, that is, the fourth feature vectors corresponding to each image feature vector sample are one-dimensional vectors.
Wherein a compression layer of the convolutional neural network converts the third feature vector corresponding to each of the image feature vector samples into a one-dimensional vector, comprising: converting the third feature vector corresponding to each image feature vector sample into a single channel to obtain a single-channel feature vector; and then, performing one-dimensional vector conversion on a second dimension of the single-channel feature vector to obtain a fourth feature vector corresponding to each image feature vector sample, that is, the dimension number of the first dimension of the fourth feature vector corresponding to each image feature vector sample is 1, the dimension number of the second dimension of the fourth feature vector corresponding to each image feature vector sample is the same as the vector element number of the third feature vector corresponding to each image feature vector sample, and the channel number of the fourth feature vector corresponding to each image feature vector sample is 1.
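For instance, the 8 × 10 × 64 second feature vector from the pooling example would, after random discarding, be flattened into a fourth feature vector whose first dimension is 1 and whose second dimension is 8 × 10 × 64 = 5120. A one-line NumPy illustration (names are illustrative):

```python
import numpy as np

third_feature_vector = np.random.rand(8, 10, 64)             # after random discarding
fourth_feature_vector = third_feature_vector.reshape(1, -1)  # 1 x 5120 one-dimensional vector
assert fourth_feature_vector.shape == (1, 8 * 10 * 64)
```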
S506, inputting the fourth feature vector corresponding to each image feature vector sample into a first full-connection layer of the convolutional neural network for abstract compression to obtain a fifth feature vector corresponding to each image feature vector sample;
and sequentially inputting all the fourth feature vectors corresponding to each image feature vector sample into a first full connection layer of the convolutional neural network for abstract compression, wherein the first full connection layer of the convolutional neural network performs abstract feature extraction and dimension compression on the fourth feature vectors corresponding to each image feature vector sample according to a first preset compression dimension, that is, the dimension of a fifth feature vector corresponding to each image feature vector sample is the same as the first preset compression dimension.
The first preset compression dimension is a preset dimension, for example, the first preset compression dimension may be 16 dimensions, 32 dimensions, 64 dimensions, or 128 dimensions, which is not limited in this example.
Optionally, when the image feature vector sample is 24 × 32 × 4 (24 the first dimension, 32 the second dimension, 4 the number of channels), the first preset compression dimension adopts 32 dimensions.
S508, inputting the fifth feature vector corresponding to each image feature vector sample into a second discarding layer of the convolutional neural network for random discarding to obtain a sixth feature vector corresponding to each image feature vector sample;
and sequentially inputting all the fifth feature vectors corresponding to each image feature vector sample into a second discarding layer of the convolutional neural network for random discarding, so as to further increase the stability and the anti-overfitting capability of the living body flow direction discrimination model trained based on the convolutional neural network.
And a second discarding layer of the convolutional neural network randomly discards the data in the fifth feature vector corresponding to each image feature vector sample by adopting a second preset discarding rate.
Random discarding here means randomly selecting part of the data input into the second discarding layer (i.e. the fifth feature vector corresponding to each image feature vector sample) as discard elements and multiplying those elements by 0.
Wherein, the second preset discarding rate is a preset discarding rate. The second predetermined discard rate may be a value of 0.15 to 0.5, for example, the second predetermined discard rate may be 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, which is not limited by the example.
Alternatively, the value of the second preset discarding rate may be determined according to the size of data input to the second discarding layer of the convolutional neural network and the size of data required to be output by the second discarding layer of the convolutional neural network.
Optionally, when the image feature vector samples are 24 × 32 × 4 (24 the first dimension, 32 the second dimension, 4 the number of channels), the second discarding layer of the convolutional neural network randomly discards data in the fifth feature vector corresponding to each image feature vector sample with a discarding rate of 0.25, which avoids both insufficient and excessive discarding.
And S510, inputting the sixth feature vector corresponding to each image feature vector sample into a second full-connection layer of the convolutional neural network for abstract compression, so as to obtain a predicted feature vector corresponding to each image feature vector sample.
And sequentially inputting all the sixth feature vectors corresponding to the image feature vector samples into a second full-connection layer of the convolutional neural network for abstract compression, wherein the second full-connection layer of the convolutional neural network performs abstract feature extraction and dimension compression on the sixth feature vectors corresponding to the image feature vector samples according to a second preset compression dimension, that is, the dimension of the predicted feature vector corresponding to each image feature vector sample is the same as the second preset compression dimension.
Wherein the second preset compression dimension is a preset dimension.
Optionally, when the image feature vector sample is 24 × 32 × 4 (24 the first dimension, 32 the second dimension, 4 the number of channels) and the direction discrimination result value obtained by living body flow direction discrimination comprises two result values (i.e. the probability of the first direction and the probability of the second direction), the second preset compression dimension is 2 dimensions.
In this embodiment, the second feature vector corresponding to each image feature vector sample is processed into two result values (i.e., a probability in the first direction and a probability in the second direction) for living body flow direction discrimination by discarding and abstract compression, so as to obtain a predicted living body flow direction corresponding to the image feature vector sample.
In one embodiment, the inputting the plurality of image feature vector samples into a convolutional layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample includes:
respectively inputting the plurality of image feature vector samples to a plurality of convolution channels in convolution layers of the convolution neural network, performing convolution operation on convolution kernels corresponding to the plurality of convolution channels and the plurality of image feature vector samples to obtain a first feature vector corresponding to each image feature vector sample, wherein the number of the convolution channels is greater than the number of the channels of the image feature vector samples.
The plurality of image feature vector samples are input to the plurality of convolution channels in the convolutional layer of the convolutional neural network, one image feature vector sample at a time. The convolution kernels corresponding to each convolution channel perform a multidimensional convolution operation over all channels of the input sample, yielding one convolution result per convolution channel; each such result is a single-channel matrix, and the convolution results belonging to the same image feature vector sample are combined along the channel dimension to obtain the first feature vector corresponding to that sample. That is, the number of channels of the first feature vector corresponding to each image feature vector sample equals the number of convolution channels. It will be appreciated that each convolution channel corresponds to a group of convolution kernels, and each group may include at least one kernel. For example, image feature vector samples of size 24 × 32 × 4 (24 the first dimension, 32 the second dimension, 4 the number of channels, one image sample per channel) are input to 64 convolution channels in the convolutional layer; the convolution kernels of each convolution channel perform a 4-dimensional convolution over the sample's 4 channels, producing a convolution result of size 24 × 32 × 1 (24 the first dimension, 32 the second dimension, 1 the number of channels) per convolution channel, so the 64 convolution channels produce 64 convolution results for each sample; combining these 64 results in the channel dimension gives the first feature vector of size 24 × 32 × 64 (24 the first dimension, 32 the second dimension, 64 the number of channels) corresponding to each image feature vector sample.
For another example, for an image feature vector sample with a size of 24 × 32 × 4, each of the n1 convolution channels in the convolutional layer of the convolutional neural network convolves each of the 4 channels of the sample with its convolution kernels, obtaining one result per channel, i.e., 4 per-channel convolution results. These 4 results are added at corresponding positions (that is, the 4 values at a target position in the 4 per-channel results are summed, where the target position is any position in a convolution result), which yields the single-channel matrix corresponding to one convolution channel. Repeating this over all n1 convolution channels yields n1 single-channel matrices, which together constitute the convolution result of the image feature vector sample; a sketch of this per-channel summation follows below. Optionally, the number of the plurality of convolution channels may be any one of 32, 64 and 128.
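To illustrate the per-channel summation just described, a minimal sketch follows (pure NumPy/SciPy, with "same"-mode cross-correlation standing in for the convolution, and randomly generated data as a placeholder); this is a hedged illustration, not the patent's implementation:

```python
import numpy as np
from scipy.signal import correlate2d

# One image feature vector sample with 4 channels, and one convolution
# channel holding a group of 4 kernels (sizes follow the 24 x 32 x 4 example).
sample = np.random.rand(24, 32, 4)
kernels = np.random.rand(5, 5, 4)

# Each kernel convolves its own input channel; the 4 per-channel results
# are then added at corresponding positions, yielding the single-channel
# matrix corresponding to this convolution channel.
per_channel = [correlate2d(sample[:, :, c], kernels[:, :, c], mode="same")
               for c in range(4)]
single_channel_matrix = np.sum(per_channel, axis=0)
print(single_channel_matrix.shape)  # (24, 32)
```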
Optionally, when the image feature vector sample is 24 × 32 × 4 (24 is the first dimension of the image feature vector sample, 32 is the second dimension, and 4 is the number of channels), the plurality of convolution channels includes 64 convolution channels, and the number of channels of the first feature vector corresponding to each image feature vector sample is then 64.
Optionally, the convolution kernels corresponding to the plurality of convolution channels have the same size; for example, the size of the convolution kernel may be any one of 2 × 2, 3 × 3, 4 × 4 and 5 × 5. Optionally, when the image feature vector sample is 24 × 32 × 4, the size of the convolution kernel is 5 × 5.
Optionally, when the convolution kernels corresponding to the convolution channels perform the convolution operation on the image feature vector samples, a zero-padding scheme (padding the edges with 0) is further adopted, so that the first-dimension size of the first feature vector corresponding to each image feature vector sample is the same as the first-dimension size of the image feature vector sample, and the second-dimension size of the first feature vector is the same as the second-dimension size of the image feature vector sample, thereby preserving the first and second dimensions of the image feature vector sample.
Optionally, a ReLU activation function is further applied when the convolution kernels corresponding to the convolution channels perform the convolution operation on the image feature vector samples. Compared with a linear function, the ReLU activation function has stronger expressive capability and does not suffer from the vanishing-gradient problem, so the convergence of the living body flow direction discrimination model obtained by training the convolutional neural network with ReLU in the convolutional layer remains stable.
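As a minimal end-to-end sketch of this convolutional layer (assuming PyTorch; the 4 → 64 channel expansion, 5 × 5 kernels, zero padding and ReLU follow the examples above, while the batch size and random input are placeholders):

```python
import torch
import torch.nn as nn

# Channel expansion: 4 input channels (one per target image) -> 64
# convolution channels; padding=2 keeps the 24 x 32 spatial size.
conv = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=5, padding=2)
relu = nn.ReLU()

sample = torch.randn(1, 4, 24, 32)          # one image feature vector sample
first_feature_vector = relu(conv(sample))   # shape: (1, 64, 24, 32)
print(first_feature_vector.shape)
```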
In one embodiment, the training the convolutional neural network according to the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to each image feature vector sample to obtain the living body flow direction discrimination model includes:
calculating the loss of the convolutional neural network according to the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to each image feature vector sample; updating parameters in the convolutional neural network according to the loss, wherein the updated parameters of the convolutional neural network are used for calculating the predicted feature vector corresponding to each image feature vector sample next time; and repeatedly executing the steps of the method until the loss reaches a first convergence condition or the iteration number reaches a second convergence condition, and determining the convolutional neural network with the loss reaching the first convergence condition or the iteration number reaching the second convergence condition as the living body flow direction discrimination model.
That is, the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to each image feature vector sample are input into the preset loss function, and the loss of the convolutional neural network is calculated. The loss of the convolutional neural network is then input into a preset parameter update function to calculate the values to be updated of the parameters of the convolutional neural network, including: the value to be updated of the parameters of the convolutional layer, of the pooling layer, of the first discarding layer, of the compression layer, of the first fully-connected layer, of the second discarding layer, and of the second fully-connected layer. The parameters of the convolutional neural network are updated with these values to be updated. The iterative computation is executed repeatedly, and training ends when the loss of the convolutional neural network reaches the first convergence condition or the number of iterations of computing the predicted feature vector corresponding to each image feature vector sample reaches the second convergence condition; the trained convolutional neural network is taken as the living body flow direction discrimination model.
The first convergence condition includes: the losses of the convolutional neural network calculated in two successive iterations satisfy the Lipschitz condition (Lipschitz continuity condition).
The second convergence condition is a preset natural number, i.e., a maximum number of iterations.
Optionally, the preset loss function J(θ) takes a standard two-class cross-entropy form (consistent with the two direction probabilities output by the model):

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log\hat{y}^{(i)} + \left(1-y^{(i)}\right)\log\left(1-\hat{y}^{(i)}\right)\right]$$

wherein m is the number of image feature vector samples in the plurality of image feature vector samples, $\hat{y}^{(i)}$ is the predicted feature vector corresponding to the ith image feature vector sample, $y^{(i)}$ is the living body flow direction calibration value corresponding to the ith image feature vector sample, and the output value of the preset loss function J(θ) is the loss of the convolutional neural network.
Optionally, the preset update function $z_j$ is:

$$z_j = \theta_j - \alpha\,\frac{\partial}{\partial\theta_j}J(\theta)$$

wherein the output value of the preset update function $z_j$ is the value to be updated of a parameter of the convolutional neural network, $\theta_j$ is the jth parameter of the convolutional neural network, $\frac{\partial}{\partial\theta_j}J(\theta)$ denotes the partial derivative of the preset loss function J(θ) of the convolutional neural network with respect to the jth parameter, and α is the learning rate of machine learning.
In this embodiment, the parameters of the convolutional neural network are updated during training, so that with each iterative computation the predicted feature vector corresponding to each image feature vector sample draws closer to the living body flow direction calibration value corresponding to that sample; by setting either of the first convergence condition and the second convergence condition as the training end condition, training efficiency is improved.
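A hedged sketch of this training loop follows, assuming PyTorch; `model` is any network ending in two direction probabilities, `samples` and `labels` are placeholder tensors, and a simple loss-change test stands in for the Lipschitz-based first convergence condition:

```python
import torch
import torch.nn as nn

def train(model, samples, labels, lr=0.01, max_iters=1000, eps=1e-6):
    criterion = nn.BCELoss()  # two-class cross-entropy on the output probabilities
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # theta_j - alpha * dJ/dtheta_j
    prev_loss = None
    for _ in range(max_iters):                  # second convergence condition: max_iters
        predicted = model(samples)              # predicted feature vectors
        loss = criterion(predicted, labels)     # compare with calibration values
        optimizer.zero_grad()
        loss.backward()                         # partial derivatives w.r.t. every parameter
        optimizer.step()                        # update the parameters of every layer
        if prev_loss is not None and abs(prev_loss - loss.item()) < eps:
            break                               # first convergence condition (simplified)
        prev_loss = loss.item()
    return model  # the trained living body flow direction discrimination model
```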
As shown in fig. 6, in an embodiment, the acquiring the plurality of target images includes:
s602, acquiring images to be cleaned corresponding to each sampling period, wherein the images to be cleaned corresponding to one sampling period are thermal imaging images acquired in one sampling period;
specifically, the thermal imaging device performs detection according to sampling periods to obtain images to be cleaned corresponding to the sampling periods.
The sampling period is a preset duration, such as 1 second, 2 seconds, 3 seconds, 10 seconds or 20 seconds, and is not limited in this example.
For example, when the sampling period is 1 second, the thermal imaging apparatus detects the target position over the intervals from 0 seconds (the acquisition start time) to 1 second (the first sampling period), from 1 second to 2 seconds (the second sampling period), and from 2 seconds to 3 seconds (the third sampling period); each interval excludes its start time and includes its end time (e.g., the first sampling period does not include 0 seconds but includes 1 second), which is not specifically limited in this example.
Optionally, when the thermal imaging device captures a thermal imaging video at a frame rate of 16 frames per second and the sampling period is 1 second, 16 live detection images are acquired per second, which is not limited in this example.
S604, if the number of images to be cleaned corresponding to a target sampling period is smaller than a preset image threshold, or any live detection image among the images to be cleaned corresponding to the target sampling period has a number of pixel points smaller than a preset pixel point threshold, discarding the images to be cleaned corresponding to the target sampling period; otherwise, taking the images to be cleaned corresponding to the target sampling period as candidate images, wherein the target sampling period is any sampling period;
That is, if the number of images to be cleaned corresponding to the target sampling period is smaller than the preset image threshold, or the number of pixel points of any image to be cleaned corresponding to the target sampling period is smaller than the preset pixel point threshold, all images to be cleaned corresponding to that target sampling period are discarded.
And S606, determining the multiple target images from the candidate images according to the continuous multiple moments.
That is, the candidate images whose detection times coincide with the plurality of consecutive moments are taken as the target images.
In this embodiment, all images to be cleaned corresponding to target sampling periods that do not meet the conditions are discarded, which improves the quality of the candidate images and hence of the target images, further improving the accuracy of the discrimination results output by the living body flow direction discrimination model.
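A minimal sketch of this cleaning step (S602–S604) follows, assuming the images to be cleaned are NumPy arrays grouped by sampling period; the threshold values are illustrative placeholders:

```python
from typing import Dict, List
import numpy as np

def clean(periods: Dict[int, List[np.ndarray]],
          image_threshold: int = 16,
          pixel_threshold: int = 24 * 32) -> List[np.ndarray]:
    candidates = []
    for _, images in sorted(periods.items()):
        if len(images) < image_threshold:
            continue  # too few images in this sampling period: discard the period
        if any(img.size < pixel_threshold for img in images):
            continue  # an image with too few pixel points: discard the period
        candidates.extend(images)  # the period passes: keep its images as candidates
    return candidates
```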
In one embodiment, the determining a target feature vector according to the plurality of target images includes: acquiring an image matrix corresponding to each target image according to the target images; and splicing all the image matrixes corresponding to each target image on a channel to obtain the target characteristic vectors, wherein the number of the channels of the target characteristic vectors is the same as that of the target images.
The image matrix corresponding to each target image is a one-channel two-dimensional matrix, and the value of each element of the image matrix represents the pixel value of the corresponding pixel point of the target image. For example, the element in row 3, column 5 of the image matrix corresponding to a target image represents the pixel value of the pixel point in row 3, column 5 of that target image.
Splicing all the image matrices corresponding to the target images on the channel dimension means that the number of channels of the target feature vector is the sum of the numbers of channels of all those image matrices. For example, if there are 4 target images and the image matrix corresponding to each target image is a one-channel two-dimensional matrix, the sum of the channel numbers is 4, so the number of channels of the target feature vector is 4. For another example, if the image matrix corresponding to each target image is 24 × 32 × 1 (24 is the first dimension, 32 is the second dimension, and 1 is the number of channels of the image matrix), channel splicing yields a target feature vector with a size of 24 × 32 × 4 (24 is the first dimension, 32 is the second dimension, and 4 is the number of channels of the target feature vector).
In this embodiment, a plurality of one-channel two-dimensional matrices are converted into one multi-channel two-dimensional matrix by splicing on the channel dimension.
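A sketch of this channel splicing, assuming four 24 × 32 one-channel image matrices stored as NumPy arrays (the random data is a placeholder for real temperature values):

```python
import numpy as np

# Four one-channel image matrices, one per target image; each element
# is the temperature value of the corresponding pixel point.
image_matrices = [np.random.rand(24, 32) for _ in range(4)]

# Splice on the channel dimension: four 24 x 32 x 1 matrices become
# one 24 x 32 x 4 target feature vector.
target_feature_vector = np.stack(image_matrices, axis=-1)
print(target_feature_vector.shape)  # (24, 32, 4)
```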
In one embodiment, the determining a target feature vector according to the plurality of target images further includes: performing living body discrimination on the plurality of target images respectively to obtain a living body discrimination result corresponding to each target image; and judging whether all the living body discrimination results corresponding to the target images meet a preset discrimination result condition, the target feature vector being determined according to the plurality of target images only when the preset discrimination result condition is met.
That is, living body discrimination is performed on each of the plurality of target images to obtain the living body discrimination result corresponding to each target image.
Optionally, the preset discrimination result condition means that a living body exists on at least one target image.
Optionally, when all the living body discrimination results corresponding to each target image do not meet the preset discrimination result condition, determining that the living body flow direction discrimination result is no living body.
In this embodiment, the target feature vector is determined, and living body flow direction discrimination is performed, only when the preset discrimination result condition is met; this avoids performing living body flow direction discrimination on a plurality of target images that do not meet the preset discrimination result condition, and improves the discrimination efficiency of the thermal-imaging-based living body flow direction discrimination method.
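The gating described in this embodiment reduces to a small check; a sketch, assuming per-image boolean liveness results:

```python
def meets_discrimination_condition(liveness_results):
    # Preset discrimination result condition: a living body exists
    # on at least one of the target images.
    return any(liveness_results)

# Usage: build the target feature vector and run the flow direction
# model only when the condition holds; otherwise report "no living body".
```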
In one embodiment, the convolution layer of the living body flow direction discrimination model receives the target feature vector;
the convolution layer of the living body flow direction discrimination model performs feature channel expansion and abstract feature extraction on the target feature vector to obtain first feature vectors corresponding to the plurality of target images, and inputs the first feature vectors corresponding to the plurality of target images into the pooling layer of the living body flow direction discrimination model;
the pooling layer of the living body flow direction discrimination model performs maximum pooling on first feature vectors corresponding to the target images to obtain second feature vectors corresponding to the target images, and the second feature vectors corresponding to the target images are sent to a first discarding layer of the living body flow direction discrimination model;
the first discarding layer of the living body flow direction discrimination model randomly discards second feature vectors corresponding to the target images to obtain third feature vectors corresponding to the target images, and the third feature vectors corresponding to the target images are input into a compression layer of the living body flow direction discrimination model;
the compression layer of the living body flow direction discrimination model performs one-dimensional vector conversion on third feature vectors corresponding to the multiple target images to obtain fourth feature vectors corresponding to the multiple target images, and the fourth feature vectors corresponding to the multiple target images are input into a first full-connection layer of the living body flow direction discrimination model;
the first full-connection layer of the living body flow direction discrimination model performs abstract compression on fourth feature vectors corresponding to the target images to obtain fifth feature vectors corresponding to the target images, and the fifth feature vectors corresponding to the target images are input into a second discarding layer of the living body flow direction discrimination model;
the second discarding layer of the living body flow direction discrimination model randomly discards fifth feature vectors corresponding to the target images to obtain sixth feature vectors corresponding to the target images, and the sixth feature vectors corresponding to the target images are input into the second full-connection layer of the living body flow direction discrimination model;
and performing abstract compression on sixth feature vectors corresponding to the multiple target images by a second full-connection layer of the living body flow direction discrimination model to obtain seventh feature vectors corresponding to the multiple target images, and taking the seventh feature vectors corresponding to the multiple target images as the direction discrimination result values.
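Putting the layers of this embodiment together, a hedged PyTorch sketch of the living body flow direction discrimination model follows; the layer order follows the text, while the pooling window, dropout rates, hidden width of 128 and the final softmax are assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 64, kernel_size=5, padding=2),  # convolution layer: channel expansion
    nn.ReLU(),
    nn.MaxPool2d(2),                   # pooling layer: maximum pooling, 24x32 -> 12x16
    nn.Dropout(0.5),                   # first discarding layer: random discarding
    nn.Flatten(),                      # compression layer: one-dimensional conversion
    nn.Linear(64 * 12 * 16, 128),      # first fully-connected layer: abstract compression
    nn.ReLU(),
    nn.Dropout(0.5),                   # second discarding layer: random discarding
    nn.Linear(128, 2),                 # second fully-connected layer: two result values
    nn.Softmax(dim=1),                 # probabilities of the first and second directions
)

target_feature_vector = torch.randn(1, 4, 24, 32)
direction_result_value = model(target_feature_vector)  # e.g. tensor([[0.7, 0.3]])
```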
In one embodiment, as shown in fig. 7, a live body flow direction discriminating apparatus based on thermal imaging is provided, the apparatus including:
an image obtaining module 702, configured to obtain multiple target images, where the multiple target images are live detection images of target positions detected by a thermal imaging device at multiple consecutive times, and a pixel value of each pixel point of the live detection image is a temperature value detected by the thermal imaging device;
a vector extraction module 704, configured to determine a target feature vector according to the multiple target images;
and the living body flow direction discrimination module 706 is configured to use the target feature vector as the input of a living body flow direction discrimination model, where the living body flow direction discrimination model is configured to discriminate the living body flow direction, obtain the direction discrimination result value output by the living body flow direction discrimination model, and determine, based on the direction discrimination result value, the living body flow direction of the target position at the plurality of consecutive moments.
In this embodiment, the image acquisition module 702 acquires a plurality of target images, where the plurality of target images are live detection images of a target position detected by a thermal imaging device at a plurality of consecutive moments, and the pixel value of each pixel point of the live detection images is a temperature value detected by the thermal imaging device; the vector extraction module 704 determines a target feature vector according to the plurality of target images; and the living body flow direction discrimination module 706 uses the target feature vector as the input of a living body flow direction discrimination model, where the living body flow direction discrimination model is used for discriminating the living body flow direction, obtains the direction discrimination result value output by the model, and determines, based on that value, the living body flow direction of the target position at the plurality of consecutive moments. Since the temperature values in a scene are relatively low when no living body is present, and the temperature values in the area where a living body is present are relatively high, a target image formed from temperature values can indicate whether a living body is present, and the flow direction of the living body in the scene can then be determined from a plurality of target images at a plurality of consecutive moments. Because the whole discrimination process is based on target images whose pixel points represent temperature values, whereas traditional two-dimensional or three-dimensional visual images carry a risk of privacy disclosure, discriminating the living body flow direction based on temperature values avoids the privacy disclosure problem. Moreover, adopting a living body flow direction discrimination model to discriminate the living body flow direction gives the method strong generalization capability.
FIG. 8 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be a terminal, and may also be a server. As shown in fig. 8, the computer device includes a processor, a memory and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the thermal-imaging-based live body flow direction discrimination method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the thermal-imaging-based live body flow direction discrimination method. Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the live body flow direction discrimination method based on thermal imaging provided by the present application can be implemented in the form of a computer program, and the computer program can run on a computer device as shown in fig. 8. The memory of the computer device can store the program modules constituting the thermal-imaging-based live body flow direction discriminating apparatus, such as the image acquisition module 702, the vector extraction module 704 and the living body flow direction discrimination module 706.
In one embodiment, a storage medium is proposed, storing a computer program of instructions which, when executed by a processor, causes the processor to carry out the following method steps when executed: acquiring a plurality of target images, wherein the plurality of target images are live detection images of target positions detected by thermal imaging equipment at a plurality of continuous moments, and the pixel value of each pixel point of the live detection images is a temperature value detected by the thermal imaging equipment; determining a target characteristic vector according to the plurality of target images; the target characteristic vector is used as the input of a living body flow direction discrimination model, and the living body flow direction discrimination model is used for discriminating the flow direction of a living body; and acquiring a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow direction of the target position at the continuous multiple moments based on the direction discrimination result value.
In one embodiment, the computer program, when executed by the processor, is further operable to: acquiring a plurality of image characteristic vector samples, wherein one image characteristic vector sample corresponds to a plurality of image samples at a plurality of continuous moments at the same position, the image characteristic vector sample is a multi-channel two-dimensional matrix, and the number of channels of the image characteristic vector sample is equal to the number of the plurality of image samples; inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample, wherein the predicted feature vector is used for indicating the flow direction of a predicted living body corresponding to the image feature vector sample, and the number of vector elements of the predicted feature vector is 2; and training the convolutional neural network according to the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to each image feature vector sample to obtain the living body flow direction discrimination model, wherein the living body flow direction calibration value corresponding to the image feature vector sample is used for indicating the real living body flow direction corresponding to the image feature vector sample.
In one embodiment, the computer program, when executed by the processor, is further operable to: the convolutional neural network comprises a convolutional layer, a pooling layer and an abstract compression module which are connected in sequence; the inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample comprises: inputting the plurality of image feature vector samples into a convolution layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample; inputting the first feature vector corresponding to each image feature vector sample into a pooling layer of the convolutional neural network for maximum pooling to obtain a second feature vector corresponding to each image feature vector sample; and inputting the second feature vector corresponding to each image feature vector sample into an abstract compression module of the convolutional neural network for abstract compression processing to obtain a predicted feature vector corresponding to each image feature vector sample.
In one embodiment, the computer program, when executed by the processor, is further operable to: the abstract compression module comprises a first discarding layer, a compression layer, a first full connection layer, a second discarding layer and a second full connection layer which are sequentially connected; inputting the second feature vector corresponding to each image feature vector sample into an abstract compression module of the convolutional neural network for abstract compression processing to obtain a predicted feature vector corresponding to each image feature vector sample, including: inputting the second feature vector corresponding to each image feature vector sample into a first discarding layer of the convolutional neural network for random discarding to obtain a third feature vector corresponding to each image feature vector sample; inputting the third feature vector corresponding to each image feature vector sample into a compression layer of the convolutional neural network for one-dimensional vector conversion to obtain a fourth feature vector corresponding to each image feature vector sample; inputting the fourth feature vector corresponding to each image feature vector sample into a first full-connection layer of the convolutional neural network for abstract compression to obtain a fifth feature vector corresponding to each image feature vector sample; inputting the fifth feature vector corresponding to each image feature vector sample into a second discarding layer of the convolutional neural network for random discarding to obtain a sixth feature vector corresponding to each image feature vector sample; and inputting the sixth feature vector corresponding to each image feature vector sample into a second full-connection layer of the convolutional neural network for abstract compression to obtain a predicted feature vector corresponding to each image feature vector sample.
In one embodiment, the computer program, when executed by the processor, is further operable to: the inputting the plurality of image feature vector samples into the convolutional layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample, includes: respectively inputting the plurality of image feature vector samples to a plurality of convolution channels in convolution layers of the convolution neural network, performing convolution operation on convolution kernels corresponding to the plurality of convolution channels and the plurality of image feature vector samples to obtain a first feature vector corresponding to each image feature vector sample, wherein the number of the convolution channels is greater than the number of the channels of the image feature vector samples.
In one embodiment, the computer program, when executed by the processor, is further operable to: the acquiring of the plurality of target images includes: acquiring images to be cleaned corresponding to each sampling period, wherein the images to be cleaned corresponding to one sampling period are thermal imaging images acquired in one sampling period; if the number of the images to be cleaned corresponding to the target sampling period is smaller than a preset image threshold value, or the number of the pixels in the images to be cleaned corresponding to the target sampling period is smaller than a preset pixel threshold value, discarding the images to be cleaned corresponding to the target sampling period, or taking the images to be cleaned corresponding to the target sampling period as alternative images, wherein the target sampling period is any sampling period; and determining the target images from the candidate images according to the continuous moments.
In one embodiment, the computer program, when executed by the processor, is further operable to: determining a target feature vector according to the plurality of target images includes: acquiring an image matrix corresponding to each target image according to the target images; and splicing all the image matrixes corresponding to each target image on a channel to obtain the target characteristic vectors, wherein the number of the channels of the target characteristic vectors is the same as that of the target images.
In one embodiment, the present invention also proposes a computer device comprising at least one memory and at least one processor, the memory storing a computer program of instructions which, when executed by the processor, causes the processor to carry out the following method steps: acquiring a plurality of target images, wherein the plurality of target images are live detection images of target positions detected by thermal imaging equipment at a plurality of continuous moments, and the pixel value of each pixel point of the live detection images is a temperature value detected by the thermal imaging equipment; determining a target characteristic vector according to the plurality of target images; the target characteristic vector is used as the input of a living body flow direction discrimination model, and the living body flow direction discrimination model is used for discriminating the flow direction of a living body; and acquiring a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow direction of the target position at the continuous multiple moments based on the direction discrimination result value.
In one embodiment, the computer program, when executed by the processor, is further operable to: acquiring a plurality of image characteristic vector samples, wherein one image characteristic vector sample corresponds to a plurality of image samples at a plurality of continuous moments at the same position, the image characteristic vector sample is a multi-channel two-dimensional matrix, and the number of channels of the image characteristic vector sample is equal to the number of the plurality of image samples; inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample, wherein the predicted feature vector is used for indicating the flow direction of a predicted living body corresponding to the image feature vector sample, and the number of vector elements of the predicted feature vector is 2; and training the convolutional neural network according to the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to each image feature vector sample to obtain the living body flow direction discrimination model, wherein the living body flow direction calibration value corresponding to the image feature vector sample is used for indicating the real living body flow direction corresponding to the image feature vector sample.
In one embodiment, the computer program, when executed by the processor, is further operable to: the convolutional neural network comprises a convolutional layer, a pooling layer and an abstract compression module which are connected in sequence; the inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample comprises: inputting the plurality of image feature vector samples into a convolution layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample; inputting the first feature vector corresponding to each image feature vector sample into a pooling layer of the convolutional neural network for maximum pooling to obtain a second feature vector corresponding to each image feature vector sample; and inputting the second feature vector corresponding to each image feature vector sample into an abstract compression module of the convolutional neural network for abstract compression processing to obtain a predicted feature vector corresponding to each image feature vector sample.
In one embodiment, the computer program, when executed by the processor, is further operable to: the abstract compression module comprises a first discarding layer, a compression layer, a first full connection layer, a second discarding layer and a second full connection layer which are sequentially connected; inputting the second feature vector corresponding to each image feature vector sample into an abstract compression module of the convolutional neural network for abstract compression processing to obtain a predicted feature vector corresponding to each image feature vector sample, including: inputting the second feature vector corresponding to each image feature vector sample into a first discarding layer of the convolutional neural network for random discarding to obtain a third feature vector corresponding to each image feature vector sample; inputting the third feature vector corresponding to each image feature vector sample into a compression layer of the convolutional neural network for one-dimensional vector conversion to obtain a fourth feature vector corresponding to each image feature vector sample; inputting the fourth feature vector corresponding to each image feature vector sample into a first full-connection layer of the convolutional neural network for abstract compression to obtain a fifth feature vector corresponding to each image feature vector sample; inputting the fifth feature vector corresponding to each image feature vector sample into a second discarding layer of the convolutional neural network for random discarding to obtain a sixth feature vector corresponding to each image feature vector sample; and inputting the sixth feature vector corresponding to each image feature vector sample into a second full-connection layer of the convolutional neural network for abstract compression to obtain a predicted feature vector corresponding to each image feature vector sample.
In one embodiment, the computer program, when executed by the processor, is further operable to: the inputting the plurality of image feature vector samples into the convolutional layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample, includes: respectively inputting the plurality of image feature vector samples to a plurality of convolution channels in convolution layers of the convolution neural network, performing convolution operation on convolution kernels corresponding to the plurality of convolution channels and the plurality of image feature vector samples to obtain a first feature vector corresponding to each image feature vector sample, wherein the number of the convolution channels is greater than the number of the channels of the image feature vector samples.
In one embodiment, the computer program, when executed by the processor, is further operable to: the acquiring of the plurality of target images includes: acquiring images to be cleaned corresponding to each sampling period, wherein the images to be cleaned corresponding to one sampling period are thermal imaging images acquired in one sampling period; if the number of the images to be cleaned corresponding to the target sampling period is smaller than a preset image threshold value, or the number of the pixels in the images to be cleaned corresponding to the target sampling period is smaller than a preset pixel threshold value, discarding the images to be cleaned corresponding to the target sampling period, or taking the images to be cleaned corresponding to the target sampling period as alternative images, wherein the target sampling period is any sampling period; and determining the target images from the candidate images according to the continuous moments.
In one embodiment, the computer program, when executed by the processor, is further operable to: determining a target feature vector according to the plurality of target images includes: acquiring an image matrix corresponding to each target image according to the target images; and splicing all the image matrixes corresponding to each target image on a channel to obtain the target characteristic vectors, wherein the number of the channels of the target characteristic vectors is the same as that of the target images.
It should be noted that the thermal-imaging-based living body flow direction discrimination method, the thermal-imaging-based living body flow direction discriminating apparatus, the storage medium and the computer device described above belong to one general inventive concept, and the contents of their respective embodiments are mutually applicable.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the program may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A live body flow direction discrimination method based on thermal imaging, the method comprising:
acquiring a plurality of target images, wherein the plurality of target images are live detection images of target positions detected by thermal imaging equipment at a plurality of continuous moments, and the pixel value of each pixel point of the live detection images is a temperature value detected by the thermal imaging equipment;
determining a target characteristic vector according to the plurality of target images;
the target characteristic vector is used as the input of a living body flow direction discrimination model, and the living body flow direction discrimination model is used for discriminating the flow direction of a living body;
and acquiring a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow direction of the target position at the continuous multiple moments based on the direction discrimination result value.
2. The live body flow direction discriminating method based on thermal imaging according to claim 1, further comprising:
acquiring a plurality of image characteristic vector samples, wherein one image characteristic vector sample corresponds to a plurality of image samples at a plurality of continuous moments at the same position, the image characteristic vector sample is a multi-channel two-dimensional matrix, and the number of channels of the image characteristic vector sample is equal to the number of the plurality of image samples;
inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample, wherein the predicted feature vector is used for indicating the flow direction of a predicted living body corresponding to the image feature vector sample, and the number of vector elements of the predicted feature vector is 2;
and training the convolutional neural network according to the predicted feature vector corresponding to each image feature vector sample and the living body flow direction calibration value corresponding to each image feature vector sample to obtain the living body flow direction discrimination model, wherein the living body flow direction calibration value corresponding to the image feature vector sample is used for indicating the real living body flow direction corresponding to the image feature vector sample.
3. The live body flow direction discrimination method based on thermal imaging according to claim 2, wherein the convolutional neural network comprises a convolutional layer, a pooling layer and an abstract compression module which are connected in sequence;
the inputting the plurality of image feature vector samples into a convolutional neural network for feature extraction and processing to obtain a predicted feature vector corresponding to each image feature vector sample comprises:
inputting the plurality of image feature vector samples into a convolution layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample;
inputting the first feature vector corresponding to each image feature vector sample into a pooling layer of the convolutional neural network for maximum pooling to obtain a second feature vector corresponding to each image feature vector sample;
and inputting the second feature vector corresponding to each image feature vector sample into an abstract compression module of the convolutional neural network for abstract compression processing to obtain a predicted feature vector corresponding to each image feature vector sample.
4. The live body flow direction discrimination method based on thermal imaging according to claim 3, wherein the abstract compression module comprises a first discarding layer, a compression layer, a first full connection layer, a second discarding layer and a second full connection layer which are connected in sequence;
inputting the second feature vector corresponding to each image feature vector sample into an abstract compression module of the convolutional neural network for abstract compression processing to obtain a predicted feature vector corresponding to each image feature vector sample, including:
inputting the second feature vector corresponding to each image feature vector sample into a first discarding layer of the convolutional neural network for random discarding to obtain a third feature vector corresponding to each image feature vector sample;
inputting the third feature vector corresponding to each image feature vector sample into a compression layer of the convolutional neural network for one-dimensional vector conversion to obtain a fourth feature vector corresponding to each image feature vector sample;
inputting the fourth feature vector corresponding to each image feature vector sample into a first full-connection layer of the convolutional neural network for abstract compression to obtain a fifth feature vector corresponding to each image feature vector sample;
inputting the fifth feature vector corresponding to each image feature vector sample into a second discarding layer of the convolutional neural network for random discarding to obtain a sixth feature vector corresponding to each image feature vector sample;
and inputting the sixth feature vector corresponding to each image feature vector sample into a second full-connection layer of the convolutional neural network for abstract compression to obtain a predicted feature vector corresponding to each image feature vector sample.
5. The live body flow direction discrimination method based on thermal imaging according to claim 3, wherein the inputting the plurality of image feature vector samples into a convolutional layer of the convolutional neural network for channel expansion and abstract feature extraction to obtain a first feature vector corresponding to each image feature vector sample comprises:
respectively inputting the plurality of image feature vector samples to a plurality of convolution channels in convolution layers of the convolution neural network, performing convolution operation on convolution kernels corresponding to the plurality of convolution channels and the plurality of image feature vector samples to obtain a first feature vector corresponding to each image feature vector sample, wherein the number of the convolution channels is greater than the number of the channels of the image feature vector samples.
6. The live body flow direction discrimination method based on thermal imaging according to any one of claims 1 to 5, wherein the acquiring of the multiple target images comprises:
acquiring images to be cleaned corresponding to each sampling period, wherein the images to be cleaned corresponding to one sampling period are thermal imaging images acquired in one sampling period;
if the number of the images to be cleaned corresponding to the target sampling period is smaller than a preset image threshold value, or the number of the pixels in the images to be cleaned corresponding to the target sampling period is smaller than a preset pixel threshold value, discarding the images to be cleaned corresponding to the target sampling period, or taking the images to be cleaned corresponding to the target sampling period as alternative images, wherein the target sampling period is any sampling period;
and determining the target images from the candidate images according to the continuous moments.
7. The method for discriminating the flow direction of a living body based on thermal imaging according to any one of claims 1 to 5, wherein the determining a target feature vector according to the plurality of target images comprises:
acquiring an image matrix corresponding to each target image according to the target images;
and splicing all the image matrixes corresponding to each target image on a channel to obtain the target characteristic vectors, wherein the number of the channels of the target characteristic vectors is the same as that of the target images.
8. A live body flow direction discriminating apparatus based on thermal imaging, the apparatus comprising:
the system comprises an image acquisition module, a temperature detection module and a control module, wherein the image acquisition module is used for acquiring a plurality of target images, the plurality of target images are live detection images of target positions detected by thermal imaging equipment at a plurality of continuous moments, and pixel values of pixel points of the live detection images are temperature values detected by the thermal imaging equipment;
the vector extraction module is used for determining a target characteristic vector according to the plurality of target images;
and the living body flow direction discrimination module is used for taking the target characteristic vector as the input of a living body flow direction discrimination model, the living body flow direction discrimination model is used for discriminating the flow direction of the living body, acquiring a direction discrimination result value output by the living body flow direction discrimination model, and determining the living body flow direction of the target position at multiple continuous moments based on the direction discrimination result value.
9. A storage medium storing a computer program of instructions which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer device comprising at least one memory and at least one processor, the memory storing a program of computer instructions which, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 7.
CN202010550150.5A 2020-06-16 2020-06-16 Living body flow direction screening method, device, equipment and storage medium based on thermal imaging Active CN111881729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010550150.5A CN111881729B (en) 2020-06-16 2020-06-16 Living body flow direction screening method, device, equipment and storage medium based on thermal imaging


Publications (2)

Publication Number Publication Date
CN111881729A true CN111881729A (en) 2020-11-03
CN111881729B CN111881729B (en) 2024-02-06

Family

ID=73156784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010550150.5A Active CN111881729B (en) 2020-06-16 2020-06-16 Living body flow direction screening method, device, equipment and storage medium based on thermal imaging

Country Status (1)

Country Link
CN (1) CN111881729B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018120740A1 (en) * 2016-12-29 2018-07-05 深圳光启合众科技有限公司 Picture classification method, device and robot
WO2020019760A1 (en) * 2018-07-27 2020-01-30 北京市商汤科技开发有限公司 Living body detection method, apparatus and system, and electronic device and storage medium
CN111104833A (en) * 2018-10-29 2020-05-05 北京三快在线科技有限公司 Method and apparatus for in vivo examination, storage medium, and electronic device
WO2020088029A1 (en) * 2018-10-29 2020-05-07 北京三快在线科技有限公司 Liveness detection method, storage medium, and electronic device
CN109949290A (en) * 2019-03-18 2019-06-28 北京邮电大学 Pavement crack detection method, device, equipment and storage medium
CN110738103A (en) * 2019-09-04 2020-01-31 北京奇艺世纪科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QU Jingying; SUN Xian; GAO Xin: "Target recognition in high-resolution remote sensing images based on a CNN model", Foreign Electronic Measurement Technology, no. 08
LONG Min; LONG Xiaohai; MA Li: "Research on a fingerprint liveness detection algorithm based on deep convolutional neural networks", Netinfo Security, no. 06

Also Published As

Publication number Publication date
CN111881729B (en) 2024-02-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant