CN112784712B - Missing child early warning implementation method and device based on real-time monitoring - Google Patents


Info

Publication number
CN112784712B
CN112784712B (application CN202110024033.XA)
Authority
CN
China
Prior art keywords
face
feature
image
points
missing child
Prior art date
Legal status
Active
Application number
CN202110024033.XA
Other languages
Chinese (zh)
Other versions
CN112784712A
Inventor
苗朝府
Current Assignee
Chongqing Chuangtong Lianzhi Internet Of Things Co ltd
Original Assignee
Chongqing Chuangtong Lianzhi Internet Of Things Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Chuangtong Lianzhi Internet Of Things Co ltd filed Critical Chongqing Chuangtong Lianzhi Internet Of Things Co ltd
Priority to CN202110024033.XA
Publication of CN112784712A
Application granted
Publication of CN112784712B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Abstract

The invention discloses a method and a device for realizing early warning of a missing child based on real-time monitoring. The method comprises: acquiring and detecting, in real time, face images appearing on streets; extracting face feature data from a face image with a trained convolutional neural network; calculating the Euclidean distance between the extracted face feature data and the face feature data of each pre-stored portrait image of a missing child, and judging whether that distance is less than or equal to a preset threshold; and, if it is, judging that the face image and the pre-stored missing child portrait image show the same face and issuing a missing child early warning. The technical scheme combines mobile phones or vehicle-mounted driving recorders with face recognition technology and applies face recognition to the search for missing children, so that the strength of the public is brought to bear and the success rate of finding missing children is improved.

Description

Missing child early warning implementation method and device based on real-time monitoring
Technical Field
The invention relates to the field of missing child early warning, and in particular to a method, a device, an electronic device and a computer-readable storage medium for realizing missing child early warning based on real-time monitoring.
Background
A child who is lost or has wandered off is extremely difficult to find: the child is often too young to call the police, and clues are hard to come by. Statistics indicate that the probability of recovering a lost child in China is only about 0.1%. It is therefore essential to mobilize the whole of society, make use of every medium, gather as much relevant information as possible, appeal to the public for help to the greatest extent, and capture clues in time, rather than relying solely on the actions of the child's relatives and the police.
At present, face recognition already plays an important role in scenarios such as video surveillance, face-recognition access control and face-recognition unlocking, where automatically recognizing visitors and detecting intrusion or unlocking attempts by strangers enables automatic alarms. However, face recognition has not yet been applied to early warning for missing children.
Disclosure of Invention
The present invention has been made in view of the above problems, and aims to provide a method, an apparatus, an electronic device and a computer-readable storage medium for implementing missing child early warning based on real-time monitoring that overcome, or at least partially solve, those problems.
According to one aspect of the invention, there is provided a method for implementing early warning of a missing child based on real-time monitoring, the method comprising:
acquiring and detecting face images appearing on streets in real time;
extracting face characteristic data from the face image by using a trained convolutional neural network;
calculating Euclidean distance between the face characteristic data and the face characteristic data of each pre-stored missing child portrait image, and judging whether the Euclidean distance is smaller than or equal to a preset threshold value;
if the Euclidean distance is less than or equal to the preset threshold, judging that the face image and the pre-stored missing child portrait image show the same face, and then issuing a missing child early warning.
Optionally, the acquiring and detecting the face image appearing on the street in real time comprises:
acquiring and detecting the face image data from videos or photographs taken by a vehicle-mounted driving recorder, a pedestrian's mobile phone camera or a street surveillance camera.
Optionally, the acquiring and detecting the face image appearing on the street in real time further comprises:
performing face scale correction, in-plane face rotation correction, in-depth face rotation correction, image scaling, median filtering or histogram light-equalization processing on the obtained face image, and detecting the preprocessed face image to obtain a face candidate region in the image.
Optionally, the extracting face feature data from the face image by using the trained convolutional neural network includes:
determining a face candidate region;
selecting characteristic points in the face candidate region, and correcting the characteristic points;
and determining the face feature data of the face image by using the trained convolutional neural network according to the feature points.
Optionally, the selecting the feature points in the face candidate region and correcting the feature points includes:
selecting boundary points, curve inflection points, connection points or equally spaced points on the lines connecting those points in the face candidate region as feature points; arranging the x and y coordinate values of each feature point in sequence to form a first feature set and converting the first feature set into a two-dimensional vector; performing principal component analysis on the two-dimensional vector to extract principal components, where each feature point in the first feature set is a coordinate point in the principal-component vector space, the coordinate origin is the mean of the first feature set, and any feature point is the origin plus a vector that encodes the covariance with the preceding feature vectors; sampling the texture information around each feature point, comparing it with the texture information in a training set, and taking the closest point in the training set as the matching feature point, thereby obtaining a second feature set; and rotating, scaling or translating the face candidate region by using the positional relation between the matching feature points and the corresponding feature points to obtain an aligned face region, from which the corrected feature points are obtained.
Optionally, the face feature data comprises a 128-dimensional vector composed of the x and y values of 68 feature points, the face features of each pre-saved missing child portrait image are obtained in the same manner as the face features, and the calculating the Euclidean distance between the face features and the face features of the pre-saved missing child portrait images comprises:
calculating the mean of the 128-dimensional face feature vectors and saving it to a CSV file;
calculating the mean of the 128-dimensional face feature vectors of each pre-saved missing child portrait image and saving it to the same CSV file;
and computing, from the component-wise differences between the vectors, the Euclidean distance between the mean 128-dimensional vector of each pre-saved missing child portrait image and the mean 128-dimensional vector of the face features.
Optionally, if the Euclidean distance is less than or equal to a preset threshold, the determining that the face image and the pre-stored missing child portrait image are the same face comprises:
if multiple Euclidean distances are less than or equal to the preset threshold, sorting them and selecting the face corresponding to the smallest Euclidean distance as the early-warning object.
According to another aspect of the present invention, there is provided a missing child early warning implementation device based on real-time monitoring, the device comprising:
the face image detection unit is suitable for acquiring and detecting face images appearing on streets in real time;
the feature data acquisition unit is suitable for extracting face feature data from the face image by using a trained convolutional neural network;
the Euclidean distance determining unit is suitable for calculating Euclidean distances between the face characteristic data and the face characteristic data of each pre-stored missing child portrait image and judging whether the Euclidean distances are smaller than or equal to a preset threshold value;
and the missing child early warning unit is suitable for judging that the face image and the pre-stored missing child face image are the same face if the Euclidean distance is smaller than or equal to a preset threshold value, and then sending out missing child early warning.
According to still another aspect of the present invention, there is provided an electronic apparatus including: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method as described in any of the above.
According to a further aspect of the present invention there is provided a computer readable storage medium storing one or more programs which when executed by a processor implement a method as described in any of the above.
The technical scheme can obtain the following beneficial effects:
the invention extracts the face characteristics by utilizing the face images acquired from the vehicle-mounted automobile data recorder or the mobile phone or other functional cameras in real time, compares the face characteristics with the pre-stored face characteristic data of the missing child, calculates the Euclidean distance between the face characteristics, judges whether the face characteristic data acquired in real time and the face characteristic data of the missing child are the same face according to the Euclidean distance, avoids the use of a loss function, and more directly and accurately acquires the identification result. According to the technical scheme, the mobile phone or the vehicle-mounted automobile data recorder and the face recognition technology are combined, and the face recognition technology is applied to searching of the missing children, so that the strength of masses is exerted, and the success rate of searching of the missing children is improved.
The foregoing is only an overview of the technical scheme of the present invention. So that the technical means of the invention can be understood more clearly and implemented according to the contents of the specification, and so that the above and other objects, features and advantages of the invention become more apparent, preferred embodiments are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a flow diagram of a method for implementing a missing child early warning based on real-time monitoring according to one embodiment of the invention;
FIG. 2 shows a schematic structural diagram of a missing child early warning implementation device based on real-time monitoring according to one embodiment of the present invention;
FIG. 3 shows a schematic diagram of an electronic device according to one embodiment of the invention;
fig. 4 illustrates a schematic structure of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a method for implementing early warning of a missing child based on real-time monitoring according to an embodiment of the present invention, the method includes:
s110, acquiring and detecting face images appearing on the street in real time. According to the embodiment of the invention, the face image is acquired and detected from video videos or pictures acquired by an intelligent camera such as a vehicle-mounted automobile data recorder, a pedestrian mobile phone camera or a street monitoring camera, and the face candidate area is determined by a rectangular frame.
S120: extracting face feature data from the face image by using the trained convolutional neural network.
The convolutional neural network in this step is obtained by training on a suitably chosen set of face image samples. The labeled feature points of a face cover the eyebrows, eyes, nose, mouth, chin and facial contour; preferably, 68 feature points (also called key points) represent the shape of each face in the sample set. Since the position of each key point is two-dimensional, a sample can be represented as Xi = [xi1, xi2, ..., xi68, yi1, yi2, ..., yi68].
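The sample layout above can be sketched in a few lines of Python (a minimal illustration; the function name and the dummy landmarks are ours, and note that 68 two-dimensional key points yield 136 coordinate values):

```python
import numpy as np

def shape_vector(landmarks):
    """Flatten 68 (x, y) key points into the sample layout described
    above: Xi = [x1, ..., x68, y1, ..., y68]."""
    pts = np.asarray(landmarks, dtype=float)
    if pts.shape != (68, 2):
        raise ValueError("expected 68 two-dimensional key points")
    return np.concatenate([pts[:, 0], pts[:, 1]])

# Demo with 68 made-up landmark positions
demo = [(float(i), float(i) + 0.5) for i in range(68)]
vec = shape_vector(demo)
print(vec.shape)  # (136,)
```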
A convolutional neural network (CNN) is a class of feedforward neural networks with a deep structure whose computation involves convolution, and is one of the representative algorithms of deep learning. Convolutional neural networks have a feature-learning capability and can perform translation-invariant classification of input information according to their hierarchical structure, so they are also called "translation-invariant artificial neural networks".
The convolutional neural network comprises an input layer, hidden layers and an output layer. The input layer receives the two-dimensional face vector. The hidden layers comprise convolutional layers, pooling layers and fully connected layers: each convolutional layer extracts features from its input and contains a number of convolution kernels, each element of which corresponds to a weight coefficient and a bias, analogous to the neurons of a feedforward neural network. The layer immediately upstream of the output layer is usually a fully connected layer, so the output layer has the same structure and working principle as that of a traditional feedforward neural network. To suit the subsequent calculation, the output layer in the invention does not apply a classification function but directly outputs the refined and accurate face feature vector.
S130, calculating Euclidean distance between the face feature data and the face feature data of each pre-stored missing child portrait image, and judging whether the Euclidean distance is smaller than or equal to a preset threshold value.
The pre-stored face feature data of the missing child portraits has the same data structure as, and is preferably obtained with the same algorithm as, the face feature data acquired in real time. The Euclidean distance is obtained by subtracting the corresponding components of the two face feature vectors, squaring each difference, summing the squares, and taking the square root. A Euclidean distance below the preset threshold means the two face feature vectors differ little, so they can be recognized as the same person.
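A minimal sketch of the distance test just described (the 0.6 threshold is a placeholder for the patent's unspecified preset value):

```python
import numpy as np

def euclidean_distance(a, b):
    """Component-wise difference, squared, summed, square-rooted."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def is_same_face(live, stored, threshold=0.6):
    """True when the two feature vectors are close enough to be
    treated as the same person."""
    return euclidean_distance(live, stored) <= threshold

print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))  # 5.0
```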
S140: if the Euclidean distance is less than or equal to the preset threshold, judging that the face image and the pre-stored missing child portrait image show the same face, and then issuing a missing child early warning.
Therefore, when the Euclidean distance is smaller than the preset threshold, the face image obtained in real time and the pre-stored missing child portrait image show the same face; that is, the person is with high probability the missing child being sought. An early warning can then be issued, and the image can be returned to the publisher of the missing child notice, or clues and related information can be provided to the relevant organizations.
In a preferred embodiment, S110 further comprises: performing face scale correction, in-plane face rotation correction, in-depth face rotation correction, image scaling, median filtering or histogram light equalization on the obtained face image.
Face scale correction, in-plane face rotation correction and in-depth face rotation correction belong to the normalization of face images, whose purpose is to make pictures of the same person taken under different imaging conditions (illumination intensity, direction, distance, pose, etc.) consistent. Face normalization covers two aspects: geometric normalization and gray-scale normalization. Geometric normalization, also called position calibration, helps correct size differences and angular tilt caused by imaging distance and changes in face pose; it addresses face scale variation and face rotation, and specifically comprises three steps: face scale normalization, in-plane face rotation correction (head tilt) and in-depth face rotation correction (face turning).
Whether a grayscale image is acquired directly or converted from a color image, it contains noise, which greatly affects image quality. Median filtering removes isolated noise while preserving the edge characteristics of the image and without introducing noticeable blur, so it is suitable for the face images used here.
Histogram equalization is a point operation that changes the gray value of the image point by point so that, as far as possible, each gray level contains the same number of pixels and the histogram becomes balanced. It converts the input image into an output image with an equal number of pixels at each gray level (i.e., a flat output histogram).
In practice, any one or more of the above operations can be selected according to the actual characteristics of the data source; the preprocessed face image then undergoes preliminary detection to obtain the face candidate region.
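OpenCV's cv2.medianBlur and cv2.equalizeHist are the usual implementations of the last two preprocessing steps; the pure-NumPy versions below are only meant to illustrate what those operations do:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter on a grayscale image; removes isolated
    noise pixels while keeping edges (borders left unfiltered)."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

def equalize_hist(img):
    """Histogram equalization: remap gray levels through the
    normalized cumulative histogram so the output histogram is flat."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]
```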
In one embodiment, S120 includes: determining a face candidate region; selecting characteristic points in the face candidate region, and correcting the characteristic points; and determining the face feature data of the face image by using the trained convolutional neural network according to the feature points.
In practice, the above operations can be implemented in Python with OpenCV and Dlib, adapting existing modules during development to obtain the model described above. Dlib is a modern C++ toolkit containing machine learning algorithms and tools for building complex software that solves practical problems; it is widely used in industry and academia, from robots and embedded devices to mobile phones and large high-performance computing environments.
Determination of the face candidate region can be implemented with Dlib's face detector, detector = dlib.get_frontal_face_detector(), in combination with loading the face key-point predictor, sp = dlib.shape_predictor(predictor_path), and the face descriptor model, facerec = dlib.face_recognition_model_v1(face_rec_model_path).
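A sketch of that Dlib pipeline (the dlib package and its two pretrained model files must be obtained separately; the helper names and the descriptor_to_row utility are ours):

```python
import numpy as np

def load_models(predictor_path, face_rec_model_path):
    """Load Dlib's frontal face detector, the 68-point shape predictor
    and the face descriptor model from the given model files."""
    import dlib  # lazy import: only needed when the models are loaded
    detector = dlib.get_frontal_face_detector()
    sp = dlib.shape_predictor(predictor_path)
    facerec = dlib.face_recognition_model_v1(face_rec_model_path)
    return detector, sp, facerec

def describe_faces(img, detector, sp, facerec):
    """Return one 128-D descriptor (numpy array) per face found in img."""
    descriptors = []
    for rect in detector(img, 1):            # 1: upsample the image once
        shape = sp(img, rect)                # 68 landmark positions
        desc = facerec.compute_face_descriptor(img, shape)
        descriptors.append(np.asarray(desc, dtype=float))
    return descriptors

def descriptor_to_row(desc):
    """Flatten a descriptor into a plain list of floats for CSV storage."""
    return [float(v) for v in desc]
```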
In a preferred embodiment, the selecting and correcting of the feature points in the face candidate region comprises: selecting the eyebrows, eyes, nose, mouth and chin, boundary points of the facial contour, curve inflection points, connection points or equally spaced points on the lines connecting those points as feature points; arranging the x and y coordinate values of the feature points in sequence to form a first feature set X and converting it into a two-dimensional vector; performing principal component analysis (PCA) on the two-dimensional vector to extract the principal components, where each feature point in the first feature set is a coordinate point in the principal-component vector space, the coordinate origin is the mean of the first feature set, and any feature point is the origin plus a vector that encodes the covariance with the preceding feature vectors; sampling the texture information around each feature point, comparing it with the texture information in the training set, and taking the closest point as the matching feature point, thereby obtaining a second feature set Y; and obtaining a rotation-scaling-translation matrix T from the positional relation between the matching feature points and the corresponding feature points, rotating, scaling or translating the face candidate region to obtain an aligned face region, from which the corrected feature points are obtained.
The acquisition process is as follows: initialize the vector to 0 to obtain the model X; find the transformation matrix T (for example with a Kalman-filtering method) to obtain Y; solve back for the parameters of the model from Y and update them until convergence; then rotate, translate and scale the face region with the rotation-scaling-translation matrix T to obtain the aligned face region.
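One concrete way to obtain the rotation-scaling-translation transform is a least-squares similarity fit between the matched point sets; the Procrustes-style sketch below is our illustration, not necessarily the patent's exact procedure:

```python
import numpy as np

def fit_similarity(X, Y):
    """Least-squares scale s, rotation R and translation t such that
    y ~= s * R @ x + t for matched 2-D point sets X and Y (one point per row)."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    M = Yc.T @ Xc                      # 2x2 cross-covariance
    U, S, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:           # forbid reflections
        U[:, -1] *= -1
        R = U @ Vt
    s = np.trace(R.T @ M) / (Xc ** 2).sum()
    t = my - s * (R @ mx)
    return s, R, t

def apply_similarity(X, s, R, t):
    """Map points X (rows) through the fitted transform."""
    return s * (np.asarray(X, dtype=float) @ R.T) + t
```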
In one embodiment, the face feature data comprises a 128-dimensional vector such as Xi = [xi1, xi2, ..., xi68, yi1, yi2, ..., yi68], composed of the x and y values of 68 feature points. The face features of the pre-saved missing child portrait images are obtained in the same manner; if there are n missing children, n 128-dimensional feature vectors are pre-saved.
Thus, S130 comprises: calculating the mean of the 128-dimensional face feature vectors and saving it to a CSV file; and calculating the mean of the 128-dimensional face feature vectors of each pre-saved missing child portrait image (the mean can be computed with the numpy.mean() function) and saving it to the same CSV file.
Then, from the component-wise differences between the vectors, the Euclidean distance between the mean 128-dimensional face feature vector of each pre-stored missing child portrait image and the mean 128-dimensional face feature vector obtained in real time is computed.
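The averaging and saving step might be sketched as follows (numpy.mean belongs to NumPy, not Dlib; the file name and label scheme are illustrative):

```python
import csv
import os
import tempfile
import numpy as np

def save_mean_feature(descriptors, csv_path, label):
    """Average several descriptors of the same person with numpy.mean
    and append the mean vector, prefixed by a label, to a CSV file."""
    mean_vec = np.mean(np.asarray(descriptors, dtype=float), axis=0)
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([label] + mean_vec.tolist())
    return mean_vec

# Demo with two fake 4-D "descriptors" (real ones are 128-D)
path = os.path.join(tempfile.gettempdir(), "missing_children_demo.csv")
mean_vec = save_mean_feature([[1, 2, 3, 4], [3, 4, 5, 6]], path, "child_001")
print(mean_vec)  # [2. 3. 4. 5.]
```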
In one embodiment, S140 specifically comprises: if several Euclidean distances are less than or equal to the preset threshold, sorting the distance values and selecting the face corresponding to the smallest as the early-warning object. In practice, the image feature data can be given identifiers that are stored in the CSV file, which makes it convenient to determine the specific early-warning object.
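Selecting the early-warning object among several candidate matches then reduces to sorting the distances (the identifiers and the 0.6 threshold below are illustrative):

```python
def pick_alert_target(distances, threshold=0.6):
    """distances maps each stored child's identifier to its Euclidean
    distance from the live face; return the identifier of the closest
    match at or under the threshold, or None when nothing matches."""
    hits = sorted((d, ident) for ident, d in distances.items() if d <= threshold)
    return hits[0][1] if hits else None

print(pick_alert_target({"child_a": 0.9, "child_b": 0.4, "child_c": 0.55}))  # child_b
```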
Fig. 2 shows a real-time monitoring-based missing child early warning implementation apparatus 200 according to an embodiment of the present invention, the apparatus 200 comprising:
the face image detection unit 210 acquires and detects face images appearing on the street in real time. According to the embodiment of the invention, the face image is acquired and detected from video videos or pictures acquired by an intelligent camera such as a vehicle-mounted automobile data recorder, a pedestrian mobile phone camera or a street monitoring camera, and the face candidate area is determined by a rectangular frame.
The feature data obtaining unit 220 extracts face feature data from the face image by using the trained convolutional neural network.
The euclidean distance determining unit 230 calculates the euclidean distance between the face feature data and the face feature data of each pre-stored missing child portrait image, and determines whether the euclidean distance is less than or equal to a preset threshold.
The pre-stored face feature data of the missing child portraits has the same data structure as, and is preferably obtained with the same algorithm as, the face feature data acquired in real time. The Euclidean distance is obtained by subtracting the corresponding components of the two face feature vectors, squaring each difference, summing the squares, and taking the square root. A Euclidean distance below the preset threshold means the two face feature vectors differ little, so they can be recognized as the same person.
The missing child early warning unit 240 judges, if the Euclidean distance is less than or equal to the preset threshold, that the face image and the pre-stored missing child portrait image show the same face, and then issues a missing child early warning.
Therefore, when the Euclidean distance is smaller than the preset threshold, the face image obtained in real time and the pre-stored missing child portrait image show the same face; that is, the person is with high probability the missing child being sought. An early warning can then be issued, and the image can be returned to the publisher of the missing child notice, or clues and related information can be provided to the relevant organizations.
In a preferred embodiment, the face image detection unit 210 is adapted to perform face scale correction, in-plane face rotation correction, in-depth face rotation correction, image scaling, median filtering or histogram light equalization on the obtained face image.
In practice, any one or more of the above operations can be selected according to the actual characteristics of the data source; the preprocessed face image then undergoes preliminary detection to obtain the face candidate region.
In an embodiment, the feature data acquisition unit 220 is further adapted to: determine a face candidate region; select feature points in the face candidate region and correct them; and determine the face feature data of the face image from the feature points with the trained convolutional neural network.
In a preferred embodiment, the feature data acquisition unit 220 is specifically adapted to: select the eyebrows, eyes, nose, mouth and chin, boundary points of the facial contour, curve inflection points, connection points or equally spaced points on the lines connecting those points as feature points; arrange the x and y coordinate values of the feature points in sequence to form a first feature set X and convert it into a two-dimensional vector; perform principal component analysis (PCA) on the two-dimensional vector to extract the principal components, where each feature point in the first feature set is a coordinate point in the principal-component vector space, the coordinate origin is the mean of the first feature set, and any feature point is the origin plus a vector that encodes the covariance with the preceding feature vectors; sample the texture information around each feature point, compare it with the texture information in the training set, and take the closest point as the matching feature point, thereby obtaining a second feature set Y; and obtain a rotation-scaling-translation matrix T from the positional relation between the matching feature points and the corresponding feature points, rotate, scale or translate the face candidate region to obtain an aligned face region, from which the corrected feature points are obtained.
In one embodiment, the face feature data comprises a 128-dimensional vector, for example Xi = [xi1, xi2, …, xi68, yi1, yi2, …, yi68], composed of the x values and y values of 68 feature points. The face features of each pre-saved missing child portrait image are obtained in the same manner as the live face features; thus, if there are n missing children, there are n pre-saved 128-dimensional feature vectors.
Accordingly, the Euclidean distance determining unit 230 is adapted to: calculate the average value of the 128-dimensional vectors of the face features and store it in a CSV file; and calculate the average value of the 128-dimensional vectors of the face features of each pre-saved missing child portrait image, which can be computed with the numpy.mean() function (a NumPy function commonly used together with Dlib) and saved in the CSV file.
Then, from the dimension-by-dimension differences between the vectors, the Euclidean distance between the average 128-dimensional vector of the face features of each pre-stored missing child portrait image and the average 128-dimensional vector of the detected face features is obtained.
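The mean-vector and distance computation described above can be sketched as follows. This is an illustrative sketch: numpy.mean() is a NumPy function, and the CSV row layout with a leading label column is an assumption of this example rather than a format the patent specifies.

```python
import csv
import numpy as np

def mean_descriptor(descriptors):
    """Average several 128-D face descriptors (e.g. from multiple images
    of the same child) into one 128-D mean vector via numpy.mean()."""
    return np.mean(np.asarray(descriptors, dtype=float), axis=0)

def save_descriptor_csv(path, label, vec):
    """Append one labelled mean descriptor as a CSV row: label, v1..v128."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([label] + list(vec))

def euclidean_distance(a, b):
    """Dimension-by-dimension differences, squared, summed, and rooted."""
    return float(np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2)))
```

At match time, the live face's mean vector is compared with every stored row via `euclidean_distance`.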
In one embodiment, the missing child early warning unit 240 is specifically adapted to: if several Euclidean distances are smaller than or equal to the preset threshold value, sort the Euclidean distance values and select the face corresponding to the smallest distance as the early-warning object. In actual operation, identifiers can be assigned to the image feature data and stored in the CSV file, which makes it convenient to determine the specific early-warning object.
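The threshold-and-sort selection just described can be sketched as below. The 0.6 default threshold is a common heuristic for dlib-style 128-D descriptors, not a value taken from the patent, and the function name is illustrative.

```python
import numpy as np

def select_warning_target(distances, labels, threshold=0.6):
    """Return the label whose stored mean descriptor is nearest to the
    live face, or None when no distance falls within the threshold."""
    distances = np.asarray(distances, dtype=float)
    hits = np.flatnonzero(distances <= threshold)
    if hits.size == 0:
        return None                      # no missing child matched
    # Among all hits, the smallest distance wins the early warning.
    best = hits[np.argmin(distances[hits])]
    return labels[best]
```

If a label is returned, the unit raises the missing-child early warning for that identifier.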
It should be noted that the specific implementation of each embodiment of the apparatus may be carried out with reference to the specific implementation of the corresponding method embodiment, and is not repeated here.
In summary, the technical scheme of the invention simplifies the recognition model, improves its recognition precision, avoids the use of a loss function, and obtains the recognition result more directly and accurately. Moreover, by combining devices such as mobile phones or vehicle-mounted dashboard cameras with face recognition technology and applying it to the search for missing children, the strength of the general public is brought to bear, improving the success rate of finding missing children.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the teachings herein. The required structure for the construction of such devices is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a real-time monitoring-based missing child early warning implementation device in accordance with an embodiment of the present invention. The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
For example, fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 300 comprises a processor 310 and a memory 320 arranged to store computer executable instructions (computer readable program code). The memory 320 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. The memory 320 has a memory space 330 storing computer readable program code 331 for performing any of the method steps described above. For example, the memory space 330 for storing computer readable program code may include respective computer readable program code 331 for implementing the respective steps in the above method. The computer readable program code 331 can be read from or written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a compact disc (CD), a memory card or a floppy disk. Such a computer program product is typically a computer readable storage medium, for example as described in fig. 4. Fig. 4 illustrates a schematic structure of a computer-readable storage medium according to an embodiment of the present invention. The computer readable storage medium 400 stores computer readable program code 331 for performing the steps of the method according to the invention, which may be read by the processor 310 of the electronic device 300. When executed by the electronic device 300, the computer readable program code 331 causes the electronic device 300 to perform the steps of the method described above; in particular, the computer readable program code 331 stored on the computer readable storage medium may perform the method shown in any of the embodiments described above. The computer readable program code 331 may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.

Claims (7)

1. The method for realizing the early warning of the missing child based on real-time monitoring is characterized by comprising the following steps of:
acquiring and detecting face images appearing on streets in real time;
extracting face characteristic data from the face image by using a trained convolutional neural network;
calculating Euclidean distance between the face characteristic data and the face characteristic data of each pre-stored missing child portrait image, and judging whether the Euclidean distance is smaller than or equal to a preset threshold value;
if the Euclidean distance is smaller than or equal to a preset threshold value, judging that the face image and the pre-stored missing child face image are the same face, and then sending out missing child early warning;
the extracting face feature data from the face image by using the trained convolutional neural network comprises the following steps:
determining a face candidate region;
selecting feature points in the face candidate region, and correcting the feature points;
according to the feature points, determining face feature data of the face image by using the trained convolutional neural network;
the selecting the feature points in the face candidate region and correcting the feature points comprises the following steps:
selecting boundary points, curve inflection points, connection points or equal-division points on the lines connecting these points in the face candidate region as feature points, arranging the x and y coordinate values of each feature point in sequence to form a first feature set, and converting the first feature set into a two-dimensional vector; performing principal component analysis on the two-dimensional vector to extract principal components, wherein each feature point in the first feature set is a coordinate point in the principal-component vector space, the origin of coordinates is the mean of the first feature set, any feature point is the origin of coordinates plus a vector, and the principal-component directions are derived from the covariance of the feature set; sampling texture information around each feature point, comparing the sampled texture information with texture information in a training set, and finding the matching feature points of the texture information in the training set, thereby obtaining a second feature set; rotating, scaling or translating the face candidate region by using the positional relationship between the matching feature points and the corresponding feature points to obtain an aligned face region, and obtaining corrected feature points from the aligned face region;
the face feature data comprises 128-dimensional vectors, the 128-dimensional vectors are composed of x values and y values of 68 feature points, face features of the pre-stored missing child portrait image are obtained in the same mode as the face features are obtained, and the calculating of the Euclidean distance between the face features and the face features of the pre-stored missing child portrait image comprises:
calculating the average value of 128-dimensional vectors of the face features, and storing the average value in a CSV file;
calculating the average value of 128-dimensional vectors of the face characteristics of each pre-stored missing child portrait image, and storing the average value into the CSV file;
and calculating, from the dimension-by-dimension differences between the vectors, the Euclidean distance between the average value of the 128-dimensional vectors of the face features of each pre-stored missing child portrait image and the average value of the 128-dimensional vectors of the face features.
2. The method of claim 1, wherein the acquiring and detecting face images appearing on streets in real time comprises:
and acquiring and detecting the data of the face image from video or photographing of a vehicle-mounted automobile data recorder, a pedestrian mobile phone camera or a street monitoring camera.
3. The method of claim 1, wherein the acquiring and detecting face images appearing on streets in real time further comprises:
and performing face scale correction processing, planar face rotation correction processing, depth face rotation correction processing, image scaling processing, median filtering processing or histogram light equalization processing on the obtained face image, and detecting the preprocessed face image to obtain a face candidate region in the image.
4. The method of claim 1, wherein if the euclidean distance is less than or equal to a preset threshold, determining that the face image and the pre-saved missing child face image are the same face comprises:
and if a plurality of Euclidean distances are smaller than or equal to the preset threshold, sequencing the Euclidean distances, and selecting the face corresponding to the smallest Euclidean distance as the pre-warning object.
5. Missing child early warning implementation device based on real-time monitoring, characterized in that the device includes:
the face image detection unit is suitable for acquiring and detecting face images appearing on streets in real time;
the feature data acquisition unit is suitable for extracting face feature data from the face image by using a trained convolutional neural network;
the Euclidean distance determining unit is suitable for calculating Euclidean distances between the face characteristic data and the face characteristic data of each pre-stored missing child portrait image and judging whether the Euclidean distances are smaller than or equal to a preset threshold value;
the missing child early warning unit is suitable for judging that the face image and the pre-stored missing child face image are the same face if the Euclidean distance is smaller than or equal to a preset threshold value, and then sending out missing child early warning;
the feature data acquisition unit is specifically configured to:
determining a face candidate region;
selecting feature points in the face candidate region, and correcting the feature points;
according to the feature points, determining face feature data of the face image by using the trained convolutional neural network;
the feature data acquisition unit is specifically configured to:
selecting boundary points, curve inflection points, connection points or equal-division points on the lines connecting these points in the face candidate region as feature points, arranging the x and y coordinate values of each feature point in sequence to form a first feature set, and converting the first feature set into a two-dimensional vector; performing principal component analysis on the two-dimensional vector to extract principal components, wherein each feature point in the first feature set is a coordinate point in the principal-component vector space, the origin of coordinates is the mean of the first feature set, any feature point is the origin of coordinates plus a vector, and the principal-component directions are derived from the covariance of the feature set; sampling texture information around each feature point, comparing the sampled texture information with texture information in a training set, and finding the matching feature points of the texture information in the training set, thereby obtaining a second feature set; rotating, scaling or translating the face candidate region by using the positional relationship between the matching feature points and the corresponding feature points to obtain an aligned face region, and obtaining corrected feature points from the aligned face region;
the face feature data comprises 128-dimensional vectors, the 128-dimensional vectors are composed of x values and y values of 68 feature points, the face features of the pre-stored missing child portrait image are obtained in the same mode as the face features are obtained, and the Euclidean distance determining unit is specifically used for:
calculating the average value of 128-dimensional vectors of the face features, and storing the average value in a CSV file;
calculating the average value of 128-dimensional vectors of the face characteristics of each pre-stored missing child portrait image, and storing the average value into the CSV file;
and calculating, from the dimension-by-dimension differences between the vectors, the Euclidean distance between the average value of the 128-dimensional vectors of the face features of each pre-stored missing child portrait image and the average value of the 128-dimensional vectors of the face features.
6. An electronic device, wherein the electronic device comprises: a processor; and a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any of claims 1-4.
7. A computer readable storage medium, wherein the computer readable storage medium stores one or more programs, which when executed by a processor, implement the method of any of claims 1-4.
CN202110024033.XA 2021-01-08 2021-01-08 Missing child early warning implementation method and device based on real-time monitoring Active CN112784712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110024033.XA CN112784712B (en) 2021-01-08 2021-01-08 Missing child early warning implementation method and device based on real-time monitoring

Publications (2)

Publication Number Publication Date
CN112784712A CN112784712A (en) 2021-05-11
CN112784712B true CN112784712B (en) 2023-08-18

Family

ID=75756907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110024033.XA Active CN112784712B (en) 2021-01-08 2021-01-08 Missing child early warning implementation method and device based on real-time monitoring

Country Status (1)

Country Link
CN (1) CN112784712B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420663B (en) * 2021-06-23 2022-02-22 深圳市海清视讯科技有限公司 Child face recognition method and system
CN113821040A (en) * 2021-09-28 2021-12-21 中通服创立信息科技有限责任公司 Robot with depth vision camera and laser radar integrated navigation
CN116912899A (en) * 2023-05-22 2023-10-20 国政通科技有限公司 Personnel searching method and device based on regional network

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246869A (en) * 2013-04-19 2013-08-14 福建亿榕信息技术有限公司 Crime monitoring method based on face recognition technology and behavior and sound recognition
CN106156688A (en) * 2015-03-10 2016-11-23 上海骏聿数码科技有限公司 A kind of dynamic human face recognition methods and system
CN106920256A (en) * 2017-03-14 2017-07-04 上海琛岫自控科技有限公司 A kind of effective missing child searching system
CN106951867A (en) * 2017-03-22 2017-07-14 成都擎天树科技有限公司 Face identification method, device, system and equipment based on convolutional neural networks
CN107609459A (en) * 2016-12-15 2018-01-19 平安科技(深圳)有限公司 A kind of face identification method and device based on deep learning
CN108268859A (en) * 2018-02-08 2018-07-10 南京邮电大学 A kind of facial expression recognizing method based on deep learning
CN109657592A (en) * 2018-12-12 2019-04-19 大连理工大学 A kind of face identification system and method for intelligent excavator
CN109886137A (en) * 2019-01-27 2019-06-14 武汉星巡智能科技有限公司 Infant sleeping posture detection method, device and computer readable storage medium
CN109948450A (en) * 2019-02-22 2019-06-28 深兰科技(上海)有限公司 A kind of user behavior detection method, device and storage medium based on image
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控系统集成有限公司 Artificial intelligence convolutional neural networks face identification system
CN110472491A (en) * 2019-07-05 2019-11-19 深圳壹账通智能科技有限公司 Abnormal face detecting method, abnormality recognition method, device, equipment and medium
CN112183394A (en) * 2020-09-30 2021-01-05 江苏智库智能科技有限公司 Face recognition method and device and intelligent security management system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4375420B2 (en) * 2007-03-26 2009-12-02 株式会社デンソー Sleepiness alarm device and program
IL219572A0 (en) * 2012-05-03 2012-07-31 Pinchas Dahan Method and system for real time displaying of various combinations of selected multiple aircrafts position and their cockpit view

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A feature-point-based face similarity evaluation model; Chen Lijun et al.; Computer Knowledge and Technology; Vol. 14, No. 03; pp. 179-180 *

Also Published As

Publication number Publication date
CN112784712A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN107423690B (en) Face recognition method and device
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
KR102641115B1 (en) A method and apparatus of image processing for object detection
US8989455B2 (en) Enhanced face detection using depth information
US9639748B2 (en) Method for detecting persons using 1D depths and 2D texture
CN105740780B (en) Method and device for detecting living human face
US20160379050A1 (en) Method for determining authenticity of a three-dimensional object
WO2016149944A1 (en) Face recognition method and system, and computer program product
US10650260B2 (en) Perspective distortion characteristic based facial image authentication method and storage and processing device thereof
CN108416291B (en) Face detection and recognition method, device and system
CN108182397B (en) Multi-pose multi-scale human face verification method
CN111144207B (en) Human body detection and tracking method based on multi-mode information perception
CN111178252A (en) Multi-feature fusion identity recognition method
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
JP6351243B2 (en) Image processing apparatus and image processing method
US11908117B2 (en) Image processing method and apparatus for object detection
US10521659B2 (en) Image processing device, image processing method, and image processing program
US20140301608A1 (en) Chemical structure recognition tool
JP2021503139A (en) Image processing equipment, image processing method and image processing program
CN112668374A (en) Image processing method and device, re-recognition network training method and electronic equipment
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
CN112686248B (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
Gürel Development of a face recognition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant