CN112784712A - Missing child early warning implementation method and device based on real-time monitoring - Google Patents

Info

Publication number
CN112784712A
CN112784712A
Authority
CN
China
Prior art keywords
face
feature
image
missing child
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110024033.XA
Other languages
Chinese (zh)
Other versions
CN112784712B (en)
Inventor
苗朝府
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Chuangtong Lianzhi Internet Of Things Co ltd
Original Assignee
Chongqing Chuangtong Lianzhi Internet Of Things Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Chuangtong Lianzhi Internet Of Things Co ltd filed Critical Chongqing Chuangtong Lianzhi Internet Of Things Co ltd
Priority to CN202110024033.XA priority Critical patent/CN112784712B/en
Publication of CN112784712A publication Critical patent/CN112784712A/en
Application granted granted Critical
Publication of CN112784712B publication Critical patent/CN112784712B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation


Abstract

The invention discloses a missing-child early-warning implementation method and device based on real-time monitoring. The method comprises the following steps: acquiring and detecting, in real time, a face image appearing on a street; extracting face feature data from the face image using a trained convolutional neural network; calculating the Euclidean distance between the face feature data and the face feature data of each pre-stored missing-child portrait image, and judging whether the Euclidean distance is less than or equal to a preset threshold; and if the Euclidean distance is less than or equal to the preset threshold, judging that the face image and the pre-stored missing-child face image show the same face and issuing a missing-child warning. The technical scheme combines devices such as mobile phones and vehicle-mounted dashboard cameras with face recognition technology and applies face recognition to the search for missing children, so that the effort of the general public is harnessed and the success rate of finding missing children is improved.

Description

Missing child early warning implementation method and device based on real-time monitoring
Technical Field
The invention relates to the field of missing-child early warning, and in particular to a method, a device, an electronic apparatus and a computer-readable storage medium for implementing missing-child early warning based on real-time monitoring.
Background
China has a vast territory and a large population, and missing or abducted children are difficult to find; moreover, some children are too young to raise an alarm, so clues are hard to obtain. Statistics show that the probability of a missing child in China being recovered is only about 0.1%. It is therefore especially necessary to mobilize society, bring the various media into play, gather as much related information as possible, solicit public help to the greatest extent, and capture clues in time, rather than relying solely on the efforts of the child's relatives and the police.
At present, face recognition plays an important role in scenarios such as video surveillance, face-recognition access control and face-recognition unlocking: visitors are recognized automatically, and an intrusion or unlocking attempt by a stranger triggers an automatic alarm. However, face recognition has not yet been applied to early warning for missing children.
Disclosure of Invention
In view of the above, the present invention provides a missing-child early-warning implementation method, apparatus, electronic device and computer-readable storage medium based on real-time monitoring that overcome, or at least partially solve, the above problems.
According to one aspect of the invention, a missing child early warning implementation method based on real-time monitoring is provided, and the method comprises the following steps:
acquiring and detecting a face image appearing on a street in real time;
extracting face feature data from the face image by using a trained convolutional neural network;
calculating Euclidean distances between the face feature data and face feature data of each pre-stored missing child portrait image, and judging whether the Euclidean distances are smaller than or equal to a preset threshold value or not;
and if the Euclidean distance is smaller than or equal to a preset threshold value, judging that the face image and the face image of the pre-stored missing child are the same face, and then sending out early warning of the missing child.
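As a minimal illustration, the four steps above can be sketched in Python. The feature extractor here is a hypothetical stand-in for the trained convolutional neural network, and the threshold and database contents are invented for the example:

```python
import math

def extract_features(face_image):
    """Hypothetical stand-in for the trained convolutional neural
    network: flattens a tiny 'image' into a normalized feature vector."""
    return [pixel / 255 for row in face_image for pixel in row]

def missing_child_warning(face_image, database, threshold):
    """The four claimed steps: extract features from a detected face,
    compute the Euclidean distance to every pre-stored missing-child
    feature vector, and raise a warning when a distance is within the
    preset threshold."""
    feats = extract_features(face_image)
    for child_id, stored in database.items():
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(feats, stored)))
        if dist <= threshold:
            return f"WARNING: possible match with {child_id}"
    return "no match"

# toy 2x2 'image' and a database with one pre-stored vector
img = [[255, 0], [0, 255]]
db = {"child_001": [1.0, 0.0, 0.0, 1.0]}
print(missing_child_warning(img, db, threshold=0.1))
```

In a real deployment the feature vectors would be the 128-dimensional descriptors described below and the database would hold one entry per missing child.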
Optionally, the acquiring and detecting the face image appearing on the street in real time includes:
acquiring and detecting face image data from video or photographs captured by a vehicle-mounted dashboard camera, a pedestrian's mobile phone camera or a street surveillance camera.
Optionally, the acquiring and detecting the face image appearing on the street in real time further includes:
performing face scale correction, in-plane face rotation correction, in-depth face rotation correction, image scaling, median filtering or histogram equalization on the obtained face image, and detecting the preprocessed face image to obtain a face candidate region in the image.
Optionally, the extracting, by using the trained convolutional neural network, the face feature data from the face image includes:
determining a face candidate region;
selecting characteristic points in the face candidate area, and correcting the characteristic points;
and determining the face feature data of the face image by using the trained convolutional neural network according to the feature points.
Optionally, the selecting the feature points in the face candidate region, and correcting the feature points includes:
selecting boundary points, curve inflection points, connection points or equal-division points on the connecting lines between points of the face candidate region as feature points, arranging the x and y coordinate values of the feature points in sequence to form a first feature set, and converting the first feature set into a two-dimensional vector; performing principal component analysis on the two-dimensional vector to extract principal components, wherein each feature point in the first feature set is a coordinate point in the principal-component vector space, the coordinate origin is the mean of the first feature set, and any feature point is then the coordinate origin plus a vector that encodes the covariance of the preceding feature vectors; sampling texture information around each feature point, comparing the sampled texture information with the texture information in the training set, and taking the point with the closest texture information in the training set as the matching feature point so as to obtain a second feature set; and rotating, scaling or translating the face candidate region using the positional relation between the matching feature points and the corresponding feature points to obtain an aligned face region, from which the corrected feature points are obtained.
Optionally, the face feature data comprises a 128-dimensional vector composed of the x and y values of 68 feature points, the face features of each pre-saved missing-child portrait image are obtained in the same manner as the face features, and calculating the Euclidean distance between the face features and the face features of the pre-saved missing-child portrait images comprises:
calculating the average value of the 128-dimensional vectors of the human face features, and storing the average value in a CSV file;
calculating the average value of 128-dimensional vectors of the face features of the pre-stored missing child portrait images, and storing the average value into the CSV file;
and computing, from the per-dimension differences, the Euclidean distance between the mean of the 128-dimensional face feature vectors of the pre-stored missing-child portrait images and the mean of the 128-dimensional face feature vectors.
Optionally, if the euclidean distance is less than or equal to a preset threshold, determining that the face image and the pre-stored missing child face image are the same face includes:
if the Euclidean distances are smaller than or equal to the preset threshold value, sequencing the Euclidean distances, and selecting the face corresponding to the smallest Euclidean distance as an early warning object.
According to another aspect of the present invention, there is provided a missing child warning implementation apparatus based on real-time monitoring, the apparatus including:
the face image detection unit is suitable for acquiring and detecting a face image appearing on a street in real time;
the characteristic data acquisition unit is suitable for extracting human face characteristic data from the human face image by using the trained convolutional neural network;
the Euclidean distance determining unit is suitable for calculating Euclidean distances between the face feature data and face feature data of each pre-stored missing child portrait image and judging whether the Euclidean distances are smaller than or equal to a preset threshold value or not;
and the missing child early warning unit is suitable for judging that the face image and the prestored missing child face image are the same face if the Euclidean distance is less than or equal to a preset threshold value, and then sending out missing child early warning.
In accordance with still another aspect of the present invention, there is provided an electronic apparatus including: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform a method as any one of the above.
According to a further aspect of the invention, there is provided a computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement a method as any one of the above.
The technical scheme can obtain the following beneficial effects:
the method extracts the face features by using the face images acquired in real time from the vehicle-mounted automobile data recorder or the mobile phone or other functional cameras, compares the face features with the face feature data of the missing child stored in advance, calculates the Euclidean distance between the face features and the face feature data of the missing child, judges whether the face feature data acquired in real time and the face feature data of the missing child are the same face according to the Euclidean distance, avoids the use of a loss function, and more directly and accurately acquires the identification result. According to the technical scheme, the mobile phone or the vehicle-mounted automobile data recorder and other equipment are combined with the face recognition technology, and the face recognition technology is applied to the searching of the lost children, so that the force of the masses is exerted, and the success rate of searching the lost children is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flow diagram of a missing child warning implementation method based on real-time monitoring according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a missing child warning implementation device based on real-time monitoring according to an embodiment of the invention;
FIG. 3 shows a schematic structural diagram of an electronic device according to one embodiment of the invention;
fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a missing child warning implementation method based on real-time monitoring according to one embodiment of the invention, the method includes:
and S110, acquiring and detecting a face image appearing on a street in real time. The embodiment of the invention firstly obtains and detects the face image from the video or the shot picture collected by the vehicle-mounted automobile data recorder, the pedestrian mobile phone camera or the street monitoring intelligent camera, and determines the face candidate area by the rectangular frame.
And S120, extracting face feature data from the face image by using the trained convolutional neural network.
The convolutional neural network in this step is obtained by training on a suitably selected sample set of face images. The labeled feature points of a face include the eyebrows, eyes, nose, mouth, chin and face contour; for each sample in the set it is preferable to represent the face shape with 68 feature points, also called key points. The position of each key point is two-dimensional, so a sample can be represented as Xi = [xi1, xi2, ..., xi68, yi1, yi2, ..., yi68].
A convolutional neural network (CNN) is a class of feedforward neural networks that contain convolution computations and have a deep structure, and is one of the representative algorithms of deep learning. Convolutional neural networks have representation-learning ability and can perform shift-invariant classification of input information according to their hierarchical structure; they are therefore also called "shift-invariant artificial neural networks".
The convolutional neural network comprises an input layer, hidden layers and an output layer. The input layer receives the two-dimensional face vector. The hidden layers comprise convolution layers, pooling layers and fully-connected layers: a convolution layer performs feature extraction on the input data and contains multiple convolution kernels, where each element of a kernel corresponds to a weight coefficient and a bias value, analogous to a neuron of a feedforward neural network. Upstream of the output layer there is usually a fully-connected layer, so that part has the same structure and working principle as the output layer of a conventional feedforward neural network. In accordance with the subsequent computation requirements, the output layer in the invention is not given a classification function and instead directly outputs refined, accurate face feature vectors.
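To make the role of the convolution kernels concrete, the following sketch implements the core operation of a single convolution layer in pure Python. The image, kernel and bias values are invented for illustration; real CNN layers apply many kernels at once using optimized libraries:

```python
def conv2d(image, kernel, bias=0.0):
    """'Valid' 2-D convolution as used in CNN convolution layers
    (technically cross-correlation): slide the kernel over the image
    and take weighted sums plus a bias, producing one feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw)) + bias
             for j in range(out_w)] for i in range(out_h)]

# a 2x2 difference kernel applied to a 4x4 image yields a 3x3 feature map
img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
k = [[1, 0], [0, -1]]
print(conv2d(img, k))
```

Pooling layers then downsample each feature map, and the fully-connected layers combine the pooled features into the final feature vector.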
And S130, calculating Euclidean distances between the face feature data and face feature data of each pre-stored missing child portrait image, and judging whether the Euclidean distances are smaller than or equal to a preset threshold value.
The face feature data of the pre-stored missing-child portrait images has the same data structure and is preferably obtained with the same algorithm as the face feature data acquired in real time. The Euclidean distance is obtained by subtracting the two face feature vectors component by component, squaring each difference, summing the squares, and taking the square root of the sum. A Euclidean distance below the preset threshold indicates that the two sets of face feature data differ little, so they can be identified as the same person.
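Written out, the computation described above is the standard Euclidean distance between two 128-dimensional feature vectors a and b:

```latex
d(a, b) = \sqrt{\sum_{k=1}^{128} (a_k - b_k)^2}
```

The same-face decision is then d(a, b) ≤ t for the preset threshold t.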
And S140, if the Euclidean distance is smaller than or equal to a preset threshold value, judging that the face image and the prestored face image of the missing child are the same face, and then sending out early warning of the missing child.
Therefore, when the Euclidean distance is less than the preset threshold, the face image obtained in real time and the pre-stored face image of a missing child show the same face; that is, the person is probably the missing child being sought. An early warning can then be issued, the image can be returned to the party that published the missing-child information, or clues and related information can be provided to the relevant organizations.
In a preferred embodiment, S110 further includes: and carrying out face scale correction processing, plane face rotation correction processing, deep face rotation correction processing, image scaling processing, median filtering processing or histogram light equalization processing on the obtained face image.
Face scale correction, in-plane face rotation correction and in-depth face rotation correction belong to face image normalization, whose aim is to make photographs of the same person taken under different imaging conditions (illumination intensity, direction, distance, posture, etc.) consistent. Face normalization has two aspects: geometric normalization and gray-level normalization. Geometric normalization, also called position calibration, helps correct the size differences and angular tilt caused by imaging distance and changes in face posture; it addresses face scale variation and face rotation, and specifically comprises three steps: face scale normalization, in-plane face rotation correction (tilted head) and in-depth face rotation correction (turned face).
Both directly acquired gray-scale images and gray-scale images converted from color images contain noise, which strongly affects image quality. Median filtering removes isolated-point noise while preserving the edge characteristics of the image and avoiding obvious blurring, which makes it well suited to the face images used here.
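A one-dimensional sketch shows the behavior described above: an isolated spike is removed while the step edge survives. The window size and sample values are arbitrary; OpenCV's cv2.medianBlur performs the 2-D equivalent on real images:

```python
from statistics import median

def median_filter_1d(signal, window=3):
    """Replace each sample with the median of its neighborhood;
    isolated spikes vanish while step edges are preserved."""
    half = window // 2
    # pad by repeating the edge samples so the output length matches
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [median(padded[i:i + window]) for i in range(len(signal))]

noisy = [10, 10, 255, 10, 10, 80, 80, 80]  # one salt spike at index 2
print(median_filter_1d(noisy))
```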
Histogram equalization is a point operation: it changes the gray value of the image point by point so that, as far as possible, every gray level contains the same number of pixels and the histogram tends toward uniformity. It thus converts an input image into an output image with approximately the same number of pixels at each gray level, i.e. a flat output histogram.
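The mapping through the cumulative histogram can be sketched as follows. The gray levels and the tiny "image" are invented for illustration; OpenCV's cv2.equalizeHist performs the same operation on real images:

```python
def equalize_histogram(pixels, levels=256):
    """Map gray levels through the normalized cumulative histogram so
    the occupied levels are spread over the full output range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # look-up table; a constant image (n == cdf_min) maps to level 0
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]

# a dark, low-contrast 'image' is stretched across the full 0-255 range
dark = [50, 50, 51, 51, 52, 52, 53, 53]
print(equalize_histogram(dark))
```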
In practical operation, any one or more of the above items can be selected according to the practical situation of the data source, and then the preprocessed face image is subjected to preliminary detection to obtain the face candidate region from the image.
In one embodiment, S120 includes: determining a face candidate region; selecting characteristic points in the face candidate area, and correcting the characteristic points; and determining the face feature data of the face image by using the trained convolutional neural network according to the feature points.
In fact, the above operations can be implemented in Python together with OpenCV and Dlib, modifying the existing modules during development to obtain the above model. Dlib is a modern C++ toolbox containing machine learning algorithms and tools for building complex C++ software to solve practical problems. It is widely used in industry and academia, including in robotics, embedded devices, mobile phones and large high-performance computing environments.
Determining the face candidate region can be implemented with Dlib by loading the face detector with detector = dlib.get_frontal_face_detector(), loading the face key-point detector with sp = dlib.shape_predictor(predictor_path), loading the face recognition model with facerec = dlib.face_recognition_model_v1(face_rec_model_path), and so on.
In a preferred embodiment, selecting feature points in the face candidate region and correcting them comprises: selecting boundary points, curve inflection points, connection points or equal-division points on the connecting lines of the eyebrows, eyes, nose, mouth, chin and face contour in the face candidate region as feature points, arranging the x and y coordinate values of the feature points in sequence to form a first feature set X, and converting the first feature set into a two-dimensional vector; performing principal component analysis (PCA) on the two-dimensional vector to extract principal components, wherein each feature point in the first feature set is a coordinate point in the principal-component vector space, the coordinate origin is the mean of the first feature set, and any feature point is then the coordinate origin plus a vector that encodes the covariance of the preceding feature vectors; sampling texture information around each feature point, comparing the sampled texture information with the texture information in the training set, and taking the point with the closest texture information in the training set as the matching feature point so as to obtain a second feature set Y; and obtaining a rotation-scale-translation matrix T from the positional relation between the matching feature points and the corresponding feature points, rotating, scaling or translating the face candidate region to obtain an aligned face region, from which the corrected feature points are obtained.
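As a minimal stand-in for the PCA step, the following sketch finds the dominant principal component of a set of 2-D points by power iteration on their covariance matrix. The sample points are invented; a full implementation would operate on the stacked landmark vectors:

```python
def principal_component(points, iters=200):
    """Dominant principal component of 2-D points via power iteration
    on the 2x2 covariance matrix (a minimal stand-in for full PCA)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # covariance matrix entries
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v  # unit vector along the direction of greatest variance

# points lying near the line y = x: the component points along that line
pts = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
print(principal_component(pts))
```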
The acquisition process is as follows: initialize the vector to 0 to obtain the model X; find the transformation matrix T, using methods such as Kalman filtering, to obtain Y; solve back for the parameters from Y and update them until convergence; and finally rotate, translate and scale the face region with the rotation-scale-translation matrix T to obtain the aligned face region.
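The final rotation-scale-translation step can be sketched as a similarity transform applied to landmark coordinates. The angle, scale and offsets below are example values; in practice T is estimated from the matched feature points:

```python
import math

def similarity_transform(points, angle_deg, scale, tx, ty):
    """Apply a rotation-scale-translation (similarity) transform to a
    set of 2-D landmark points, as in the face-alignment step."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(scale * (cos_a * x - sin_a * y) + tx,
             scale * (sin_a * x + cos_a * y) + ty) for x, y in points]

# rotate a landmark 90 degrees about the origin, double it, shift by (1, 0)
moved = similarity_transform([(1.0, 0.0)], 90, 2.0, 1.0, 0.0)
print(moved)
```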
In one embodiment, the face feature data comprises a 128-dimensional vector such as Xi = [xi1, xi2, ..., xi68, yi1, yi2, ..., yi68], composed of the x and y values of 68 feature points. The face features of each pre-saved missing-child portrait image are obtained in the same way as the face features; assuming n missing children, n pieces of 128-dimensional feature vector data are pre-saved.
Thus, S130 comprises: calculating the mean of the 128-dimensional face feature vectors and saving it in a CSV file; and calculating the mean of the 128-dimensional face feature vectors of the pre-stored missing-child portrait images, which can be computed with the numpy.mean() function, and saving it in the CSV file.
Then, from the per-dimension differences, the Euclidean distance between the mean of the 128-dimensional face feature vectors of the pre-stored missing-child portrait images and the mean of the 128-dimensional face feature vectors is obtained.
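A sketch of this storage-and-comparison scheme, using Python's csv module with an invented identifier and toy 4-component vectors in place of the 128-dimensional means:

```python
import csv
import io
import math

def save_features(rows, fh):
    """Write {id: feature-vector} rows to a CSV file: the identifier
    followed by the vector components."""
    w = csv.writer(fh)
    for face_id, vec in rows.items():
        w.writerow([face_id] + list(vec))

def load_features(fh):
    """Read the CSV back into {id: feature-vector}."""
    return {r[0]: [float(v) for v in r[1:]] for r in csv.reader(fh) if r}

# round-trip a toy mean vector through an in-memory CSV, then compare
buf = io.StringIO()
save_features({"child_001": [0.25, 0.5, 0.75, 1.0]}, buf)
buf.seek(0)
db = load_features(buf)
live = [0.2, 0.5, 0.8, 1.0]
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(db["child_001"], live)))
print(dist)
```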
In one embodiment, S140 specifically comprises: if several Euclidean distances are less than or equal to the preset threshold, sorting those Euclidean distances and selecting the face corresponding to the smallest one as the early-warning target. In practice, each face's feature data can be given an identifier, with each identifier also saved in the CSV file, which makes it easy to determine the specific early-warning target.
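The selection rule in S140 can be sketched as follows; the identifiers, distances and threshold are invented for the example:

```python
def warning_target(distances, threshold):
    """From {child_id: distance}, keep only entries at or below the
    threshold, sort them, and return the id with the smallest distance
    (the early-warning target), or None if nothing matched."""
    hits = sorted((d, cid) for cid, d in distances.items() if d <= threshold)
    return hits[0][1] if hits else None

dists = {"child_A": 0.62, "child_B": 0.31, "child_C": 0.35}
print(warning_target(dists, threshold=0.4))
```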
Fig. 2 shows a missing child warning implementation apparatus 200 based on real-time monitoring according to an embodiment of the present invention, the apparatus 200 includes:
the face image detection unit 210 acquires and detects a face image appearing on a street in real time. The embodiment of the invention firstly obtains and detects the face image from the video or the shot picture collected by the vehicle-mounted automobile data recorder, the pedestrian mobile phone camera or the street monitoring intelligent camera, and determines the face candidate area by the rectangular frame.
The feature data obtaining unit 220 extracts face feature data from the face image by using the trained convolutional neural network.
The euclidean distance determining unit 230 calculates euclidean distances between the face feature data and the face feature data of each of the pre-stored missing child portrait images, and determines whether the euclidean distances are less than or equal to a preset threshold value.
The face feature data of the pre-stored missing-child portrait images has the same data structure and is preferably obtained with the same algorithm as the face feature data acquired in real time. The Euclidean distance is obtained by subtracting the two face feature vectors component by component, squaring each difference, summing the squares, and taking the square root of the sum. A Euclidean distance below the preset threshold indicates that the two sets of face feature data differ little, so they can be identified as the same person.
And if the Euclidean distance is smaller than or equal to a preset threshold value, the missing child early warning unit 240 judges that the face image and the prestored missing child face image are the same face, and then sends out the missing child early warning.
Therefore, when the Euclidean distance is less than the preset threshold, the face image obtained in real time and the pre-stored face image of a missing child show the same face; that is, the person is probably the missing child being sought. An early warning can then be issued, the image can be returned to the party that published the missing-child information, or clues and related information can be provided to the relevant organizations.
In a preferred embodiment, the face image detection unit 210 is adapted to: and carrying out face scale correction processing, plane face rotation correction processing, deep face rotation correction processing, image scaling processing, median filtering processing or histogram light equalization processing on the obtained face image.
In practical operation, any one or more of the above items can be selected according to the practical situation of the data source, and then the preprocessed face image is subjected to preliminary detection to obtain the face candidate region from the image.
In an embodiment, the feature data obtaining unit 220 is further adapted to: determining a face candidate region; selecting characteristic points in the face candidate area, and correcting the characteristic points; and determining the face feature data of the face image by using the trained convolutional neural network according to the feature points.
In a preferred embodiment, the feature data acquisition unit 220 is specifically adapted to: select boundary points, curve inflection points, connection points or equal-division points on the connecting lines of the eyebrows, eyes, nose, mouth, chin and face contour in the face candidate region as feature points, arrange the x and y coordinate values of the feature points in sequence to form a first feature set X, and convert the first feature set into a two-dimensional vector; perform principal component analysis (PCA) on the two-dimensional vector to extract principal components, wherein each feature point in the first feature set is a coordinate point in the principal-component vector space, the coordinate origin is the mean of the first feature set, and any feature point is then the coordinate origin plus a vector that encodes the covariance of the preceding feature vectors; sample texture information around each feature point, compare the sampled texture information with the texture information in the training set, and take the point with the closest texture information in the training set as the matching feature point so as to obtain a second feature set Y; and obtain a rotation-scale-translation matrix T from the positional relation between the matching feature points and the corresponding feature points, rotate, scale or translate the face candidate region to obtain an aligned face region, and obtain the corrected feature points from the aligned face region.
In one embodiment, the face feature data comprises a 128-dimensional vector composed of the x and y values of 68 feature points, for example Xi = [xi1, xi2, ..., xi68, yi1, yi2, ..., yi68]. The face features of the pre-saved missing child portrait images are obtained in the same manner; assuming there are n missing children, n pieces of 128-dimensional feature vector data are pre-saved.
Accordingly, the Euclidean distance determining unit 230 is adapted to: calculate the mean of the 128-dimensional face feature vectors and save it to a CSV file; and calculate the mean of the 128-dimensional face feature vectors of each pre-saved missing child portrait image (the mean can be computed with the numpy mean() function in a Dlib-based pipeline) and save it to the same CSV file.
Then, from the differences between corresponding vector components, the Euclidean distance between the mean of the 128-dimensional face feature vectors of the pre-saved missing child portrait images and the mean of the 128-dimensional face feature vectors is obtained.
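The averaging, CSV storage, and distance steps above can be sketched as follows, assuming numpy; the file layout (one identifier followed by the mean vector per row) is an illustrative assumption, since the patent only specifies that the means are saved to a CSV file:

```python
import csv

import numpy as np


def save_mean_feature(csv_path, face_id, feature_vectors):
    """Average several feature vectors for one face and append the
    identifier plus the mean vector as one CSV row."""
    mean_vec = np.mean(np.asarray(feature_vectors, dtype=float), axis=0)
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([face_id, *mean_vec])
    return mean_vec


def euclidean_distance(a, b):
    """Euclidean distance obtained from the component-wise differences."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum(diff * diff)))
```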
In one embodiment, the missing child warning unit 240 is specifically adapted to: if several Euclidean distances are less than or equal to the preset threshold, sort them and select the face corresponding to the smallest Euclidean distance as the early-warning object. In practice, the feature data of each face can be assigned an identifier, with each identifier also stored in the CSV file, so that the specific early-warning object can be determined conveniently.
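The selection rule above can be sketched as follows: among all pre-saved missing children whose distance is at or below the threshold, the smallest distance identifies the single early-warning object. The dict-based interface is an assumption for illustration, not part of the patent:

```python
def select_warning_object(distances, threshold):
    """distances: {identifier: Euclidean distance} for every pre-saved
    missing child. Returns the identifier with the smallest qualifying
    distance, or None if no distance is within the threshold."""
    candidates = {fid: d for fid, d in distances.items() if d <= threshold}
    if not candidates:
        return None  # no pre-saved missing child matched this face
    # Sorting the qualifying distances and taking the smallest picks
    # the single early-warning object.
    return min(candidates, key=candidates.get)
```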
It should be noted that, for the specific implementation of each apparatus embodiment, reference may be made to the specific implementation of the corresponding method embodiment, which is not described herein again.
In conclusion, the technical scheme of the invention simplifies the recognition model, improves its recognition accuracy, avoids the use of a loss function, and obtains the recognition result more directly and accurately. Moreover, by combining devices such as mobile phones and vehicle-mounted driving recorders with face recognition technology and applying that technology to the search for missing children, the scheme harnesses the efforts of the general public and further improves the success rate of finding missing children.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the real-time monitoring based missing child warning implementation apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, Fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the invention. The electronic device 300 comprises a processor 310 and a memory 320 arranged to store computer-executable instructions (computer-readable program code). The memory 320 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. The memory 320 has a storage space 330 storing computer-readable program code 331 for performing any of the method steps described above. For example, the storage space 330 may comprise respective pieces of computer-readable program code 331 for implementing the various steps of the above method. The computer-readable program code 331 may be read from or written to one or more computer program products, which comprise a program code carrier such as a hard disk, a compact disc (CD), a memory card or a floppy disk. Such a computer program product is typically a computer-readable storage medium as described with reference to Fig. 4. Fig. 4 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention. The computer-readable storage medium 400 stores computer-readable program code 331, readable by the processor 310 of the electronic device 300, for performing the steps of the method according to the invention. When executed by the electronic device 300, this code causes the electronic device 300 to perform the steps of the method described above; in particular, the computer-readable program code 331 stored on the computer-readable storage medium may perform the method shown in any of the embodiments described above. The computer-readable program code 331 may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (10)

1. A missing child early warning implementation method based on real-time monitoring is characterized by comprising the following steps:
acquiring and detecting a face image appearing on a street in real time;
extracting face feature data from the face image by using a trained convolutional neural network;
calculating Euclidean distances between the face feature data and face feature data of each pre-stored missing child portrait image, and judging whether the Euclidean distances are smaller than or equal to a preset threshold value or not;
and if the Euclidean distance is smaller than or equal to a preset threshold value, judging that the face image and the face image of the pre-stored missing child are the same face, and then sending out early warning of the missing child.
2. The method of claim 1, wherein the acquiring and detecting in real-time a facial image appearing on a street comprises:
and acquiring and detecting the data of the face image from the video recording or the photo taking of a vehicle-mounted automobile data recorder, a pedestrian mobile phone camera or a street monitoring camera.
3. The method of claim 1, wherein said obtaining and detecting in real-time images of faces appearing on streets further comprises:
and carrying out face scale correction processing, plane face rotation correction processing, deep face rotation correction processing, image scaling processing, median filtering processing or histogram light equalization processing on the obtained face image, and detecting the preprocessed face image to obtain a face candidate region in the image.
4. The method of claim 1, wherein the extracting the facial feature data from the facial image by using the trained convolutional neural network comprises:
determining a face candidate region;
selecting characteristic points in the face candidate area, and correcting the characteristic points;
and determining the face feature data of the face image by using the trained convolutional neural network according to the feature points.
5. The method of claim 4, wherein the selecting the feature points in the face candidate region and correcting the feature points comprises:
selecting boundary points, curve inflection points, connection points or equal-division points on the connecting lines of the eyebrows, eyes, nose, mouth, chin and face contour in the face candidate region as feature points; arranging the x and y coordinate values of the feature points in sequence to form a first feature set, and converting the first feature set into a two-dimensional vector; performing principal component analysis on the two-dimensional vector to extract principal components, wherein each feature point in the first feature set is a coordinate point in the principal component vector space, the coordinate origin is the mean of the first feature set, any feature point is then the coordinate origin plus a vector, and the principal component vectors are the eigenvectors of the covariance matrix of the feature set; sampling texture information around each feature point, comparing it with the texture information in the training set, and taking the point with the closest texture in the training set as the matching feature point to obtain a second feature set; rotating, scaling or translating the face candidate region according to the positional relation between the matching feature points and the corresponding feature points to obtain an aligned face region; and obtaining corrected feature points from the aligned face region.
6. The method of claim 4, wherein the facial feature data comprises a 128-dimensional vector consisting of x and y values of 68 feature points, wherein the facial features of the pre-saved missing child portrait image are obtained in the same manner as the facial features were obtained, and wherein calculating the Euclidean distance between the facial features and the facial features of the pre-saved missing child portrait image comprises:
calculating the average value of the 128-dimensional vectors of the human face features, and storing the average value in a CSV file;
calculating the average value of 128-dimensional vectors of the face features of the pre-stored missing child portrait images, and storing the average value into the CSV file;
and obtaining, from the differences between corresponding vector components, the Euclidean distance between the mean of the 128-dimensional face feature vectors of the pre-stored missing child portrait images and the mean of the 128-dimensional face feature vectors.
7. The method of claim 1, wherein if the Euclidean distance is less than or equal to a preset threshold value, the determining that the face image and the pre-stored missing child face image are the same face comprises:
if several Euclidean distances are less than or equal to the preset threshold, sorting the Euclidean distances and selecting the face corresponding to the smallest Euclidean distance as the early-warning object.
8. A missing child early warning implementation apparatus based on real-time monitoring, characterized in that the apparatus comprises:
the face image detection unit is suitable for acquiring and detecting a face image appearing on a street in real time;
the characteristic data acquisition unit is suitable for extracting human face characteristic data from the human face image by using the trained convolutional neural network;
the Euclidean distance determining unit is suitable for calculating Euclidean distances between the face feature data and face feature data of each pre-stored missing child portrait image and judging whether the Euclidean distances are smaller than or equal to a preset threshold value or not;
and the missing child early warning unit is suitable for judging that the face image and the prestored missing child face image are the same face if the Euclidean distance is less than or equal to a preset threshold value, and then sending out missing child early warning.
9. An electronic device, wherein the electronic device comprises: a processor; and a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the method of any one of claims 1-7.
10. A computer readable storage medium, wherein the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of claims 1-7.
CN202110024033.XA 2021-01-08 2021-01-08 Missing child early warning implementation method and device based on real-time monitoring Active CN112784712B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110024033.XA CN112784712B (en) 2021-01-08 2021-01-08 Missing child early warning implementation method and device based on real-time monitoring


Publications (2)

Publication Number Publication Date
CN112784712A true CN112784712A (en) 2021-05-11
CN112784712B CN112784712B (en) 2023-08-18

Family

ID=75756907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110024033.XA Active CN112784712B (en) 2021-01-08 2021-01-08 Missing child early warning implementation method and device based on real-time monitoring

Country Status (1)

Country Link
CN (1) CN112784712B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420663A (en) * 2021-06-23 2021-09-21 深圳市海清视讯科技有限公司 Child face recognition method and system
CN113821040A (en) * 2021-09-28 2021-12-21 中通服创立信息科技有限责任公司 Robot with depth vision camera and laser radar integrated navigation
CN116912899A (en) * 2023-05-22 2023-10-20 国政通科技有限公司 Personnel searching method and device based on regional network

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080238694A1 (en) * 2007-03-26 2008-10-02 Denso Corporation Drowsiness alarm apparatus and program
CN103246869A (en) * 2013-04-19 2013-08-14 福建亿榕信息技术有限公司 Crime monitoring method based on face recognition technology and behavior and sound recognition
US20150145704A1 (en) * 2012-05-03 2015-05-28 Pinchas Dahan Method and system for real time displaying of various combinations of selected multiple aircrafts position and their cockpit view
CN106156688A (en) * 2015-03-10 2016-11-23 上海骏聿数码科技有限公司 A kind of dynamic human face recognition methods and system
CN106920256A (en) * 2017-03-14 2017-07-04 上海琛岫自控科技有限公司 A kind of effective missing child searching system
CN106951867A (en) * 2017-03-22 2017-07-14 成都擎天树科技有限公司 Face identification method, device, system and equipment based on convolutional neural networks
CN107609459A (en) * 2016-12-15 2018-01-19 平安科技(深圳)有限公司 A kind of face identification method and device based on deep learning
CN108268859A (en) * 2018-02-08 2018-07-10 南京邮电大学 A kind of facial expression recognizing method based on deep learning
CN109657592A (en) * 2018-12-12 2019-04-19 大连理工大学 A kind of face identification system and method for intelligent excavator
CN109886137A (en) * 2019-01-27 2019-06-14 武汉星巡智能科技有限公司 Infant sleeping posture detection method, device and computer readable storage medium
CN109948450A (en) * 2019-02-22 2019-06-28 深兰科技(上海)有限公司 A kind of user behavior detection method, device and storage medium based on image
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控系统集成有限公司 Artificial intelligence convolutional neural networks face identification system
CN110472491A (en) * 2019-07-05 2019-11-19 深圳壹账通智能科技有限公司 Abnormal face detecting method, abnormality recognition method, device, equipment and medium
CN112183394A (en) * 2020-09-30 2021-01-05 江苏智库智能科技有限公司 Face recognition method and device and intelligent security management system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG M H et al.: "Detecting faces in images: A survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pages 34-58, XP011094133, DOI: 10.1109/34.982883 *
CHEN Lijun et al.: "A face similarity evaluation model based on feature points", Computer Knowledge and Technology, vol. 14, no. 3, pages 179-180 *


Also Published As

Publication number Publication date
CN112784712B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
US20230418389A1 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
Singh et al. Face detection and recognition system using digital image processing
Portmann et al. People detection and tracking from aerial thermal views
JP5639478B2 (en) Detection of facial expressions in digital images
JP5726125B2 (en) Method and system for detecting an object in a depth image
JP5629803B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN112686812B (en) Bank card inclination correction detection method and device, readable storage medium and terminal
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
US9639748B2 (en) Method for detecting persons using 1D depths and 2D texture
JP4743823B2 (en) Image processing apparatus, imaging apparatus, and image processing method
US10650260B2 (en) Perspective distortion characteristic based facial image authentication method and storage and processing device thereof
US6757571B1 (en) System and process for bootstrap initialization of vision-based tracking systems
KR101781358B1 (en) Personal Identification System And Method By Face Recognition In Digital Image
CN108416291B (en) Face detection and recognition method, device and system
CN111144207B (en) Human body detection and tracking method based on multi-mode information perception
CN109063626B (en) Dynamic face recognition method and device
CN111178252A (en) Multi-feature fusion identity recognition method
CN112686248B (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
CN111985314A (en) ViBe and improved LBP-based smoke detection method
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
CN112070077A (en) Deep learning-based food identification method and device
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN110766093A (en) Video target re-identification method based on multi-frame feature fusion
CN113689365B (en) Target tracking and positioning method based on Azure Kinect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant