CN113449663B - Collaborative intelligent security method and device based on polymorphic fitting - Google Patents


Info

Publication number
CN113449663B
Authority
CN
China
Prior art keywords
fitting
face
motion path
value
image information
Prior art date
Legal status
Active
Application number
CN202110763475.6A
Other languages
Chinese (zh)
Other versions
CN113449663A (en)
Inventor
Liu Xiaoqing
Luo Fang
Liu Youcong
Current Assignee
Shenzhen Zhongzhi Mingke Intelligent Technology Co ltd
Original Assignee
Shenzhen Zhongzhi Mingke Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhongzhi Mingke Intelligent Technology Co ltd filed Critical Shenzhen Zhongzhi Mingke Intelligent Technology Co ltd
Priority to CN202110763475.6A
Publication of CN113449663A
Application granted
Publication of CN113449663B


Classifications

    • G06T7/13 — Image analysis; segmentation; edge detection
    • G06T7/136 — Image analysis; segmentation; edge detection involving thresholding
    • G06T2207/10016 — Image acquisition modality: video; image sequence
    • Y02A30/60 — Adapting or protecting infrastructure: planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of intelligent security, and particularly relates to a collaborative intelligent security method and device based on polymorphic fitting. The method first acquires video image information of a target scene, takes the first frame of the video as the initial image information, and extracts all object regions from that initial image. Image authentication information of the human body and motion trajectory information of the human body are then obtained through two rounds of fitting and used for security early warning. The first fitting extracts the human body part of the image by a comparatively efficient method, which improves efficiency, and the fitted result suppresses interference from background factors when locating the target region, which improves accuracy. Because the early-warning judgment draws on the results of both fittings, its accuracy is further improved.

Description

Collaborative intelligent security method and device based on polymorphic fitting
Technical Field
The invention belongs to the technical field of intelligent security, and particularly relates to a collaborative intelligent security method and device based on polymorphic fitting.
Background
With the development of science and technology and the rapid rise of information technology in the twenty-first century, intelligent security has advanced into a brand-new field, and the boundary between security technology and the computer is gradually disappearing. Without security technology, society would be unstable, and the advance of science and technology worldwide would be hindered.
The popularization of Internet of Things technology has let urban security evolve from the simple protection systems of the past into comprehensive city-wide systems. Urban security projects now cover many fields, including street communities, buildings, banks and post offices, road monitoring, motor vehicles, police officers, moving objects, ships and more. For important locations in particular, such as airports, docks, water, electricity and gas plants, bridges and dams, riverways and subways, comprehensive three-dimensional protection can be established with the help of wireless communication, tracking and positioning once Internet of Things technology is introduced. Such a comprehensive system can integrate city management, environmental-protection monitoring, traffic management, emergency command and other applications. The Internet of Vehicles in particular enables faster and more accurate tracking and positioning in public traffic management, vehicle accident handling and vehicle theft prevention, and information such as disaster and accident reports, road flow, vehicle positions, public facility safety and meteorological data can be acquired through vehicles at any time and place.
Most existing intelligent security technologies are based on image, sound, fingerprint or other identity authentication techniques. However, since every identity information acquisition technique carries a certain error, relying on a single authentication mode greatly limits accuracy, while combining multiple kinds of identity information reduces efficiency. Developing an intelligent security method that preserves efficiency while improving accuracy is therefore of great significance.
Patent application CN201510057596.3A discloses an intelligent security method and system based on a mobile terminal. The method specifically includes: collecting and storing face image information in advance; when an infrared detector on the intelligent security camera detects that a person has entered the camera's shooting range, judging whether the camera has started its image acquisition function; if so, comparing the acquired real-time face image information with the pre-stored face image information; and if the two do not match, sending prompt information to a mobile terminal pre-associated with the intelligent security camera.
In essence, this still realizes identity authentication through image recognition and matching, and on that basis achieves security and monitoring. Because environmental factors or the system's own noise often reduce recognition accuracy during operation, deviations appear, which lowers the authentication accuracy of the security system.
Disclosure of Invention
In view of the above, the main object of the present invention is to provide a collaborative intelligent security method and apparatus based on polymorphic fitting. By performing fitting twice, image authentication information of the human body and motion trajectory information of the human body are obtained separately and used for security early warning. The first fitting extracts the human body part of the image by a comparatively efficient method, improving efficiency, and the fitted result better suppresses interference from background factors when locating the target region, improving accuracy. Because the early-warning judgment uses the results of both fittings, its accuracy is improved as well.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the collaborative intelligent security method based on polymorphic fitting comprises the following steps:
Step 1: acquiring video image information of a target scene, and taking the first frame in the video image information as the initial image information; extracting all object regions in the initial image information;
Step 2: performing a first contour fitting on all object regions to obtain the fitted contours of all objects;
Step 3: identifying the fitted contours, and finding the human body fitted contour among all the fitted contours; finding the human body region in the object regions through the found human body fitted contour;
Step 4: carrying out face recognition based on the found human body region to obtain a face recognition result;
Step 5: taking the position of the human body region in the initial image information as an initial point, recording the positions of the human body region in the other video frames as intermediate points, and connecting all the intermediate points from the initial point in the time order of the video frames to obtain the original contour of the motion path;
Step 6: setting a plurality of fitting nodes on the original contour of the motion path, the number of fitting nodes being set according to a set value and being at least 5;
Step 7: performing a second contour fitting on the motion path based on the motion path and the fitting nodes arranged on it, to obtain the fitted contour of the motion path;
Step 8: identifying the motion path based on the obtained fitted contour of the motion path to obtain a motion path recognition result;
Step 9: judging whether to issue an early warning based on the obtained motion path recognition result and the face recognition result.
Further, the method for extracting all object regions from the initial image information in step 1 includes: segmenting the initial image information using a segmentation threshold calculated by the following formula:
[Formula image not reproduced: calculation of the segmentation threshold H.]
where H is the calculated segmentation threshold, N is the number of pixels in the initial image information, C_j is the value of the j-th pixel, and α is an adjustment coefficient with a value range of 200 to 500; determining the background color according to the segmentation result; creating a multi-channel image, assigning the multi-channel image in the region corresponding to the background color a first value and the multi-channel image outside that region a second value, to obtain a binary image of the initial image information; and determining the background region and the object regions in the initial image information according to the binary image.
Further, the method for performing first contour fitting on all object regions in step 2 to obtain the fitted contours of all objects includes: carrying out edge detection to find the edge of the object area as a graph to be fitted; setting a plurality of fitting nodes on a graph to be fitted, and generating ordered discrete points on the graph to be fitted based on the geometric characteristics of the graph to be fitted and the set fitting nodes; and fitting the graph to be fitted based on the ordered discrete points to obtain a fitted contour.
Further, the method of performing edge detection to find the edge of the object region includes: performing a convolution operation on the object region using the template of the edge detection operator in each preset direction and the logarithm-transformed gray value of each pixel in the neighborhood of the object region, to obtain a color level fixed value of the object region in each preset direction; obtaining the edge saliency value of the object region from the color level fixed values in the preset directions; comparing the edge saliency value with an edge saliency threshold, and taking points whose edge saliency value is greater than or equal to the threshold as edge points; and extracting the edge of the object region from the obtained edge points.
Further, the method for obtaining the edge saliency value of the object region from the color level fixed values in each preset direction includes: the edge saliency value of the object region is calculated using the following formulas:
[Formula images not reproduced: calculation of the edge saliency value.]
where r, g and b are the RGB values of the pixels in the object region, λ is an adjustment coefficient with a value range of 1 to 5, and A, B and C are the color level fixed values in the respective preset directions.
Further, the method in step 7 of performing the second contour fitting on the motion path, based on the motion path and the fitting nodes arranged on it, to obtain the fitted contour of the motion path includes: generating ordered discrete points on the motion path based on the motion path and the fitting nodes arranged on it; and fitting the motion path based on the ordered discrete points to obtain the fitted contour of the motion path.
Further, the method in step 4 of performing face recognition based on the found human body region to obtain the face recognition result includes: acquiring all face image information in the video image information corresponding to the human body region; extracting face feature information from each piece of face image information to obtain the face feature information group corresponding to the human body region; and recognizing the face based on the face feature information group to obtain the recognition result.
Further, the recognizing the human face based on the human face feature information group to obtain a recognition result includes: fusing the face feature information in the face feature information group to obtain fused feature information; calculating the similarity between the fusion characteristic information and the face characteristic information in a preset first database; and selecting the face feature information with the highest similarity in the first database as a recognition result.
Further, the method for obtaining the fusion feature information by fusing the face feature information in the face feature information group includes: performing fusion calculation on the face feature information by using the following formula to obtain fusion feature information:
[Formula images not reproduced: calculation of the fusion feature information M.]
where M is the fusion feature information; L is the number of face feature information items; P is the face feature information; and T is the fusion group value of the face feature information, defined as a randomly generated array of fusion values.
The invention further provides a collaborative intelligent security device based on polymorphic fitting, for implementing the method described above.
The collaborative intelligent security method and device based on polymorphic fitting have the following beneficial effects:
1. the efficiency is high: in the process of carrying out multiple fitting early warning, the image information is not directly analyzed in the traditional technology, the amount of algorithms required to be carried out is huge because the image information is directly identified and judged, and the traditional method directly extracts a target area part in an image and also needs a great amount of subsequent denoising and identifying work because of other interference parts in the image per se; although the invention uses two times of fitting, and compared with the traditional method, the invention has more preorders before identification and judgment, but the invention uses the method based on fitting and contour, so the algorithm quantity is equivalent to the prior art, and on the premise, the accuracy is improved.
2. High accuracy: when performing face image recognition, the human body contour is first obtained by fitting, which eliminates interference from background noise and other objects. Furthermore, once the contour is obtained, edge detection based on the edge saliency value is used to find the edge of the object region before recognition and judgment, which improves both the efficiency and the accuracy of the judgment. Likewise, when judging the path, the invention judges a fitted path contour instead of the raw path. The fitted path is easier to judge and compute than the directly obtained path, which reduces the influence of burrs and other branch paths produced during path generation and improves efficiency; in effect the path undergoes feature extraction, so computation is faster without harming accuracy.
Drawings
Fig. 1 is a schematic diagram of the system structure of the collaborative intelligent security method based on polymorphic fitting according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a human body fitted contour in the method and apparatus according to the embodiment of the present invention;
Fig. 3 is a schematic diagram of the principle of setting fitting nodes during fitting in the method and apparatus according to the embodiment of the present invention;
Fig. 4 is a schematic diagram of fitting with different numbers of fitting nodes in the method and apparatus according to the embodiment of the present invention;
Fig. 5 is a schematic diagram of the principle of motion path fitting in the method and apparatus according to the embodiment of the present invention;
Fig. 6 is a graph of the early warning accuracy of the method and apparatus according to the embodiment of the present invention versus the number of experiments, together with a comparative result for the prior art;
Fig. 7 is a further graph of the early warning accuracy versus the number of experiments, together with a comparative result for the prior art.
Detailed Description
The method of the present invention will be described in further detail below with reference to the accompanying drawings and embodiments of the invention.
Example 1
As shown in Fig. 1, the collaborative intelligent security method based on polymorphic fitting comprises the following steps:
Step 1: acquiring video image information of a target scene, and taking the first frame in the video image information as the initial image information; extracting all object regions in the initial image information;
Step 2: performing a first contour fitting on all object regions to obtain the fitted contours of all objects;
Step 3: identifying the fitted contours, and finding the human body fitted contour among all the fitted contours; finding the human body region in the object regions through the found human body fitted contour;
Step 4: carrying out face recognition based on the found human body region to obtain a face recognition result;
Step 5: taking the position of the human body region in the initial image information as an initial point, recording the positions of the human body region in the other video frames as intermediate points, and connecting all the intermediate points from the initial point in the time order of the video frames to obtain the original contour of the motion path;
Step 6: setting a plurality of fitting nodes on the original contour of the motion path, the number of fitting nodes being set according to a set value and being at least 5;
Step 7: performing a second contour fitting on the motion path based on the motion path and the fitting nodes arranged on it, to obtain the fitted contour of the motion path;
Step 8: identifying the motion path based on the obtained fitted contour of the motion path to obtain a motion path recognition result;
Step 9: judging whether to issue an early warning based on the obtained motion path recognition result and the face recognition result.
Referring to Fig. 2, after the human body region among the object regions is found, the invention performs one contour fitting on the human body region; during contour fitting, a plurality of fitting nodes are arranged on the edge, spaced uniformly at an angular interval α.
Referring to Fig. 3, the fitting nodes are placed on the edge at a set spacing from the initial fitting node.
Referring to Fig. 4, the number of fitting nodes set during fitting affects the final accuracy: the more fitting nodes, the higher the accuracy.
Referring to Fig. 5, the fitted contour has a smaller rate of curve change than the original contour and is easier to judge and analyze.
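As a concrete illustration of the uniform angular node placement described above, the following Python sketch places fitting nodes on a closed contour at intervals of an angle α around the contour centroid. This is a hypothetical implementation for illustration only; the patent does not give placement code, and the function name and numpy-based approach are assumptions.

```python
import numpy as np

def place_fitting_nodes(contour, alpha_deg=30.0):
    """Place fitting nodes on a closed contour at uniform angular
    intervals (angle alpha) around the contour centroid.
    `contour` is an (N, 2) array of ordered edge points."""
    centroid = contour.mean(axis=0)
    # Polar angle of every contour point relative to the centroid.
    angles = np.degrees(np.arctan2(contour[:, 1] - centroid[1],
                                   contour[:, 0] - centroid[0])) % 360.0
    nodes = []
    for target in np.arange(0.0, 360.0, alpha_deg):
        # Pick the contour point whose polar angle is closest to the target.
        idx = np.argmin(np.abs(angles - target))
        nodes.append(contour[idx])
    return np.asarray(nodes)
```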
Example 2
On the basis of the above embodiment, the method for extracting all object regions from the initial image information in step 1 includes: segmenting the initial image information using a segmentation threshold calculated by the following formula:
[Formula image not reproduced: calculation of the segmentation threshold H.]
where H is the calculated segmentation threshold, N is the number of pixels in the initial image information, C_j is the value of the j-th pixel, and α is an adjustment coefficient with a value range of 200 to 500; determining the background color according to the segmentation result; creating a multi-channel image, assigning the multi-channel image in the region corresponding to the background color a first value and the multi-channel image outside that region a second value, to obtain a binary image of the initial image information; and determining the background region and the object regions in the initial image information according to the binary image.
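A minimal Python sketch of the binarization step follows. Because the threshold formula is reproduced only as an image, the threshold H is taken here as a precomputed input; the function and the rule for choosing which side of the threshold is the background are assumptions.

```python
import numpy as np

def segment_objects(gray, H):
    """Binarize the starting frame with a precomputed segmentation
    threshold H. Returns the binary image (first value = background,
    second value = objects) and a mask of the object regions."""
    below = gray < H
    # Assumption: the dominant side of the threshold is the background colour.
    background = below if below.mean() > 0.5 else ~below
    binary = np.where(background, 0, 255).astype(np.uint8)
    object_mask = ~background
    return binary, object_mask
```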
Example 3
On the basis of the above embodiment, the method for performing the first contour fitting on all the object regions in step 2 to obtain the fitted contours of all the objects includes: carrying out edge detection to find the edge of the object area as a graph to be fitted; setting a plurality of fitting nodes on a graph to be fitted, and generating ordered discrete points on the graph to be fitted based on the geometric characteristics of the graph to be fitted and the set fitting nodes; and fitting the graph to be fitted based on the ordered discrete points to obtain a fitted contour.
Specifically, the invention performs contour fitting with the same data processing approach as curve fitting: a continuous curve is used to approximately describe the functional relationship between the coordinates represented by a group of discrete points in the plane, i.e., discrete data are approximated with an analytical expression. In scientific experiments or social activities, a set of data pairs (x_i, y_i) (i = 1, 2, …, m) of the quantities x and y is obtained through experiment or observation, where the x_i are distinct. One seeks an analytical expression y = f(x, c) in some class of functions that reflects the dependency between x and y, that is, that "best" approximates or fits the known data in a sense appropriate to the underlying laws of the data. f(x, c) is often called the fitting model, where c = (c_1, c_2, …, c_n) are parameters to be determined; when c appears linearly in f, the model is called linear, otherwise non-linear. There are many criteria for goodness of fit. The most common one chooses the parameters c so that the weighted sum of squares of the residuals (or deviations) e_k = y_k − f(x_k, c) over all points is minimal; the resulting curve is then the fitted curve of the data in the weighted least squares sense. There are many successful methods for solving for the fitted curve: for linear models, the fitted curve is generally obtained by establishing and solving a system of equations for the parameters; for non-linear models, the required parameters are obtained by solving a non-linear system of equations or by optimization methods, which is sometimes called non-linear least squares fitting.
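As a worked example of linear-model least squares fitting as just described, the sketch below fits a quadratic y = c0 + c1·x + c2·x² to sample data by minimizing the sum of squared residuals; the data values are illustrative only.

```python
import numpy as np

# Observed pairs (x_i, y_i); illustrative data only.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 4.2, 8.8, 16.3])

# Design matrix of the linear model; columns are 1, x, x^2.
A = np.vander(x, 3, increasing=True)

# Least squares solution of A @ c ≈ y (minimizes the residual sum of squares).
c, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ c  # points on the fitted curve
```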
Example 4
On the basis of the above embodiment, the method of performing edge detection to find the edge of the object region includes: performing a convolution operation on the object region using the template of the edge detection operator in each preset direction and the logarithm-transformed gray value of each pixel in the neighborhood of the object region, to obtain a color level fixed value of the object region in each preset direction; obtaining the edge saliency value of the object region from the color level fixed values in the preset directions; comparing the edge saliency value with an edge saliency threshold, and taking points whose edge saliency value is greater than or equal to the threshold as edge points; and extracting the edge of the object region from the obtained edge points.
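The following Python sketch illustrates the flow of this edge detector: log-transform the gray values, convolve with one template per preset direction, combine the per-direction responses into an edge saliency value, and threshold. The templates, the number of directions and the combination rule are assumptions, since the patent's formulas appear only as images.

```python
import numpy as np
from scipy.ndimage import convolve

def edge_points(gray, threshold, lam=2.0):
    """Sketch of saliency-based edge detection: directional templates
    applied to log-transformed grey values, responses combined into a
    saliency value, then thresholded. lam plays the role of the
    adjustment coefficient (range 1-5 in the patent)."""
    log_gray = np.log1p(gray.astype(np.float64))
    # Example templates for three preset directions (assumed).
    templates = [
        np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),  # horizontal
        np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]]),  # vertical
        np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]]),  # diagonal
    ]
    responses = [convolve(log_gray, t) for t in templates]
    # Assumed combination rule: scaled magnitude of the responses.
    saliency = lam * np.sqrt(sum(r ** 2 for r in responses))
    return saliency >= threshold  # boolean mask of edge points
```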
In particular, edge detection is a fundamental problem in image processing and computer vision; its purpose is to identify points in a digital image where the brightness changes sharply. Significant changes in image attributes typically reflect important events and changes of properties, including (i) discontinuities in depth, (ii) discontinuities in surface orientation, (iii) changes in material properties and (iv) changes in scene illumination. Edge detection is a research area within image processing and computer vision, especially within feature extraction.
Image edge detection greatly reduces the data volume, eliminates information that can be considered irrelevant, and retains the important structural attributes of the image. Most edge detection methods fall into two categories: search-based and zero-crossing-based. Search-based methods detect boundaries by finding maxima and minima in the first derivative of the image, usually locating the boundary in the direction of the largest gradient. Zero-crossing-based methods find boundaries by locating zero crossings of the second derivative of the image, usually zero crossings of the Laplacian or of a nonlinear difference expression.
Example 5
On the basis of the above embodiment, the method for obtaining the edge saliency value of the object region from the color level fixed values in each preset direction includes: the edge saliency value of the object region is calculated using the following formulas:
[Formula images not reproduced: calculation of the edge saliency value.]
where r, g and b are the RGB values of the pixels in the object region, λ is an adjustment coefficient with a value range of 1 to 5, and A, B and C are the color level fixed values in the respective preset directions.
Specifically, an edge is a set of pixels around which the gray values of neighboring pixels change sharply; it is the most basic feature of an image. Edges exist between objects, between object and background, and between regions, so they are the most important basis for image segmentation. Because an edge marks a position and is insensitive to gradual gray-level change, it is also an important feature for image matching.
Edge detection and region division are two different image segmentation approaches, and they complement each other. Edge detection extracts the features of the discontinuous parts of an image and determines regions from their closed edges. Region division partitions the image into regions of similar characteristics, the boundaries between regions being the edges. The edge detection approach is more suitable for segmenting large images, because the image does not need to be processed region by region, pixel by pixel.
Edges can be roughly divided into two types. One is the step edge, where the gray values of the pixels on the two sides differ markedly; the other is the roof edge, located at the turning point where the gray value changes from increasing to decreasing. The main tool for edge detection is the edge detection template. We take a one-dimensional template as an example to examine how it functions.
The template's effect is to subtract the gray value of the left neighboring point from that of the right neighboring point and use the difference as the new value of the point. In regions of similar gray level, the result is close to 0; near an edge, where the gray value jumps clearly, the result is large. This template is an edge detector, mathematically defined as a gradient-based filter and also known as an edge operator. The gradient is directional and always perpendicular to the direction of the edge. A horizontally oriented template detects vertically oriented edges; if the image's edges are horizontally oriented, a vertically oriented gradient template is used instead.
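A tiny numeric demonstration of this one-dimensional template (right neighbor minus left neighbor) on a step edge, in Python:

```python
import numpy as np

# One-dimensional edge template: right neighbour minus left neighbour.
template = np.array([-1, 0, 1])
signal = np.array([10, 10, 10, 10, 80, 80, 80])  # step edge at index 4

# Reverse the kernel so np.convolve computes the correlation with the template.
response = np.convolve(signal, template[::-1], mode="same")
# Near-zero in the flat regions, large at the step edge:
# [10  0  0 70 70  0 -80]  (the first and last values are zero-padding artefacts)
```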
Example 6
On the basis of the previous embodiment, the method in step 7 of performing the second contour fitting on the motion path, based on the motion path and the fitting nodes arranged on it, to obtain the fitted contour of the motion path includes: generating ordered discrete points on the motion path based on the motion path and the fitting nodes arranged on it; and fitting the motion path based on the ordered discrete points to obtain the fitted contour of the motion path.
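A hedged Python sketch of this second fitting follows: the raw motion path is sampled at the fitting nodes to obtain ordered discrete points, and a smooth curve is fitted through them. The polynomial model and the node sampling rule are assumptions; the patent requires only that there are at least 5 fitting nodes.

```python
import numpy as np

def fit_motion_path(points, num_nodes=5, degree=3):
    """Sample the raw motion path at `num_nodes` fitting nodes (ordered
    discrete points) and fit a smooth parametric curve through them."""
    points = np.asarray(points, dtype=float)  # (N, 2) path positions per frame
    idx = np.linspace(0, len(points) - 1, num_nodes).astype(int)
    nodes = points[idx]                        # ordered discrete points
    # Fit x(s) and y(s) separately over the node parameter s.
    s = np.arange(num_nodes)
    cx = np.polyfit(s, nodes[:, 0], degree)
    cy = np.polyfit(s, nodes[:, 1], degree)
    dense = np.linspace(0, num_nodes - 1, 100)
    return np.column_stack([np.polyval(cx, dense), np.polyval(cy, dense)])
```

Compared with the raw path, the fitted curve changes more slowly and carries no burrs, which is what makes the subsequent path recognition cheaper.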
Example 7
On the basis of the previous embodiment, the method in step 4 of performing face recognition based on the found human body region to obtain the face recognition result includes: acquiring all face image information in the video image information corresponding to the human body region; extracting face feature information from each piece of face image information to obtain the face feature information group corresponding to the human body region; and recognizing the face based on the face feature information group to obtain the recognition result.
Specifically, a face recognition system mainly includes four components: face image acquisition and detection, face image preprocessing, face image feature extraction, and matching and recognition.
Face image acquisition: different face images, such as static images, dynamic images, and images at different positions and with different expressions, can be collected through the camera lens. When a user is within the shooting range of the acquisition device, the device automatically searches for and captures the user's face image.
Face detection: in practice, face detection mainly serves as preprocessing for face recognition, i.e., accurately calibrating the position and size of the face in the image. A face image contains rich pattern features, such as histogram features, color features, template features, structural features and Haar features. Face detection extracts this useful information and uses it to detect faces.
The mainstream face detection method adopts the Adaboost learning algorithm based on these features. Adaboost is a classification method that combines weak classifiers into a new, strong classifier.
In face detection, the Adaboost algorithm first picks out the rectangular features (weak classifiers) that best represent the face, constructs them into a strong classifier by weighted voting, and then connects several strong classifiers obtained by training in series into a cascade-structured classifier, which effectively improves detection speed.
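For illustration, the standard OpenCV cascade of boosted Haar classifiers — the same Adaboost cascade principle described above, though not necessarily the classifier used by the patent — can be run as follows ("frame.jpg" is a placeholder path):

```python
import cv2

# Cascade of boosted Haar classifiers; the cascade file ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Each stage of the cascade quickly rejects non-face windows, so most of
# the image is discarded by the cheap early stages.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```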
Face image feature extraction: the features usable by a face recognition system are generally divided into visual features, pixel statistical features, face image transform coefficient features, face image algebraic features, and so on. Face feature extraction, also known as face representation, is the process of modeling the features of a face. Methods for extracting face features fall into two main categories: knowledge-based representation methods, and representation methods based on algebraic features or statistical learning.
Knowledge-based representation methods mainly derive feature data helpful for face classification from the shape descriptions of facial organs and the distances between them; the feature components typically include the Euclidean distances, curvatures and angles between feature points. A human face is composed of parts such as the eyes, nose, mouth and chin, and geometric descriptions of these parts and of their structural relations can serve as important features for recognizing a face; these are called geometric features. Knowledge-based face representation mainly comprises geometric-feature-based methods and template matching methods.
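A small Python sketch of such geometric features — Euclidean distances and an angle computed from hypothetical facial landmark points (the chosen landmarks and ratios are illustrative, not the patent's feature set):

```python
import numpy as np

def geometric_features(landmarks):
    """Compute illustrative geometric face features from a dict of
    (x, y) landmark points: distances, a distance ratio, and an angle."""
    l_eye = np.array(landmarks["left_eye"])
    r_eye = np.array(landmarks["right_eye"])
    nose = np.array(landmarks["nose_tip"])
    mouth = np.array(landmarks["mouth_center"])
    eye_dist = np.linalg.norm(r_eye - l_eye)
    nose_mouth = np.linalg.norm(mouth - nose)
    # Angle at the nose tip between the directions to the two eyes.
    v1, v2 = l_eye - nose, r_eye - nose
    angle = np.degrees(np.arccos(
        np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))))
    return np.array([eye_dist, nose_mouth, eye_dist / nose_mouth, angle])
```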
Example 8
On the basis of the previous embodiment, the recognizing a face based on the face feature information group to obtain a recognition result includes: fusing the face feature information in the face feature information group to obtain fused feature information; calculating the similarity between the fusion characteristic information and the face characteristic information in a preset first database; and selecting the face feature information with the highest similarity in the first database as a recognition result.
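A minimal Python sketch of this matching step, assuming cosine similarity as the similarity measure (the patent does not name one):

```python
import numpy as np

def recognize(fused, database):
    """Match the fused feature vector against the first database
    (a dict mapping identity -> feature vector) and return the entry
    with the highest cosine similarity."""
    best_name, best_sim = None, -1.0
    for name, feat in database.items():
        sim = np.dot(fused, feat) / (np.linalg.norm(fused) * np.linalg.norm(feat))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name, best_sim
```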
Example 9
On the basis of the previous embodiment, the method for obtaining the fusion feature information by fusing the face feature information in the face feature information group includes: performing fusion calculation on the face feature information by using the following formula to obtain fusion feature information:
[Formula images not reproduced: calculation of the fusion feature information M.]
where M is the fusion feature information; L is the number of face feature information items; P is the face feature information; and T is the fusion group value of the face feature information, defined as a randomly generated array of fusion values.
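Since the fusion formulas appear only as images, the following Python sketch shows one plausible reading: the L feature vectors P are combined with a randomly generated fusion value array T, normalized so the fused vector M stays on the feature scale. The weighted-sum form is an assumption.

```python
import numpy as np

def fuse_features(P, seed=None):
    """Fuse the L face feature vectors in P (shape (L, d)) using a
    randomly generated fusion value array T, normalized to sum to 1."""
    P = np.asarray(P, dtype=float)
    L = P.shape[0]
    rng = np.random.default_rng(seed)
    T = rng.random(L)        # randomly generated fusion values
    T = T / T.sum()          # normalize the weights
    M = T @ P                # fused feature information
    return M
```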
Specifically, fusing features of different scales is an important means of improving segmentation performance. Low-level features have higher resolution and contain more position and detail information, but, having passed through fewer convolutions, they carry weaker semantics and more noise. High-level features have stronger semantic information but very low resolution and poor perception of detail. Fusing the two efficiently, keeping the advantages of each and discarding the drawbacks, is the key to improving a segmentation model.
Many works improve detection and segmentation performance by fusing multiple layers; according to whether fusion happens before or after prediction, these approaches are classified into early fusion and late fusion.
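A schematic Python contrast of the two orders of fusion and prediction (the feature sizes and the stand-in classifier are illustrative only):

```python
import numpy as np

# Early fusion: concatenate low- and high-level features before prediction.
low_level = np.random.rand(256)    # high resolution, detail-rich
high_level = np.random.rand(128)   # strong semantics, low resolution
early = np.concatenate([low_level, high_level])  # fuse first, predict once

def predict(feat):
    """Stand-in classifier (assumed): returns a single score."""
    return float(feat.mean())

early_score = predict(early)

# Late fusion: predict from each level separately, then combine the scores.
late_score = 0.5 * predict(low_level) + 0.5 * predict(high_level)
```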
Example 10
A collaborative intelligent security device based on polymorphic fitting, for implementing the method described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that the system provided in the foregoing embodiment is only illustrated by its division into functional units; in practical applications, the functions may be allocated to different functional units as needed, that is, the units or steps in the embodiments of the present invention may be further decomposed or combined. For example, the units of the foregoing embodiment may be combined into one unit or further decomposed into multiple sub-units, so as to complete all or part of the functions described above. The names of the units and steps involved in the embodiments of the present invention are only for distinguishing them and are not to be construed as unduly limiting the present invention.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art will appreciate that the various illustrative units and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both, and that programs corresponding to the units and method steps may be located in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether these functions are performed in electronic hardware or software depends on the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or unit/apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or unit/apparatus.
So far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Those skilled in the art may make equivalent modifications or substitutions of the related technical features without departing from the principle of the present invention, and the technical solutions after such modifications or substitutions will fall within the protective scope of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (9)

1. The collaborative intelligent security method based on polymorphic fitting is characterized by comprising the following steps:
Step 1: acquiring video image information of a target scene, and taking the first frame in the video image information as the initial image information; extracting all object regions in the initial image information;
Step 2: performing a first contour fitting on all object regions to obtain the fitted contours of all objects;
Step 3: identifying the fitted contours, and finding the human body fitted contour among all the fitted contours; finding the human body region in the object regions through the found human body fitted contour;
Step 4: performing face recognition based on the found human body region to obtain a face recognition result;
Step 5: taking the position of the human body region in the initial image information as an initial point, recording the positions of the human body region in the other video frames as intermediate points, and connecting all the intermediate points from the initial point in the time order of the video frames to obtain the original contour of the motion path;
Step 6: setting a plurality of fitting nodes on the original contour of the motion path, the number of fitting nodes being set according to a set value and being at least 5;
Step 7: performing a second contour fitting on the motion path based on the motion path and the fitting nodes arranged on it, to obtain the fitted contour of the motion path;
Step 8: identifying the motion path based on the obtained fitted contour of the motion path to obtain a motion path recognition result;
Step 9: judging whether to issue an early warning based on the obtained motion path recognition result and the face recognition result;
the method for extracting all object regions in the starting image information in the step 1 comprises the following steps: segmenting the starting image information using a segmentation threshold calculated by the following formula:
[Formula image not reproduced: calculation of the segmentation threshold H.]
where H is the calculated segmentation threshold, N is the number of pixels in the initial image information, C_j is the value of the j-th pixel, and α is an adjustment coefficient with a value range of 200 to 500; determining the background color according to the segmentation result; creating a multi-channel image, assigning the multi-channel image in the region corresponding to the background color a first value and the multi-channel image outside that region a second value, to obtain a binary image of the initial image information; and determining the background region and the object regions in the initial image information according to the binary image.
2. The method of claim 1, wherein the step 2 of performing a first contour fitting on all object regions to obtain the fitted contours of all objects comprises: carrying out edge detection to find the edge of the object area as a graph to be fitted; setting a plurality of fitting nodes on a graph to be fitted, and generating ordered discrete points on the graph to be fitted based on the geometric characteristics of the graph to be fitted and the set fitting nodes; and fitting the graph to be fitted based on the ordered discrete points to obtain a fitted contour.
3. The method of claim 2, wherein the performing edge detection to find the edge of the object region comprises: performing a convolution operation on the object region using the template of the edge detection operator in each preset direction and the logarithm-transformed gray value of each pixel in the neighborhood of the object region, to obtain a color level fixed value of the object region in each preset direction; obtaining the edge saliency value of the object region from the color level fixed values in the preset directions; comparing the edge saliency value with an edge saliency threshold, and taking points whose edge saliency value is greater than or equal to the threshold as edge points; and extracting the edge of the object region from the obtained edge points.
4. The method according to claim 3, wherein the obtaining the edge saliency value of the object region according to the fixed color level values of the object region in the preset directions comprises: the edge saliency value of the object region is calculated using the following formula:
[Formula image not reproduced: calculation of the edge saliency value.]
where r, g and b are the RGB values of the pixels in the object region, λ is an adjustment coefficient with a value range of 1 to 5, and A, B and C are the color level fixed values in the respective preset directions.
5. The method as claimed in claim 4, wherein the step 7 of performing the second contour fitting on the motion path based on the motion path and the fitting node set on the motion path to obtain the fitted contour of the motion path comprises: generating ordered discrete points on the motion path based on the motion path and a fitting node arranged on the motion path; and fitting the graph to be fitted based on the ordered discrete points to obtain a fitting contour of the motion path.
6. The method as claimed in claim 5, wherein the step 4 of performing face recognition based on the found human body region to obtain the face recognition result comprises: acquiring all face image information in video image information corresponding to the human body area; respectively extracting face characteristic information from all face image information to obtain a face characteristic information group corresponding to the human body region; and identifying the face based on the face characteristic information group to obtain an identification result.
7. The method of claim 6, wherein the recognizing the face based on the face feature information group to obtain a recognition result comprises: fusing the face feature information in the face feature information group to obtain fused feature information; calculating the similarity between the fusion characteristic information and the face characteristic information in a preset first database; and selecting the face feature information with the highest similarity in the first database as a recognition result.
8. The method as claimed in claim 7, wherein the method for fusing the face feature information in the face feature information group to obtain fused feature information comprises: performing fusion calculation on the face feature information by using the following formula to obtain fusion feature information:
[Formula image not reproduced: calculation of the fusion feature information M.]
wherein M is the fusion feature information; L is the number of face feature information items; P is the face feature information; and T is the fusion group value of the face feature information, defined as a randomly generated array of fusion values.
9. A collaborative intelligent security device based on polymorphic fitting, for implementing the method according to any one of claims 1 to 8.
CN202110763475.6A 2021-07-06 2021-07-06 Collaborative intelligent security method and device based on polymorphic fitting Active CN113449663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110763475.6A CN113449663B (en) 2021-07-06 2021-07-06 Collaborative intelligent security method and device based on polymorphic fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110763475.6A CN113449663B (en) 2021-07-06 2021-07-06 Collaborative intelligent security method and device based on polymorphic fitting

Publications (2)

Publication Number Publication Date
CN113449663A CN113449663A (en) 2021-09-28
CN113449663B true CN113449663B (en) 2022-06-03

Family

ID=77815196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110763475.6A Active CN113449663B (en) 2021-07-06 2021-07-06 Collaborative intelligent security method and device based on polymorphic fitting

Country Status (1)

Country Link
CN (1) CN113449663B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118411529B (en) * 2024-07-04 2024-09-13 中联重科股份有限公司 Image-based operation early warning area identification method, early warning method and operation machine

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729614A (en) * 2012-10-16 2014-04-16 上海唐里信息技术有限公司 People recognition method and device based on video images
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
CN110009659A (en) * 2019-04-12 2019-07-12 武汉大学 Personage's video clip extracting method based on multiple target motion tracking
CN112153300A (en) * 2020-09-24 2020-12-29 广州云从洪荒智能科技有限公司 Multi-view camera exposure method, device, equipment and medium
CN113011367A (en) * 2021-03-31 2021-06-22 广州大学 Abnormal behavior analysis method based on target track

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156692B (en) * 2015-03-25 2019-12-13 阿里巴巴集团控股有限公司 method and device for positioning human face edge feature points
CN109410026A (en) * 2018-02-09 2019-03-01 深圳壹账通智能科技有限公司 Identity identifying method, device, equipment and storage medium based on recognition of face

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729614A (en) * 2012-10-16 2014-04-16 上海唐里信息技术有限公司 People recognition method and device based on video images
CN105787469A (en) * 2016-03-25 2016-07-20 广州市浩云安防科技股份有限公司 Method and system for pedestrian monitoring and behavior recognition
CN110009659A (en) * 2019-04-12 2019-07-12 武汉大学 Personage's video clip extracting method based on multiple target motion tracking
CN112153300A (en) * 2020-09-24 2020-12-29 广州云从洪荒智能科技有限公司 Multi-view camera exposure method, device, equipment and medium
CN113011367A (en) * 2021-03-31 2021-06-22 广州大学 Abnormal behavior analysis method based on target track

Also Published As

Publication number Publication date
CN113449663A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN107563372B (en) License plate positioning method based on deep learning SSD frame
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN109145742B (en) Pedestrian identification method and system
CN101389004B (en) Moving target classification method based on on-line study
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
CN107316031A (en) The image characteristic extracting method recognized again for pedestrian
Ogale A survey of techniques for human detection from video
CN109711416B (en) Target identification method and device, computer equipment and storage medium
CN108268867B (en) License plate positioning method and device
CN101142584A (en) Method for facial features detection
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN105678213B (en) Dual-mode mask person event automatic detection method based on video feature statistics
Xu et al. Real-time pedestrian detection based on edge factor and Histogram of Oriented Gradient
CN108537143B (en) A kind of face identification method and system based on key area aspect ratio pair
CN114359876B (en) Vehicle target identification method and storage medium
CN111353385B (en) Pedestrian re-identification method and device based on mask alignment and attention mechanism
CN104978567A (en) Vehicle detection method based on scenario classification
CN104299009A (en) Plate number character recognition method based on multi-feature fusion
CN111008574A (en) Key person track analysis method based on body shape recognition technology
Zang et al. Traffic lane detection using fully convolutional neural network
Akbarzadeh et al. Design and matlab simulation of Persian license plate recognition using neural network and image filtering for intelligent transportation systems
Tao et al. Smoke vehicle detection based on spatiotemporal bag-of-features and professional convolutional neural network
CN113449663B (en) Collaborative intelligent security method and device based on polymorphic fitting
CN113177439A (en) Method for detecting pedestrian crossing road guardrail
CN117496570A (en) Face recognition method and system based on multi-scale convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant