CN112990079A - Matching degree calculation method based on big data and artificial intelligence

Matching degree calculation method based on big data and artificial intelligence

Info

Publication number
CN112990079A
CN112990079A (application CN202110361074.8A)
Authority
CN
China
Prior art keywords
matching
target
image
sequence number
target background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110361074.8A
Other languages
Chinese (zh)
Inventor
陈晶晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202110361074.8A
Publication of CN112990079A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752 Contour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides a matching degree calculation method based on big data and artificial intelligence, and relates to the technical field of face recognition. In the method, first, for each sequence number in the sorted results, the mean of the matching results corresponding to that sequence number across a plurality of matching result sets is calculated, giving a matching average for that sequence number. Secondly, the maximum matching average among the matching averages corresponding to the sequence numbers is determined, along with the target sequence number it corresponds to. Then, the matching averages corresponding to the target sequence number and to all sequence numbers before it are acquired, and a weighted calculation is performed on them to obtain a weighted average, where an earlier sequence number's matching average receives a larger weight coefficient than a later one's. Finally, the weighted average is taken as the target matching result. Based on this method, the problem of low reliability in the matching process of existing face recognition can be solved.

Description

Matching degree calculation method based on big data and artificial intelligence
Technical Field
The application relates to the technical field of face recognition, in particular to a matching degree calculation method based on big data and artificial intelligence.
Background
To meet ever-higher security requirements, the prior art adopts face recognition for verification: a face image is collected to determine whether a user is a legitimate user (for example, by comparing the currently collected face image with the face image stored when the account was registered). However, the inventor has found through research that existing face recognition suffers from low reliability.
Disclosure of Invention
In view of the above, an object of the present application is to provide a face recognition method and a face recognition platform based on big data and artificial intelligence, so as to solve the problem of low reliability in existing face recognition.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
a face recognition method based on big data and artificial intelligence is applied to a face recognition platform, and comprises the following steps:
performing background segmentation processing on a target image to be recognized to obtain a target background image and a target face image, wherein the target image is formed by shooting a target object;
performing matching degree calculation processing on the target background image and each frame of reference image in a first target database to obtain a target matching result, wherein each frame of reference image is formed from a background image historically captured when face recognition was unsuccessful;
determining a target recognition rule from a plurality of pre-formed recognition rules based on the target matching result, wherein the recognition rules have different recognition accuracies;
and recognizing the target face image based on the target recognition rule.
On this basis, an embodiment of the present application further provides a face recognition platform, including:
a memory for storing a computer program;
and a processor, connected with the memory, for executing the computer program so as to implement the face recognition method based on big data and artificial intelligence.
According to the face recognition method and the face recognition platform based on big data and artificial intelligence provided above, the target image to be recognized is segmented to obtain a target background image and a target face image, and the target face image is then recognized based on a target recognition rule determined through the target background image. Because the target recognition rule is determined based on the target background image, the reliability of recognition can be improved compared with the prior art, in which recognition is performed directly under a fixed recognition rule; this solves the problem of low reliability in existing face recognition.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a structure of a face recognition platform according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of steps included in a face recognition method based on big data and artificial intelligence according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, an embodiment of the present application provides a face recognition platform. Wherein the face recognition platform may include a memory and a processor.
In detail, the memory and the processor are electrically connected, directly or indirectly, to realize data transmission or interaction. For example, they may be electrically connected via one or more communication buses or signal lines. The memory may store at least one software functional module, which may exist in the form of software or firmware. The processor may be configured to execute the executable computer program stored in the memory, such as said software functional module, so as to implement the face recognition method based on big data and artificial intelligence provided by the embodiments (described later) of the present application.
Alternatively, the Memory may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
Also, the Processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
It will be appreciated that the face recognition platform may be a server with data processing capabilities.
Moreover, the structure shown in fig. 1 is only an illustration, and the face recognition platform may further include more or fewer components than those shown in fig. 1, or have a different configuration from that shown in fig. 1, for example, may further include a communication unit for information interaction with other devices.
With reference to fig. 2, an embodiment of the present application further provides a face recognition method based on big data and artificial intelligence, which is applicable to the face recognition platform. The method steps defined by the flow related to the face recognition method based on big data and artificial intelligence can be realized by the face recognition platform.
The specific process shown in FIG. 2 will be described in detail below.
Step S110, carrying out background segmentation processing on a target image to be recognized to obtain a target background image and a target face image.
In this embodiment, when a target image to be recognized is acquired, the face recognition platform may perform background segmentation processing on the target image, so that a target background image and a target face image may be obtained, that is, the target image is segmented into the target background image and the target face image.
The target image may be formed by shooting a target object. For example, when a terminal device needs to log in to a target account in response to a user operation, the terminal device first collects a face image of the user and then sends that face image to the face recognition platform for verification.
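By way of illustration only, the following Python sketch shows one way step S110 might be realized. The embodiment does not prescribe a segmentation technique, so the Haar-cascade face detector and the simple rectangular face/background split below are assumptions of this sketch, not part of the disclosed method.

import cv2
import numpy as np

def segment_target_image(target_image):
    # Locate the face with OpenCV's stock Haar cascade (an assumption of this
    # sketch; the embodiment does not fix a detector).
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found in target image")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
    target_face_image = target_image[y:y + h, x:x + w].copy()
    target_background_image = target_image.copy()
    target_background_image[y:y + h, x:x + w] = 0  # blank out the face region
    return target_background_image, target_face_image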
And step S120, calculating the matching degree of the target background image and each frame of reference image in the first target database to obtain a target matching result.
In this embodiment, after obtaining the target background image based on step S110, the face recognition platform may perform matching degree calculation processing on the target background image and each frame of reference image in a first target database (which may be a local database of the face recognition platform or a remote database), so as to obtain a target matching result.
Each frame of reference image is formed from a background image historically captured when face recognition was unsuccessful. Accordingly, a higher matching degree in the target matching result indicates lower authenticity of the target background image, and therefore lower authenticity of the target face image as well.
Step S130, determining a target recognition rule among a plurality of recognition rules formed in advance based on the target matching result.
In this embodiment, after obtaining the target matching result based on step S120, the face recognition platform may determine a target recognition rule from the plurality of pre-formed recognition rules based on the target matching result.
Wherein each of the recognition rules has a different recognition accuracy. That is, the higher the matching degree in the target matching result is, the higher the recognition accuracy of the corresponding target recognition rule may be.
And step S140, identifying the target face image based on the target identification rule.
In this embodiment, after determining the target recognition rule based on step S130, the face recognition platform may perform recognition on the target face image based on the target recognition rule (e.g., perform matching recognition with a plurality of face images stored in advance).
Based on the above method, because the target recognition rule is determined based on the target background image, the reliability of recognition can be improved compared with the prior art, in which recognition is performed directly under a fixed recognition rule; this solves the problem of low reliability in existing face recognition. In addition, because recognition rules with different accuracies are adopted under different conditions, the reliability of the recognition result and the recognition efficiency can both be taken into account (a rule with low recognition accuracy may yield unreliable results, while a rule with high recognition accuracy may recognize slowly).
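As a concrete, purely hypothetical reading of step S140, the sketch below reduces a recognition rule to an embedding comparison with a rule-specific acceptance threshold; the embed_face() extractor and the "threshold" field are assumptions introduced for illustration, not elements of the disclosure.

import numpy as np

def recognize(target_face_image, stored_embeddings, rule):
    # embed_face() is a hypothetical feature extractor; the "threshold" field
    # of `rule` is an assumed stand-in for the rule's recognition accuracy
    # (a higher-accuracy rule uses a stricter threshold).
    query = embed_face(target_face_image)
    query = query / np.linalg.norm(query)
    for ref in stored_embeddings:
        ref = ref / np.linalg.norm(ref)
        if float(np.dot(query, ref)) >= rule["threshold"]:
            return True  # matches an enrolled face image
    return False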
In the first aspect, it should be noted that the specific manner of performing the matching degree calculation in step S120 is not limited and may be selected according to actual application requirements.
For example, in an alternative example, in order to improve the reliability of the obtained target matching result, step S120 may include sub-steps 10 to 21, which are described in detail below.
And a substep 10, dividing the target background image into a plurality of target background sub-images according to the association relationship with the face parts in the target face image (for example, in an alternative example, it may be divided into three target background sub-images, where the first target background sub-image includes the background image above the head, the second target background sub-image includes the background image on the left side of the head, and the third target background sub-image includes the background image on the right side of the head).
And a substep 11, converting each target background sub-image into a target background binarization sub-image (i.e. setting pixel values larger than a target pixel threshold to 255 and all other pixel values to 0), so as to obtain a plurality of target background binarization sub-images.
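The binarization of sub-step 11 admits a direct rendering; in the sketch below the target pixel threshold is a free parameter of the method, and the value 127 is only a placeholder.

import numpy as np

def binarize(sub_image, target_pixel_threshold=127):
    # Sub-step 11: pixels above the target pixel threshold become 255,
    # all others become 0. Color inputs are first reduced to grayscale.
    gray = sub_image if sub_image.ndim == 2 else sub_image.mean(axis=2)
    return np.where(gray > target_pixel_threshold, 255, 0).astype(np.uint8)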
And a substep 12, performing contour extraction processing on each target background binary sub-image to obtain a contour feature corresponding to the target background binary sub-image (in this way, multiple contour features can be obtained).
And a substep 13, performing pixel value sorting processing on a binarization region surrounded by the contour feature corresponding to each target background binarization sub-image (wherein, if the contour feature is in a non-closed shape, two end points may be connected by a shortest straight line segment to form a closed shape), according to a predetermined target path (for example, scanning line by line from left to right), to obtain a pixel value sequence (for example, 255, 0, 255, 0) corresponding to the target background binarization sub-image.
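Sub-steps 12 and 13 can be illustrated with OpenCV, assuming the dominant extracted contour serves as the contour feature and a row-by-row, left-to-right scan serves as the predetermined target path. Note that cv2.findContours already returns closed contours, so the "connect the two end points" fallback is not needed in this sketch.

import cv2
import numpy as np

def pixel_value_sequence(binarized):
    # Sub-step 12: extract contours; take the largest as the contour feature.
    contours, _ = cv2.findContours(binarized, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    contour = max(contours, key=cv2.contourArea)
    # Fill the region enclosed by the contour feature.
    mask = np.zeros_like(binarized)
    cv2.drawContours(mask, [contour], -1, color=255, thickness=-1)
    # Sub-step 13: scan the enclosed region line by line, left to right
    # (boolean indexing flattens in row-major order), yielding a sequence
    # such as 255, 0, 255, 0, ...
    return [int(v) for v in binarized[mask == 255]]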
And a substep 14, determining, for each target background binarization sub-image, a target matching model among a plurality of preset matching models based on the association relationship between the target background binarization sub-image and each face part in the target face image (each matching model may be obtained by training on different sample images; for example, the target matching model corresponding to the binarization sub-image above the head may be trained on the background above the head in the obtained sample images, and the one corresponding to the binarization sub-image on the left side of the head may be trained on the background on the left side of the head).
And a substep 15, performing, for each target background binarization sub-image, sequence matching processing between the pixel value sequence corresponding to that sub-image and each frame of reference image in the first target database, based on the target matching model corresponding to that sub-image (for a frame of reference image, it may be scanned according to the aforementioned target path to form a reference pixel sequence, and a similarity or matching degree is then calculated between the reference pixel sequence and the pixel value sequence), to obtain a plurality of matching results corresponding to the sub-image.
And substep 16, regarding each target background binarization sub-image, taking a plurality of matching results corresponding to the target background binarization sub-image as a matching result set, so as to obtain a plurality of matching result sets (that is, each target background binarization sub-image corresponds to a matching result set).
And a substep 17, for each matching result set, sorting the matching results in the matching result set according to the sequence from high matching degree to low matching degree.
Substep 18, for each sequence number in the sorting, performing mean calculation on the matching results corresponding to that sequence number across the multiple matching result sets to obtain a matching average corresponding to the sequence number (for example, with 3 matching result sets of 3 matching results each, the first matching average is the mean of the first matching result of each of the three sets, the second matching average is the mean of the three second matching results, and the third matching average is the mean of the three third matching results).
And a substep 19 of determining the maximum matching average among the multiple matching averages corresponding to the multiple sequence numbers, and determining the target sequence number corresponding to that maximum (for example, determining the maximum among the first, second and third matching averages; if the second matching average is the largest, the target sequence number is 2).
Substep 20, obtaining the matching averages corresponding to the target sequence number and to all sequence numbers before it (in the above example, with target sequence number 2, the second and first matching averages are obtained), and performing a weighted calculation on the obtained matching averages to obtain a weighted average, where an earlier sequence number's matching average receives a larger weight coefficient than a later one's (for example, the weight coefficient of the first matching average is greater than that of the second matching average).
A substep 21 of using the weighted average as a target matching result (in some examples, the weighted average may be normalized and then used as a target matching result).
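A minimal sketch of sub-steps 17 through 21 follows. It assumes every matching result set has the same number of matching results (one per frame of reference image), and it uses linearly decreasing, normalized weights, which is only one admissible choice: the method merely requires earlier sequence numbers to receive larger weight coefficients.

import numpy as np

def target_matching_result(matching_result_sets):
    # Sub-step 17: sort each result set from high to low matching degree.
    ranked = np.sort(np.asarray(matching_result_sets, dtype=float), axis=1)[:, ::-1]
    # Sub-step 18: mean over the result sets, per sequence number.
    matching_averages = ranked.mean(axis=0)
    # Sub-step 19: the target sequence number is where the mean peaks.
    target_idx = int(np.argmax(matching_averages))
    # Sub-steps 20-21: weighted average of the averages up to and including
    # the target sequence number, earlier sequence numbers weighted more.
    head = matching_averages[:target_idx + 1]
    weights = np.arange(len(head), 0, -1, dtype=float)
    weights /= weights.sum()
    return float(np.dot(weights, head))

# Example with 3 matching result sets of 3 matching degrees each:
# target_matching_result([[0.2, 0.9, 0.5], [0.8, 0.1, 0.6], [0.7, 0.4, 0.3]])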
For another example, in another alternative example, in order to improve the reliability of the obtained target matching result, step S120 may include sub-steps 22 to 31, which are described in detail below.
And a substep 22 of converting the target background image into a target background binary image.
And a substep 23, performing contour extraction processing on the target background binary image to obtain a contour feature corresponding to the target background image.
Substep 24, taking one endpoint pixel point of the contour feature as a starting point and the other endpoint pixel point as an end point, performing sliding window processing according to a preset number of pixel points to obtain a plurality of pixel point tracks, wherein each pixel point track includes the preset number of pixel points and every two adjacent pixel point tracks overlap in their correspondingly ordered pixel points (for example, for a first pixel point track and a second pixel point track, the first pixel point of the first track is adjacent to the first pixel point of the second track, and the last pixel point of the first track is adjacent to the last pixel point of the second track; equivalently, the second pixel point of the first track coincides with the first pixel point of the second track, and the last pixel point of the first track coincides with the penultimate pixel point of the second track). If the contour feature is a closed track, any two adjacent pixel points are taken as the two endpoint pixel points (in this case, among the pixel point tracks obtained by the sliding window processing, only the first track includes the endpoint pixel point serving as the starting point, and only the last track includes the endpoint pixel point serving as the end point).
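Sub-step 24 is an ordinary stride-1 sliding window over the ordered contour pixels, as the following sketch shows; for a closed contour feature, the caller is assumed to have already chosen two adjacent pixel points as the endpoint pixel points.

def pixel_point_tracks(contour_points, preset_count):
    # contour_points: ordered pixels of the contour feature, from the starting
    # endpoint pixel to the ending endpoint pixel. Adjacent tracks overlap in
    # all but one pixel (stride-1 sliding window).
    if len(contour_points) < preset_count:
        return []
    return [contour_points[i:i + preset_count]
            for i in range(len(contour_points) - preset_count + 1)]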
And a substep 25, performing, for each pixel point track, track matching processing between the pixel point track and each frame of reference image in the first target database (for a frame of reference image, contour extraction processing may be performed on it to obtain a reference contour, and a similarity or matching degree is then calculated between the reference contour and the pixel point track) to obtain a plurality of matching results corresponding to the pixel point track, wherein there are a plurality of frames of reference images.
And a substep 26, regarding each pixel point track, using a plurality of matching results corresponding to the pixel point track as a matching result set to obtain a plurality of matching result sets.
And a substep 27, for each matching result set, sorting the matching results in the matching result set according to the sequence from high matching degree to low matching degree.
And a substep 28, for each sequence number in the sequence, performing average calculation on the matching results corresponding to the sequence number in the multiple matching result sets to obtain a matching average corresponding to the sequence number.
And a substep 29 of determining a maximum matching average value among the multiple matching average values corresponding to the multiple sequence numbers, and determining a target sequence number corresponding to the maximum matching average value.
And a substep 30 of obtaining the target sequence number and a matching average value corresponding to all sequence numbers before the target sequence number, and performing weighting calculation based on the obtained matching average value to obtain a weighted average value, wherein the weight coefficient of the matching average value with the sequence number before is greater than the weight coefficient of the matching average value with the sequence number after.
And a substep 31 of using the weighted average as a target matching result.
In the second aspect, it should be noted that the specific manner of determining the target recognition rule in step S130 is not limited and may be selected according to actual application requirements.
For example, in an alternative example, step S130 may include the following steps:
the method comprises the steps that firstly, a second target database is searched based on the target matching result to obtain target security level information, wherein the second target database holds a correspondence between a plurality of different matching results and a plurality of pieces of different security level information, and the higher the matching degree of the target matching result is, the lower the security level of the corresponding target security level information is;
and secondly, one recognition rule is determined from a plurality of pre-formed recognition rules as the target recognition rule based on the target security level information, wherein the lower the security level of the target security level information is, the lower the recognition accuracy of the corresponding target recognition rule is.
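For illustration, the sketch below implements the two lookups with the monotonic relations stated in this passage (a higher matching degree maps to a lower security level, and a lower security level maps to a lower-accuracy rule). The cutoffs, the convention that "first-level" denotes the highest security, and the rule table are all assumptions of this sketch.

import bisect

LEVEL_CUTOFFS = [0.3, 0.7]  # hypothetical matching-degree cutoffs
# Assumption: "first-level" denotes the highest security level.
LEVELS = ["first-level", "second-level", "third-level"]
RULES_BY_LEVEL = {  # hypothetical rule table: lower level -> lower accuracy
    "first-level": {"accuracy": "high"},
    "second-level": {"accuracy": "medium"},
    "third-level": {"accuracy": "low"},
}

def determine_target_rule(target_matching_degree):
    # Step 1: higher matching degree -> lower security level.
    level = LEVELS[bisect.bisect_right(LEVEL_CUTOFFS, target_matching_degree)]
    # Step 2: lower security level -> lower-accuracy recognition rule.
    return RULES_BY_LEVEL[level]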
Optionally, in the above example, the specific implementation manner of the first step included in step S130 is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, to improve the reliability of the determined target security level information, it may be derived based on the following steps:
firstly, the acquired historical behavior data of a target user is parsed to obtain a target analysis result; then, the second target database is searched based on the target analysis result and the target matching result to obtain the target security level information (for example, the normalized values of the target analysis result and the target matching result may be summed to obtain a sum value, and the target security level information corresponding to the sum value, such as first-level, second-level or third-level, is then looked up based on the correspondence pre-formed in the second target database).
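A minimal sketch of this normalize-sum-lookup step follows, assuming both inputs are already normalized to [0, 1] and the correspondence is a simple bucket table; the boundaries and level names are placeholders.

import bisect

SUM_CUTOFFS = [0.8, 1.4]  # hypothetical cutoffs over the sum range [0, 2]
LEVELS_BY_BUCKET = ["first-level", "second-level", "third-level"]

def lookup_security_level(target_analysis_result, target_matching_result):
    # Both inputs are assumed normalized to [0, 1] before summation.
    total = target_analysis_result + target_matching_result
    return LEVELS_BY_BUCKET[bisect.bisect_right(SUM_CUTOFFS, total)]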
It can be understood that, in the above example, the specific manner of parsing the historical behavior data is not limited and may be selected according to actual application requirements; this embodiment provides the following three examples of the parsing process.
In the first example, in order to improve the reliability of the parsing result so that the determined target recognition rule has higher reliability, the parsing process may be performed based on the following sub-steps.
A substep 40 of obtaining first type historical behavior data and second type historical behavior data of the target user, where the first type historical behavior data includes the time information of each face recognition performed through the face recognition platform by the target account pre-registered on the face recognition platform by the target user (for example, the time information of face recognition performed under the target account by a device A, by a device B, and by a device C), and the second type historical behavior data includes the time information of face recognition performed through the face recognition platform by the target device that shot the target image, based on at least one account (for example, the time information of face recognition performed by the target device under the target account, and under other accounts).
A substep 41, performing sliding window processing, according to a first preset number, on the plurality of pieces of time information included in the first type historical behavior data, with the earliest time information as the starting point and the latest time information as the end point, to obtain a plurality of first time sequences, where each first time sequence includes the first preset number of pieces of time information and every two adjacent first time sequences overlap in their correspondingly ordered time information (for example, if the time information is January 11, January 13, January 15, January 17, January 19, January 25 and February 1 of 2020, and the first preset number is 5, then 3 first time sequences are obtained: (January 11, January 13, January 15, January 17, January 19, 2020), (January 13, January 15, January 17, January 19, January 25, 2020), and (January 15, January 17, January 19, January 25, February 1, 2020)).
And a substep 42, determining, for each first time sequence, a piece of predicted time information after the latest time information in that sequence based on the time variation trend of the sequence, thereby obtaining the predicted time information corresponding to each first time sequence (for example, for the sequence (January 11, January 13, January 15, January 17, January 19, 2020), since the trend is a stable interval of about two days, the predicted time information January 21, 2020 can be obtained; it can be understood that, in practical applications, the first preset number may be made relatively large, such as at least several tens, to ensure the accuracy of prediction).
And a substep 43, calculating, for each first time sequence, a first prediction error value between the predicted time information corresponding to that sequence and the corresponding target time information, where the target time information corresponding to each first time sequence is the latest time information in the next first time sequence, and the target time information corresponding to the last first time sequence is the time information at which the target image was acquired (for example, with the predicted time information January 21, 2020 and the corresponding target time information January 25, 2020 from the above example, a first prediction error value of 4 days is obtained).
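Sub-steps 41 to 43 can be sketched as follows, assuming the "time variation trend" is modeled as the mean interval of a window, so the prediction extrapolates one interval past the window's latest time; the dates reproduce the example above.

from datetime import date

def first_time_sequences(times, preset_count):
    # Stride-1 sliding windows over chronologically sorted time information
    # (sub-step 41).
    times = sorted(times)
    return [times[i:i + preset_count]
            for i in range(len(times) - preset_count + 1)]

def predict_next(window):
    # Sub-step 42: extrapolate one mean interval past the window's latest time.
    mean_interval = (window[-1] - window[0]) / (len(window) - 1)
    return window[-1] + mean_interval

def prediction_error_days(predicted, target):
    # Sub-step 43: first prediction error value, in days.
    return abs((target - predicted).days)

times = [date(2020, 1, d) for d in (11, 13, 15, 17, 19, 25)] + [date(2020, 2, 1)]
windows = first_time_sequences(times, 5)
# The first window predicts 2020-01-21; its target time (the latest time of
# the next window) is 2020-01-25, so the first prediction error value is 4 days.
assert prediction_error_days(predict_next(windows[0]), windows[1][-1]) == 4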
Substep 44, performing, for each first prediction error value, weighting coefficient determination processing on it based on a preset weighting coefficient determination rule to obtain its first weighting coefficient, wherein, among the obtained first weighting coefficients, the coefficients increase sequentially from the first one to the middle one according to a first preset increasing coefficient and decrease sequentially from the middle one to the last one according to a first preset decreasing coefficient, the first preset decreasing coefficient is smaller than the first preset increasing coefficient, and all the first weighting coefficients sum to 1 (the middle coefficient is the weighting coefficient of the first prediction error value whose target time information is the time information of the first unsuccessful face recognition, or, failing that, of the largest first prediction error value other than the first and the last ones).
And a substep 45 of performing a weighted calculation process to obtain a first target error value based on each of the first prediction error values and each of the corresponding first weighting coefficients.
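The weight construction of sub-steps 44 and 45 might look like the sketch below, where the increasing coefficient inc, the decreasing coefficient dec (with dec < inc), and the middle index are free parameters of the method; the raw weights are normalized at the end so they sum to 1.

import numpy as np

def first_weighting_coefficients(n, mid, inc=0.2, dec=0.1):
    # Sub-step 44: raw weights rise by `inc` up to the middle coefficient and
    # fall by `dec` afterwards (dec < inc), then are normalized to sum to 1.
    # Assumes inc/dec are small enough that no raw weight becomes non-positive.
    assert dec < inc and 0 <= mid < n
    raw = np.empty(n)
    raw[:mid + 1] = 1.0 + inc * np.arange(mid + 1)           # rising segment
    raw[mid + 1:] = raw[mid] - dec * np.arange(1, n - mid)   # falling segment
    return raw / raw.sum()

def first_target_error_value(error_values, mid):
    # Sub-step 45: weighted combination of the first prediction error values.
    weights = first_weighting_coefficients(len(error_values), mid)
    return float(np.dot(weights, error_values))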
And a substep 46, performing sliding window processing, according to a second preset number, on the plurality of pieces of time information included in the second type of historical behavior data, with the earliest time information as the starting point and the latest time information as the end point, to obtain a plurality of second time sequences, where each second time sequence includes the second preset number of pieces of time information, every two adjacent second time sequences overlap in their correspondingly ordered time information, and the second preset number is greater than the first preset number.
And a substep 47, determining, for each second time series, a predicted time information after the latest time information in the second time series based on the time variation trend of the second time series, to obtain the predicted time information corresponding to each second time series.
And a substep 48, calculating a second prediction error value between the predicted time information corresponding to each second time sequence and the corresponding target time information for each second time sequence, where the target time information corresponding to each second time sequence is the latest time information in the next second time sequence, and the target time information corresponding to the last second time sequence is the time information for acquiring the target image.
And a substep 49, performing weighting coefficient determination processing on each second prediction error value based on a preset weighting coefficient determination rule to obtain a second weighting coefficient for it, wherein, among the obtained second weighting coefficients, the coefficients increase sequentially from the first one to the last one according to a second preset increasing coefficient, and all the second weighting coefficients sum to 1.
And a substep 50 of performing a weighted calculation process to obtain a second target error value based on each of the second prediction error values and each of the corresponding second weighting coefficients.
And a substep 51 of calculating a weighted sum of the first target error value and the second target error value, wherein the weight coefficient of the second target error value decreases as the number of accounts used for face recognition through the face recognition platform in the second type of historical behavior data increases, and the weight coefficients of the first and second target error values sum to 1.
And a substep 52 of using the difference between the weighted sum and the last first prediction error value as the target analysis result, wherein the larger the difference is, the lower the security level of the determined target security level information is (for example, in an alternative example, the difference between the weighted sum and the last first prediction error value may be calculated first; a target range value is then determined among a plurality of preset range values based on the difference, and that target range value is used as the target analysis result, so that different differences falling within the same range value lead to the same target security level information).
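The optional bucketing in sub-step 52 can be sketched as follows; the range boundaries are placeholders, and the bucket index stands in for the preset target range value.

import bisect

RANGE_BOUNDARIES = [1.0, 3.0, 7.0]  # hypothetical range cutoffs, in days

def target_analysis_result(weighted_sum, last_first_prediction_error):
    difference = weighted_sum - last_first_prediction_error
    # All differences falling in the same bucket yield the same target range
    # value, and therefore the same target security level information later.
    return bisect.bisect_right(RANGE_BOUNDARIES, difference)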
In the second example, in order to improve the efficiency of analysis while ensuring certain reliability of the determined target analysis result, analysis processing may be performed based on the following sub-steps.
And a substep 60 of obtaining the first type historical behavior data of the target user, where the first type historical behavior data includes the time information of each face recognition performed through the face recognition platform by the target account pre-registered on the face recognition platform by the target user.
And a substep 61, performing sliding window processing, according to a first preset number, on the plurality of pieces of time information included in the first type historical behavior data, with the earliest time information as the starting point and the latest time information as the end point, to obtain a plurality of first time sequences, where each first time sequence includes the first preset number of pieces of time information and every two adjacent first time sequences overlap in their correspondingly ordered time information.
And a substep 62, for each of the first time series, determining a predicted time information after the latest time information in the first time series based on the time variation trend of the first time series, and obtaining the predicted time information corresponding to each of the first time series.
And a substep 63, calculating, for each of the first time sequences, a first prediction error value between the predicted time information corresponding to the first time sequence and the corresponding target time information, where the target time information corresponding to each of the first time sequences is the latest time information in the next first time sequence, and the target time information corresponding to the last first time sequence is the time information for acquiring the target image.
And a substep 64, performing weighting coefficient determination processing on each first prediction error value based on a preset weighting coefficient determination rule to obtain a first weighting coefficient for it, wherein, among the obtained first weighting coefficients, the coefficients increase sequentially from the first one to the middle one according to a first preset increasing coefficient and decrease sequentially from the middle one to the last one according to a first preset decreasing coefficient, the first preset decreasing coefficient is smaller than the first preset increasing coefficient, and all the first weighting coefficients sum to 1.
Substep 65 performs a weighted calculation process to obtain a first target error value based on each of the first prediction error values and each of the corresponding first weighting coefficients.
Substep 66, using a difference value between the first target error value and the last first prediction error value as a target analysis result, wherein the larger the difference value is, the lower the security level of the determined target security level information is.
In the third example, in order to sufficiently improve the efficiency of performing the analysis processing, the analysis processing may be performed based on the following sub-steps.
And a substep 70 of obtaining the second type historical behavior data of the target user, where the second type historical behavior data includes the time information of face recognition performed through the face recognition platform by the target device that shot the target image, based on at least one account.
And a substep 71, performing sliding window processing, according to a second preset number, on the plurality of pieces of time information included in the second type of historical behavior data, with the earliest time information as the starting point and the latest time information as the end point, to obtain a plurality of second time sequences, where each second time sequence includes the second preset number of pieces of time information and every two adjacent second time sequences overlap in their correspondingly ordered time information.
And a substep 72, determining a predicted time information after the latest time information in the second time series based on the time variation trend of the second time series for each second time series, and obtaining the predicted time information corresponding to each second time series.
And a substep 73, calculating a second prediction error value between the predicted time information corresponding to each second time sequence and the corresponding target time information for each second time sequence, where the target time information corresponding to each second time sequence is the latest time information in the next second time sequence, and the target time information corresponding to the last second time sequence is the time information for acquiring the target image.
And a substep 74, performing weighting coefficient determination processing on each second prediction error value based on a preset weighting coefficient determination rule to obtain a second weighting coefficient for it, wherein, among the obtained second weighting coefficients, the coefficients increase sequentially from the first one to the last one according to a second preset increasing coefficient, and all the second weighting coefficients sum to 1.
And a substep 75 of performing a weighted calculation process to obtain a second target error value based on each of the second prediction error values and each of the corresponding second weighting coefficients.
Substep 76, using the difference between the second target error value and the last second prediction error value as the target analysis result, wherein the larger the difference is, the lower the security level of the determined target security level information is.
Optionally, in the above example, the specific implementation manner of the second step included in step S130 is not limited, and may be selected according to the actual application requirement.
For example, in an alternative example, an identification rule may be determined as the target identification rule based on the following steps:
Firstly, a target recognition model is determined from a plurality of pre-formed recognition models based on the target security level information, wherein different recognition models are obtained by training with different amounts of sample data and different loss thresholds (a recognition model trained on more sample data generally has higher recognition accuracy); secondly, the target recognition model is used as the target recognition rule.
That is, the target face image may be recognized based on the trained recognition model.
In summary, the face recognition method and the face recognition platform based on big data and artificial intelligence provided by the application obtain a target background image and a target face image by segmenting a target image to be recognized, and then recognize the target face image based on a target recognition rule determined through the target background image. Because the target recognition rule is determined based on the target background image, the reliability of recognition can be improved compared with the prior art, in which recognition is performed directly under a fixed recognition rule; this solves the problem of low reliability in existing face recognition. In addition, because recognition rules with different accuracies are adopted under different conditions, the reliability of the recognition result and the recognition efficiency can both be taken into account (a rule with low recognition accuracy may yield unreliable results, while a rule with high recognition accuracy may recognize slowly).
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (2)

1. A big data and artificial intelligence based matching degree calculation method is characterized by comprising the following steps:
dividing the target background image into a plurality of target background sub-images according to the association relationship between the target background image and each face part in the target face image;
converting each target background subimage into a target background binarization subimage to obtain a plurality of target background binarization subimages;
carrying out contour extraction processing on the target background binarization sub-image aiming at each target background binarization sub-image to obtain contour characteristics corresponding to the target background binarization sub-image;
for each target background binarization sub-image, carrying out pixel value sequencing processing on a binarization area surrounded by contour features corresponding to the target background binarization sub-image according to a predetermined target path to obtain a pixel value sequence corresponding to the target background binarization sub-image;
for each target background binarization subimage, determining a target matching model of the target background binarization subimage in a plurality of preset matching models based on the correlation between the target background binarization subimage and each face part in the target face image;
for each target background binary subimage, performing sequence matching processing on a pixel value sequence corresponding to the target background binary subimage and each frame of reference image in a first target database based on a target matching model corresponding to the target background binary subimage to obtain a plurality of matching results corresponding to the target background binary subimage;
for each target background binarization sub-image, taking a plurality of matching results corresponding to the target background binarization sub-image as a matching result set to obtain a plurality of matching result sets;
for each matching result set, sequencing a plurality of matching results in the matching result set from high to low according to the matching degree;
aiming at each sequence number in the sequence, carrying out average calculation on the matching results corresponding to the sequence number in a plurality of matching result sets to obtain a matching average corresponding to the sequence number;
determining a maximum matching mean value in a plurality of matching mean values corresponding to the plurality of sequence numbers, and determining a target sequence number corresponding to the maximum matching mean value;
acquiring the target sequence number and a matching average value corresponding to all sequence numbers before the target sequence number, and performing weighting calculation based on the acquired matching average value to obtain a weighted average value, wherein the weight coefficient of the matching average value with the sequence number before is greater than the weight coefficient of the matching average value with the sequence number after;
and taking the weighted average as a target matching result.
2. A big data and artificial intelligence based matching degree calculation method is characterized by comprising the following steps:
converting the target background image into a target background binary image;
carrying out contour extraction processing on the target background binary image to obtain contour features corresponding to the target background image;
taking one endpoint pixel point in the contour features as a starting point and the other endpoint pixel point as an end point, performing sliding window processing according to the number of preset pixel points to obtain a plurality of pixel point tracks, wherein the number of pixel points included in each pixel point track is the number of the preset pixel points, every two adjacent pixel point tracks are adjacent to each other in a corresponding sequencing manner, and if the contour features are closed tracks, any two adjacent pixel points are taken as two endpoint pixel points;
aiming at each pixel point track, carrying out track matching processing on the pixel point track and each frame of reference image in a first target database to obtain a plurality of matching results corresponding to the pixel point track, wherein the reference image is a plurality of frames;
aiming at each pixel point track, taking a plurality of matching results corresponding to the pixel point track as a matching result set to obtain a plurality of matching result sets;
for each matching result set, sequencing a plurality of matching results in the matching result set from high to low according to the matching degree;
aiming at each sequence number in the sequence, carrying out average calculation on the matching results corresponding to the sequence number in a plurality of matching result sets to obtain a matching average corresponding to the sequence number;
determining a maximum matching mean value in a plurality of matching mean values corresponding to the plurality of sequence numbers, and determining a target sequence number corresponding to the maximum matching mean value;
acquiring the target sequence number and a matching average value corresponding to all sequence numbers before the target sequence number, and performing weighting calculation based on the acquired matching average value to obtain a weighted average value, wherein the weight coefficient of the matching average value with the sequence number before is greater than the weight coefficient of the matching average value with the sequence number after;
and taking the weighted average as a target matching result.
CN202110361074.8A 2020-10-16 2020-10-16 Matching degree calculation method based on big data and artificial intelligence Withdrawn CN112990079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110361074.8A CN112990079A (en) 2020-10-16 2020-10-16 Matching degree calculation method based on big data and artificial intelligence

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110361074.8A CN112990079A (en) 2020-10-16 2020-10-16 Matching degree calculation method based on big data and artificial intelligence
CN202011107140.0A CN112232206B (en) 2020-10-16 2020-10-16 Face recognition method and face recognition platform based on big data and artificial intelligence

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202011107140.0A Division CN112232206B (en) 2020-10-16 2020-10-16 Face recognition method and face recognition platform based on big data and artificial intelligence

Publications (1)

Publication Number Publication Date
CN112990079A (en) 2021-06-18

Family

ID=74117369

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110361074.8A Withdrawn CN112990079A (en) 2020-10-16 2020-10-16 Matching degree calculation method based on big data and artificial intelligence
CN202011107140.0A Active CN112232206B (en) 2020-10-16 2020-10-16 Face recognition method and face recognition platform based on big data and artificial intelligence
CN202110361075.2A Withdrawn CN112990080A (en) 2020-10-16 2020-10-16 Rule determination method based on big data and artificial intelligence

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202011107140.0A Active CN112232206B (en) 2020-10-16 2020-10-16 Face recognition method and face recognition platform based on big data and artificial intelligence
CN202110361075.2A Withdrawn CN112990080A (en) 2020-10-16 2020-10-16 Rule determination method based on big data and artificial intelligence

Country Status (1)

Country Link
CN (3) CN112990079A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344586A (en) * 2021-07-05 2021-09-03 塔里木大学 Face recognition payment system facing mobile terminal
CN115424353B (en) * 2022-09-07 2023-05-05 杭银消费金融股份有限公司 Service user characteristic identification method and system based on AI model
CN117152157B (en) * 2023-10-31 2023-12-29 南通三喜电子有限公司 Electronic element identification method based on artificial intelligence

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7697026B2 (en) * 2004-03-16 2010-04-13 3Vr Security, Inc. Pipeline architecture for analyzing multiple video streams
CN101162499A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Method for using human face formwork combination to contrast
AU2012219026B2 (en) * 2011-02-18 2017-08-03 Iomniscient Pty Ltd Image quality assessment
CN108268850B (en) * 2018-01-24 2022-04-12 贵州华泰智远大数据服务有限公司 Big data processing method based on image
CN109711252A (en) * 2018-11-16 2019-05-03 天津大学 A kind of face identification method of more ethnic groups
CN110533002B (en) * 2019-09-06 2022-04-12 厦门久凌创新科技有限公司 Big data processing method based on face recognition

Also Published As

Publication number Publication date
CN112232206B (en) 2021-05-18
CN112990080A (en) 2021-06-18
CN112232206A (en) 2021-01-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210618)