CN115497152A - Customer information analysis method, device, system and medium based on image recognition

Customer information analysis method, device, system and medium based on image recognition

Info

Publication number
CN115497152A
Authority
CN
China
Prior art keywords: image, information analysis, face detection, face, client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211338215.5A
Other languages
Chinese (zh)
Inventor
曹圳杰
常鹏
朱益兴
李飞
林星凯
朱恩东
王步青
赖众程
黎利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd filed Critical Ping An Bank Co Ltd
Priority to CN202211338215.5A
Publication of CN115497152A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a customer information analysis method, device, system and medium based on image recognition. The method comprises: acquiring a customer image to be analyzed; performing face detection on the customer image to obtain a face detection image; inputting the face detection image into a pre-trained information analysis model for customer information analysis, the information analysis model comprising a face feature extraction network, a micro-expression extraction network and a classification output network; and displaying a corresponding asset identifier according to the analysis result output by the information analysis model. By performing face detection on the customer image and carrying out multi-modal customer information analysis based on the pre-trained model's face features and micro-expression features, the method predicts a customer's asset information efficiently and objectively and displays the predicted asset identifier visually, which improves the efficiency and accuracy of customer information analysis and provides a reliable data basis for confirming potential customers.

Description

Customer information analysis method, device, system and medium based on image recognition
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a customer information analysis method, device, system and medium based on image recognition.
Background
In conventional financial services, such as the sale of financial products, a customer's asset information is highly sensitive and is not visible to business personnel. At the same time, business personnel want more customer information so that potential customers can be identified more accurately.
Existing approaches to customer information analysis usually rely on manual rules or experience to identify and mine potential customers. Such methods are slow and highly subjective, so the efficiency and accuracy of customer information analysis are low, and it is difficult to provide accurate analysis data for mining potential customers.
Disclosure of Invention
In view of the above deficiencies of the prior art, the present invention provides a customer information analysis method, apparatus, system and medium based on image recognition, applicable to financial technology and other related fields, with the aim of improving the efficiency and accuracy of customer information analysis and the accuracy of potential customer mining.
The technical scheme of the invention is as follows:
a customer information analysis method based on image recognition comprises the following steps:
acquiring a customer image to be analyzed;
carrying out face detection on the client image to obtain a face detection image;
inputting the face detection image into a pre-trained information analysis model for customer information analysis, wherein the information analysis model comprises a face feature extraction network, a micro expression extraction network and a classification output network;
and displaying the corresponding asset identification according to the analysis result output by the information analysis model.
In one embodiment, the performing face detection on the client image to obtain a face detection image includes:
carrying out face detection on the client image according to a preset face detection algorithm, and adding a corresponding face detection frame;
and performing region cutting on the client image according to the face detection frame to obtain a face detection image.
In one embodiment, after performing face detection on the client image to obtain a face detection image, the method further includes:
and preprocessing the face detection image.
In one embodiment, the inputting the face detection image into a pre-trained information analysis model for customer information analysis includes:
extracting face depth features in the face detection image through the face feature extraction network;
extracting micro expression features in the face detection image through the micro expression extraction network;
and performing asset prediction analysis based on the fusion characteristics of the face depth characteristics and the micro-expression characteristics through the classification output network, and outputting an analysis result.
In one embodiment, the classification output network includes a multi-layer perceptron, a pooling layer and a full connection layer, and the asset prediction analysis is performed through the classification output network based on the fusion features of the face depth features and the micro-expression features, and the analysis result is output, including:
inputting the human face depth features and the micro expression features into the multilayer perceptron to carry out feature alignment processing, and outputting fusion features;
inputting the fusion features into the pooling layer for dimension reduction processing, and outputting one-dimensional features;
performing asset prediction on the one-dimensional features through the full connection layer to obtain prediction probabilities of different asset types;
and outputting the asset type with the highest prediction probability as the target asset type.
In an embodiment, the displaying the corresponding asset identifier according to the analysis result output by the information analysis model specifically includes:
and collecting the video stream of a client, and displaying after adding the asset identification of the target asset type on the video stream.
In one embodiment, the face feature extraction network is a DenseNet-based convolutional neural network.
In one embodiment, the micro expression extraction network adopts a local binary pattern operator to extract the texture features of the image.
A customer information analysis apparatus based on image recognition, comprising:
the acquisition module is used for acquiring a client image to be analyzed;
the face detection module is used for carrying out face detection on the client image to obtain a face detection image;
the information analysis module is used for inputting the face detection image into a pre-trained information analysis model for customer information analysis, and the information analysis model comprises a face feature extraction network, a micro expression extraction network and a classification output network;
and the display module is used for displaying the corresponding asset identification according to the analysis result output by the information analysis model.
A customer information analysis system based on image recognition, the system comprising at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition based customer information analysis method described above.
A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the above-described method for customer information analysis based on image recognition.
Advantageous effects: compared with the prior art, the embodiments of the invention perform face detection on the client image and carry out multi-modal client information analysis based on the face features and micro-expression features of a pre-trained analysis model, so that the client's asset information is predicted efficiently and objectively and the predicted asset identifier is displayed visually. This improves the efficiency and accuracy of client information analysis and provides a reliable data basis for confirming potential clients.
Drawings
The invention will be further described with reference to the following drawings and examples, in which:
fig. 1 is a flowchart of a customer information analysis method based on image recognition according to an embodiment of the present invention;
fig. 2 is a flowchart of step S200 in a customer information analysis method based on image recognition according to an embodiment of the present invention;
fig. 3 is a flowchart of step S300 in a customer information analysis method based on image recognition according to an embodiment of the present invention;
fig. 4 is a flowchart of step S303 in the customer information analysis method based on image recognition according to the embodiment of the present invention;
FIG. 5 is a functional block diagram of a customer information analysis apparatus based on image recognition according to an embodiment of the present invention;
fig. 6 is a schematic hardware structure diagram of a customer information analysis system based on image recognition according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer, the present invention is described in further detail below. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it. Embodiments of the present invention are described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating a customer information analysis method based on image recognition according to an embodiment of the present invention. The method provided by this embodiment is particularly applicable to a system comprising a terminal device, a network and a server. The network is the medium providing the communication link between the terminal device and the server and may use various connection types, such as a wired connection, a wireless communication link or an optical fiber cable. The operating system on the terminal device may be the iPhone operating system (iOS), Android or another operating system. The terminal device connects to the server through the network to interact with it, receiving and sending data, and may be any electronic device with a display screen that supports web browsing, including but not limited to a smart phone, a tablet computer, a portable computer, a desktop server, and the like. As shown in fig. 1, the method specifically includes the following steps:
and S100, obtaining a customer image to be analyzed.
In this embodiment, the image of the client to be analyzed can be captured in real time during online or offline communication with the client. For example, when the client visits a branch for offline communication, the client image can be captured by a branch camera for subsequent analysis; when the client establishes a video connection with a service person for online communication, a video stream can be collected and a frame containing the client's face region extracted from it as the client image to be analyzed. In this way, analysis images can be acquired through both online and offline channels.
Specifically, to ensure that user privacy is not leaked, the client image is not stored locally, and other information associated with the client is desensitized. Desensitization here means obfuscating and hiding data; for example, the client's mobile phone number, address and similar data are masked with preset symbols so that personal privacy information is not leaked, which improves data security during client information analysis.
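By way of a non-limiting illustration of such masking (the field names, masking rules and helper function below are assumptions made for the example, not details given in the description), hiding a mobile phone number and address with preset symbols could look like:

```python
import re

def desensitize(record: dict) -> dict:
    """Return a copy of a client record with sensitive fields masked.

    Illustrative only: the field names ("phone", "address") and the masking
    rules are assumptions, not taken from the patent text.
    """
    masked = dict(record)
    if "phone" in masked:
        # Keep the first 3 and last 2 digits, hide the rest with '*'.
        masked["phone"] = re.sub(r"(?<=\d{3})\d(?=\d{2})", "*", masked["phone"])
    if "address" in masked:
        # Keep only a short prefix of the address, hide the remainder.
        masked["address"] = masked["address"][:6] + "******"
    return masked

print(desensitize({"phone": "13812345678", "address": "Futian District, Shenzhen ..."}))
```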
S200, carrying out face detection on the client image to obtain a face detection image;
in the embodiment, the characteristics carried by the face area are closely related to the client information, and the client image may contain a lot of useless environment background information, so that the face detection is performed on the acquired client image, and the image of the face part is obtained only by subsequent processing, so that the data processing amount is saved, and the analysis efficiency is improved.
In an embodiment, please refer to fig. 2, which is a flowchart illustrating a step S200 of the method for analyzing customer information based on image recognition according to an embodiment of the present invention, as shown in fig. 2, S200 includes:
s201, carrying out face detection on the client image according to a preset face detection algorithm, and adding a corresponding face detection frame;
s202, performing region cutting on the client image according to the face detection frame to obtain a face detection image.
In this embodiment, the face position in the client image is located by a preset face detection algorithm, and a corresponding face detection frame is added according to the localization result.
During detection, the client image is input into a center-based detection model, which outputs a face heat map, a face scale map and a face center offset map. Points in the face heat map whose score exceeds a preset threshold of 0.35 are regarded as faces. The face coordinate offsets at the corresponding positions of the face center offset map are added to the heat map coordinates to obtain the final face center positions. The face width and height are then obtained from the face scale map through exponential conversion, giving the face detection frames, and duplicate frames are removed by non-maximum suppression (NMS), completing the addition of face detection frames to the client image. The client image is then cropped around the face detection frame, yielding a face detection image containing the key features and enabling efficient customer information analysis.
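A rough sketch of this decoding and cropping step is given below. The array layouts, the stride, the NMS threshold and the helper names are assumptions made for illustration; only the 0.35 heat-map threshold, the exponential scale conversion and the use of NMS come from the description above.

```python
import numpy as np

def decode_faces(heatmap, scale_map, offset_map, stride=4, thr=0.35, iou_thr=0.4):
    """Turn center-model outputs into face boxes (x1, y1, x2, y2, score).

    heatmap:    (H, W)    face-center confidence
    scale_map:  (2, H, W) log-encoded face height/width
    offset_map: (2, H, W) sub-pixel center offsets
    Shapes, stride and the IoU threshold are illustrative assumptions.
    """
    ys, xs = np.where(heatmap > thr)                    # points regarded as faces
    boxes = []
    for y, x in zip(ys, xs):
        cy = (y + offset_map[0, y, x]) * stride         # refined face center
        cx = (x + offset_map[1, y, x]) * stride
        h = np.exp(scale_map[0, y, x]) * stride         # size via exponential conversion
        w = np.exp(scale_map[1, y, x]) * stride
        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, heatmap[y, x]])
    return nms(np.array(boxes), iou_thr) if boxes else np.empty((0, 5))

def nms(boxes, iou_thr):
    """Greedy non-maximum suppression to drop duplicate face frames."""
    order = boxes[:, 4].argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou < iou_thr]
    return boxes[keep]

# Cropping the client image to the detected face region:
# x1, y1, x2, y2, _ = decode_faces(hm, sc, off)[0]
# face_img = client_image[int(y1):int(y2), int(x1):int(x2)]
```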
In one embodiment, after step S200, the method further comprises:
and preprocessing the face detection image.
In this embodiment, the face detection image obtained by detection and cropping is further preprocessed to improve the accuracy of subsequent customer information analysis. The preprocessing may include, for example, face alignment and light correction, which eliminate the effect of different face angles or illumination conditions on the accuracy of customer information analysis.
Specifically, for face alignment, the key point positions of the current face detection image are detected and aligned to preset standard key point positions by an image transformation algorithm, yielding an aligned face detection image and thereby correcting the face pose.
For light correction, a standard illumination condition is preset and the aligned image is corrected to match it, so that the face detection image has the same illumination as the standard face. Changing the illumination of the face image to the standard condition gives the processed image suitable contrast and improves the clarity of facial detail.
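As a minimal illustration of these two preprocessing steps (the five-landmark template, the 112x112 output size and the use of histogram equalisation as the light correction are assumptions, not details fixed by the description):

```python
import cv2
import numpy as np

# Assumed reference positions of 5 landmarks (eyes, nose tip, mouth corners)
# in a 112x112 standard face; these coordinates are illustrative.
STD_LANDMARKS = np.float32([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                            [41.5, 92.4], [70.7, 92.2]])

def align_face(face_img, landmarks):
    """Warp the detected face so its key points match the standard template."""
    M, _ = cv2.estimateAffinePartial2D(np.float32(landmarks), STD_LANDMARKS)
    return cv2.warpAffine(face_img, M, (112, 112))

def correct_lighting(face_img):
    """Equalise the luminance channel so all faces have comparable contrast."""
    ycrcb = cv2.cvtColor(face_img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# aligned = correct_lighting(align_face(face_img, detected_landmarks))
```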
S300, inputting the face detection image into a pre-trained information analysis model for customer information analysis, wherein the information analysis model comprises a face feature extraction network, a micro-expression extraction network and a classification output network.
In this embodiment, a multi-modal information analysis model is constructed and trained in advance. In the constructed model, the face feature extraction network is a DenseNet-based convolutional neural network, which has strong feature learning and representation capabilities and can extract accurate face features from the face detection image. The micro-expression extraction network uses a Local Binary Pattern (LBP) operator to extract texture features from the image. The LBP operator encodes the relationship between each pixel and its surrounding pixels, so it essentially extracts the texture of the image; these texture features help capture the features associated with micro-expressions and enable accurate micro-expression feature extraction.
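The following sketch shows what the two feature branches might look like under these assumptions, using DenseNet-121 as the backbone and an 8-neighbour uniform LBP operator; the specific DenseNet variant, LBP radius and output shapes are illustrative choices rather than details given here.

```python
import torch
import torchvision
from skimage.feature import local_binary_pattern

# Face depth-feature branch: DenseNet convolutional trunk (classifier removed).
densenet = torchvision.models.densenet121(weights=None)
face_branch = densenet.features          # outputs a (B, 1024, H/32, W/32) feature map

def lbp_features(gray_face, points=8, radius=1):
    """Micro-expression branch: per-pixel LBP codes describing local texture."""
    codes = local_binary_pattern(gray_face, points, radius, method="uniform")
    return torch.from_numpy(codes).float().unsqueeze(0)   # (1, H, W) texture map
```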
By inputting the face detection image into the trained information analysis model, the model analyzes the client information from the two perspectives of face features and micro-expression features, achieving efficient information analysis and prediction from the client image.
In a specific implementation, training data is first collected when training the constructed information analysis model. For example, historical customer image data is collected from existing business data as the data set, and asset labels are assigned according to each historical customer's asset situation: five asset classes may be defined in advance, each corresponding to a range of asset data, and labeled p5 to p1 from high to low. The labeled training data is then divided into a training set, a validation set and a test set according to a preset proportion. During training, gradient descent is optimized with the Adam algorithm and cross entropy is used as the loss function, and training continues until a preset convergence condition is reached.
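A minimal training-loop sketch consistent with that description (five asset labels p1 to p5, Adam optimization, cross entropy as the loss) is shown below; the model interface, data loader, learning rate and epoch count are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_ASSET_CLASSES = 5   # asset labels p1 ... p5

def train(model: nn.Module, train_loader, max_epochs: int = 50):
    """Minimal training loop: Adam + cross entropy, as described above.

    `model` and `train_loader` are assumed to be supplied by the caller;
    the learning rate and epoch count are illustrative choices.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()          # cross entropy as the loss function
    for _ in range(max_epochs):
        for face_imgs, lbp_maps, labels in train_loader:
            logits = model(face_imgs, lbp_maps)        # (B, NUM_ASSET_CLASSES)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Evaluate on the validation set here and stop once the preset
        # convergence condition is reached.
```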
In an embodiment, please refer to fig. 3, which is a flowchart illustrating step S300 of the method for analyzing customer information based on image recognition according to an embodiment of the present invention, as shown in fig. 3, step S300 includes:
s301, extracting face depth features in the face detection image through the face feature extraction network;
s302, extracting micro expression features in the face detection image through the micro expression extraction network;
and S303, performing asset prediction analysis based on the fusion characteristics of the face depth characteristics and the micro expression characteristics through the classification output network, and outputting an analysis result.
In this embodiment, when asset information is predicted and analyzed, two feature paths are extracted from the face detection image input to the information analysis model: one path extracts face depth features through the face feature extraction network, and the other extracts micro-expression features through the micro-expression extraction network. Asset information is then predicted from the fusion of the two feature paths, giving an analysis result for the corresponding asset class. Extracting and fusing the two feature paths before classification is more robust than prediction from a single feature: the micro-expression features compensate for the global nature of the face depth features by supplying the missing local information, which further improves the accuracy and robustness of feature extraction and classification.
In an embodiment, the classification output network includes a multi-layer perceptron, a pooling layer and a full connection layer, please refer to fig. 4, which is a flowchart of step S303 in the method for analyzing customer information based on image recognition according to an embodiment of the present invention, as shown in fig. 4, step S303 includes:
s3031, inputting the face depth feature and the micro expression feature into the multilayer perceptron to carry out feature alignment processing, and outputting a fusion feature;
s3032, inputting the fusion features to the pooling layer for dimension reduction processing, and outputting one-dimensional features;
s3033, performing asset prediction on the one-dimensional characteristics through the full connection layer to obtain prediction probabilities of different asset types;
and S3034, outputting the asset type with the highest prediction probability as the target asset type.
In this embodiment, because the extracted micro-expression features differ from the face depth features, a multi-layer perceptron (MLP) is placed in the classification output network to perform feature alignment, fusing the two kinds of features into a single fusion feature. The fused feature has three dimensions: length, width and number of channels. It is reduced to a one-dimensional feature by the pooling layer, which simplifies the network and reduces the amount of computation. Finally, the fully connected layer performs asset type prediction on the one-dimensional feature and outputs prediction probabilities for the different asset types; for example, if five asset labels were used during training, the fully connected layer classifies the image features and regresses the prediction probabilities of the five labels. The asset type with the highest probability is output as the target asset type, i.e. the prediction result of the information analysis model, which is one of the asset labels used during training, thereby providing a qualitative judgment of the client's asset situation.
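A sketch of such a classification output network is given below, assuming the two feature maps have already been brought to a common spatial size and that the MLP alignment is applied channel-wise via 1x1 convolutions; the channel counts are illustrative.

```python
import torch
import torch.nn as nn

class ClassificationOutputNetwork(nn.Module):
    """MLP feature alignment -> fusion -> pooling -> fully connected prediction."""

    def __init__(self, face_channels=1024, lbp_channels=1,
                 fused_channels=256, num_classes=5):
        super().__init__()
        # Multi-layer perceptron applied per spatial location (1x1 convolutions)
        # to align the two feature types into a common channel space.
        self.align = nn.Sequential(
            nn.Conv2d(face_channels + lbp_channels, fused_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused_channels, fused_channels, kernel_size=1),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)          # 3-D fused feature -> 1-D feature
        self.fc = nn.Linear(fused_channels, num_classes)

    def forward(self, face_feat, lbp_feat):
        fused = self.align(torch.cat([face_feat, lbp_feat], dim=1))
        one_dim = self.pool(fused).flatten(1)
        logits = self.fc(one_dim)                    # scores for each asset type
        probs = logits.softmax(dim=1)
        return probs.argmax(dim=1), probs            # target asset type + probabilities
```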
S400, displaying the corresponding asset identification according to the analysis result output by the information analysis model.
In this embodiment, the client's asset identifier is displayed on the service person's terminal interface based on the analysis result output efficiently and objectively by the information analysis model. Specifically, in the online scenario, the client's video stream can be collected, the asset identifier of the target asset type output by the model can be added to the video stream, and the result can then be shown on the service person's screen. The asset identifier may be, for example, a predicted asset grade or the probability that the client is a high-net-worth prospect, so that business personnel can quickly make a qualitative judgment of the client's asset information from the intuitive, visual identifier, improving the accuracy of potential customer mining.
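For the online scenario, a minimal sketch of overlaying the predicted asset identifier on a frame of the client video stream could look as follows (the label text and drawing style are assumptions):

```python
import cv2

def annotate_frame(frame, box, asset_label):
    """Draw the face box and the predicted asset identifier onto a video frame."""
    x1, y1, x2, y2 = map(int, box)
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(frame, asset_label, (x1, max(y1 - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return frame

# e.g. annotate_frame(frame, (x1, y1, x2, y2), "Asset level: p2")
```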
Another embodiment of the present invention provides a customer information analysis apparatus based on image recognition, as shown in fig. 5, the apparatus 1 includes:
an obtaining module 11, configured to obtain a client image to be analyzed;
a face detection module 12, configured to perform face detection on the client image to obtain a face detection image;
the information analysis module 13 is used for inputting the face detection image into a pre-trained information analysis model for customer information analysis, wherein the information analysis model comprises a face feature extraction network, a micro expression extraction network and a classification output network;
and the display module 14 is used for displaying the corresponding asset identifier according to the analysis result output by the information analysis model.
The modules referred to in the present invention are series of computer program instruction segments capable of performing specific functions; they describe the execution of image-recognition-based customer information analysis more appropriately than a whole program does.
In one embodiment, the face detection module 12 includes:
the detection unit is used for carrying out face detection on the client image according to a preset face detection algorithm and adding a corresponding face detection frame;
and the cutting unit is used for cutting the client image according to the face detection frame to obtain a face detection image.
In one embodiment, the apparatus 1 further comprises:
and the preprocessing module is used for preprocessing the face detection image.
In one embodiment, the information analysis module 13 includes:
the first extraction unit is used for extracting the face depth characteristics in the face detection image through the face characteristic extraction network;
the second extraction unit is used for extracting micro-expression characteristics in the face detection image through the micro-expression extraction network;
and the classification output unit is used for performing asset prediction analysis on the basis of the fusion characteristics of the face depth characteristics and the micro expression characteristics through the classification output network and outputting an analysis result.
In one embodiment, the classification output unit includes:
the fusion subunit is used for inputting the human face depth feature and the micro expression feature into the multilayer perceptron to perform feature alignment processing and outputting a fusion feature;
the dimensionality reduction subunit is used for inputting the fusion features to the pooling layer for dimensionality reduction processing and outputting one-dimensional features;
the prediction classification subunit is used for performing asset prediction on the one-dimensional features through the full-connection layer to obtain prediction probabilities of different asset types;
and the output subunit is used for outputting the asset type with the highest prediction probability as the target asset type.
In one embodiment, the display module 14 is specifically configured to:
and collecting the video stream of a client, and displaying after adding the asset identification of the target asset type on the video stream.
In one embodiment, the face feature extraction network is a DenseNet-based convolutional neural network.
In one embodiment, the micro expression extraction network adopts a local binary pattern operator to extract the texture features of the image.
Another embodiment of the present invention provides a customer information analysis system based on image recognition, as shown in fig. 6, the system 10 includes:
one or more processors 110 and a memory 120, where one processor 110 is illustrated in fig. 6, the processor 110 and the memory 120 may be connected by a bus or other means, and the connection by the bus is illustrated in fig. 6.
Processor 110 is used to implement various control logic for system 10, which may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single chip microcomputer, an ARM (Acorn RISC Machine) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. Also, the processor 110 may be any conventional processor, microprocessor, or state machine. The processor 110 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The memory 120, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions corresponding to the customer information analysis method based on image recognition in the embodiment of the present invention. The processor 110 executes various functional applications and data processing of the system 10, i.e., implements the customer information analysis method based on image recognition in the above method embodiments, by running the nonvolatile software programs, instructions and units stored in the memory 120.
The memory 120 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the system 10, and the like. Further, the memory 120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 120 optionally includes memory located remotely from processor 110, which may be connected to system 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more units are stored in the memory 120, and when executed by the one or more processors 110, perform the steps of:
acquiring a customer image to be analyzed;
carrying out face detection on the client image to obtain a face detection image;
inputting the face detection image into a pre-trained information analysis model for customer information analysis, wherein the information analysis model comprises a face feature extraction network, a micro expression extraction network and a classification output network;
and displaying the corresponding asset identification according to the analysis result output by the information analysis model.
In one embodiment, the performing face detection on the client image to obtain a face detection image includes:
carrying out face detection on the client image according to a preset face detection algorithm, and adding a corresponding face detection frame;
and performing region cutting on the client image according to the face detection frame to obtain a face detection image.
In one embodiment, after performing face detection on the client image to obtain a face detection image, the method further includes:
and preprocessing the face detection image.
In one embodiment, the inputting the face detection image into a pre-trained information analysis model for customer information analysis includes:
extracting face depth features in the face detection image through the face feature extraction network;
extracting micro expression features in the face detection image through the micro expression extraction network;
and performing asset prediction analysis based on the fusion characteristics of the face depth characteristics and the micro-expression characteristics through the classification output network, and outputting an analysis result.
In one embodiment, the classification output network includes a multi-layer perceptron, a pooling layer and a full-link layer, and the asset prediction analysis is performed through the classification output network based on the fusion feature of the face depth feature and the micro-expression feature, and the analysis result is output, including:
inputting the human face depth features and the micro expression features into the multilayer perceptron to carry out feature alignment processing, and outputting fusion features;
inputting the fusion features into the pooling layer for dimension reduction processing, and outputting one-dimensional features;
performing asset prediction on the one-dimensional features through the full-connection layer to obtain prediction probabilities of different asset types;
and outputting the asset type with the highest prediction probability as the target asset type.
In one embodiment, the displaying the corresponding asset identifier according to the analysis result output by the information analysis model specifically includes:
and collecting a video stream of a client, and displaying after adding the asset identification of the target asset type on the video stream.
In one embodiment, the face feature extraction network is a DenseNet based convolutional neural network.
In one embodiment, the micro expression extraction network adopts a local binary pattern operator to extract the texture features of the image.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing computer-executable instructions for execution by one or more processors, e.g., to perform method steps S100-S400 of fig. 1 described above.
By way of example, nonvolatile storage media can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SynchLink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The disclosed memory components or memory of the operating environment described herein are intended to comprise one or more of these and/or any other suitable types of memory.
In summary, the customer information analysis method, apparatus, system and medium based on image recognition disclosed herein acquire a customer image to be analyzed; perform face detection on the customer image to obtain a face detection image; input the face detection image into a pre-trained information analysis model comprising a face feature extraction network, a micro-expression extraction network and a classification output network for customer information analysis; and display the corresponding asset identifier according to the analysis result output by the model. By performing face detection on the client image and carrying out multi-modal client information analysis based on the pre-trained model's face features and micro-expression features, the method predicts a client's asset information efficiently and objectively, visually displays the predicted asset identifier, improves the efficiency and accuracy of client information analysis, and provides a reliable data basis for confirming potential clients.
Of course, it will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing relevant hardware (such as a processor, a controller, etc.) through a computer program, which may be stored in a non-volatile computer-readable storage medium, and the computer program may include the processes of the above method embodiments when executed. The storage medium may be a memory, a magnetic disk, a floppy disk, a flash memory, an optical memory, etc.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (11)

1. A customer information analysis method based on image recognition is characterized by comprising the following steps:
acquiring a customer image to be analyzed;
carrying out face detection on the client image to obtain a face detection image;
inputting the face detection image into a pre-trained information analysis model for customer information analysis, wherein the information analysis model comprises a face feature extraction network, a micro expression extraction network and a classification output network;
and displaying the corresponding asset identification according to the analysis result output by the information analysis model.
2. The customer information analysis method based on image recognition according to claim 1, wherein the performing face detection on the customer image to obtain a face detection image comprises:
carrying out face detection on the client image according to a preset face detection algorithm, and adding a corresponding face detection frame;
and performing region cutting on the client image according to the face detection frame to obtain a face detection image.
3. The customer information analysis method based on image recognition according to claim 1, wherein after the face detection is performed on the customer image to obtain a face detection image, the method further comprises:
and preprocessing the face detection image.
4. The customer information analysis method based on image recognition according to claim 1, wherein the inputting the face detection image into a pre-trained information analysis model for customer information analysis comprises:
extracting face depth features in the face detection image through the face feature extraction network;
extracting micro expression features in the face detection image through the micro expression extraction network;
and performing asset prediction analysis based on the fusion characteristics of the face depth characteristics and the micro-expression characteristics through the classification output network, and outputting an analysis result.
5. The customer information analysis method based on image recognition according to claim 4, wherein the classification output network comprises a multi-layer perceptron, a pooling layer and a full connection layer, and the asset prediction analysis is performed through the classification output network based on the fusion feature of the face depth feature and the micro-expression feature, and the analysis result is output, and the method comprises the following steps:
inputting the human face depth features and the micro expression features into the multilayer perceptron to carry out feature alignment processing, and outputting fusion features;
inputting the fusion features into the pooling layer for dimension reduction processing, and outputting one-dimensional features;
performing asset prediction on the one-dimensional features through the full connection layer to obtain prediction probabilities of different asset types;
and outputting the asset type with the highest prediction probability as the target asset type.
6. The customer information analysis method based on image recognition according to claim 5, wherein the displaying of the corresponding asset identifier according to the analysis result output by the information analysis model specifically comprises:
and collecting a video stream of a client, and displaying after adding the asset identification of the target asset type on the video stream.
7. The image recognition-based client information analysis method according to any one of claims 1 to 6, wherein the face feature extraction network is a DenseNet-based convolutional neural network.
8. The image-recognition-based customer information analysis method according to any one of claims 1 to 6, wherein the micro expression extraction network performs texture feature extraction of the image by using a local binary pattern operator.
9. A customer information analysis apparatus based on image recognition, comprising:
the acquisition module is used for acquiring a client image to be analyzed;
the face detection module is used for carrying out face detection on the client image to obtain a face detection image;
the information analysis module is used for inputting the face detection image into a pre-trained information analysis model for customer information analysis, and the information analysis model comprises a face feature extraction network, a micro expression extraction network and a classification output network;
and the display module is used for displaying the corresponding asset identification according to the analysis result output by the information analysis model.
10. A customer information analysis system based on image recognition, the system comprising at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition based customer information analysis method of any one of claims 1-8.
11. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the image recognition based customer information analysis method of any one of claims 1-8.
CN202211338215.5A 2022-10-28 2022-10-28 Customer information analysis method, device, system and medium based on image recognition Pending CN115497152A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211338215.5A CN115497152A (en) 2022-10-28 2022-10-28 Customer information analysis method, device, system and medium based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211338215.5A CN115497152A (en) 2022-10-28 2022-10-28 Customer information analysis method, device, system and medium based on image recognition

Publications (1)

Publication Number Publication Date
CN115497152A true CN115497152A (en) 2022-12-20

Family

ID=85115416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211338215.5A Pending CN115497152A (en) 2022-10-28 2022-10-28 Customer information analysis method, device, system and medium based on image recognition

Country Status (1)

Country Link
CN (1) CN115497152A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117609611A (en) * 2023-11-24 2024-02-27 中邮消费金融有限公司 Multi-mode information processing method, equipment, storage medium and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination