CN113254491A - Information recommendation method and device, computer equipment and storage medium - Google Patents

Information recommendation method and device, computer equipment and storage medium

Info

Publication number
CN113254491A
Authority
CN
China
Prior art keywords
information
target object
feature
information recommendation
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110609706.8A
Other languages
Chinese (zh)
Inventor
喻凌威
周宝
杨浩宇
陈远旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202110609706.8A priority Critical patent/CN113254491A/en
Publication of CN113254491A publication Critical patent/CN113254491A/en
Pending legal-status Critical Current

Classifications

    • G06F 16/2457 Information retrieval of structured data; query processing with adaptation to user needs
    • G06F 18/214 Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 Neural network architectures; combinations of networks
    • G06V 10/267 Image preprocessing; segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 40/165 Human face detection, localisation and normalisation using facial parts and geometric relationships
    • G06V 40/171 Facial feature extraction; local features and components; facial parts, occluding parts (e.g. glasses), geometrical relationships
    • G06V 40/174 Facial expression recognition
    • G10L 15/02 Feature extraction for speech recognition; selection of recognition unit
    • G10L 15/063 Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/16 Speech classification or search using artificial neural networks
    • G10L 25/63 Speech or voice analysis specially adapted for estimating an emotional state

Abstract

The application discloses an information recommendation method and device, computer equipment and a storage medium, and belongs to the technical field of artificial intelligence. The identity information of a target object is determined based on a face image, the historical interaction information of the target object is obtained according to the identity information, a first information recommendation strategy is obtained based on the historical interaction information, and the first information recommendation strategy is called to recommend information to the target object. Feature information of the target object is then obtained during the information recommendation process and imported into a pre-trained emotion recognition model, a second information recommendation strategy is obtained based on the emotion recognition result of the target object, and the second information recommendation strategy is called to recommend information to the target object. In addition, the present application relates to blockchain technology, and the identity information can be stored in a blockchain. By adopting different information recommendation strategies for the target object according to the historical interaction information and the emotion recognition result, the method and device improve the user experience and achieve intelligent auxiliary education.

Description

Information recommendation method and device, computer equipment and storage medium
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an information recommendation method, an information recommendation device, computer equipment and a storage medium.
Background
With the development of electronic information technology and network technology, intelligent teaching has been widely applied to daily teaching. As education becomes increasingly information-driven, richer forms of presentation are used to optimize the teaching process and to improve teaching efficiency and teaching quality. Educational psychologists hold that emotion affects students' learning activities in two ways: it can raise learning enthusiasm and promote and strengthen the learning effect, or it can lower learning enthusiasm and weaken and reduce the learning effect.
At present, some teaching-assistance robots have been applied in the teaching field, but they are generally used as objects of study themselves, serving students' learning and practice of mechanical and electronic technology, so the existing teaching-assistance robots have a narrow range of action and poor extensibility. Where they are used as auxiliary teaching aids, their functions and teaching contents are designed in advance by developers, and they cannot provide intelligent auxiliary teaching according to the students' current state.
Disclosure of Invention
An object of the embodiments of the present application is to provide an information recommendation method, an information recommendation device, a computer device, and a storage medium, so as to solve the technical problem that existing teaching-assistance robots are highly limited in information recommendation and cannot push information according to the emotional state of a target object.
In order to solve the above technical problem, an embodiment of the present application provides an information recommendation method, which adopts the following technical solutions:
a method of information recommendation, comprising:
acquiring a face image of a target object, and determining identity information of the target object based on the face image;
acquiring historical interaction information of the target object according to the identity information of the target object;
acquiring a first information recommendation strategy from a preset information recommendation strategy library based on the historical interaction information, and calling the first information recommendation strategy to perform information recommendation on the target object, wherein the first information recommendation strategy is matched with the historical interaction information of the target object;
acquiring feature information of the target object in an information recommendation process, and importing the feature information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object;
and acquiring a second information recommendation strategy from a preset information recommendation strategy library based on the emotion recognition result, and calling the second information recommendation strategy to perform information recommendation on the target object, wherein the second information recommendation strategy is matched with the emotion recognition result of the target object.
Further, the step of acquiring a face image of a target object and determining the identity information of the target object based on the face image specifically includes:
tracking and shooting the face of the target object to obtain a face image of the target object;
carrying out face region segmentation on the face image to obtain a face region image of the target object;
carrying out feature recognition on the face region image of the target object to obtain face feature information of the target object;
and comparing the facial feature information with preset facial feature information, and determining the identity information of the target object according to a comparison result.
Further, the step of performing feature recognition on the face region image of the target object to obtain the face feature information of the target object specifically includes:
collecting facial feature points of the target object on a facial region image of the target object;
establishing a human face 3D grid according to the facial feature points of the target object;
acquiring the characteristic values of the facial characteristic points, and calculating the connection relation between the facial characteristic points according to the characteristic values and the 3D mesh of the human face;
and acquiring the 3D space distribution characteristic information of the face characteristic points according to the characteristic values and the connection relation to obtain the face characteristic information of the target object.
Further, the step of obtaining the feature values of the facial feature points and calculating the connection relationship between the facial feature points according to the feature values and the 3D mesh of the human face specifically includes:
acquiring color information of the facial feature points, and determining feature values of the facial feature points based on the color information;
and calculating the position information of the facial feature points based on the human face 3D grid, and determining the connection relation between the facial feature points based on the feature values and the position information.
Further, before obtaining the feature information of the target object in the information recommendation process and importing the feature information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object, the method further includes:
acquiring a training sample, and extracting emotional characteristics of the training sample, wherein the emotional characteristics comprise facial characteristics, sound characteristics and physiological characteristics;
calculating feature weights of the facial features, the sound features and the physiological features based on a preset feature weight algorithm;
and training a preset initial recognition model based on the training samples and the feature weights to obtain an emotion recognition model.
Further, the step of calculating the feature weights of the facial features, the sound features, and the physiological features based on a preset feature weight algorithm specifically includes:
assigning the same initial weight to the facial features, the sound features, and the physiological features;
classifying the facial features, the sound features and the physiological features after the initial weight is given to obtain a plurality of emotional feature combinations;
calculating the similarity of the emotional features in the emotional feature combination of the same category to obtain a first similarity;
calculating the similarity of the emotional features among different types of emotional feature combinations to obtain a second similarity;
and adjusting the initial weights of the facial features, the sound features and the physiological features respectively based on the first similarity and the second similarity to obtain the feature weights of the facial features, the sound features and the physiological features.
Furthermore, a plurality of information recommendation strategies are preset in the information recommendation strategy library, each information recommendation strategy corresponds to one type of historical interaction information, and each information recommendation strategy corresponds to one type of emotion recognition result.
In order to solve the above technical problem, an embodiment of the present application further provides an information recommendation apparatus, which adopts the following technical solutions:
an apparatus for information recommendation, comprising:
the identity confirmation module is used for acquiring a face image of a target object and determining identity information of the target object based on the face image;
the information acquisition module is used for acquiring historical interaction information of the target object according to the identity information of the target object;
the first recommendation module is used for acquiring a first information recommendation strategy from a preset information recommendation strategy library based on the historical interaction information, and calling the first information recommendation strategy to perform information recommendation on the target object, wherein the first information recommendation strategy is matched with the historical interaction information of the target object;
the emotion recognition module is used for acquiring the characteristic information of the target object in the information recommendation process, and importing the characteristic information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object;
and the second recommending module is used for acquiring a second information recommending strategy from a preset information recommending strategy library based on the emotion recognition result, and calling the second information recommending strategy to recommend information to the target object, wherein the second information recommending strategy is matched with the emotion recognition result of the target object.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory in which computer readable instructions are stored and a processor which, when executing the computer readable instructions, implements the steps of the information recommendation method described above.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of a method of information recommendation as described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application discloses an information recommendation method, an information recommendation device, computer equipment and a storage medium, and belongs to the technical field of artificial intelligence. According to the information recommendation method and device, information recommendation is carried out on a target object by adopting a first information recommendation strategy according to historical interactive information, emotional characteristics of the target object are continuously captured in the information recommendation process, emotion recognition of the target object is carried out according to the emotional characteristics so as to judge the receiving degree of the target object on the pushed information, and a corresponding second information recommendation strategy is selected to carry out information recommendation on the target object according to the emotion recognition result of the target object. According to the method and the device, different information recommendation strategies are adopted for the target object according to the historical interaction information and the emotion recognition result, the user experience and the accuracy of information recommendation are improved, and intelligent auxiliary education is achieved.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 illustrates an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 illustrates a flow diagram of one embodiment of a method of information recommendation in accordance with the present application;
FIG. 3 illustrates a schematic structural diagram of one embodiment of an apparatus for information recommendation according to the present application;
FIG. 4 shows a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the method for information recommendation provided in the embodiments of the present application is generally executed by a server, and accordingly, the apparatus for information recommendation is generally disposed in the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow diagram of one embodiment of a method of information recommendation in accordance with the present application is shown. The information recommendation method comprises the following steps:
s201, acquiring a face image of a target object, and determining identity information of the target object based on the face image.
Specifically, when an information recommendation instruction is received, a preset camera system tracks and shoots the face of a target object to obtain a face image of the target object; facial feature information of the target object is obtained through face feature recognition, the facial feature information is compared one by one with the facial feature information stored on the server in advance, and the identity information of the target object is determined according to the comparison result.
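As an illustrative sketch of this comparison step (the function name, the similarity measure and the threshold below are assumptions for illustration and are not specified by the disclosure), the one-by-one matching against pre-stored facial feature information can be viewed as a nearest-neighbour search:

```python
import numpy as np

def identify_target(face_feature: np.ndarray,
                    enrolled_features: dict,
                    threshold: float = 0.8):
    """Compare an extracted facial feature vector one by one against the
    feature vectors pre-stored on the server and return the best-matching
    identity, or None if no stored feature is similar enough."""
    best_id, best_score = None, -1.0
    for identity, stored in enrolled_features.items():
        score = float(np.dot(face_feature, stored)
                      / (np.linalg.norm(face_feature) * np.linalg.norm(stored)))
        if score > best_score:
            best_id, best_score = identity, score
    # The identity is confirmed only if the similarity exceeds the threshold.
    return best_id if best_score >= threshold else None
```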
In a specific embodiment of the application, the information recommendation method can be applied to picture book recommendation for children: a teaching-assistance robot acquires a facial image of a child, performs feature recognition on the acquired facial image, and compares the recognized facial features with the facial features stored on the server in advance to determine the identity information of the child, where the identity information includes name, age, gender, class teacher and the like.
In this embodiment, the electronic device (for example, the server/terminal device shown in fig. 1) on which the information recommendation method operates may receive the information recommendation instruction through a wired connection or a wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (Ultra Wideband) connection, and other wireless connection means now known or developed in the future.
S202, obtaining historical interaction information of the target object according to the identity information of the target object.
Specifically, the historical interaction information of the target object is obtained according to the identity information of the target object, where the historical interaction information includes historical recommendation information for the target object, feedback information from the target object while receiving the historical recommendation information, and the like. In the above specific embodiment of the present application, the historical interaction information may be the child's picture book reading progress, the reading time of each picture book, or the feedback given during picture book reading, and these kinds of historical interaction information are obtained and stored by the server as they are generated.
S203, based on the historical interaction information, obtaining a first information recommendation strategy from a preset information recommendation strategy library, and calling the first information recommendation strategy to perform information recommendation on the target object, wherein the first information recommendation strategy is matched with the historical interaction information of the target object.
The information recommendation strategy library is preset with a plurality of information recommendation strategies, and each information recommendation strategy corresponds to one type of historical interaction information.
Specifically, after the identity information of the target user is identified, historical interaction information of the target object is obtained based on the identity information, then a first information recommendation strategy is obtained from a preset information recommendation strategy library based on the historical interaction information of the target object, and the first information recommendation strategy is called to perform information recommendation on the target object, wherein the first information recommendation strategy is matched with the historical interaction information of the target object.
In the above specific embodiment of the application, the information recommendation strategies in the information recommendation strategy library may be pre-configured by the class teacher; specific picture book recommendation strategies include, for example, the explanation type, the live-performance type, the discussion type and the question-and-answer type. For picture books that a child has already read many times, a live-performance, discussion or question-and-answer recommendation strategy can be selected according to the historical interaction information.
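A minimal sketch of such a strategy library is shown below; the category labels, strategy names and threshold values are hypothetical examples, since the actual mapping is configured in advance by the class teacher:

```python
# Hypothetical strategy library: each key is a category of historical
# interaction information, each value an information recommendation strategy.
STRATEGY_LIBRARY = {
    "first_reading": "explanation",        # picture book never read before
    "re_reading": "question_and_answer",   # already read several times
    "short_sessions": "live_performance",  # previously short reading sessions
    "long_sessions": "discussion",         # previously long, engaged sessions
}

def select_first_strategy(history: dict) -> str:
    """Map the target object's historical interaction information onto a
    category and return the matching first information recommendation strategy."""
    read_count = history.get("read_count", 0)
    avg_minutes = history.get("avg_reading_minutes", 0)
    if read_count == 0:
        category = "first_reading"
    elif read_count >= 3:
        category = "re_reading"
    elif avg_minutes < 5:
        category = "short_sessions"
    else:
        category = "long_sessions"
    return STRATEGY_LIBRARY[category]
```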
S204, obtaining the feature information of the target object in the information recommendation process, and importing the feature information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object.
Specifically, in order to improve the user experience and the accuracy of information recommendation, the feature information of the target object is obtained during the information recommendation process and imported into a pre-trained emotion recognition model to obtain the emotion recognition result of the target object; the target object's degree of acceptance of the pushed information is then judged, and the information recommendation strategy is adjusted according to the emotion recognition result. The feature information comprises face information, sound information and physiological information, and the emotion recognition model can be trained with a feature weight algorithm incorporated, which further improves the accuracy of emotion recognition and yields a better user experience.
In the specific embodiment of the application, while the child is reading the picture book, the child's face information is acquired in real time through the camera system, the child's sound information is acquired through the sound pickup system, and the child's physiological information is acquired through a smart bracelet worn by the child; the acquired face, sound and physiological information is imported into the pre-trained emotion recognition model to obtain the child's current emotion recognition result, the child's degree of acceptance of the pushed information is judged according to this result, and the picture book recommendation strategy is adjusted accordingly.
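The following sketch illustrates this multi-modal inference step; the emotion label set and the model's predict interface are assumptions made for illustration:

```python
import numpy as np

EMOTION_LABELS = ["joyful", "active", "calm", "confusion", "impatience", "anxiety"]

def recognize_emotion(model, facial: np.ndarray, sound: np.ndarray,
                      physiological: np.ndarray) -> str:
    """Concatenate the facial, sound and physiological feature vectors captured
    during the recommendation process and let the pre-trained emotion
    recognition model predict the current emotional state."""
    features = np.concatenate([facial, sound, physiological])[None, :]  # batch of 1
    probabilities = model.predict(features)[0]   # assumed model interface
    return EMOTION_LABELS[int(np.argmax(probabilities))]
```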
S205, based on the emotion recognition result, a second information recommendation strategy is obtained from a preset information recommendation strategy library, and the second information recommendation strategy is called to perform information recommendation on the target object, wherein the second information recommendation strategy is matched with the emotion recognition result of the target object.
The information recommendation strategy library is preset with a plurality of information recommendation strategies, and each information recommendation strategy corresponds to one emotion recognition result.
Specifically, after the emotion recognition result of the target object is obtained through a preset emotion recognition model, a second information recommendation strategy is obtained from a preset information recommendation strategy library based on the emotion recognition result of the target object, and the second information recommendation strategy is called to perform information recommendation on the target object, wherein the second information recommendation strategy is matched with the emotion recognition result of the target object. According to the method and the device, different information recommendation strategies are adopted for the target object according to the historical interaction information and the emotion recognition result, the user experience and the accuracy of information recommendation are improved, and intelligent auxiliary education is achieved.
In the above specific embodiment of the application, under the question-and-answer picture book recommendation strategy, if the child's current emotion recognition result is a negative emotion such as "anxiety", "impatience" or "confusion", it indicates that the child has difficulty understanding the picture book content under the current recommendation strategy. The server therefore changes the recommendation strategy to the discussion-type picture book recommendation strategy according to the child's current emotion recognition result, presents the picture book to the child under the discussion-type strategy, and continues to recognize the child's emotion under that strategy in order to decide whether to change the recommendation strategy further; the above process is repeated until the child's emotion recognition result is a positive emotion such as "joyful" or "active".
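The strategy-switching behaviour of this embodiment can be summarised as the following loop (a sketch with hypothetical helper functions; the stopping condition mirrors the "repeat until a positive emotion is recognized" logic above):

```python
NEGATIVE_EMOTIONS = {"anxiety", "impatience", "confusion"}
POSITIVE_EMOTIONS = {"joyful", "active"}

def recommend_until_accepted(target, strategy, push_information,
                             capture_features, recognize_emotion, next_strategy):
    """Push information under the current strategy; whenever a negative emotion
    is recognized, switch to a second strategy matched to that emotion, and
    stop once the target object shows a positive emotion."""
    while True:
        push_information(target, strategy)
        emotion = recognize_emotion(*capture_features(target))
        if emotion in POSITIVE_EMOTIONS:
            return strategy                       # current strategy is accepted
        if emotion in NEGATIVE_EMOTIONS:
            strategy = next_strategy(emotion)     # e.g. question-and-answer -> discussion
```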
Further, the step of acquiring a face image of a target object and determining the identity information of the target object based on the face image specifically includes:
tracking and shooting the face of the target object to obtain a face image of the target object;
carrying out face region segmentation on the face image to obtain a face region image of the target object;
carrying out feature recognition on the face region image of the target object to obtain face feature information of the target object;
and comparing the facial feature information with preset facial feature information, and determining the identity information of the target object according to a comparison result.
Specifically, when acquiring a face image of the target object, the server tracks and captures the face of the target object through the camera system to obtain the face image, and performs face region segmentation on the face image to obtain the face region images of the target object, for example an eye region, a lip region, a nose region, an eyebrow region, a face contour region and the like. The server then acquires facial feature information based on the face region images, compares the acquired facial feature information one by one with the facial feature information stored on the server in advance, and determines the identity information of the target object, such as name, age and gender, according to the comparison result.
In the above embodiment, the face image of the target object is acquired and segmented into face regions, the face region images are recognized to obtain the facial feature information of the target object, and the identity information of the target object is determined by comparing this facial feature information with the facial feature information stored on the server in advance, so that the information recommendation strategy can then be determined according to the identity information of the target object.
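A simplified sketch of the face region segmentation step is given below; the landmark index groups assume a generic 68-point landmark layout with (x, y) pixel coordinates and are not the indices used by the disclosure:

```python
import numpy as np

# Illustrative landmark index groups for a generic 68-point landmark layout.
REGION_LANDMARKS = {
    "contour": range(0, 17),
    "eyebrow": range(17, 27),
    "nose": range(27, 36),
    "eye": range(36, 48),
    "lip": range(48, 68),
}

def segment_face_regions(image: np.ndarray, landmarks: np.ndarray) -> dict:
    """Crop a rectangular patch around each landmark group, yielding the eye,
    lip, nose, eyebrow and face-contour region images used for recognition."""
    regions = {}
    for name, indices in REGION_LANDMARKS.items():
        points = landmarks[list(indices)]           # (x, y) coordinates
        x0, y0 = points.min(axis=0).astype(int)
        x1, y1 = points.max(axis=0).astype(int)
        regions[name] = image[y0:y1 + 1, x0:x1 + 1]
    return regions
```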
Further, the step of performing feature recognition on the face region image of the target object to obtain the face feature information of the target object specifically includes:
collecting facial feature points of the target object on a facial region image of the target object;
establishing a human face 3D grid according to the facial feature points of the target object;
acquiring the characteristic values of the facial characteristic points, and calculating the connection relation between the facial characteristic points according to the characteristic values and the 3D mesh of the human face;
and acquiring the 3D space distribution characteristic information of the face characteristic points according to the characteristic values and the connection relation to obtain the face characteristic information of the target object.
Further, the step of obtaining the feature values of the facial feature points and calculating the connection relationship between the facial feature points according to the feature values and the 3D mesh of the human face specifically includes:
acquiring color information of the facial feature points, and determining feature values of the facial feature points based on the color information;
and calculating the position information of the facial feature points based on the human face 3D grid, and determining the connection relation between the facial feature points based on the feature values and the position information.
The facial feature points of the target object may be collected by the SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator. The image is traversed with a circular template; if the difference between the grey value of any other pixel in the template and the grey value of the central pixel (the nucleus) is less than a certain threshold, that pixel is considered to have the same (or a similar) grey value as the nucleus, and the region composed of the pixels satisfying this condition is called the univalue segment assimilating nucleus (USAN) region. Associating each pixel in the image with a local area of similar grey values is the basis of the SUSAN criterion. Given these characteristics, the SUSAN operator can be used to detect edges and extract corners.
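The following is a simplified, unoptimised sketch of the SUSAN corner response described above (the template radius, brightness threshold and geometric threshold are typical example values, not values specified by the disclosure):

```python
import numpy as np

def susan_corner_response(gray: np.ndarray, radius: int = 3,
                          t: float = 27.0, g_ratio: float = 0.5) -> np.ndarray:
    """Simplified SUSAN corner response: for each pixel (the nucleus), count the
    pixels inside a circular template whose grey value differs from the nucleus
    by less than t (the USAN area); a small USAN area indicates a corner."""
    height, width = gray.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    template = (ys ** 2 + xs ** 2) <= radius ** 2
    offsets = np.argwhere(template) - radius
    g = g_ratio * template.sum()                 # geometric threshold
    response = np.zeros_like(gray, dtype=float)
    for y in range(radius, height - radius):
        for x in range(radius, width - radius):
            nucleus = float(gray[y, x])
            usan = sum(1 for dy, dx in offsets
                       if abs(float(gray[y + dy, x + dx]) - nucleus) < t)
            if usan < g:                         # smaller USAN, stronger corner
                response[y, x] = g - usan
    return response
```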
Specifically, the server collects the facial feature points of the target object, such as feature points of the eye region, lip region, nose region, eyebrow region and face contour region, on the face region image of the target object through the preset SUSAN operator. The server then establishes a 3D face mesh from the facial feature points of the target object, obtains the feature values of the facial feature points, and calculates the connection relations among the facial feature points according to the feature values and the 3D face mesh, where a connection relation may be a topological connection relation among facial feature points, a spatial geometric distance, a dynamic connection relation of various facial feature point combinations, or the like. Finally, the server acquires the 3D spatial distribution feature information of the facial feature points according to the feature values and the connection relations; by analyzing the feature values and the connection relations, the three-dimensional face shape information can be obtained, and thus the facial feature information of the target object is obtained.
The server obtains color information of each facial feature point, calculates position information of each facial feature point based on the color information and the 3D grid of the human face, then determines a feature value of each facial feature point based on the color information, and determines a connection relation between the facial feature points based on the feature value and the position information.
Specifically, relevant feature values can be measured from the color information for each facial feature point, where a feature value is a measurement of one or more of the position, distance, shape, size, angle, arc and curvature of the facial feature point on the 2D plane, and may further include measurements of color, brightness, texture and the like. For example, the positions of all pixels of the eye, the shape of the eye, the inclination of the corner of the eye and the color of the eye can be obtained by extending outward from the central pixel point of the iris. By combining the color information and the depth information, the connection relations between the feature points can be calculated, where a connection relation may be a topological connection relation between facial feature points, a spatial geometric distance, a dynamic connection relation of various facial feature point combinations, or the like.
In the above embodiment, the facial feature points of the target object are rapidly acquired through the SUSAN operator, a 3D face mesh is constructed from these feature points, and information such as the color and position of the facial feature points is acquired. The feature values of the facial feature points and the connection relations between them are calculated in combination with the constructed 3D face mesh; by analyzing these feature values and connection relations, the three-dimensional facial shape information of the target object can be obtained, and thus the complete facial feature information of the target object is obtained. The identity of the target object is then determined by comparing this facial feature information one by one with the facial feature information stored on the server in advance.
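As one possible illustration of the connection relations and 3D spatial distribution feature information (representing the connection relation as the pairwise spatial geometric distances between landmarks is an assumption; the disclosure also allows topological or dynamic connection relations):

```python
import numpy as np

def face_feature_descriptor(points_3d: np.ndarray,
                            feature_values: np.ndarray) -> np.ndarray:
    """Build facial feature information from the 3D face mesh landmarks: the
    connection relation is represented as the pairwise spatial geometric
    distances between landmarks, flattened and concatenated with the
    per-landmark feature values (colour/brightness measurements)."""
    diffs = points_3d[:, None, :] - points_3d[None, :, :]    # (N, N, 3)
    distances = np.linalg.norm(diffs, axis=-1)               # pairwise distances
    upper = np.triu_indices(points_3d.shape[0], k=1)         # unique pairs only
    return np.concatenate([feature_values.ravel(), distances[upper]])
```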
Further, before obtaining the feature information of the target object in the information recommendation process and importing the feature information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object, the method further includes:
acquiring a training sample, and extracting emotional characteristics of the training sample, wherein the emotional characteristics comprise facial characteristics, sound characteristics and physiological characteristics;
calculating feature weights of the facial features, the sound features and the physiological features based on a preset feature weight algorithm;
and training a preset initial recognition model based on the training samples and the feature weights to obtain an emotion recognition model.
The preset initial recognition model adopts a deep convolutional neural network. A convolutional neural network (CNN) is a feedforward neural network with a deep structure that performs convolution calculations, and it is one of the representative algorithms of deep learning. Convolutional neural networks have a representation (feature) learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, so they are also called "shift-invariant artificial neural networks". The convolutional neural network is constructed in imitation of the biological visual perception mechanism and supports both supervised and unsupervised learning; the parameter sharing of convolution kernels within a convolutional layer and the sparsity of inter-layer connections enable a convolutional neural network to learn grid-like topology features (such as pixels and audio) with a small amount of computation, with stable results and no additional feature engineering requirements on the data.
Specifically, before the server performs emotion recognition, an emotion recognition model needs to be trained in advance: a training sample is obtained and its emotional features are extracted, where the emotional features include facial features, voice features and physiological features; the feature weights of the facial, voice and physiological features are calculated based on a preset feature weight algorithm; and a preset initial recognition model is trained based on the training samples and the feature weights to obtain the emotion recognition model. By integrating multi-dimensional emotional features such as facial, voice and physiological features together with their feature weights when training the emotion recognition model, the user's emotion is recognized comprehensively and the accuracy of emotion recognition is improved.
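The sketch below illustrates one way the feature weights could be applied during training (a simplified PyTorch example; the fully connected fusion network is a stand-in for the deep convolutional network named by the disclosure, and the layer sizes and interfaces are assumptions):

```python
import torch
from torch import nn

class WeightedEmotionNet(nn.Module):
    """Simplified stand-in for the initial recognition model: each modality's
    feature vector is scaled by its feature weight before fusion."""
    def __init__(self, face_dim: int, sound_dim: int, physio_dim: int,
                 num_emotions: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(face_dim + sound_dim + physio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_emotions),
        )

    def forward(self, face, sound, physio, weights):
        fused = torch.cat([weights["face"] * face,
                           weights["sound"] * sound,
                           weights["physio"] * physio], dim=1)
        return self.classifier(fused)

def train_emotion_model(model, loader, weights, epochs: int = 10, lr: float = 1e-3):
    """Train on batches of (face, sound, physio, label); label is a LongTensor
    of emotion class indices drawn from the training samples."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for face, sound, physio, label in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(face, sound, physio, weights), label)
            loss.backward()
            optimiser.step()
    return model
```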
The preset feature weight algorithm may adopt the Relief algorithm. The Relief algorithm randomly selects a sample R from an emotional feature combination D, searches D for the sample H nearest to R (called the Near Hit), and searches the other emotional feature combinations for the sample M nearest to R (called the Near Miss); the weight of each feature is then updated according to the following rule: if the distance between R and the Near Hit on a certain feature is smaller than the distance between R and the Near Miss, the feature helps distinguish same-class from different-class nearest neighbors, and its weight is increased; conversely, if the distance between R and the Near Hit on a feature is greater than the distance between R and the Near Miss, the feature has a negative effect on distinguishing same-class from different-class nearest neighbors, and its weight is reduced. This process is repeated m times to obtain the average weight of each feature; the larger a feature's weight, the stronger its classification ability, and conversely the weaker it is. The running time of the Relief algorithm increases linearly with the number of samplings m and the number of original features N, so its running efficiency is very high.
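A compact sketch of the Relief weight update described above, assuming the emotional features have been arranged as numerical vectors with class labels (variable names and the sampling count are illustrative, and each class is assumed to contain at least two samples):

```python
import numpy as np

def relief_weights(X: np.ndarray, y: np.ndarray, m: int = 100,
                   seed: int = 0) -> np.ndarray:
    """Relief feature weighting: for m randomly drawn samples R, find the
    nearest hit H (same class) and nearest miss M (different class), and raise
    the weight of features on which R is closer to H than to M."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    weights = np.zeros(n_features)
    for _ in range(m):
        i = rng.integers(n_samples)
        r, label = X[i], y[i]
        same = np.flatnonzero((y == label) & (np.arange(n_samples) != i))
        diff = np.flatnonzero(y != label)
        hit = X[same[np.argmin(np.linalg.norm(X[same] - r, axis=1))]]
        miss = X[diff[np.argmin(np.linalg.norm(X[diff] - r, axis=1))]]
        # Separating R from its nearest miss is rewarded,
        # separating R from its nearest hit is penalised.
        weights += np.abs(r - miss) - np.abs(r - hit)
    return weights / m
```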
In the above embodiment, the facial features, voice features and physiological features of the training samples are extracted, their feature weights are calculated through the Relief algorithm, and these feature weights are introduced during the training of the initial recognition model, which improves the accuracy of emotion recognition. The server then obtains the feature information of the target object during the information recommendation process and imports it into the pre-trained emotion recognition model to obtain the emotion recognition result of the target object.
Further, the step of calculating the feature weights of the facial features, the sound features, and the physiological features based on a preset feature weight algorithm specifically includes:
assigning the same initial weight to the facial features, the sound features, and the physiological features;
classifying the facial features, the sound features and the physiological features after the initial weight is given to obtain a plurality of emotional feature combinations;
calculating the similarity of the emotional features in the emotional feature combination of the same category to obtain a first similarity;
calculating the similarity of the emotional features among different types of emotional feature combinations to obtain a second similarity;
and adjusting the initial weights of the facial features, the sound features and the physiological features respectively based on the first similarity and the second similarity to obtain the feature weights of the facial features, the sound features and the physiological features.
Specifically, after the facial features, the voice features and the physiological features of the training samples are obtained, the server assigns the same initial weight to the facial features, the voice features and the physiological features, classifies the facial features, the voice features and the physiological features after the initial weight is assigned to obtain a plurality of emotional feature combinations, calculates the similarity of the emotional features in the emotional feature combinations of the same category to obtain a first similarity, calculates the similarity of the emotional features between different categories of emotional feature combinations to obtain a second similarity, and finally adjusts the initial weights of the facial features, the voice features and the physiological features respectively based on the first similarity and the second similarity to obtain the feature weights of the facial features, the voice features and the physiological features.
In the above embodiment, the server calculates the feature weights of the facial features, the voice features, and the physiological features of the training samples based on the feature weight algorithm, so that the training is performed by combining the feature weights of the facial features, the voice features, and the physiological features of the training samples during the training of the emotion recognition model, and the accuracy of emotion recognition is improved.
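For illustration, the intra-class ("first") and inter-class ("second") similarity comparison could be sketched as follows, using cosine similarity and a fixed adjustment step as assumptions:

```python
import numpy as np

def adjust_modality_weights(modalities: dict, labels: np.ndarray,
                            initial_weight: float = 1.0,
                            step: float = 0.1) -> dict:
    """For each modality (facial, sound, physiological feature matrices of shape
    (n_samples, dim)), compare the average intra-class similarity (first
    similarity) with the average inter-class similarity (second similarity) and
    adjust the shared initial weight up or down accordingly."""
    weights = {name: initial_weight for name in modalities}
    same_class = labels[:, None] == labels[None, :]
    np.fill_diagonal(same_class, False)               # ignore self-similarity
    diff_class = labels[:, None] != labels[None, :]
    for name, X in modalities.items():
        normed = X / np.linalg.norm(X, axis=1, keepdims=True)
        sim = normed @ normed.T                       # cosine similarities
        first_similarity = sim[same_class].mean()
        second_similarity = sim[diff_class].mean()
        weights[name] += step if first_similarity > second_similarity else -step
    return weights
```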
Furthermore, a plurality of information recommendation strategies are preset in the information recommendation strategy library, each information recommendation strategy corresponds to one type of historical interaction information, and each information recommendation strategy corresponds to one type of emotion recognition result.
The embodiment discloses an information recommendation method, which belongs to the technical field of artificial intelligence, and is characterized in that identity information of a target object is determined based on a face image, historical interaction information of the target object is obtained according to the identity information, a first information recommendation strategy is obtained based on the historical interaction information, the first information recommendation strategy is called to perform information recommendation on the target object, feature information of the target object in an information recommendation process is obtained, the feature information is led into a pre-trained emotion recognition model, a second information recommendation strategy is obtained based on an emotion recognition result of the target object, and the second information recommendation strategy is called to perform information recommendation on the target object. According to the information recommendation method and device, information recommendation is carried out on a target object by adopting a first information recommendation strategy according to historical interactive information, emotional characteristics of the target object are continuously captured in the information recommendation process, emotion recognition of the target object is carried out according to the emotional characteristics so as to judge the receiving degree of the target object on the pushed information, and a corresponding second information recommendation strategy is selected to carry out information recommendation on the target object according to the emotion recognition result of the target object. According to the method and the device, different information recommendation strategies are adopted for the target object according to the historical interaction information and the emotion recognition result, the user experience and the accuracy of information recommendation are improved, and intelligent auxiliary education is achieved.
It is emphasized that, in order to further ensure the privacy and security of the identity information, the identity information may also be stored in a node of a blockchain.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by computer readable instructions instructing the relevant hardware; the instructions can be stored in a computer readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk or a read-only memory (ROM), or may be a random access memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of an apparatus for information recommendation, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 3, the information recommendation apparatus according to this embodiment includes:
an identity confirmation module 301, configured to obtain a face image of a target object, and determine identity information of the target object based on the face image;
an information obtaining module 302, configured to obtain historical interaction information of the target object according to the identity information of the target object;
a first recommending module 303, configured to obtain a first information recommending policy from a preset information recommending policy library based on the historical interaction information, and invoke the first information recommending policy to perform information recommendation on the target object, where the first information recommending policy is matched with the historical interaction information of the target object;
the emotion recognition module 304 is configured to obtain feature information of the target object in an information recommendation process, and introduce the feature information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object;
a second recommending module 305, configured to obtain a second information recommendation policy from a preset information recommendation policy library based on the emotion recognition result, and invoke the second information recommendation policy to perform information recommendation on the target object, where the second information recommendation policy is matched with the emotion recognition result of the target object.
Further, the identity confirmation module 301 specifically includes:
the image shooting submodule is used for tracking and shooting the face of the target object to obtain a face image of the target object;
the image segmentation submodule is used for carrying out face region segmentation on the face image to obtain a face region image of the target object;
the feature recognition submodule is used for carrying out feature recognition on the face region image of the target object to obtain face feature information of the target object;
and the identity confirmation submodule is used for comparing the facial feature information with preset facial feature information and determining the identity information of the target object according to a comparison result.
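As an illustration of the identity confirmation submodule, the sketch below compares an extracted facial feature vector against a gallery of preset feature vectors using cosine similarity. The gallery layout, the similarity measure, and the acceptance threshold are assumptions; the embodiment only requires comparison with preset facial feature information.

```python
# Illustrative sketch: match extracted facial features against preset features.
from typing import Dict, Optional

import numpy as np


def confirm_identity(face_features: np.ndarray,
                     gallery: Dict[str, np.ndarray],
                     threshold: float = 0.8) -> Optional[str]:
    """Return the identity whose preset features best match, or None if no match."""
    best_id, best_score = None, -1.0
    for identity, preset in gallery.items():
        score = float(np.dot(face_features, preset) /
                      (np.linalg.norm(face_features) * np.linalg.norm(preset) + 1e-9))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```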
Further, the feature identification submodule specifically includes:
the characteristic point acquisition unit is used for acquiring facial characteristic points of the target object on the facial region image of the target object;
the 3D mesh construction unit is used for establishing a human face 3D mesh according to the facial feature points of the target object;
the feature calculation unit is used for acquiring feature values of the face feature points and calculating the connection relation between the face feature points according to the feature values and the 3D mesh of the human face;
and the feature identification unit is used for acquiring the 3D space distribution feature information of the face feature points according to the feature values and the connection relation to obtain the face feature information of the target object.
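The following sketch suggests one way the 3D spatial distribution of the feature points could be turned into a fixed-length descriptor: pairwise distances over the face mesh are weighted by the per-point feature values. The distance-based descriptor is an assumption for illustration, not the formulation of this embodiment.

```python
# Illustrative sketch: a simple 3D spatial-distribution descriptor over landmarks.
import numpy as np


def spatial_distribution_features(points_3d: np.ndarray,
                                  feature_values: np.ndarray) -> np.ndarray:
    """points_3d: (N, 3) landmark coordinates; feature_values: (N,) per-point values."""
    # Connection relation approximated as pairwise Euclidean distances over the mesh.
    diffs = points_3d[:, None, :] - points_3d[None, :, :]
    distances = np.linalg.norm(diffs, axis=-1)                 # (N, N)
    # Weight each connection by the feature values at its two endpoints.
    weights = feature_values[:, None] * feature_values[None, :]
    # Collapse to one descriptor value per landmark.
    return (distances * weights).mean(axis=1)                  # (N,)
```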
Further, the feature calculation unit specifically includes:
the feature value calculation subunit is used for acquiring color information of the facial feature points and determining the feature values of the facial feature points based on the color information;
and the connection relation determining subunit is used for calculating the position information of the facial feature points based on the human face 3D mesh and determining the connection relation between the facial feature points based on the feature values and the position information.
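To make the two subunits concrete, the sketch below derives a scalar feature value per point from its RGB color (a standard luminance weighting, assumed here) and builds a connection relation that links feature points whose 3D positions are close and whose feature values are similar. The luminance formula and the thresholds are illustrative assumptions.

```python
# Illustrative sketch of the two subunits: color-based feature values and a
# position/value-based connection relation. Thresholds are assumptions.
import numpy as np


def feature_values_from_color(colors_rgb: np.ndarray) -> np.ndarray:
    """colors_rgb: (N, 3) per-point RGB values in [0, 255]; returns (N,) values in [0, 1]."""
    return colors_rgb @ np.array([0.299, 0.587, 0.114]) / 255.0


def connection_relation(points_3d: np.ndarray,
                        values: np.ndarray,
                        dist_thresh: float = 0.1,
                        value_thresh: float = 0.2) -> np.ndarray:
    """Boolean (N, N) adjacency: connect nearby points with similar feature values."""
    dists = np.linalg.norm(points_3d[:, None, :] - points_3d[None, :, :], axis=-1)
    value_gap = np.abs(values[:, None] - values[None, :])
    return (dists < dist_thresh) & (value_gap < value_thresh)
```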
Further, the information recommendation device further comprises:
the emotion feature extraction module is used for acquiring a training sample and extracting emotion features of the training sample, wherein the emotion features comprise facial features, sound features and physiological features;
the feature weight calculation module is used for calculating feature weights of the facial features, the sound features and the physiological features based on a preset feature weight algorithm;
and the emotion model training module is used for training a preset initial recognition model based on the training samples and the feature weights to obtain an emotion recognition model.
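As a hedged example of this training step, the sketch below fuses facial, sound, and physiological features scaled by their feature weights and fits a simple classifier. The choice of scikit-learn's LogisticRegression as the initial recognition model and the concatenation-based fusion are assumptions; the embodiment does not specify the model.

```python
# Illustrative sketch: train a classifier on weight-scaled multimodal features.
from typing import Tuple

import numpy as np
from sklearn.linear_model import LogisticRegression


def train_emotion_model(facial: np.ndarray,
                        sound: np.ndarray,
                        physio: np.ndarray,
                        labels: np.ndarray,
                        weights: Tuple[float, float, float]) -> LogisticRegression:
    """Fuse the three modalities, scaled by their feature weights, and fit a classifier."""
    w_face, w_sound, w_physio = weights
    fused = np.hstack([facial * w_face, sound * w_sound, physio * w_physio])
    model = LogisticRegression(max_iter=1000)
    model.fit(fused, labels)
    return model
```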
Further, the feature weight calculation module specifically includes:
the weight assignment unit is used for assigning the same initial weight to the facial feature, the sound feature and the physiological feature;
the feature classification unit is used for classifying the facial features, the sound features and the physiological features which are endowed with the initial weights to obtain a plurality of emotion feature combinations;
the first similarity calculation unit is used for calculating the similarity of the emotional characteristics in the emotional characteristic combination of the same category to obtain a first similarity;
the second similarity calculation unit is used for calculating the similarity of the emotional characteristics among different types of emotional characteristic combinations to obtain a second similarity;
a feature weight calculation unit, configured to adjust initial weights of the facial feature, the sound feature, and the physiological feature based on the first similarity and the second similarity, respectively, to obtain feature weights of the facial feature, the sound feature, and the physiological feature.
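One way to realize this weight adjustment, sketched below under stated assumptions, is to start from equal weights and then scale each modality's weight by comparing its average intra-class similarity (the first similarity) with its average inter-class similarity (the second similarity). The cosine measure and the ratio-based update are illustrative, not the claimed algorithm.

```python
# Illustrative sketch: adjust equal initial modality weights using intra-class
# (first) and inter-class (second) similarities. Requires at least two classes
# and at least two samples per class.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def modality_weights(features_by_class: dict, n_modalities: int = 3) -> np.ndarray:
    """features_by_class: {label: list of (n_modalities, dim) arrays}."""
    weights = np.full(n_modalities, 1.0 / n_modalities)        # same initial weight
    labels = list(features_by_class)
    for m in range(n_modalities):
        intra, inter = [], []
        for i, la in enumerate(labels):
            samples_a = features_by_class[la]
            for x in samples_a:                                 # first similarity
                for y in samples_a:
                    if x is not y:
                        intra.append(cosine(x[m], y[m]))
            for lb in labels[i + 1:]:                           # second similarity
                for x in samples_a:
                    for y in features_by_class[lb]:
                        inter.append(cosine(x[m], y[m]))
        # (1 + mean) keeps the ratio positive since cosine lies in [-1, 1]:
        # reward modalities that are consistent within a class and distinct across classes.
        weights[m] *= (1.0 + np.mean(intra)) / (1.0 + np.mean(inter))
    return weights / weights.sum()
```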
Furthermore, a plurality of information recommendation strategies are preset in the information recommendation strategy library, and each information recommendation strategy corresponds to one type of historical interaction information and to one type of emotion recognition result.
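A minimal sketch of such a strategy library, with hypothetical category keys and strategy names, could look as follows; either a historical-interaction category or an emotion label resolves to exactly one strategy.

```python
# Hypothetical strategy library: keys and strategy names are illustrative only.
STRATEGY_LIBRARY = {
    "history:frequent_user": "in_depth_recommendation",
    "history:new_user": "introductory_recommendation",
    "emotion:positive": "continue_current_topic",
    "emotion:negative": "switch_topic_and_slow_down",
    "emotion:neutral": "probe_with_short_items",
}


def select_strategy(kind: str, key: str) -> str:
    """kind is 'history' or 'emotion'; key is the interaction category or emotion label."""
    return STRATEGY_LIBRARY[f"{kind}:{key}"]
```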
The embodiment discloses an information recommendation device, which belongs to the technical field of artificial intelligence. Identity information of a target object is determined based on a face image, and historical interaction information of the target object is obtained according to the identity information; a first information recommendation strategy is obtained based on the historical interaction information and invoked to perform information recommendation on the target object; feature information of the target object is obtained during the information recommendation process and imported into a pre-trained emotion recognition model; and a second information recommendation strategy is obtained based on the resulting emotion recognition result and invoked to perform information recommendation on the target object. In this application, information recommendation is first carried out on the target object with the first information recommendation strategy selected according to the historical interaction information; emotional features of the target object are continuously captured during the recommendation process, and emotion recognition is performed on these features to judge the target object's degree of acceptance of the pushed information; a corresponding second information recommendation strategy is then selected according to the emotion recognition result to continue the information recommendation. By adopting different information recommendation strategies according to the historical interaction information and the emotion recognition result, the application improves user experience and the accuracy of information recommendation and realizes intelligent assisted education.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 4, fig. 4 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43 that are communicatively connected to each other via a system bus. It is noted that only the computer device 4 with components 41-43 is shown, but it should be understood that not all of the shown components need to be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The computer device can perform human-computer interaction with a user through a keyboard, a mouse, a remote controller, a touch panel, a voice control device, or the like.
The memory 41 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or a memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the computer device 4. Of course, the memory 41 may also include both internal and external storage devices of the computer device 4. In this embodiment, the memory 41 is generally used for storing an operating system installed in the computer device 4 and various application software, such as computer readable instructions of a method for information recommendation. Further, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer readable instructions stored in the memory 41 or process data, such as computer readable instructions for executing the information recommendation method.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
The application discloses a computer device, which belongs to the technical field of artificial intelligence. Identity information of a target object is determined based on a face image, and historical interaction information of the target object is obtained according to the identity information; a first information recommendation strategy is obtained based on the historical interaction information and invoked to perform information recommendation on the target object; feature information of the target object is obtained during the information recommendation process and imported into a pre-trained emotion recognition model; and a second information recommendation strategy is obtained based on the resulting emotion recognition result and invoked to perform information recommendation on the target object. In this application, information recommendation is first carried out on the target object with the first information recommendation strategy selected according to the historical interaction information; emotional features of the target object are continuously captured during the recommendation process, and emotion recognition is performed on these features to judge the target object's degree of acceptance of the pushed information; a corresponding second information recommendation strategy is then selected according to the emotion recognition result to continue the information recommendation. By adopting different information recommendation strategies according to the historical interaction information and the emotion recognition result, the application improves user experience and the accuracy of information recommendation and realizes intelligent assisted education.
The present application provides yet another embodiment, namely a computer-readable storage medium storing computer-readable instructions which are executable by at least one processor to cause the at least one processor to perform the steps of the information recommendation method described above.
The application discloses a storage medium, which belongs to the technical field of artificial intelligence. Identity information of a target object is determined based on a face image, and historical interaction information of the target object is obtained according to the identity information; a first information recommendation strategy is obtained based on the historical interaction information and invoked to perform information recommendation on the target object; feature information of the target object is obtained during the information recommendation process and imported into a pre-trained emotion recognition model; and a second information recommendation strategy is obtained based on the resulting emotion recognition result and invoked to perform information recommendation on the target object. In this application, information recommendation is first carried out on the target object with the first information recommendation strategy selected according to the historical interaction information; emotional features of the target object are continuously captured during the recommendation process, and emotion recognition is performed on these features to judge the target object's degree of acceptance of the pushed information; a corresponding second information recommendation strategy is then selected according to the emotion recognition result to continue the information recommendation. By adopting different information recommendation strategies according to the historical interaction information and the emotion recognition result, the application improves user experience and the accuracy of information recommendation and realizes intelligent assisted education.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely some, but not all, of the embodiments of the present application, and that the appended drawings illustrate preferred embodiments without limiting the scope of the application. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent substitutions may be made for some of the features therein. All equivalent structures made by using the contents of the specification and the drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. A method for information recommendation, comprising:
acquiring a face image of a target object, and determining identity information of the target object based on the face image;
acquiring historical interaction information of the target object according to the identity information of the target object;
acquiring a first information recommendation strategy from a preset information recommendation strategy library based on the historical interaction information, and calling the first information recommendation strategy to perform information recommendation on the target object, wherein the first information recommendation strategy is matched with the historical interaction information of the target object;
acquiring characteristic information of the target object in an information recommendation process, and importing the characteristic information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object;
and acquiring a second information recommendation strategy from a preset information recommendation strategy library based on the emotion recognition result, and calling the second information recommendation strategy to perform information recommendation on the target object, wherein the second information recommendation strategy is matched with the emotion recognition result of the target object.
2. The information recommendation method according to claim 1, wherein the step of obtaining a face image of a target object and determining the identity information of the target object based on the face image specifically comprises:
tracking and shooting the face of the target object to obtain a face image of the target object;
carrying out face region segmentation on the face image to obtain a face region image of the target object;
carrying out feature recognition on the face region image of the target object to obtain face feature information of the target object;
and comparing the facial feature information with preset facial feature information, and determining the identity information of the target object according to a comparison result.
3. The information recommendation method according to claim 2, wherein the step of performing feature recognition on the face region image of the target object to obtain the face feature information of the target object specifically comprises:
collecting facial feature points of the target object on a facial region image of the target object;
establishing a human face 3D mesh according to the facial feature points of the target object;
acquiring feature values of the facial feature points, and calculating the connection relation between the facial feature points according to the feature values and the 3D mesh of the human face;
and acquiring 3D space distribution feature information of the facial feature points according to the feature values and the connection relation to obtain the face feature information of the target object.
4. The information recommendation method according to claim 3, wherein the step of obtaining feature values of the facial feature points and calculating connection relationships between the facial feature points according to the feature values and the 3D mesh of the human face specifically comprises:
acquiring color information of the facial feature points, and determining feature values of the facial feature points based on the color information;
and calculating the position information of the facial feature points based on the human face 3D mesh, and determining the connection relation between the facial feature points based on the feature values and the position information.
5. The information recommendation method according to claim 1, wherein before the obtaining of the feature information of the target object in the information recommendation process and the importing of the feature information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object, the method further comprises:
acquiring a training sample, and extracting emotional characteristics of the training sample, wherein the emotional characteristics comprise facial characteristics, sound characteristics and physiological characteristics;
calculating feature weights of the facial features, the sound features and the physiological features based on a preset feature weight algorithm;
and training a preset initial recognition model based on the training samples and the feature weights to obtain an emotion recognition model.
6. The information recommendation method according to claim 5, wherein the step of calculating feature weights of the facial feature, the sound feature and the physiological feature based on a preset feature weight algorithm specifically comprises:
assigning the same initial weight to the facial features, the sound features, and the physiological features;
classifying the facial features, the sound features and the physiological features after the initial weight is given to obtain a plurality of emotional feature combinations;
calculating the similarity of the emotional features in the emotional feature combination of the same category to obtain a first similarity;
calculating the similarity of the emotional features among different types of emotional feature combinations to obtain a second similarity;
and adjusting the initial weights of the facial features, the sound features and the physiological features respectively based on the first similarity and the second similarity to obtain the feature weights of the facial features, the sound features and the physiological features.
7. The information recommendation method according to any one of claims 1 to 6, wherein a plurality of information recommendation strategies are preset in the information recommendation strategy library, and each information recommendation strategy corresponds to one type of historical interaction information and to one type of emotion recognition result.
8. An apparatus for information recommendation, comprising:
the identity confirmation module is used for acquiring a face image of a target object and determining identity information of the target object based on the face image;
the information acquisition module is used for acquiring historical interaction information of the target object according to the identity information of the target object;
the first recommendation module is used for acquiring a first information recommendation strategy from a preset information recommendation strategy library based on the historical interaction information, and calling the first information recommendation strategy to perform information recommendation on the target object, wherein the first information recommendation strategy is matched with the historical interaction information of the target object;
the emotion recognition module is used for acquiring the characteristic information of the target object in the information recommendation process, and importing the characteristic information into a pre-trained emotion recognition model to obtain an emotion recognition result of the target object;
and the second recommending module is used for acquiring a second information recommending strategy from a preset information recommending strategy library based on the emotion recognition result, and calling the second information recommending strategy to recommend information to the target object, wherein the second information recommending strategy is matched with the emotion recognition result of the target object.
9. A computer device, comprising a memory having computer readable instructions stored therein and a processor, wherein the processor, when executing the computer readable instructions, implements the steps of the information recommendation method according to any one of claims 1 to 7.
10. A computer-readable storage medium having computer-readable instructions stored thereon, wherein the computer-readable instructions, when executed by a processor, implement the steps of the information recommendation method according to any one of claims 1 to 7.
CN202110609706.8A 2021-06-01 2021-06-01 Information recommendation method and device, computer equipment and storage medium Pending CN113254491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110609706.8A CN113254491A (en) 2021-06-01 2021-06-01 Information recommendation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110609706.8A CN113254491A (en) 2021-06-01 2021-06-01 Information recommendation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113254491A true CN113254491A (en) 2021-08-13

Family

ID=77185734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110609706.8A Pending CN113254491A (en) 2021-06-01 2021-06-01 Information recommendation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113254491A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599785A (en) * 2016-11-14 2017-04-26 深圳奥比中光科技有限公司 Method and device for building human body 3D feature identity information database
CN106778489A (en) * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 The method for building up and equipment of face 3D characteristic identity information banks
KR20200125507A (en) * 2019-04-25 2020-11-04 주식회사 마이셀럽스 Method for recommending item using degree of association between unit of language and using breakdown
CN112418059A (en) * 2020-11-19 2021-02-26 平安普惠企业管理有限公司 Emotion recognition method and device, computer equipment and storage medium
CN112581230A (en) * 2020-12-24 2021-03-30 安徽航天信息科技有限公司 Commodity recommendation method and device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113723093A (en) * 2021-08-31 2021-11-30 平安科技(深圳)有限公司 Personnel management strategy recommendation method and device, computer equipment and storage medium
CN113723093B (en) * 2021-08-31 2024-01-19 平安科技(深圳)有限公司 Personnel management policy recommendation method and device, computer equipment and storage medium
CN113822211A (en) * 2021-09-27 2021-12-21 山东睿思奥图智能科技有限公司 Interactive person information acquisition method
CN114399821A (en) * 2022-01-13 2022-04-26 中国平安人寿保险股份有限公司 Policy recommendation method, device and storage medium
CN114399821B (en) * 2022-01-13 2024-04-26 中国平安人寿保险股份有限公司 Policy recommendation method, device and storage medium
CN114494976A (en) * 2022-02-17 2022-05-13 平安科技(深圳)有限公司 Human body tumbling behavior evaluation method and device, computer equipment and storage medium
CN114612142A (en) * 2022-03-09 2022-06-10 深圳市瑞众科技有限公司 Multi-mode information fusion commercial content recommendation method and device and electronic equipment
CN114612142B (en) * 2022-03-09 2023-09-26 深圳市瑞众科技有限公司 Multi-mode information fusion commercial content recommendation method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN110929622B (en) Video classification method, model training method, device, equipment and storage medium
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
US10691928B2 (en) Method and apparatus for facial recognition
WO2022161286A1 (en) Image detection method, model training method, device, medium, and program product
CN113254491A (en) Information recommendation method and device, computer equipment and storage medium
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
WO2022105118A1 (en) Image-based health status identification method and apparatus, device and storage medium
CN111814620A (en) Face image quality evaluation model establishing method, optimization method, medium and device
CN110163111A (en) Method, apparatus of calling out the numbers, electronic equipment and storage medium based on recognition of face
CN111126347B (en) Human eye state identification method, device, terminal and readable storage medium
WO2021184754A1 (en) Video comparison method and apparatus, computer device and storage medium
CN112418292A (en) Image quality evaluation method and device, computer equipment and storage medium
CN111291863B (en) Training method of face changing identification model, face changing identification method, device and equipment
CN113763249A (en) Text image super-resolution reconstruction method and related equipment thereof
CN112418059A (en) Emotion recognition method and device, computer equipment and storage medium
CN110866469A (en) Human face facial features recognition method, device, equipment and medium
CN112036261A (en) Gesture recognition method and device, storage medium and electronic device
CN114241459B (en) Driver identity verification method and device, computer equipment and storage medium
CN112634158A (en) Face image recovery method and device, computer equipment and storage medium
CN112668482A (en) Face recognition training method and device, computer equipment and storage medium
CN115510186A (en) Instant question and answer method, device, equipment and storage medium based on intention recognition
CN114861241A (en) Anti-peeping screen method based on intelligent detection and related equipment thereof
CN116701706B (en) Data processing method, device, equipment and medium based on artificial intelligence
CN114241411B (en) Counting model processing method and device based on target detection and computer equipment
CN115795355A (en) Classification model training method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination