CN114745592A - Bullet screen message display method, system, device and medium based on face recognition - Google Patents

Bullet screen message display method, system, device and medium based on face recognition

Info

Publication number: CN114745592A
Application number: CN202210360228.6A
Authority: CN (China)
Prior art keywords: information, age, bullet screen, filtering, face image
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 李兵
Applicant and current assignee: Spreadtrum Semiconductor Nanjing Co Ltd

Classifications

    • H (Electricity) / H04 (Electric communication technique) / H04N (Pictorial communication, e.g. television) / H04N21/00 (Selective content distribution, e.g. interactive television or video on demand [VOD]), in the following subgroups:
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/25891 Management of end-user data being end-user preferences
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/454 Content or additional data filtering, e.g. blocking advertisements
    • H04N21/4542 Blocking scenes or portions of the received content, e.g. censoring scenes
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a bullet screen message display method, system, device and medium based on face recognition. The display method collects face image information of a user; obtains age information of the user according to the face image information; determines a filtering level matched with the age information; and filters the bullet screen messages according to the filtering level and displays the filtered bullet screen messages. The method, system, device and medium seamlessly integrate the acquisition, recognition and analysis of face image information into the user's viewing process, so that the user sees bullet screen messages matched to his or her age without the viewing process being affected. Bullet screen message content can therefore be classified and displayed accurately and scientifically, unsuitable bullet screen messages are kept away from users of specific age groups, and the display is better targeted and gives a better user experience.

Description

Bullet screen message display method, system, device and medium based on face recognition
Technical Field
The invention relates to the technical field of computer human-computer interaction, and in particular to a method, system, device and medium for displaying bullet screen messages based on face recognition.
Background
A bullet screen message is a scrolling viewing comment: a comment posted by a user while watching a video is presented on the playback interface in real time, where other viewers can see it and interact with it. By sending bullet screen messages, users can express their viewing experience and attitude, which strengthens their sense of participation in the video and increases their interest in watching it.
At present most video websites and applications support bullet screen messages. At this stage, however, the display specifications of bullet screen messages vary, and the pushed content is of very uneven quality, even including advertisements and unhealthy messages, which degrades the viewing experience and can mislead viewers in vulnerable groups. For this reason, many video websites and applications allow filtering rules to be set for bullet screen messages and classify the messages accordingly. The operation, however, is relatively cumbersome; it obstructs or inconveniences the viewing process and results in a poor user experience.
Disclosure of Invention
The invention provides a bullet screen message display method, system, device and medium based on face recognition, aiming at overcoming the defects of the prior art that the bullet screen message display mode is undifferentiated, the setting operation is inconvenient, and the user's viewing experience is poor.
The invention solves the technical problems through the following technical scheme:
the invention provides a bullet screen message display method based on face recognition, which comprises the following steps:
collecting face image information of a user;
acquiring age information of the user according to the face image information;
determining a filtering level matching the age information;
and filtering the bullet screen message according to the filtering grade, and displaying the filtered bullet screen message.
Preferably, the step of filtering the barrage message according to the filtering level and displaying the filtered barrage message includes:
acquiring text information of the bullet screen message;
acquiring a preset filtering field corresponding to the filtering grade;
filtering the bullet screen information according to a preset filtering field and the text information, and displaying the filtered bullet screen information;
preferably, the step of filtering the barrage message according to the filtering level includes:
and filtering the bullet screen message according to the filtering grade of the bullet screen message, and displaying the filtered bullet screen message.
Preferably, the step of obtaining the age information of the user is followed by:
receiving confirmation information of the user on the age information;
if the confirmation information indicates that the age information is wrong, receiving age information fed back by the user, and storing the age information fed back by the user in association with the face image information in a preset database;
and if the confirmation information indicates that the age information is correct, storing the age information in association with the face image information in the preset database.
Preferably, the step of obtaining age information of the user according to the face image information includes:
detecting whether the preset database has the face image information or not;
if so, calling age information stored in association with the face image information as the age information of the user;
and if not, processing the face image information according to a preset age estimation algorithm to obtain the age information of the user.
Preferably, if the number of the face image information is greater than one, the step of obtaining the age information of the user according to the face image information includes:
respectively acquiring age information corresponding to each face image information;
acquiring minimum age information from the age information corresponding to each face image information;
the determining a filtering level matching the age information comprises:
determining a filtering level matching the minimum age information;
Preferably, the step of obtaining the age information of the user according to the face image information is followed by:
acquiring registered age information corresponding to a current login account;
judging whether the age information of the user acquired according to the face image information is matched with the registered age information or not, and if not, generating prompt information;
Preferably, the step of determining the filtering level matching the age information is followed by:
setting the filter level to a locked state;
the display method further comprises the following steps:
and if the change of the face image information is detected, setting the filtering level to be in an unlocking state.
The invention also provides a display system of the bullet screen message based on the face recognition, which comprises the following components:
the face image acquisition module is used for acquiring face image information of a user;
the age information acquisition module is used for acquiring the age information of the user according to the face image information;
the filtering grade determining module is used for determining a filtering grade matched with the age information;
and the bullet screen message display module is used for filtering bullet screen messages according to the filtering grade and displaying the filtered bullet screen messages.
Preferably, the bullet screen message display module includes:
the text message acquisition unit is used for acquiring text information of the bullet screen message;
a filtering field obtaining unit, configured to obtain a preset filtering field corresponding to the filtering level;
the display unit is used for filtering the bullet screen information according to the preset filtering field and the text information and displaying the filtered bullet screen information;
preferably, the bullet screen message display module is configured to filter the bullet screen message according to the filtering level of the bullet screen message, and display the filtered bullet screen message.
Preferably, the display system further comprises a confirmation information receiving module, a confirmation information judging module and an associated storage module:
the confirmation information receiving module is used for receiving the confirmation information of the user on the age information;
the confirmation information judging module is used for receiving the age information fed back by the user when it judges that the confirmation information indicates that the age information is wrong, and for calling the associated storage module so that the age information fed back by the user is stored in association with the face image information in a preset database;
the confirmation information judging module is further used for calling the associated storage module to store the age information in association with the face image information in the preset database when it judges that the confirmation information indicates that the age information is correct.
Preferably, the age information acquiring module includes a database detecting unit and an age identifying unit:
the database detection unit is used for detecting whether the preset database has the face image information or not; if so, calling age information stored in association with the face image information as the age information of the user; if not, calling an age identification unit;
the age identification unit is used for processing the face image information according to a preset age estimation algorithm to obtain the age information of the user.
Preferably, if the number of the face image information is greater than one, the age information obtaining module is configured to obtain age information corresponding to each face image information, and obtain minimum age information from the age information corresponding to each face image information; the filtering grade determining module is used for determining the filtering grade matched with the minimum age information;
preferably, the display system further includes an age verification module, configured to obtain registered age information corresponding to a current login account, and determine whether the age information of the user obtained according to the face image information matches the registered age information, if not, then prompt information is generated;
preferably, the display system further comprises a filtering level locking module, configured to set the filtering level to a locked state when called by the filtering level determining module;
the filtering grade locking module is also used for setting the filtering grade to be in an unlocking state when the change of the face image information is detected.
The invention also provides electronic equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the display method of the bullet screen message based on the face recognition.
The present invention also provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the above-described bullet screen message display method based on face recognition.
The positive progress effects of the invention are as follows: the method, system, device and medium for displaying bullet screen messages based on face recognition seamlessly integrate the acquisition, recognition and analysis of face image information into the user's viewing process, so that the user sees bullet screen messages matched to his or her age without the viewing process being affected; bullet screen message content can therefore be classified and displayed accurately and scientifically, unsuitable bullet screen messages are kept away from users of specific age groups, and the display is better targeted and gives a better user experience.
Drawings
Fig. 1 is a flowchart of a bullet screen message display method based on face recognition in embodiment 1 of the present invention.
Fig. 2 is a flowchart of a preferred embodiment of a method for displaying bullet screen messages based on face recognition in embodiment 1 of the present invention.
Fig. 3 is a flowchart of a specific example of a display method of a bullet screen message based on face recognition in embodiment 1 of the present invention.
Fig. 4 is a schematic block diagram of a display system of bullet screen messages based on face recognition in embodiment 2 of the present invention.
Fig. 5 is a schematic block diagram of a display system of bullet screen messages based on face recognition in embodiment 2 of the present invention.
Fig. 6 is a schematic block diagram of an electronic device according to embodiment 3 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
Referring to fig. 1 and fig. 2, the present embodiment specifically provides a method for displaying bullet screen messages based on face recognition, including:
S1, collecting face image information of a user;
S2, acquiring age information of the user according to the face image information;
S3, determining a filtering level matched with the age information;
and S4, filtering the bullet screen message according to the filtering grade, and displaying the filtered bullet screen message.
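The following Python sketch is added here only to make the flow of steps S1 to S4 concrete; it is a minimal illustration under assumed interfaces (the camera, database and estimator objects, the helper names and the two-way "children"/"adults" split are all hypothetical and are not prescribed by this embodiment).

    # Illustrative sketch of steps S1-S4; all names and thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class BulletScreenMessage:
        text: str

    def collect_face_images(camera):
        """S1: collect face image information from the acquisition component."""
        return camera.capture_faces()               # assumed camera interface

    def obtain_age(face_image, database, estimator):
        """S2: reuse a stored record if one exists, otherwise run the age estimation algorithm."""
        cached_age = database.lookup(face_image)    # assumed database interface
        return cached_age if cached_age is not None else estimator.predict(face_image)

    def determine_filtering_level(age: int) -> str:
        """S3: match the age information to a preset filtering level (illustrative two-way split)."""
        return "children" if age < 18 else "adults"

    def display_filtered(messages, level, is_allowed):
        """S4: keep only the messages the level allows and hand them to the renderer."""
        for message in messages:
            if is_allowed(message, level):          # assumed filtering predicate
                print(message.text)                 # stand-in for on-screen rendering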
The application scenario of the bullet screen message display method based on face recognition in this embodiment includes, but is not limited to, a user watching a video on an electronic device such as a television, a handheld terminal, a smart portable device or a microcomputer. The face image information of the user may be collected by an acquisition component attached to the electronic device, for example the camera of a smart television. Other independent acquisition devices may of course also be used, for example a camera communicatively connected to the smart television. In addition, face image information sent by the user may be obtained directly; for example, before the user clicks to watch a video in an APP (application program), the user first uploads a file containing face image information, such as a picture or a video, and step S1 then reads that file directly.
Depending on the environment, face images of one or more users are collected, for example of several users watching in front of the projection screen of a home theater. The acquisition of the face image may be triggered by the user remaining at a preset position relative to the electronic device; for the screen of a vehicle center console, for example, capture of the user's head portrait may begin one minute after the owner gets into the vehicle. The acquisition of the face image may also be triggered by the playing of the video, for example starting as soon as the start of playback is detected.
Step S2 obtains the age information of the user from the face image information, either by calling stored historical matching records or by applying an age estimation algorithm directly to the newly acquired face image information.
In an alternative embodiment, the age information of the user is obtained by calling stored historical matching records according to the face image information. This embodiment presumes that after each execution of step S2, step S5 is performed to receive the user's confirmation of the age information; if the confirmation information indicates that the age information is wrong, step S6 is executed to receive the age information fed back by the user and store it in association with the face image information in a preset database; if the confirmation information indicates that the age information is correct, step S7 is executed to store the age information in association with the face image information in the preset database. The age information fed back by the user can of course be further verified, for example by identity authentication, to ensure that it is true and valid. The age information directly confirmed or fed back by the user can therefore be regarded as matching the face image information in the records stored in the preset database.
Based on the face image information and the age information stored in the preset database in an associated manner, in this embodiment, step S2 includes:
S21, detecting whether the face image information exists in the preset database; if so, executing step S22, namely calling the age information stored in association with the face image information as the age information of the user; if not, executing step S23, namely processing the face image information according to a preset age estimation algorithm to obtain the age information of the user.
Specifically, in step S21 the face image information obtained in step S1 is compared against the preset database. If identical pre-stored face image information, or pre-stored face image information whose similarity meets a preset requirement (e.g., similarity > 95%), is matched, step S22 calls the age information stored in association with that pre-stored face image information. The age information can thus be determined quickly and efficiently, saving the associated computing resources.
Otherwise, if no identical pre-stored face image information and no pre-stored face image information whose similarity meets the preset requirement is matched, step S23 processes the face image information with the preset age estimation algorithm to obtain the age information of the user.
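As a sketch of steps S21 to S23, one possible realisation compares face feature vectors by cosine similarity against the preset database and falls back to the age estimation algorithm when no record meets the threshold; the embedding representation, the record layout and the 0.95 threshold (mirroring the "similarity > 95%" example above) are assumptions.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def age_from_database_or_estimator(face_vector, database, estimator, threshold=0.95):
        """S21-S23: reuse the stored age of a sufficiently similar face, else estimate it."""
        best_record, best_similarity = None, 0.0
        for record in database:                       # record: {"vector": ..., "age": ...}
            similarity = cosine_similarity(face_vector, record["vector"])
            if similarity > best_similarity:
                best_record, best_similarity = record, similarity
        if best_record is not None and best_similarity > threshold:
            return best_record["age"]                 # S22: call the associated age information
        return estimator.predict(face_vector)         # S23: preset age estimation algorithm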
Age estimation is a specific category of face image recognition. Since each age value can be regarded as a class, age estimation can be treated as a classification problem; and since the increase in age is a continuous, gradual process, it can also be treated as a regression problem. The two approaches have their respective advantages for different age databases, age features, classification schemes and regression schemes, so an age estimation algorithm that organically combines them can effectively improve the accuracy of age estimation; several estimation schemes may be integrated to estimate the age.
The face image taken as the processing object may be several frames extracted from the video or, where resources allow, every frame of the video. As preprocessing, techniques such as histogram equalization may be applied, and image statistical feature methods may be used to eliminate specific regions so as to obtain the face image. In addition, the face may be tracked according to its position and speed across multiple frames, the sharpness of the face judged by a fast discrete Fourier transform, and the face with the best sharpness selected.
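One way the "sharpest face" selection described above could be realised is sketched below, judging sharpness by the share of spectral energy outside the low-frequency centre of a fast Fourier transform; the particular energy ratio is an assumption, not a prescription of this embodiment.

    import numpy as np

    def sharpness_score(gray_face: np.ndarray) -> float:
        """Rough sharpness measure: fraction of FFT energy outside the low-frequency centre."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_face)))
        h, w = spectrum.shape
        cy, cx = h // 2, w // 2
        low = spectrum[cy - h // 8:cy + h // 8, cx - w // 8:cx + w // 8].sum()
        total = spectrum.sum()
        return float((total - low) / (total + 1e-9))

    def pick_sharpest_face(tracked_face_crops):
        """Select the tracked face crop with the best sharpness across frames."""
        return max(tracked_face_crops, key=sharpness_score)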
The first stage of age recognition extracts the skin texture features of the face in the picture and makes a rough evaluation of the age range to obtain a broad age group. This can be implemented with face detection and related algorithms. Face detection means searching any given input image with a certain computational method to determine whether it contains a face and, if so, where the face is located. Face key point detection, also called key point localization or face alignment, means locating the key regions of a face, including the eyebrows, eyes, nose, mouth and facial contour, given a face image. The set of key points is usually called a shape; the shape contains the position information of the key points, which can generally be expressed either relative to the whole image or relative to the face frame. The former is the absolute shape, the latter the relative shape, and the two can be converted into each other via the face frame. Face key point detection methods generally include the ASM (Active Shape Model) algorithm and the AAM (Active Appearance Model) algorithm; methods based on cascaded shape regression; methods based on deep learning; and so on. The ASM and AAM algorithms are parameterized methods, while cascaded regression and deep learning methods are non-parameterized. The ASM algorithm expresses the geometric shape of objects with similar appearance, such as human faces, by concatenating the coordinates of several key points into a shape vector. It first annotates a training set by manual calibration, obtains a shape model by training, and then matches a specific object by matching its key points. The AAM algorithm is an improvement on the ASM algorithm: besides the shape constraint, it adds texture features over the whole face region, and it likewise has a model building stage and a model matching stage. In the model building stage, a shape model and a texture model are built separately for the training samples and then combined into the AAM model. Deep-learning-based methods are relatively accurate, and several effective deep learning models, such as the Face++ deep convolutional neural network algorithm, achieve good results in face key point detection. Of course, those skilled in the art will appreciate that the face key point detection methods usable in this embodiment include, but are not limited to, those listed above.
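The embodiment does not prescribe a particular detector; as an illustration only, the sketch below uses OpenCV's bundled Haar cascade for frontal faces, with histogram equalization as the preprocessing mentioned above.

    import cv2

    # OpenCV's stock frontal-face Haar cascade, used purely as a stand-in detector.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_detector = cv2.CascadeClassifier(cascade_path)

    def detect_faces(frame_bgr):
        """Return (x, y, w, h) boxes for faces found in one video frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)               # histogram equalization preprocessing
        return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)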
In the second stage a detailed age evaluation is performed: several model classifiers corresponding to several age groups are built with a support vector machine, and a suitable model is selected for matching. Preferably, this is implemented by a face age estimation algorithm that fuses local binary pattern (LBP) and histogram of oriented gradients (HOG) features.
The histogram of oriented gradients is a feature descriptor used for object detection in computer vision and image processing. It constructs features by computing and accumulating histograms of the gradient directions in local regions of the image: the image is first divided into small connected regions, a histogram of the gradient or edge directions of the pixels in each connected region is collected, and finally these histograms are combined to form the feature descriptor. Compared with other feature description methods, the HOG feature operates on local grid cells of the image, so it remains largely invariant to geometric and photometric deformations of the image, which only appear over larger spatial regions; it is particularly suitable for human detection in images. The local binary pattern is an operator for describing the local texture features of an image; it has notable advantages such as rotation invariance and grayscale invariance. The original LBP operator is defined in a 3x3 window: the center pixel of the window is taken as the threshold, and the gray values of the 8 neighboring pixels are compared with it; if a surrounding pixel value is greater than the center pixel value, that position is marked as 1, otherwise 0. The 8 points in the 3x3 neighborhood thus produce an 8-bit binary number by comparison, which is the LBP value of the window's center pixel, and this value reflects the texture information of the region. A face age estimation algorithm fusing the local binary pattern and the histogram of oriented gradients extracts local statistical features of the face closely related to age change, fuses them by canonical correlation analysis, and finally trains and tests on a face library by support vector machine regression.
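A minimal sketch of LBP and HOG feature extraction for one aligned face crop is given below using scikit-image; the parameter values are illustrative, and the simple concatenation stands in for the canonical-correlation-analysis fusion named in the text.

    import numpy as np
    from skimage.feature import hog, local_binary_pattern

    def lbp_hog_features(gray_face: np.ndarray) -> np.ndarray:
        """Concatenate an LBP histogram and a HOG descriptor for one grayscale face crop."""
        # Classic 3x3 LBP operator: 8 neighbours on a radius-1 circle, uniform patterns
        lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
        # HOG over small connected cells, as described above
        hog_vector = hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), feature_vector=True)
        return np.concatenate([lbp_hist, hog_vector])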
Based on the fused local binary pattern and histogram of oriented gradients features, age recognition is performed with models including, but not limited to, an anthropometric model, a flexible model and an appearance model. Specifically, the anthropometric model classifies age using the geometric features of the face; it mainly describes the mathematical law by which the overall contour of the face changes with age and measures a kind of structural information of the face. Its main procedure can be summarized as face contour detection, facial feature point positioning and measurement of various geometric ratios such as the interpupillary distance, the geometric ratios finally being used to distinguish age groups; it is mainly suitable for the age classification of minors. The flexible model organically combines the shape of the face with its gray level/texture and fully extracts the shape information and global texture information of the face image as a whole; it can be regarded as an upgraded version of the anthropometric model, typically represented by the active shape model and the active appearance model. This model adapts better to feature point positioning and feature extraction in complex images and is suitable for classifying the ages not only of teenagers but also of the middle-aged and the elderly. The appearance model describes the face by fusing its geometric features with global information and local information such as facial texture information, frequency information and skin color information, and then performs age estimation; it describes the texture characteristics of the face better and is often fused with shape features, so it can better realize age estimation across all age groups.
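To illustrate the support-vector-machine regression stage mentioned above, the following sketch trains a regressor on the fused features and predicts an age for a new face; the kernel and hyperparameters are assumptions, and the training data would come from a labelled face library.

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    def train_age_regressor(feature_vectors, ages):
        """Support vector machine regression on fused LBP+HOG features (illustrative settings)."""
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
        model.fit(feature_vectors, ages)
        return model

    # Usage sketch (lbp_hog_features from the extraction sketch above):
    #   regressor = train_age_regressor(train_features, train_ages)
    #   predicted_age = regressor.predict([lbp_hog_features(new_face)])[0]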
As a preferred embodiment, when more than one piece of face image information of users is acquired, step S2 includes: acquiring the age information corresponding to each piece of face image information, and taking the minimum age information among the age information corresponding to the pieces of face image information; accordingly, step S3 includes: determining the filtering level matching the minimum age information.
At the present stage, because supervision of bullet screen message content is limited in both means and strength, unhealthy content is often mixed in, which affects the user's viewing experience and can easily have a bad guiding effect on child users. Although most video playing media provide bullet screen levels and the user can manually select filtering rules of different levels, the filtering levels suitable for adult users and child users differ: when a child user appears during viewing, an adult user has to adjust the bullet screen manually to a level suitable for children, which affects the viewing experience; and when a child user watches a video and opens the bullet screen alone, the child can hardly be expected to adjust it actively to a filtering level suitable for children, so the child is exposed to bullet screen content unsuitable for children, which affects the child's physical and mental health to a certain extent. Through face recognition, this embodiment first automatically identifies the minimum age information among multiple users and sets the corresponding filtering level according to that minimum age before filtering the bullet screen message content. On the one hand, child users are effectively prevented from seeing unhealthy bullet screen message content while watching the video; on the other hand, the user does not need to intervene manually, so the viewing experience is improved.
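A short sketch of this multi-viewer case follows: each detected face yields an age, the minimum age is taken, and the filtering level is matched to it (the estimate_age and level_for_age callables are assumed stand-ins).

    def filtering_level_for_group(face_images, estimate_age, level_for_age):
        """Preferred multi-viewer embodiment: the level is matched to the youngest detected viewer."""
        ages = [estimate_age(face) for face in face_images]   # age information per face image
        minimum_age = min(ages)                                # minimum age information
        return level_for_age(minimum_age)                      # level matching the minimum age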
As a preferred embodiment, step S2 is followed by:
S8, acquiring the registered age information corresponding to the current login account;
S9, judging whether the age information of the user obtained from the face image information matches the registered age information, and if not, generating prompt information;
specifically, when the video content viewed by the user has a corresponding login account, for example, a login account of the user such as a network platform or APP providing the video content, the step S2 determines the age information of the user, and then the step S8 obtains the registered age information corresponding to the current login account, and compares the two information through the step S9 to determine whether the two information match for automatic verification, and if so, it can be understood that the step S3 is further performed. Otherwise, the prompt message may display age information corresponding to the account information, age information obtained based on the face image information, other information corresponding to the login account, and the like; therefore, the user can be helped to confirm the age information, and the situations that the age estimation is wrong, the user with the minimum age is not a login user when a plurality of persons watch the film and the like are avoided. Of course, for a scene in which multiple persons view and acquire the face image information of multiple persons, the judgment basis may be set as long as the registered age information corresponding to the login account is matched with any age information corresponding to the face image information of multiple persons.
Step S3 determines the filtering level matching the age information. As is known to those skilled in the art, the filtering levels are preset and can be used to distinguish the different requirements that users of different ages have for bullet screen messages. For example, minors and adults in the legal sense may correspond to different filtering levels; or the ages may be divided into under 6, 6 to 12, 12 to 18, and 18 and over, with a filtering level corresponding to each age group.
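Following the example age groups just given, a level lookup could be sketched as below; the level identifiers themselves are assumptions.

    def level_for_age(age: int) -> str:
        """Map age information to a preset filtering level using the example bands above."""
        if age < 6:
            return "under_6"
        if age < 12:
            return "age_6_to_12"
        if age < 18:
            return "age_12_to_18"
        return "age_18_plus"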
As a preferred embodiment, step S3 is followed by:
S10, setting the filtering level to a locked state;
the display method further comprises the following steps:
and S11, if the change of the face image information is detected, setting the filtering grade to be in an unlocking state.
Specifically, an automatic locking mechanism for the filtering level is provided: after the filtering level is determined in step S3, it is locked, i.e., it cannot be changed. It can be understood that detection and acquisition of face image information is maintained during viewing, for example face image information is acquired at a preset time interval or re-acquired when the video content is switched. When a change in the face image information is detected, the filtering level is set to the unlocked state so that it can be changed. This embodiment is particularly suitable for scenes in which a minor watches alone, so that the filtering level cannot be changed by interactively entering age information that differs from the face image recognition result. Preferably, therefore, the above embodiment can be applied to users other than those aged 18 or over.
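Steps S10 and S11 can be pictured as a small state holder like the sketch below; the face "signature" used to detect a change in the face image information is an assumed stand-in for whatever comparison the implementation uses.

    class FilterLevelLock:
        """S10/S11: lock the level once determined; unlock only when the detected face changes."""

        def __init__(self, level: str, face_signature):
            self.level = level
            self._face_signature = face_signature
            self._locked = True                    # S10: locked immediately after determination

        def on_face_update(self, face_signature):
            if face_signature != self._face_signature:
                self._locked = False               # S11: face image information changed, allow re-setting
                self._face_signature = face_signature

        def try_set_level(self, new_level: str) -> bool:
            if self._locked:
                return False                       # changes are rejected while locked
            self.level = new_level
            self._locked = True
            return True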
Step S4 filters the bullet screen messages according to the filtering level and displays the filtered bullet screen messages. Filtering the bullet screen messages means that, according to the constraint corresponding to the filtering level, the bullet screen messages corresponding to the video are not presented, partially presented or fully presented to the designated user, the designated user being the user whose age information was determined.
In an alternative embodiment, step S4 filters the bullet screen messages according to the filtering level of each bullet screen message and displays the filtered bullet screen messages. It can be understood that in this embodiment the filtering levels correspond to pre-divided sets of bullet screen messages; for example, the filtering levels correspond to "children" and "adults" respectively, and once the filtering level of a bullet screen message has been determined on the basis of these two categories, the corresponding content is presented.
In an alternative embodiment, step S4 includes:
S41, acquiring the text information of the bullet screen messages;
S42, acquiring the preset filtering field corresponding to the filtering level;
S43, filtering the bullet screen messages according to the preset filtering field and the text information, and displaying the filtered bullet screen messages.
Step S41 may be implemented by calling the storage file corresponding to the bullet screen messages, the storage file including, but not limited to, a document in the extensible markup language (XML) format. By reading and parsing the storage file, or files in other formats serving a similar purpose, the text information of the bullet screen messages can be determined. On the basis of the preset filtering field corresponding to the filtering level obtained in step S42, step S43 performs field matching on the text information, for example filtering out the bullet screen messages whose text information contains the preset filtering field or a combination thereof, and/or filtering out the bullet screen messages whose text information does not contain the preset filtering field or a combination thereof, and then displays the filtered bullet screen messages.
In one example, the single barrage message structure stored by the extensible markup language is as follows:
<d p="text,time,type,size,color,timestamp,pool,uid_crc32,row_id">。
wherein text represents the text content of the bullet screen message; time represents the dwell time of the bullet screen in the video; type is the bullet screen type; size is the font size; color is the decimal RGB color; timestamp is the sending timestamp of the bullet screen; pool is the bullet screen pool; uid_crc32 identifies the sender; and row_id is used to mark ordering and historical bullet screens. Based on this bullet screen message structure, the text information can therefore be obtained from the content corresponding to the text field.
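A sketch of steps S41 to S43 applied to a file of such records is given below; it follows the single-message structure quoted above, in which the text is the first value of the p attribute (in other storage formats the text may instead be the element body), and the preset filtering fields per level are purely hypothetical examples.

    import xml.etree.ElementTree as ET

    # Hypothetical preset filtering fields per filtering level.
    PRESET_FILTER_FIELDS = {
        "children": ["advert", "gambling", "violence"],
        "adults": [],
    }

    def load_bullet_screen_texts(xml_path):
        """S41: read the storage file and take the text field of each <d> element's p attribute."""
        root = ET.parse(xml_path).getroot()
        texts = []
        for element in root.iter("d"):
            fields = element.get("p", "").split(",")
            if fields and fields[0]:
                texts.append(fields[0])            # per the structure above, text comes first
        return texts

    def filter_bullet_screen(texts, level):
        """S42/S43: drop any message whose text contains a preset filtering field for this level."""
        banned = PRESET_FILTER_FIELDS.get(level, [])
        return [t for t in texts if not any(word in t for word in banned)]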
In one specific application example as shown in fig. 3, the execution body thereof includes but is not limited to a processing unit. The method comprises the following steps:
step 101, collecting face image information. Specifically, the collection can be performed by a camera of the smart device.
Step 102, sending the face image. The acquired face image is sent to a processing unit. It can be understood that the processing unit may be integrated in the camera or in the smart device, or may be provided separately.
S103, determining the age corresponding to the face image with an age recognition algorithm on the basis of the received face image; that is, the data corresponding to the face image is sent to an age recognition algorithm module for processing. Specifically, the facial features may be extracted in step S1031, compared in step S1032, and the age corresponding to the face obtained in step S1033.
S104, determining the bullet screen filtering level according to the age obtained in S103.
The bullet screen message display method based on face recognition in this embodiment seamlessly integrates the acquisition, recognition and analysis of face image information into the user's viewing process, so that the user sees bullet screen messages matched to his or her age without the viewing process being affected; bullet screen message content can therefore be classified and displayed accurately and scientifically, unsuitable bullet screen messages are kept away from users of specific age groups, and the display is better targeted and gives a better user experience.
Example 2
Referring to fig. 4 to 5, the embodiment specifically provides a display system of bullet screen messages based on face recognition, including:
a face image collecting module 51, configured to collect face image information of a user;
an age information obtaining module 52, configured to obtain age information of the user according to the face image information;
a filtering level determining module 53 for determining a filtering level matching the age information;
and a bullet screen message display module 54, configured to filter bullet screen messages according to the filtering level, and display the filtered bullet screen messages.
The application scenario of the display system of bullet screen messages based on face recognition in this embodiment likewise includes, but is not limited to, a user watching a video on an electronic device such as a television, a handheld terminal, a smart portable device or a microcomputer. The face image information of the user may be collected by an acquisition component attached to the electronic device, for example the camera of a smart television; other independent acquisition devices may of course also be used, for example a camera communicatively connected to the smart television. In addition, face image information sent by the user may be obtained directly; for example, before the user clicks to watch a video in an APP, the user first uploads a file containing face image information, such as a picture or a video, which the face image acquisition module 51 then reads directly.
Depending on the environment, face images of one or more users are collected, for example of several users watching in front of the projection screen of a home theater. The acquisition of the face image may be triggered by the user remaining at a preset position relative to the electronic device; for the screen of a vehicle center console, for example, capture of the user's head portrait may begin one minute after the owner gets into the vehicle. The acquisition of the face image may also be triggered by the playing of the video, for example starting as soon as the start of playback is detected.
The age information obtaining module 52 obtains the age information of the user from the face image information, either by calling stored historical matching records or by applying an age estimation algorithm directly to the newly acquired face image information.
In an alternative embodiment, the age information of the user is obtained by calling stored historical matching records according to the face image information. The display system of bullet screen messages based on face recognition further comprises a confirmation information receiving module 55, a confirmation information judging module 56 and an association storage module 57. The age information obtaining module 52 calls the confirmation information receiving module 55 to receive the user's confirmation of the age information; the confirmation information judging module 56 is configured to receive the age information fed back by the user when it judges that the confirmation information indicates that the age information is wrong, and to call the association storage module 57 to store the age information fed back by the user in association with the face image information in a preset database; the confirmation information judging module 56 is further configured to call the association storage module 57 to store the age information in association with the face image information in the preset database when it judges that the confirmation information indicates that the age information is correct. The age information fed back by the user can of course be further verified, for example by identity authentication, to ensure that it is true and valid. The age information directly confirmed or fed back by the user can therefore be regarded as matching the face image information in the records stored in the preset database.
Based on the face image information and the age information stored in association in the preset database, in this embodiment the age information obtaining module 52 includes a database detection unit 521 and an age identification unit 522: the database detection unit 521 is used for detecting whether the face image information exists in the preset database; if so, calling the age information stored in association with the face image information as the age information of the user; if not, calling the age identification unit 522; the age identification unit 522 is used for processing the face image information according to a preset age estimation algorithm to obtain the age information of the user.
Specifically, the database detection unit 521 compares the obtained face image information against the preset database and, if identical pre-stored face image information, or pre-stored face image information whose similarity meets a preset requirement (e.g., similarity > 95%), is matched, calls the age information stored in association with that pre-stored face image information. The age information can thus be determined quickly and efficiently, saving the associated computing resources.
Otherwise, if no identical pre-stored face image information and no pre-stored face image information whose similarity meets the preset requirement is matched, the age identification unit 522 is called to process the face image information with the preset age estimation algorithm to obtain the age information of the user.
Age estimation is a specific category of face image recognition. Since each age value can be regarded as a class, age estimation can be treated as a classification problem; and since the increase in age is a continuous, gradual process, it can also be treated as a regression problem. The two approaches have their respective advantages for different age databases, age features, classification schemes and regression schemes, so an age estimation algorithm that organically combines them can effectively improve the accuracy of age estimation; several estimation schemes may be integrated to estimate the age.
The face image taken as the processing object may be several frames extracted from the video or, where resources allow, every frame of the video. As preprocessing, techniques such as histogram equalization may be applied, and image statistical feature methods may be used to eliminate specific regions so as to obtain the face image. In addition, the face may be tracked according to its position and speed across multiple frames, the sharpness of the face judged by a fast discrete Fourier transform, and the face with the best sharpness selected.
The first stage of age recognition extracts the skin texture features of the face in the picture and makes a rough evaluation of the age range to obtain a broad age group. This can be implemented with face detection and related algorithms. Face detection means searching any given input image with a certain computational method to determine whether it contains a face and, if so, where the face is located. Face key point detection, also called key point localization or face alignment, means locating the key regions of a face, including the eyebrows, eyes, nose, mouth and facial contour, given a face image. The set of key points is usually called a shape; the shape contains the position information of the key points, which can generally be expressed either relative to the whole image or relative to the face frame. The former is the absolute shape, the latter the relative shape, and the two can be converted into each other via the face frame. Face key point detection methods generally include the ASM algorithm and the AAM algorithm; methods based on cascaded shape regression; methods based on deep learning; and so on. The ASM and AAM algorithms are parameterized methods, while cascaded regression and deep learning methods are non-parameterized. The ASM algorithm expresses the geometric shape of objects with similar appearance, such as human faces, by concatenating the coordinates of several key points into a shape vector. It first annotates a training set by manual calibration, obtains a shape model by training, and then matches a specific object by matching its key points. The AAM algorithm is an improvement on the ASM algorithm: besides the shape constraint, it adds texture features over the whole face region, and it likewise has a model building stage and a model matching stage. In the model building stage, a shape model and a texture model are built separately for the training samples and then combined into the AAM model. Deep-learning-based methods are relatively accurate, and several effective deep learning models, such as the Face++ deep convolutional neural network algorithm, achieve good results in face key point detection. Of course, those skilled in the art will appreciate that the face key point detection methods usable in this embodiment include, but are not limited to, those listed above.
In the second stage a detailed age evaluation is performed: several model classifiers corresponding to several age groups are built with a support vector machine, and a suitable model is selected for matching. Preferably, this is implemented by a face age estimation algorithm that fuses local binary pattern and histogram of oriented gradients features.
The histogram of oriented gradients is a feature descriptor used for object detection in computer vision and image processing. It constructs features by computing and accumulating histograms of the gradient directions in local regions of the image: the image is first divided into small connected regions, a histogram of the gradient or edge directions of the pixels in each connected region is collected, and finally these histograms are combined to form the feature descriptor. Compared with other feature description methods, the HOG feature operates on local grid cells of the image, so it remains largely invariant to geometric and photometric deformations of the image, which only appear over larger spatial regions; it is particularly suitable for human detection in images. The local binary pattern is an operator for describing the local texture features of an image; it has notable advantages such as rotation invariance and grayscale invariance. The original LBP operator is defined in a 3x3 window: the center pixel of the window is taken as the threshold, and the gray values of the 8 neighboring pixels are compared with it; if a surrounding pixel value is greater than the center pixel value, that position is marked as 1, otherwise 0. The 8 points in the 3x3 neighborhood thus produce an 8-bit binary number by comparison, which is the LBP value of the window's center pixel, and this value reflects the texture information of the region. A face age estimation algorithm fusing the local binary pattern and the histogram of oriented gradients extracts local statistical features of the face closely related to age change, fuses them by canonical correlation analysis, and finally trains and tests on a face library by support vector machine regression.
Based on the fused local binary pattern and histogram of oriented gradients features, age recognition is performed with models such as an anthropometric model, a flexible model and an appearance model. Specifically, the anthropometric model classifies age using the geometric features of the face; it mainly describes the mathematical law by which the overall contour of the face changes with age and measures a kind of structural information of the face. Its main procedure can be summarized as face contour detection, facial feature point positioning and measurement of various geometric ratios such as the interpupillary distance, the geometric ratios finally being used to distinguish age groups; it is mainly suitable for the age classification of minors. The flexible model organically combines the shape of the face with its gray level/texture and fully extracts the shape information and global texture information of the face image as a whole; it can be regarded as an upgraded version of the anthropometric model, typically represented by the active shape model and the active appearance model. This model adapts better to feature point positioning and feature extraction in complex images and is suitable for classifying the ages not only of teenagers but also of the middle-aged and the elderly. The appearance model describes the face by fusing its geometric features with global information and local information such as facial texture information, frequency information and skin color information, and then performs age estimation; it describes the texture characteristics of the face better and is often fused with shape features, so it can better realize age estimation across all age groups.
As a preferred embodiment, when more than one piece of face image information of users is acquired, the age information acquiring module 52 is configured to acquire the age information corresponding to each piece of face image information and to obtain the minimum age information among them; accordingly, the filtering level determining module 53 is configured to determine a filtering level matching the minimum age information.
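A minimal sketch of this multi-viewer behaviour, assuming an age estimator function is already available:
def minimum_viewer_age(face_images, estimate_age_from_face):
    # estimate_age_from_face stands in for whichever age estimator is configured.
    ages = [estimate_age_from_face(face) for face in face_images]
    return min(ages) if ages else None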
At the present stage, because bullet screen message content lacks means and strength of supervision, unhealthy content is often mixed in, which affects the user's viewing experience and can easily have a bad guiding effect on child users. Although most video playing media set bullet screen levels so that the user can manually select filtering rules of different levels, the filtering levels suitable for adult users and child users are different; when a child user appears during video viewing, the adult user has to manually adjust the bullet screen to a level suitable for the child, which affects the viewing experience. Meanwhile, when a child user watches a video and opens the bullet screen alone, the child can hardly be expected to actively adjust it to a filtering level suitable for children, so the child is exposed to bullet screen content unsuitable for them, which affects the child's physical and mental health to a certain extent. Through face recognition in this embodiment, the minimum age among multiple users is first identified automatically, the corresponding filtering level is set according to that minimum age, and the bullet screen message content is then filtered. On the one hand, this effectively prevents child users from seeing unhealthy bullet screen content while watching video; on the other hand, the user does not need to intervene manually, so the viewing experience is improved.
As a preferred embodiment, the display system further includes an age verification module 58, configured to obtain registered age information corresponding to the current login account, determine whether the age information of the user obtained according to the face image information matches the registered age information, and if not, generate prompt information.
Specifically, when the video content viewed by the user is associated with a login account, for example an account on a network platform or APP providing the video content, the age verification module 58 obtains the registered age information corresponding to the current login account and automatically verifies whether it matches the age information obtained from the face image; if they match, the filtering level determining module 53 may be invoked. Otherwise, the prompt information may display the age information corresponding to the account, the age information obtained from the face image information, other information corresponding to the login account, and the like, helping the user confirm the age information and avoiding situations such as an erroneous age estimate, or the youngest viewer not being the logged-in user when several people watch together. Of course, for a scene in which several people watch and the face image information of multiple persons is acquired, the judgment criterion may be that the registered age information of the login account matches any of the age information corresponding to the acquired face images.
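A possible sketch of this verification logic, assuming a numeric tolerance decides whether a registered age and an estimated age "match" (the tolerance and field names are illustrative):
def verify_registered_age(registered_age, estimated_ages, tolerance=3):
    # For multi-viewer scenes the check passes if any estimated age matches.
    if any(abs(age - registered_age) <= tolerance for age in estimated_ages):
        return {"matched": True}
    # Otherwise return the data a prompt would show so the user can confirm.
    return {"matched": False,
            "prompt": {"registered_age": registered_age,
                       "estimated_ages": estimated_ages}}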
The filtering level determining module 53 determines a filtering level matched with the age information. As is known to those skilled in the art, the filtering levels are preset and can be used to distinguish the different requirements of users of different ages for bullet screen messages. For example, minors and adults, as classified in the legal sense, correspond to different filtering levels; or the users may be divided by age group into under 6 years, 6 to 12 years, 12 to 18 years, and 18 years and over, with a filtering level corresponding to each group.
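A simple sketch of such an age-band mapping, using the example boundaries above with placeholder level names:
def filtering_level_for_age(age):
    if age < 6:
        return "level_under_6"
    if age < 12:
        return "level_6_to_12"
    if age < 18:
        return "level_12_to_18"
    return "level_adult"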
In a preferred embodiment, the display system further comprises a filtering level locking module 59, configured to set the filtering level to a locked state when called by the filtering level determining module; the filtering level locking module 59 is further configured to set the filtering level to the unlocked state when a change in the face image information is detected.
In particular, automatic locking of the filtering level is provided: once the filtering level has been determined, it is locked and cannot be changed. It can be understood that acquisition of the face image information is maintained during viewing, for example by capturing face images at preset time intervals, or by re-acquiring them when the video content is switched, and so on. When a change in the face image information is detected, the filtering level is set to the unlocked state so that it can be set again. This embodiment is particularly suitable for scenes in which a minor watches alone, so that the minor cannot change the filtering level by interactively tampering with age information other than that obtained from face image recognition. It is therefore preferable to apply this embodiment to users under 18 years of age.
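One possible sketch of the locking behaviour of module 59, assuming a hashable "face signature" is derived from the acquired face image information so that changes can be detected:
class FilterLevelLock:
    def __init__(self):
        self.level = None
        self.locked = False
        self._last_signature = None

    def set_level(self, level, face_signature):
        # Called by the filtering level determining module; locks the level.
        if self.locked:
            raise PermissionError("filtering level is locked")
        self.level = level
        self.locked = True
        self._last_signature = face_signature

    def on_face_image(self, face_signature):
        # Face images are re-acquired periodically or on content switches;
        # a detected change unlocks the level so it can be determined again.
        if face_signature != self._last_signature:
            self.locked = False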
The bullet screen message display module 54 filters the bullet screen messages according to the filtering level and displays the filtered bullet screen messages. Filtering the bullet screen messages means that, according to the constraint conditions corresponding to the filtering level, the bullet screen messages corresponding to the video are not presented, partially presented, or fully presented to the specified user, the user being the one for whom the age information was determined.
In an alternative embodiment, step S4 filters the bullet screen messages according to the filtering level of the bullet screen messages and displays the filtered bullet screen messages. It can be understood that, in this embodiment, each filtering level corresponds to a pre-divided set of bullet screen messages; for example, the filtering levels may correspond to "children" and "adults" respectively, and after the filtering level of the bullet screen messages is determined on the basis of these two categories, the corresponding content is presented.
The bullet screen message display module 54 includes:
a text message acquiring unit 541, configured to acquire text information of a bullet screen message;
a filtering field obtaining unit 542, configured to obtain a preset filtering field corresponding to a filtering level;
the display unit 543, configured to filter the barrage information according to the preset filtering field and the text information, and display the filtered barrage information;
the text message acquiring unit 541 may be implemented by calling a storage file corresponding to the bullet screen message, where the storage file includes, but is not limited to, a document in an extensible markup language format. By reading the storage file or files in other formats and similar purposes and analyzing the files, the text information of the bullet screen message can be determined. The display unit 543 performs field matching on the text information on the basis that the filtering field obtaining unit 542 obtains the preset filtering field corresponding to the filtering level, for example, filters out the bullet screen information containing the text information of the preset filtering field or the combination thereof, and/or filters out the bullet screen information not containing the text information of the preset filtering field or the combination thereof, thereby displaying the filtered bullet screen information.
In one example, a single bullet screen message stored in extensible markup language has the following structure:
<d p="text,time,type,size,color,timestamp,pool,uid_crc32,row_id">。
Here, text represents the text content of the bullet screen message; time represents the bullet screen's display time within the video; type is the bullet screen type; size is the font size; color is the decimal RGB color; timestamp is the sending timestamp of the bullet screen; pool is the bullet screen pool; uid_crc32 identifies the sender; and row_id is used to mark the order and identify historical bullet screens. Based on this structure of the bullet screen message, the text information can be obtained from the content corresponding to the text field.
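Under the assumption that the bullet screen text is the first comma-separated value of the p attribute as in the structure above (other danmaku storage formats keep the text in the element body instead), a parsing and field-matching sketch might look as follows; the blocked-word list stands in for the preset filtering fields of the selected level:
import xml.etree.ElementTree as ET

def load_bullet_texts(xml_path):
    root = ET.parse(xml_path).getroot()
    texts = []
    for d in root.iter("d"):
        # rsplit keeps commas inside the text field intact: the 8 metadata
        # fields listed above follow the text and contain no free-form commas.
        fields = d.get("p", "").rsplit(",", 8)
        if fields and fields[0]:
            texts.append(fields[0])
    return texts

def filter_bullets(texts, blocked_fields):
    # Keep only messages containing none of the preset filtering fields.
    return [t for t in texts if not any(word in t for word in blocked_fields)]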
The face-recognition-based bullet screen message display system of this embodiment seamlessly integrates the acquisition, recognition and analysis of face image information into the user's viewing process, so that the user can see bullet screen messages matched to their age without the viewing process being disturbed. Bullet screen message content can thus be displayed by category accurately and scientifically, unsuitable bullet screen messages are prevented from being shown to users of specific age groups, the targeting of users is stronger, and the user experience is better.
Example 3
The present embodiment provides an electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the electronic device to execute the face-recognition-based bullet screen message display method of embodiment 1.
It should be noted that the electronic device in this embodiment may be a separate chip, a chip module, or a network device, or may be a chip or chip module integrated in a network device; each module/unit included in the above apparatus may be a software module/unit, a hardware module/unit, or partly a software module/unit and partly a hardware module/unit. For example, for each device or product applied to or integrated in a chip, the modules/units it comprises may all be implemented by hardware such as circuits, or at least some of them may be implemented by a software program running on a processor integrated within the chip, with the remaining (if any) modules/units implemented by hardware such as circuits. For each device or product applied to or integrated in a chip module, the modules/units it comprises may all be implemented by hardware such as circuits, and different modules/units may be located in the same component of the chip module (for example a chip or a circuit module) or in different components, or at least some of them may be implemented by a software program running on a processor integrated within the chip module, with the remaining (if any) modules/units implemented by hardware such as circuits. For each device or product applied to or integrated in a terminal, the modules/units it comprises may all be implemented by hardware such as circuits, and different modules/units may be located in the same component of the terminal (for example a chip or a circuit module) or in different components, or at least some of them may be implemented by a software program running on a processor integrated in the terminal, with the remaining (if any) modules/units implemented by hardware such as circuits.
As shown in fig. 6, as a preferred embodiment, this embodiment specifically provides an electronic device 30, which includes a processor 31, a memory 32, and a computer program stored in the memory 32 and executable on the processor 31; when the processor 31 executes the program, the face-recognition-based bullet screen message display method of embodiment 1 is implemented. The electronic device 30 shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
The electronic device 30 may be embodied in the form of a general purpose computing device, which may be, for example, a server device. The components of the electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, and a bus 33 connecting the various system components (including the memory 32 and the processor 31).
The bus 33 includes a data bus, an address bus, and a control bus.
The memory 32 may include volatile memory, such as random access memory (RAM) 321 and/or cache memory 322, and may further include read only memory (ROM) 323.
Memory 32 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 31 executes various functional applications and data processing, such as a display method of bullet screen messages based on face recognition in embodiment 1 of the present invention, by running a computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., a keyboard, a pointing device, etc.). Such communication may take place through input/output (I/O) interfaces 35. The electronic device 30 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 36. The network adapter 36 communicates with the other modules of the electronic device 30 via the bus 33. Other hardware and/or software modules may be used in conjunction with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention; conversely, the features and functions of one unit/module described above may be further divided into a plurality of units/modules.
Example 4
The present embodiment provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the display method of the bullet screen message based on face recognition in embodiment 1.
More specific examples (a non-exhaustive list) of the readable storage medium include: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the present invention may also be implemented in the form of a program product, which includes program code that, when the program product runs on a terminal device, causes the terminal device to execute the face-recognition-based bullet screen message display method of embodiment 1.
The program code for carrying out the invention may be written in any combination of one or more programming languages, and may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes or modifications to these embodiments may be made by those skilled in the art without departing from the principle and spirit of this invention, and these changes and modifications are within the scope of this invention.

Claims (12)

1. A bullet screen message display method based on face recognition is characterized by comprising the following steps:
collecting face image information of a user;
acquiring age information of the user according to the face image information;
determining a filtering level matching the age information;
and filtering the bullet screen message according to the filtering grade, and displaying the filtered bullet screen message.
2. The method for displaying barrage messages based on face recognition according to claim 1, wherein the step of filtering the barrage messages according to the filtering level and displaying the filtered barrage messages comprises:
acquiring text information of the bullet screen message;
acquiring a preset filtering field corresponding to the filtering grade;
filtering the bullet screen information according to a preset filtering field and the text information, and displaying the filtered bullet screen information;
or,
and filtering the bullet screen message according to the filtering grade of the bullet screen message, and displaying the filtered bullet screen message.
3. The method for displaying bullet screen message based on human face recognition as claimed in claim 1, wherein said step of obtaining age information of said user comprises the following steps:
receiving confirmation information of the user on the age information;
if the confirmation information represents that the age information is wrong, receiving age information fed back by a user, and storing the age information fed back by the user and the face image information in a preset database in an associated manner;
and if the confirmation information is characterized in that the age information is correct, the age information and the face image information are stored in the preset database in an associated manner.
4. The method for displaying bullet screen message based on face recognition as claimed in claim 3, wherein said step of obtaining age information of said user according to said face image information comprises:
detecting whether the preset database has the face image information or not;
if so, calling age information stored in association with the face image information as the age information of the user;
and if not, processing the face image information according to a preset age estimation algorithm to obtain the age information of the user.
5. The method for displaying bullet screen messages based on face recognition according to claim 1, wherein if the number of the face image information is greater than one, the step of obtaining the age information of the user according to the face image information comprises:
respectively acquiring age information corresponding to each face image information;
acquiring minimum age information from the age information corresponding to each face image information;
the determining a filtering level matching the age information comprises:
determining a filtering level matching the minimum age information;
and/or,
the step of obtaining the age information of the user according to the face image information comprises the following steps:
acquiring registered age information corresponding to a current login account;
judging whether the age information of the user acquired according to the face image information is matched with the registered age information or not, and if not, generating prompt information;
and/or,
said step of determining a filtering level matching said age information further comprises:
setting the filter level to a locked state;
the display method further comprises the following steps:
and if the change of the face image information is detected, setting the filtering level to be in an unlocking state.
6. A display system of bullet screen information based on face recognition is characterized by comprising:
the face image acquisition module is used for acquiring face image information of a user;
the age information acquisition module is used for acquiring the age information of the user according to the face image information;
the filtering grade determining module is used for determining a filtering grade matched with the age information;
and the bullet screen message display module is used for filtering bullet screen messages according to the filtering grade and displaying the filtered bullet screen messages.
7. The system for displaying bullet screen messages based on face recognition as claimed in claim 6, wherein said bullet screen message display module comprises:
the text message acquisition unit is used for acquiring text information of the bullet screen message;
a filtering field obtaining unit, configured to obtain a preset filtering field corresponding to the filtering level;
the display unit is used for filtering the bullet screen information according to the preset filtering field and the text information and displaying the filtered bullet screen information;
or,
and the bullet screen message display module is used for filtering the bullet screen messages according to the filtering grades of the bullet screen messages and displaying the filtered bullet screen messages.
8. The system for displaying bullet screen messages based on face recognition according to claim 6, wherein said display system further comprises a confirmation information receiving module, a confirmation information judging module and an associated storage module:
the confirmation information receiving module is used for receiving the confirmation information of the user on the age information;
the confirmation information judging module is used for receiving the age information fed back by the user and calling the associated storage module when the confirmation information is judged to indicate that the age information is wrong, so that the age information fed back by the user and the face image information are stored in a preset database in an associated manner;
the confirmation information judging module is further used for calling the associated storage module to store the age information and the face image information in the preset database in an associated manner when the confirmation information is judged to indicate that the age information is correct.
9. The system for displaying a bullet screen message based on human face recognition as claimed in claim 8, wherein said age information obtaining module comprises a database detecting unit and an age recognizing unit:
the database detection unit is used for detecting whether the preset database has the face image information or not; if so, calling age information stored in association with the face image information as the age information of the user; if not, calling an age identification unit;
the age identification unit is used for processing the face image information according to a preset age estimation algorithm to obtain the age information of the user.
10. The system for displaying bullet screen messages based on face recognition according to claim 6, wherein if the number of the face image information is greater than one, the age information obtaining module is configured to obtain the age information corresponding to each face image information respectively, and obtain the minimum age information from the age information corresponding to each face image information; the filtering grade determining module is used for determining the filtering grade matched with the minimum age information;
and/or,
the display system also comprises an age verification module which is used for acquiring registered age information corresponding to the current login account, judging whether the age information of the user acquired according to the face image information is matched with the registered age information or not, and if not, generating prompt information;
and/or,
the display system further comprises a filtering grade locking module, which is used for setting the filtering grade to be in a locking state when being called by the filtering grade determining module;
the filtering grade locking module is further used for setting the filtering grade to be in an unlocking state when the change of the face image information is detected.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for displaying a bullet screen message based on face recognition according to any one of claims 1 to 5 when executing the computer program.
12. A computer-readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing the method for displaying a bullet screen message based on face recognition according to any one of claims 1 to 5.
CN202210360228.6A 2022-04-06 2022-04-06 Bullet screen message display method, system, device and medium based on face recognition Pending CN114745592A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210360228.6A CN114745592A (en) 2022-04-06 2022-04-06 Bullet screen message display method, system, device and medium based on face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210360228.6A CN114745592A (en) 2022-04-06 2022-04-06 Bullet screen message display method, system, device and medium based on face recognition

Publications (1)

Publication Number Publication Date
CN114745592A true CN114745592A (en) 2022-07-12

Family

ID=82278347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210360228.6A Pending CN114745592A (en) 2022-04-06 2022-04-06 Bullet screen message display method, system, device and medium based on face recognition

Country Status (1)

Country Link
CN (1) CN114745592A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028167A (en) * 2016-06-27 2016-10-12 乐视控股(北京)有限公司 Barrage display method and device
CN107645686A (en) * 2017-09-22 2018-01-30 广东欧珀移动通信有限公司 Information processing method, device, terminal device and storage medium
CN109451333A (en) * 2018-11-29 2019-03-08 北京奇艺世纪科技有限公司 A kind of barrage display methods, device, terminal and system
CN110557671A (en) * 2019-09-10 2019-12-10 湖南快乐阳光互动娱乐传媒有限公司 Method and system for automatically processing unhealthy content of video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination