WO2017219450A1 - Information processing method and device, and mobile terminal - Google Patents

Information processing method and device, and mobile terminal Download PDF

Info

Publication number
WO2017219450A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
feature information
feature
facial feature
Prior art date
Application number
PCT/CN2016/093112
Other languages
French (fr)
Chinese (zh)
Inventor
郭辉 (GUO Hui)
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2017219450A1 publication Critical patent/WO2017219450A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Definitions

  • This application relates to, but is not limited to, the field of communication technology.
  • Facial recognition technology in the related art mainly uses a camera to collect a person's facial information and applies face detection or recognition technology to identify the user's facial features.
  • However, facial emotion recognition in the related art is only a static recognition process: after face information is collected by the camera, facial features are extracted and compared with preset feature values to obtain a similarity score, from which the user's emotion is judged. This approach often suffers from low accuracy and cannot capture changes in the user's facial features in real time.
  • This document provides an information processing method, device and mobile terminal which, in the course of recognition, dynamically analyze changes in the user's facial feature parts so that the user's emotional state can be determined more accurately.
  • An information processing method includes:
  • when it is detected that a user performs an operation on a mobile terminal, starting a camera to dynamically collect user image information according to a preset period;
  • acquiring feature information of the user's facial feature parts according to the user image information;
  • performing continuous comparative analysis on the feature information, and matching it with pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user.
  • Optionally, acquiring the feature information of the user's facial feature parts according to the user image information includes: determining a face region in the user image information through face detection; collecting facial feature information in the face region; and distinguishing the user's facial feature parts according to the facial feature information and extracting the feature information corresponding to the facial feature parts.
  • Optionally, distinguishing the user's facial feature parts according to the facial feature information and extracting the feature information corresponding to the facial feature parts includes: distinguishing the user's mouth region according to the facial feature information; acquiring feature point positions of the mouth region; and obtaining feature information of the user's mouth according to the feature point positions.
  • Optionally, performing the continuous comparative analysis on the feature information and matching it with the pre-stored template feature information of the corresponding facial feature parts to determine the emotional state of the user includes: matching the feature information with the pre-stored template feature information to obtain a matching result; comparing the feature information with the feature information of the corresponding facial feature parts acquired in the previous cycle to obtain an analysis result; and determining the emotional state of the user according to the matching result and the analysis result.
  • Optionally, after it is detected that the user performs an operation on the mobile terminal, the method further includes: collecting user voice information; and determining the user's voice emotion according to the user voice information and a preset emotional sound template.
  • Optionally, the method further includes: verifying the emotional state of the user in combination with the voice emotion.
  • Optionally, the method further includes: saving the feature information of the determined emotional state of the user in the template feature information corresponding to the facial feature part.
  • Optionally, the method further includes: obtaining a correspondence between preset emotional states and session templates; selecting, according to the correspondence, the session template corresponding to the determined emotional state of the user; and initiating a session with the user according to the session template.
  • An information processing apparatus comprising:
  • a first processing module, configured to: when it is detected that a user performs an operation on a mobile terminal, start a camera to dynamically collect user image information according to a preset period;
  • a first obtaining module, configured to: acquire feature information of the user's facial feature parts according to the user image information collected by the first processing module;
  • a second processing module, configured to: perform continuous comparative analysis on the feature information acquired by the first obtaining module, and match it with pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user.
  • the first obtaining module includes:
  • a determining unit configured to: determine, by the face detection, a face area in the user image information collected by the first processing module;
  • the collecting unit is configured to: collect the facial feature information in the facial region determined by the determining unit;
  • the extracting unit is configured to: according to the facial feature information collected by the collecting unit, distinguish the facial feature part of the user, and extract feature information corresponding to the facial feature part.
  • Optionally, the extracting unit includes:
  • a distinguishing subunit, configured to: distinguish the user's mouth region according to the facial feature information collected by the collecting unit;
  • an acquiring subunit, configured to: acquire the feature point positions of the mouth region obtained by the distinguishing subunit;
  • a processing subunit, configured to: obtain feature information of the user's mouth according to the feature point positions acquired by the acquiring subunit.
  • Optionally, the second processing module includes:
  • a matching unit, configured to: match the feature information acquired by the first obtaining module with the pre-stored template feature information of the corresponding facial feature parts, to obtain a matching result;
  • an analyzing unit, configured to: compare the feature information acquired by the first obtaining module with the feature information of the corresponding facial feature parts acquired in the previous cycle, to obtain an analysis result;
  • a processing unit, configured to: determine the emotional state of the user according to the matching result obtained by the matching unit and the analysis result obtained by the analyzing unit.
  • the information processing apparatus further includes:
  • the acquisition module is configured to: collect user voice information
  • the third processing module is configured to: determine the voice mood of the user according to the user voice information collected by the collection module and a preset emotional sound template.
  • the information processing apparatus further includes:
  • the verification module is configured to: verify the emotional state of the user in conjunction with the voice emotion determined by the third processing module.
  • the information processing apparatus further includes:
  • the fourth processing module is configured to: save the feature information of the determined emotional state of the user in the template feature information corresponding to the facial feature part.
  • the information processing apparatus further includes:
  • the second obtaining module is configured to: obtain a correspondence between the preset emotional state and the session template;
  • a selection module configured to: select, according to the correspondence relationship acquired by the second obtaining module, a session template corresponding to the determined emotional state of the user;
  • the session initiation module is configured to initiate a session with the user according to the session template selected by the selection module.
  • a mobile terminal comprising the information processing apparatus according to any of the above.
  • According to the information processing method and device and the mobile terminal provided by the embodiments of the present invention, when it is detected that the user performs an operation on the mobile terminal, the camera is automatically started to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired according to the user image information. Since the camera collects dynamically, user image information at different time points can be collected, so the acquired feature information of the user's facial feature parts is also feature information at different time points. Subsequently, this feature information can be continuously compared and analyzed, and matched with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user. The embodiments of the present invention not only perform static matching of the acquired feature information of the facial feature parts against the pre-stored template feature information, but also process this feature information dynamically, so that the user's emotional state is analyzed more accurately through the changes in the feature information of the facial feature parts, giving the user a better experience.
  • FIG. 1 is a flowchart of an information processing method according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of another information processing method according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of a method for performing step 123 in the information processing method provided by the embodiment shown in FIG. 2;
  • FIG. 4 is a flowchart of still another information processing method according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of yet another information processing method according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of still another information processing apparatus according to an embodiment of the present invention.
  • In view of the problem that the way of recognizing user emotions through facial recognition in the related art is only a static recognition process, cannot capture changes in the user's facial features in real time, and has low accuracy, the embodiments of the present invention provide an information processing method which, in the course of recognition, dynamically analyzes changes in the user's facial feature parts to determine the user's emotional state more accurately.
  • As shown in FIG. 1, a flowchart of an information processing method according to an embodiment of the present invention, the information processing method provided in this embodiment may include the following steps, namely steps 110 to 130:
  • Step 110: when it is detected that the user performs an operation on the mobile terminal, start the camera to dynamically collect user image information according to a preset period;
  • Step 120: acquire feature information of the user's facial feature parts according to the user image information;
  • Step 130: perform continuous comparative analysis on the feature information, and match it with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user.
  • The information processing method provided by the embodiment of the present invention is applied to a mobile terminal, in which feature information of facial feature parts (for example, the eyes and the mouth) under different emotions (for example, laughing, crying and anger) can be stored in advance as template feature information. Taking the mouth as an example, the pre-stored feature information may include the mouth corner curvature (for example, upturned or downturned) and the height difference between the upper and lower lip feature points, obtained by constructing a coordinate system from the left and right mouth corner feature points and the upper and lower lip feature points; for each emotion, the mouth corner curvature range and the height difference between the upper and lower lip feature points may have certain preset thresholds. This feature information is classified in advance into template feature information for the different emotions and stored in a database for subsequent matching, roughly as in the sketch below.
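  • As a rough illustration only, such a template database could be organized as follows; the emotion labels, field names and threshold values here are illustrative assumptions and are not taken from this application.

```python
from dataclasses import dataclass

@dataclass
class MouthTemplate:
    """Template feature information for one emotional state (illustrative thresholds)."""
    emotion: str
    corner_curvature_range: tuple  # (min, max) signed curvature; positive = corners raised
    lip_height_diff_range: tuple   # (min, max) normalized upper/lower lip height difference

# Hypothetical template database keyed by emotional state; real thresholds would be
# calibrated in advance and stored as the pre-stored template feature information.
TEMPLATES = [
    MouthTemplate("laughing", corner_curvature_range=(0.15, 1.0),   lip_height_diff_range=(0.10, 1.0)),
    MouthTemplate("crying",   corner_curvature_range=(-1.0, -0.10), lip_height_diff_range=(-1.0, 0.05)),
    MouthTemplate("angry",    corner_curvature_range=(-0.10, 0.05), lip_height_diff_range=(-0.05, 0.05)),
]
```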
  • According to the information processing method provided by the embodiment of the present invention, when it is detected that the user performs an operation on the mobile terminal, that is, when the user is using the mobile terminal, the mobile terminal automatically starts the camera (for example, a front camera is usually used) to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired according to the user image information. Since the camera collects dynamically, user image information at different time points can be collected, so the acquired feature information of the user's facial feature parts is also feature information at different time points. Subsequently, this feature information can be continuously compared and analyzed, and matched with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user.
  • Since a person's facial expression goes through a process of formation and change, the state captured at a single moment alone cannot accurately reflect the user's current emotional characteristics. The information processing method provided by the embodiment of the present invention therefore not only performs static matching of the acquired feature information of the facial feature parts against the pre-stored template feature information, but also processes this feature information dynamically (that is, continuous comparative analysis), so that the user's emotional state is analyzed more accurately through the changes in the feature information of the facial feature parts, bringing the user a better experience.
  • On the one hand, the preset period in the embodiment of the present invention may be, for example, less than or equal to 1 second (s); on the other hand, to prevent collected user image information from going unprocessed, each piece of collected user image information may be added to a collection queue, so that the collected user image information can subsequently be taken from the queue one by one for processing (see the sketch below).
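  • A minimal sketch of this periodic collection into a queue, assuming a Python environment with OpenCV standing in for the mobile terminal's camera API (device index 0 is assumed to be the front camera):

```python
import queue
import threading
import time

import cv2  # OpenCV is used here only as a stand-in for the terminal's camera API

frame_queue = queue.Queue()

def capture_frames(period_s=0.5, stop_event=None):
    """Grab a frame from the camera every period_s seconds and enqueue it with a timestamp."""
    cam = cv2.VideoCapture(0)  # assumed front camera
    try:
        while stop_event is None or not stop_event.is_set():
            ok, frame = cam.read()
            if ok:
                frame_queue.put((time.time(), frame))
            time.sleep(period_s)
    finally:
        cam.release()

# Run collection in the background so frames can later be taken from the queue one by one.
stop = threading.Event()
threading.Thread(target=capture_frames, kwargs={"stop_event": stop}, daemon=True).start()
```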
  • The application scenarios in which the user performs an operation on the mobile terminal include, but are not limited to, the following: the user lights up the screen with the power button, unlocks the phone, or the screen is detected as being touched by the user. While the mobile terminal is not being operated, the camera does not need to be started, so as to avoid the power consumption caused by keeping the camera on in the background.
  • As shown in FIG. 2, a flowchart of another information processing method according to an embodiment of the present invention, step 120 in this embodiment may include the following steps, namely steps 121 to 123:
  • Step 121: determine the face region in the user image information through face detection.
  • The user image information collected after the camera is started may include not only a facial image but also images of other body parts and/or background images, and the user image information collected at a certain time point may not include a facial image at all. Therefore, to ensure the validity of the finally obtained feature information, in this step the user image information is taken from the collection queue one by one and the face region is determined through face detection, so as to avoid interference from other, useless regions of the image (see the sketch below).
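  • One possible way to realize this step, sketched with OpenCV's stock Haar-cascade detector as an assumed stand-in for whatever face detector the terminal actually uses:

```python
import cv2

# Stock OpenCV Haar cascade, assumed here in place of the terminal's own face detector.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_region(frame):
    """Return the largest detected face region as (x, y, w, h), or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # e.g. the user looked away during this sampling period
    return max(faces, key=lambda f: f[2] * f[3])
```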
  • Step 122: collect facial feature information in the face region.
  • The collection of facial feature information is performed in the face region determined in step 121, where the facial feature information may include information such as the position and region of each facial organ. The facial feature information can be obtained through face recognition based on the shape description of the facial organs and the distance characteristics between them, or through methods based on algebraic features or statistical learning, which will not be described in detail here.
  • Step 123: distinguish the user's facial feature parts according to the facial feature information, and extract the feature information corresponding to the facial feature parts.
  • In this step, the user's facial feature parts, such as the eyes and the mouth, can be distinguished according to the facial feature information, and the feature information of these facial feature parts can be extracted correspondingly.
  • As shown in FIG. 3, step 123 may include the following steps, namely steps 1231 to 1233:
  • Step 1231: distinguish the user's mouth region according to the facial feature information.
  • As described above, the facial feature information includes information such as the position and region of each facial organ; therefore, in this step, the user's mouth region can be distinguished based on the facial feature information.
  • Step 1232: acquire the feature point positions of the mouth region.
  • The feature points of the mouth region, such as the left and right mouth corners and the vertices of the upper and lower lips, are set in advance; therefore, in this step, the corresponding feature points can be determined in the mouth region and their positions obtained.
  • In addition, a preset number of sample positions can be selected between the feature point positions for auxiliary analysis.
  • Step 1233: obtain the feature information of the user's mouth according to the feature point positions.
  • The feature information of the user's mouth is geometric feature information such as the mouth corner curvature and the height difference between the upper and lower lip feature points.
  • For example, a coordinate system can be constructed from the left and right mouth corner positions, the upper and lower lip vertex positions and the sample positions, and the mouth corner curvature and the height difference between the upper and lower lip feature points can then be calculated from the coordinates of these positions, for instance as in the sketch below.
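  • A minimal sketch of such a computation; how the two quantities are defined and normalized here (relative to the mouth-corner midline and the mouth width) is an assumption made for illustration, not the definition used by this application.

```python
def mouth_features(left_corner, right_corner, upper_lip, lower_lip):
    """Compute illustrative mouth features from four (x, y) feature points.

    Image coordinates are assumed (y grows downward); the line joining the two
    mouth corners serves as the reference axis of the local coordinate system.
    """
    corner_mid_y = (left_corner[1] + right_corner[1]) / 2.0
    mouth_width = max(abs(right_corner[0] - left_corner[0]), 1e-6)

    # "Mouth corner curvature": how far the corners sit above the lip midline,
    # normalized by mouth width; positive means the corners are raised (upturned).
    lip_mid_y = (upper_lip[1] + lower_lip[1]) / 2.0
    corner_curvature = (lip_mid_y - corner_mid_y) / mouth_width

    # Height difference between the upper and lower lip feature points, normalized
    # by mouth width (roughly, how open the mouth is).
    lip_height_diff = (lower_lip[1] - upper_lip[1]) / mouth_width

    return corner_curvature, lip_height_diff
```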
  • The embodiment shown in FIG. 3 only takes the mouth as an example of a facial feature part whose corresponding feature information is extracted. Since human emotion is also revealed by the eyes, the facial feature part may likewise be the eye region, and the feature information corresponding to the eye region may, for example, be the size of the pupils. In an exemplary embodiment, the mouth feature information and the eye feature information can be processed in combination to obtain a more accurate analysis result. Besides the mouth region and the eye region, other parts can also be combined for processing, which will not be enumerated here.
  • Considering that a person's facial expression change is a gradual process of formation, in step 130, in addition to matching against the stored template feature information, the feature information corresponding to different time points can be continuously compared and analyzed.
  • As shown in FIG. 4, a flowchart of yet another information processing method according to an embodiment of the present invention, step 130 in this embodiment may include the following steps, namely steps 131 to 133; the embodiment shown in FIG. 4 is illustrated on the basis of the embodiment shown in FIG. 1.
  • Step 131: match the feature information with the pre-stored template feature information of the corresponding facial feature part to obtain a matching result.
  • The feature information of the facial feature parts acquired in step 120 is taken one by one and matched against the template feature information of the same facial feature part pre-stored in the database, and the corresponding matching result is obtained. The matching result reflects the degree to which the feature information of the facial feature part matches each piece of template feature information, where each emotional state has a corresponding threshold range.
  • For example, in the template feature information of the "laughing" emotional state, the mouth corner curvature is upturned, and the height differences between the mouth corners and the upper and lower lip feature points are positive and within preset threshold ranges. If the current user's mouth corner curvature and height differences between the mouth corners and the upper and lower lip feature points fall within the threshold ranges corresponding to this emotional state template, the current user's mouth feature information has the highest matching degree with that emotional state (see the sketch below).
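  • A minimal sketch of such threshold-range matching, reusing the illustrative MouthTemplate records sketched earlier; the scoring rule is an assumed stand-in for the "matching degree" described above.

```python
def match_emotion(corner_curvature, lip_height_diff, templates):
    """Pick the template whose threshold ranges best contain the current mouth features.

    templates: a sequence of records with emotion, corner_curvature_range and
    lip_height_diff_range fields (e.g. the MouthTemplate sketch above).
    """
    def score(value, lo_hi):
        lo, hi = lo_hi
        if not (lo <= value <= hi):
            return 0.0  # outside this template's threshold range
        centre = (lo + hi) / 2.0
        half = max((hi - lo) / 2.0, 1e-6)
        return 1.0 - abs(value - centre) / half  # closer to the centre scores higher

    best, best_score = None, 0.0
    for t in templates:
        s = score(corner_curvature, t.corner_curvature_range) + \
            score(lip_height_diff, t.lip_height_diff_range)
        if s > best_score:
            best, best_score = t, s
    return best  # None if no template's ranges contain the features
```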
  • Step 132: compare the feature information with the feature information of the corresponding facial feature part acquired in the previous cycle, to obtain an analysis result.
  • In this step, the feature information of the facial feature parts acquired in step 120 is compared, in chronological order, with the feature information of the corresponding facial feature parts acquired in the previous cycle, and the corresponding analysis result is obtained, so that the change in the feature information can be understood.
  • Step 133: determine the emotional state of the user according to the matching result and the analysis result.
  • By combining the matching result and the analysis result, the degree to which the user's facial feature parts match each piece of template feature information, as well as the change in the feature information, can be known, and the user's emotional state can thus be determined.
  • For example, suppose the matching result shows that the current mouth feature information has the highest matching degree with the "laughing" emotional state. During a laugh, the mouth corner curvature keeps increasing at first and then gradually decreases again. Therefore, by analyzing the trend of the mouth corner curvature, the change in the user's emotional state can be determined: if the curvature is found to be unchanged or still increasing, the user is laughing; if it is found to be decreasing, the "laughing" emotional state is coming to an end. In this process the height difference between the upper and lower lip feature points also changes constantly, so the analysis can additionally take the change of this height difference into account to complete the comparative analysis more accurately (see the sketch below).
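  • A minimal sketch of this cycle-to-cycle comparison; the epsilon threshold and the trend labels are illustrative assumptions.

```python
def analyse_trend(history, eps=0.01):
    """Compare the current cycle's mouth corner curvature with the previous cycle's.

    history: list of (timestamp, corner_curvature) samples in chronological order.
    Returns 'rising', 'falling' or 'steady' as an illustrative analysis result.
    """
    if len(history) < 2:
        return "steady"  # nothing to compare against yet
    prev, curr = history[-2][1], history[-1][1]
    if curr - prev > eps:
        return "rising"   # e.g. a laugh still building up
    if prev - curr > eps:
        return "falling"  # e.g. the laugh tailing off
    return "steady"
```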
  • In consideration of the mobile terminal's adaptability to its user, the information processing method of the embodiment of the present invention may further include: saving the feature information of the determined emotional state of the user in the template feature information corresponding to the facial feature part. In this way the mobile terminal performs self-learning: after the user's emotional state has been determined, the feature information of the facial feature part is stored and added to the template feature information of that facial feature part in the database, so that personalized data is built up for the user, higher matching degrees are obtained in subsequent information processing, and recognition becomes faster and more accurate.
  • Optionally, after it is detected that the user performs an operation on the mobile terminal, the method may further include the following steps: collecting user voice information; and determining the user's voice emotion according to the user voice information and a preset emotional sound template.
  • That is, the embodiment of the present invention can also use voice for auxiliary analysis. After it is detected that the user performs an operation on the mobile terminal, the user's voice information can be collected by turning on the call microphone, and the collected sound information is then matched against the emotional sound templates for different emotional states pre-stored on the mobile terminal (such as laughter, crying and other sounds carrying an emotional state), so as to determine the user's voice emotion.
  • Optionally, the method further includes: verifying the emotional state of the user in combination with the voice emotion.
  • That is, the emotional state determined from the feature information of the current user's facial feature parts is verified against the recognized voice emotion, so that the user's real emotional state is analyzed comprehensively (see the sketch below).
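  • A minimal sketch of this cross-check; the voice-emotion classification itself is not implemented here, and the returned status labels are illustrative assumptions.

```python
def verify_with_voice(facial_emotion, voice_emotion):
    """Cross-check the facially determined emotional state against the voice emotion.

    Both arguments are emotion labels; voice_emotion would come from matching the
    collected audio against pre-stored emotional sound templates (not shown here).
    """
    if voice_emotion is None:
        return facial_emotion, "unverified"  # no usable audio in this cycle
    if voice_emotion == facial_emotion:
        return facial_emotion, "confirmed"   # the two modalities agree
    return facial_emotion, "conflicting"     # disagreement; analyze further
```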
  • Optionally, the method may further include: obtaining a correspondence between preset emotional states and session templates; selecting, according to the correspondence, the session template corresponding to the determined emotional state of the user; and initiating a session with the user according to the session template.
  • That is, the mobile terminal may select an appropriate session template according to the user's emotional state and automatically start the voice function to initiate a dialogue with the user, so that the user has a better experience and the mobile terminal is not only a physical tool but also a "friend" to chat with (see the sketch below).
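  • A minimal sketch of such a correspondence and session start-up; the emotion labels, the template wording and the speak callback are all illustrative assumptions rather than content of this application.

```python
# Hypothetical correspondence between emotional states and session templates.
SESSION_TEMPLATES = {
    "laughing": "You look happy today. Want to share the good news?",
    "crying":   "You seem down. Would you like to talk about it?",
    "angry":    "Take a deep breath. Is there anything I can do to help?",
}

def start_session(emotional_state, speak):
    """Select the session template for the determined emotional state and start a dialogue.

    speak: stand-in for the terminal's text-to-speech / voice-assistant entry point.
    """
    template = SESSION_TEMPLATES.get(emotional_state)
    if template is not None:
        speak(template)

# Example: start_session("laughing", speak=print)
```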
  • As shown in FIG. 5, a flowchart of still another information processing method according to an embodiment of the present invention, this embodiment provides an application example on a mobile terminal; the method may include the following steps, namely steps 501 to 511:
  • Step 501: in the normal standby or sleep state of the mobile terminal, detect whether the user performs an operation on the mobile terminal; if so, perform step 502, and if not, continue detecting;
  • Step 502: automatically start the front camera and the microphone in the background, collect user image information and sound information respectively, and add the collected information to the collection information queue, where the user image information is collected according to a preset period, for example a preset period of 0.5 s;
  • Step 503: take the current user image information from the collection information queue, perform face detection to determine the face region, and obtain the user's facial feature information;
  • Step 504: extract the feature information of the mouth region from the facial feature information through face recognition;
  • Step 505: match the feature information of the mouth region one by one against the template feature information of the mouth region pre-stored on the mobile terminal;
  • Step 506: find the emotional state with the highest matching degree according to the matching result;
  • Step 507: compare the feature information of the current mouth region with the feature information of the mouth region in the user image information of the previous cycle; if the emotional states corresponding to the two are different, perform step 511; if the emotional states corresponding to the two are the same, perform step 508;
  • Step 508: initially determine the user's emotional state according to the current mouth corner curvature and the height difference between the upper and lower lip feature points, and their changes relative to the previous cycle;
  • Step 509: perform auxiliary analysis according to the collected sound information to determine the user's current emotional state;
  • Step 510: start the voice function and initiate an adaptive session according to the user's current emotional state;
  • Step 511: store the feature information of the determined emotional state as new template feature information in the template feature information of the corresponding facial feature part.
  • According to the information processing method provided by the embodiment of the present invention, when it is detected that the user performs an operation on the mobile terminal, that is, when the user is using the mobile terminal, the mobile terminal automatically starts the camera (for example, a front camera) to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired according to the user image information. Since the camera collects dynamically, user image information at different time points can be collected, so the acquired feature information of the user's facial feature parts is also feature information at different time points. Subsequently, this feature information can be continuously compared and analyzed, and matched with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user. The embodiment of the present invention not only performs static matching of the acquired feature information of the facial feature parts against the pre-stored template feature information, but also processes this feature information dynamically, so that the user's emotional state is analyzed more accurately through the changes in the feature information of the facial feature parts, giving the user a better experience.
  • As shown in FIG. 6, a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention, the information processing apparatus provided in this embodiment may include: a first processing module 10, a first obtaining module 20 and a second processing module 30.
  • the first processing module 10 is configured to: when detecting that the user performs an operation on the mobile terminal, start the camera to dynamically collect user image information according to a preset period;
  • the first obtaining module 20 is configured to: acquire feature information of a facial feature part of the user according to the user image information collected by the first processing module 10;
  • The second processing module 30 is configured to: perform continuous comparative analysis on the feature information acquired by the first obtaining module 20, and match it with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user.
  • As shown in FIG. 7, a schematic structural diagram of another information processing apparatus according to an embodiment of the present invention, the first obtaining module 20 in this embodiment may include: a determining unit 21, a collecting unit 22 and an extracting unit 23.
  • the determining unit 21 is configured to: determine, by the face detection, the face area in the user image information collected by the first processing module 10;
  • the collecting unit 22 is configured to: collect the facial feature information in the face region determined by the determining unit 21;
  • the extracting unit 23 is configured to: according to the facial feature information collected by the collecting unit 22, distinguish the facial feature part of the user, and extract feature information corresponding to the facial feature part.
  • The foregoing extracting unit 23 may include:
  • a distinguishing subunit, configured to: distinguish the user's mouth region according to the facial feature information collected by the collecting unit 22;
  • an acquiring subunit, configured to: acquire the feature point positions of the mouth region obtained by the distinguishing subunit;
  • a processing subunit, configured to: obtain the feature information of the user's mouth according to the feature point positions acquired by the acquiring subunit.
  • As shown in FIG. 8, a schematic structural diagram of still another information processing apparatus according to an embodiment of the present invention, the second processing module 30 in this embodiment may include: a matching unit 31, an analyzing unit 32 and a processing unit 33; the embodiment shown in FIG. 8 is illustrated on the basis of the apparatus in the foregoing embodiment.
  • the matching unit 31 is configured to: match the feature information acquired by the first acquiring module 20 with the template feature information of the pre-stored corresponding facial feature part to obtain a matching result;
  • the analyzing unit 32 is configured to compare and analyze the feature information acquired by the first acquiring module 20 and the feature information of the corresponding facial feature part acquired in the previous cycle to obtain an analysis result;
  • the processing unit 33 is configured to determine the emotional state of the user according to the matching result obtained by the matching unit 31 and the analysis result obtained by the analyzing unit 32.
  • the information processing apparatus may further include:
  • the acquisition module is configured to: collect user voice information
  • the third processing module is configured to: determine a user's voice emotion according to the user voice information collected by the collection module and the preset emotional sound template.
  • the information processing apparatus may further include:
  • the verification module is configured to: verify the emotional state of the user in conjunction with the voice emotion determined by the third processing module.
  • the information processing apparatus may further include:
  • the fourth processing module is configured to: save the feature information of the determined emotional state of the user in the template feature information corresponding to the facial feature part.
  • the information processing apparatus may further include:
  • the second obtaining module is configured to: obtain a correspondence between the preset emotional state and the session template;
  • the selection module is configured to: select a session template corresponding to the determined emotional state of the user according to the correspondence acquired by the second obtaining module;
  • the session initiation module is configured to initiate a session with the user according to the session template selected by the selection module.
  • According to the information processing apparatus provided by the embodiment of the present invention, when it is detected that the user performs an operation on the mobile terminal, the first processing module automatically starts the camera (for example, a front camera) to collect user image information according to a preset period; then, the first obtaining module acquires feature information of the user's facial feature parts according to the user image information. Since the camera collects dynamically, user image information at different time points can be collected, so the acquired feature information of the user's facial feature parts is also feature information at different time points. Subsequently, the second processing module can continuously compare and analyze this feature information and match it with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user. The embodiment of the present invention not only performs static matching of the acquired feature information of the facial feature parts against the pre-stored template feature information, but also processes this feature information dynamically, so that the user's emotional state is analyzed more accurately through the changes in the feature information of the facial feature parts, bringing the user a better experience.
  • The information processing apparatus provided by the embodiment of the present invention is the apparatus to which the above information processing method is applied; the implementations of the information processing method embodiments are all applicable to this apparatus, and the same technical effects can be achieved.
  • the embodiment of the present invention further provides a mobile terminal, comprising: the information processing apparatus provided by any of the foregoing embodiments.
  • When the mobile terminal provided by this embodiment detects that the user performs an operation on it, that is, when the user is using the mobile terminal, it automatically starts the camera (for example, a front camera) to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired according to the user image information. Since the camera collects dynamically, user image information at different time points can be collected, so the acquired feature information of the user's facial feature parts is also feature information at different time points. Subsequently, this feature information can be continuously compared and analyzed, and matched with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user. The mobile terminal not only performs static matching of the acquired feature information against the pre-stored template feature information, but also processes the feature information dynamically, so that the user's emotional state is analyzed more accurately through the changes in the feature information of the facial feature parts.
  • The mobile terminal provided by the embodiment of the present invention is the mobile terminal to which the above information processing method and apparatus are applied; the implementations of the foregoing information processing method are all applicable to this mobile terminal, and the same technical effects can be achieved.
  • the mobile terminals described in the specification herein include, but are not limited to, a smartphone, a tablet, etc., and many of the functional components described are referred to as modules to more particularly emphasize the independence of their implementation.
  • the modules may be implemented in software for execution by various types of processors.
  • Indeed, an identified module of executable code may comprise one or more physical or logical blocks of computer instructions, which may, for example, be constructed as an object, procedure or function. Nonetheless, the executable code of an identified module need not be physically located together, but may comprise different instructions stored at different locations which, when logically combined, constitute the module and achieve the stated purpose of the module.
  • the above executable code modules may be a single instruction or a plurality of instructions, and may even be distributed over a plurality of different code segments, distributed among different programs, and distributed across multiple memory devices.
  • operational data may be identified within the modules and may be implemented in any suitable form and organized within any suitable type of data structure. The above operational data may be collected as a single data set or may be distributed at different locations, for example, on different storage devices, and at least in part may exist as an electronic signal only on the system or network.
  • The hardware circuits referred to here include conventional Very Large Scale Integration (VLSI) circuits or gate arrays, and semiconductors such as logic chips and transistors, or other discrete components.
  • the above modules can also be implemented with programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, and the like.
  • All or some of the steps of the above embodiments may also be implemented using integrated circuits; these steps may be made into individual integrated circuit modules separately, or several of the modules or steps may be made into a single integrated circuit module for implementation.
  • the devices/function modules/functional units in the above embodiments may be implemented by a general-purpose computing device, which may be centralized on a single computing device or distributed over a network of multiple computing devices.
  • When the devices/functional modules/functional units in the above embodiments are implemented in the form of software functional modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium.
  • the above mentioned computer readable storage medium may be a read only memory, a magnetic disk or an optical disk or the like.
  • With the above technical solution, when it is detected that the user performs an operation on the mobile terminal, the camera is automatically started to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired according to the user image information. Since the camera collects dynamically, user image information at different time points can be collected, so the acquired feature information of the user's facial feature parts is also feature information at different time points. Subsequently, this feature information can be continuously compared and analyzed, and matched with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user. The feature information of the facial feature parts is thus processed both statically and dynamically, so that the user's emotional state is analyzed more accurately through the changes in the feature information of the facial feature parts, giving the user a better experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)
  • Image Analysis (AREA)

Abstract

Provided are an information processing method and device, and a mobile terminal. The method comprises: upon detection of a mobile terminal being operated, activating a camera to dynamically collect, according to a preset cycle, image information of a user; acquiring, according to the image information of the user, feature information of facial features of the user; and performing continuous comparative analyses on the feature information, and performing matching on the basis of pre-stored template feature information corresponding to the facial features to determine an emotion state of the user.

Description

Information processing method, device and mobile terminal

Technical Field

This application relates to, but is not limited to, the field of communication technology.

Background Art

Facial recognition technology in the related art mainly uses a camera to collect a person's facial information and applies face detection or recognition technology to identify the user's facial features.

However, facial emotion recognition in the related art is only a static recognition process: after face information is collected by the camera, facial features are extracted and compared with preset feature values to obtain a similarity score, from which the user's emotion is judged. This approach often suffers from low accuracy and cannot capture changes in the user's facial features in real time.

Summary of the Invention

The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.

This document provides an information processing method, device and mobile terminal which, in the course of recognition, dynamically analyze changes in the user's facial feature parts so that the user's emotional state can be determined more accurately.
An information processing method includes:

when it is detected that a user performs an operation on a mobile terminal, starting a camera to dynamically collect user image information according to a preset period;

acquiring feature information of the user's facial feature parts according to the user image information;

performing continuous comparative analysis on the feature information, and matching it with pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user.

Optionally, acquiring the feature information of the user's facial feature parts according to the user image information includes:

determining a face region in the user image information through face detection;

collecting facial feature information in the face region;

distinguishing the user's facial feature parts according to the facial feature information, and extracting the feature information corresponding to the facial feature parts.

Optionally, distinguishing the user's facial feature parts according to the facial feature information and extracting the feature information corresponding to the facial feature parts includes:

distinguishing the user's mouth region according to the facial feature information;

acquiring feature point positions of the mouth region;

obtaining feature information of the user's mouth according to the feature point positions.

Optionally, performing continuous comparative analysis on the feature information and matching it with the pre-stored template feature information of the corresponding facial feature parts to determine the emotional state of the user includes:

matching the feature information with the pre-stored template feature information of the corresponding facial feature parts to obtain a matching result;

comparing the feature information with the feature information of the corresponding facial feature parts acquired in the previous cycle to obtain an analysis result;

determining the emotional state of the user according to the matching result and the analysis result.

Optionally, after it is detected that the user performs an operation on the mobile terminal, the method further includes:

collecting user voice information;

determining the user's voice emotion according to the user voice information and a preset emotional sound template.

Optionally, the method further includes:

verifying the emotional state of the user in combination with the voice emotion.

Optionally, the method further includes:

saving the feature information of the determined emotional state of the user in the template feature information corresponding to the facial feature part.

Optionally, the method further includes:

obtaining a correspondence between preset emotional states and session templates;

selecting, according to the correspondence, the session template corresponding to the determined emotional state of the user;

initiating a session with the user according to the session template.
An information processing apparatus includes:

a first processing module, configured to: when it is detected that a user performs an operation on a mobile terminal, start a camera to dynamically collect user image information according to a preset period;

a first obtaining module, configured to: acquire feature information of the user's facial feature parts according to the user image information collected by the first processing module;

a second processing module, configured to: perform continuous comparative analysis on the feature information acquired by the first obtaining module, and match it with pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user.

Optionally, the first obtaining module includes:

a determining unit, configured to: determine a face region in the user image information collected by the first processing module through face detection;

a collecting unit, configured to: collect facial feature information in the face region determined by the determining unit;

an extracting unit, configured to: distinguish the user's facial feature parts according to the facial feature information collected by the collecting unit, and extract the feature information corresponding to the facial feature parts.

Optionally, the extracting unit includes:

a distinguishing subunit, configured to: distinguish the user's mouth region according to the facial feature information collected by the collecting unit;

an acquiring subunit, configured to: acquire the feature point positions of the mouth region obtained by the distinguishing subunit;

a processing subunit, configured to: obtain feature information of the user's mouth according to the feature point positions acquired by the acquiring subunit.

Optionally, the second processing module includes:

a matching unit, configured to: match the feature information acquired by the first obtaining module with the pre-stored template feature information of the corresponding facial feature parts, to obtain a matching result;

an analyzing unit, configured to: compare the feature information acquired by the first obtaining module with the feature information of the corresponding facial feature parts acquired in the previous cycle, to obtain an analysis result;

a processing unit, configured to: determine the emotional state of the user according to the matching result obtained by the matching unit and the analysis result obtained by the analyzing unit.

Optionally, the information processing apparatus further includes:

a collecting module, configured to: collect user voice information;

a third processing module, configured to: determine the user's voice emotion according to the user voice information collected by the collecting module and a preset emotional sound template.

Optionally, the information processing apparatus further includes:

a verifying module, configured to: verify the emotional state of the user in combination with the voice emotion determined by the third processing module.

Optionally, the information processing apparatus further includes:

a fourth processing module, configured to: save the feature information of the determined emotional state of the user in the template feature information corresponding to the facial feature part.

Optionally, the information processing apparatus further includes:

a second obtaining module, configured to: obtain a correspondence between preset emotional states and session templates;

a selecting module, configured to: select, according to the correspondence obtained by the second obtaining module, the session template corresponding to the determined emotional state of the user;

a session initiating module, configured to: initiate a session with the user according to the session template selected by the selecting module.

A mobile terminal includes the information processing apparatus according to any one of the above.
According to the information processing method, device and mobile terminal provided by the embodiments of the present invention, when it is detected that the user performs an operation on the mobile terminal, the camera is automatically started to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired according to the user image information. Since the camera collects dynamically, user image information at different time points can be collected, so the acquired feature information of the user's facial feature parts is also feature information at different time points. Subsequently, this feature information can be continuously compared and analyzed, and matched with the pre-stored template feature information of the corresponding facial feature parts, to determine the emotional state of the user. The embodiments of the present invention not only perform static matching of the acquired feature information of the facial feature parts against the pre-stored template feature information, but also process this feature information dynamically, so that the user's emotional state is analyzed more accurately through the changes in the feature information of the facial feature parts, giving the user a better experience.

Other aspects will be apparent upon reading and understanding the drawings and the detailed description.

Brief Description of the Drawings
FIG. 1 is a flowchart of an information processing method according to an embodiment of the present invention;

FIG. 2 is a flowchart of another information processing method according to an embodiment of the present invention;

FIG. 3 is a flowchart of a method for performing step 123 in the information processing method provided by the embodiment shown in FIG. 2;

FIG. 4 is a flowchart of still another information processing method according to an embodiment of the present invention;

FIG. 5 is a flowchart of yet another information processing method according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of another information processing apparatus according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of still another information processing apparatus according to an embodiment of the present invention.
本发明的实施方式Embodiments of the invention
下文中将结合附图对本发明的实施方式进行详细说明。需要说明的是,在不冲突的情况下,本文中的实施例及实施例中的特征可以相互任意组合。Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that, in the case of no conflict, the features in the embodiments and the embodiments herein may be arbitrarily combined with each other.
在附图的流程图示出的步骤可以在诸根据一组计算机可执行指令的计算机系统中执行。并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。The steps illustrated in the flowchart of the figures may be executed in a computer system in accordance with a set of computer executable instructions. Also, although logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in a different order than the ones described herein.
本发明实施例针对相关技术中的面部识别用户情绪的方式仅是一个静态识别的过程,无法去实时捕捉用户面部特征的变化,精确度较低的问题,提 供一种信息处理方法,在静态识别的过程中,通过动态分析用户面部特征部位的变化,更加精确地确定出用户的情绪状态。The embodiment of the present invention is directed to the process of recognizing the user's emotions in the face of the related art, which is only a process of static recognition, and cannot capture the change of the facial features of the user in real time, and the problem of low accuracy is mentioned. For an information processing method, in the process of static recognition, by dynamically analyzing changes in the facial features of the user, the emotional state of the user is more accurately determined.
As shown in FIG. 1, which is a flowchart of an information processing method according to an embodiment of the present invention, the information processing method provided by this embodiment may include the following steps, namely step 110 to step 130:
Step 110: upon detecting that a user performs an operation on the mobile terminal, start the camera to dynamically collect user image information according to a preset period;
Step 120: acquire feature information of the user's facial feature parts according to the user image information;
Step 130: perform continuous comparative analysis on the feature information, and match it against pre-stored template feature information of the corresponding facial feature parts, to determine the user's emotional state.
The information processing method provided by the embodiments of the present invention is applied to a mobile terminal, in which feature information of facial feature parts (such as the eyes or the mouth) for different emotions (such as laughing, crying, or anger) may be stored in advance as template feature information. Taking the mouth as an example, the pre-stored feature information may include the mouth-corner radian (for example, raised or drooping) obtained by constructing a coordinate system from the left and right mouth-corner feature points and the upper- and lower-lip feature points, as well as the height difference between the upper- and lower-lip feature points; for different emotions, the mouth-corner radian range and the height difference between the upper- and lower-lip feature points may have certain preset thresholds. This feature information is classified by emotion and stored in a database in advance as template feature information for subsequent matching.
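By way of illustration only, the template storage described above could be organized along the following lines. This is a minimal Python sketch, not the patent's data format; the field names, emotion labels, and numeric threshold ranges are all hypothetical values chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class MouthTemplate:
    emotion: str         # e.g. "laugh", "cry", "angry"
    radian_range: tuple  # (min, max) mouth-corner radian; positive means corners raised
    lip_gap_range: tuple # (min, max) height difference between upper- and lower-lip points

# Hypothetical template database keyed by emotion; real thresholds would be calibrated offline.
TEMPLATE_DB = {
    "laugh": MouthTemplate("laugh", radian_range=(0.15, 0.60), lip_gap_range=(0.05, 0.40)),
    "cry":   MouthTemplate("cry",   radian_range=(-0.60, -0.10), lip_gap_range=(0.00, 0.20)),
    "angry": MouthTemplate("angry", radian_range=(-0.30, 0.05), lip_gap_range=(0.00, 0.10)),
}
```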
In the information processing method provided by the embodiments of the present invention, upon detecting that the user performs an operation on the mobile terminal, that is, when the user is using the mobile terminal, the mobile terminal automatically starts the camera (typically the front camera) to collect user image information according to the preset period; then, feature information of the user's facial feature parts is acquired from the user image information. Because the camera collects dynamically, user image information at different time points can be captured, so the acquired feature information of the user's facial feature parts also corresponds to different time points. The feature information can then be subjected to continuous comparative analysis and matched against the pre-stored template feature information of the corresponding facial feature parts to determine the user's emotional state.
Because a person's facial expression goes through a process of formation and change, a state obtained at only a single moment cannot accurately reflect the user's current emotional characteristics. The information processing method provided by the embodiments of the present invention not only performs static processing that matches the acquired feature information of facial feature parts against the pre-stored template feature information, but also processes the feature information dynamically (that is, through continuous comparative analysis), so that changes in the feature information of facial feature parts are used to analyze the user's emotional state more accurately, giving the user a better experience.
Considering that, when comparing and analyzing the feature information, as much user image information as possible should be captured so that changes in the user's expression can be clearly tracked, the time interval at which the camera collects user image information needs to be reduced. Optionally, the preset period in the embodiments of the present invention is, for example, less than or equal to 1 second (s). On the other hand, to avoid the collected user image information not being processed in time, each collected frame of user image information may be added to a collection queue, so that the collected user image information can subsequently be extracted from the collection queue one by one for processing.
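As a rough illustration of the sampling period and the collection queue just described, the sketch below polls a hypothetical camera-read function every 0.5 s and appends each frame to a queue for later processing. The `capture_frame` and `is_user_active` callbacks and the 0.5 s period are assumptions made for the example, not part of the patent.

```python
import time
from collections import deque

CAPTURE_PERIOD_S = 0.5   # preset period, assumed to be <= 1 s as suggested above
capture_queue = deque()  # collection queue of (timestamp, frame) pairs

def capture_loop(capture_frame, is_user_active, max_frames=200):
    """Collect frames while the user keeps operating the terminal."""
    while is_user_active():
        frame = capture_frame()              # hypothetical camera read
        capture_queue.append((time.time(), frame))
        if len(capture_queue) > max_frames:  # drop the oldest frame to bound memory
            capture_queue.popleft()
        time.sleep(CAPTURE_PERIOD_S)
```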
Optionally, in the embodiments of the present invention, the application scenarios in which the user performs an operation on the mobile terminal include, but are not limited to, the following: the user lights up the screen with the power button, unlocks the phone, or the screen is detected to be touched by the user. When the mobile terminal is not being operated, the camera does not need to be started, so as to avoid the power consumption caused by keeping it running in the background.
Optionally, as shown in FIG. 2, which is a flowchart of another information processing method according to an embodiment of the present invention, on the basis of the embodiment shown in FIG. 1, step 120 in this embodiment may include the following steps, namely step 121 to step 123:
Step 121: determine the face region in the user image information through face detection.
In the embodiments of the present invention, the user image information collected after the camera is started may contain not only a facial image but also images of other body parts and/or background images, and the user image information collected at a given time point may contain no facial image at all. Therefore, to ensure the validity of the finally acquired feature information, in this step the user image information is extracted from the collection queue one by one, and the face region is determined through face detection, so as to avoid interference that image information of other, useless regions might cause.
Step 122: collect facial feature information in the face region.
In this step, facial feature information is collected in the face region determined in step 121, where the facial feature information may include information such as the position and region of each facial organ. In practical applications, the facial feature information may be obtained through facial recognition, from the shape descriptions of the facial organs and the distance characteristics between the facial organs, or it may be obtained by representation methods based on algebraic features or statistical learning, which will not be detailed here.
Step 123: distinguish the user's facial feature parts according to the facial feature information, and extract the feature information corresponding to the facial feature parts.
Since the facial feature information, including the position and region of each facial organ, has already been collected in step 122, in this step the user's facial feature parts, such as the eyes and the mouth, can be distinguished according to the facial feature information, and the feature information of those facial feature parts can be extracted accordingly.
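The three steps above (face detection, collection of facial feature information, and separation of feature parts) could be prototyped roughly as follows. This is only a sketch: it assumes OpenCV and its bundled Haar cascade for face detection, and the heuristic that the mouth lies in the lower third of the face box is an assumption for illustration; the patent does not prescribe a particular detector or partitioning rule.

```python
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_face(gray_frame):
    """Step 121: return the largest detected face rectangle (x, y, w, h), or None."""
    faces = face_detector.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda r: r[2] * r[3])

def split_feature_regions(face_rect):
    """Steps 122-123: coarse eye and mouth regions inside the face box (heuristic)."""
    x, y, w, h = face_rect
    return {
        "eyes":  (x, y + h // 5, w, h // 4),
        "mouth": (x + w // 5, y + 2 * h // 3, 3 * w // 5, h // 3),
    }
```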
In a person's facial expressions, the changes in the features of the mouth under different emotions are more pronounced than those of other parts. Therefore, on the basis of the above embodiments of the present invention, an information processing method based on the feature information corresponding to the mouth is provided.
Optionally, as shown in FIG. 3, which is a flowchart of a method for performing step 123 in the information processing method provided by the embodiment shown in FIG. 2, in this embodiment, step 123 may include the following steps, namely step 1231 to step 1233:
Step 1231: distinguish the user's mouth region according to the facial feature information.
As can be seen from the above embodiments, the facial feature information includes information such as the position and region of each facial organ. Therefore, in this step, the user's mouth region can be distinguished according to the facial feature information.
Step 1232: acquire the feature point positions of the mouth region.
In order to finally obtain the feature information of the mouth region, feature points of the mouth region, for example the left and right mouth corners and the vertices of the upper and lower lips, are set in advance. Therefore, in this step, the corresponding feature points can be determined in the mouth region and their positions acquired. In practical applications, to ensure the authenticity of the data, a preset number of sample point positions may also be selected between the feature point positions for auxiliary analysis.
Step 1233: obtain the feature information of the user's mouth according to the feature point positions.
In this step, the feature information of the user's mouth, for example geometric feature information such as the mouth-corner radian and the height difference between the upper- and lower-lip feature points, is obtained from the left and right mouth-corner positions and the upper- and lower-lip vertex positions among the feature point positions acquired in step 1232, with the aid of the sample point positions. In implementation, a coordinate system may be constructed from the left and right mouth-corner positions, the upper- and lower-lip vertex positions, and the sample point positions, and the mouth-corner radian and the height difference between the upper- and lower-lip feature points may be calculated from the coordinates of each of these positions.
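The geometric quantities mentioned above could be computed from the feature point coordinates roughly as follows. This is a simplified sketch: it approximates the "mouth-corner radian" as the angle between the corner-to-corner baseline and the line from the corners to the upper-lip midpoint, which is one plausible reading of the description, not the patent's definitive formula.

```python
import math

def mouth_features(left_corner, right_corner, upper_lip, lower_lip):
    """Return (corner_radian, lip_height_gap) from four (x, y) feature points.

    Image coordinates are assumed to grow downward, so raised corners give a
    positive radian and drooping corners a negative one.
    """
    (lx, ly), (rx, ry) = left_corner, right_corner
    (_, uy), (_, dy) = upper_lip, lower_lip

    mid_corner_y = (ly + ry) / 2.0
    # Positive when the corners sit above the upper-lip midpoint (smile-like shape).
    corner_radian = math.atan2(uy - mid_corner_y, abs(rx - lx) / 2.0)

    lip_height_gap = dy - uy  # vertical opening between the lips
    return corner_radian, lip_height_gap
```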
The embodiment shown in FIG. 3 only takes the mouth as an example of a facial feature part to illustrate how the feature information corresponding to a facial feature part is extracted. In addition, a person's emotions are also revealed through the eyes, so the facial feature part may also be the eye region, and the feature information corresponding to the eye region may be the pupil size. On the basis of extracting the mouth feature information, the eye feature information may be processed jointly to obtain a more accurate analysis result. Of course, besides the mouth region and the eye region, other parts may also be combined for processing, which will not be enumerated here one by one.
Optionally, in the information processing method provided by the embodiments of the present invention, in order to improve the accuracy of judging the user's emotional state, the formation process of changes in a person's facial expression is taken into account. For example, in step 130, in addition to matching against the stored template feature information, the corresponding feature information at different time points may be subjected to continuous comparative analysis. As shown in FIG. 4, which is a flowchart of still another information processing method according to an embodiment of the present invention, on the basis of the above embodiments, step 130 in this embodiment may include the following steps, namely step 131 to step 133; the embodiment shown in FIG. 4 is illustrated by way of example on the basis of the embodiment shown in FIG. 1.
Step 131: match the feature information against the pre-stored template feature information of the corresponding facial feature part to obtain a matching result.
In this step, the feature information of the facial feature parts acquired in step 120 is extracted one by one and matched against the template feature information of the same facial feature parts pre-stored in the database, and the corresponding matching results are obtained, thereby yielding the matching degree between the feature information of the facial feature part and each piece of template feature information. Taking the feature information of the mouth as an example, for each emotional state stored in the database, both the mouth-corner radian and the height difference between the upper- and lower-lip feature points have threshold ranges corresponding to that emotional state. For example, when the emotional state is laughing, the geometric contour of the mouth corners is raised, and the mouth-corner radian and the height difference between the upper- and lower-lip feature points are both positive and lie within preset threshold ranges; when the user's currently acquired mouth-corner radian and upper/lower-lip height difference both fall within the threshold ranges of the template corresponding to the laughing emotional state, the current user's mouth feature information has the highest matching degree with the laughing emotional state.
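A minimal matching routine over the hypothetical template database sketched earlier might look like the following. The scoring rule (fraction of features whose values fall inside the template's threshold ranges) is an assumption chosen for illustration, not the patent's prescribed metric.

```python
def match_emotion(corner_radian, lip_height_gap, template_db):
    """Step 131: return (best_emotion, best_score) against the template thresholds."""
    best_emotion, best_score = None, -1.0
    for emotion, tpl in template_db.items():
        hits = 0
        if tpl.radian_range[0] <= corner_radian <= tpl.radian_range[1]:
            hits += 1
        if tpl.lip_gap_range[0] <= lip_height_gap <= tpl.lip_gap_range[1]:
            hits += 1
        score = hits / 2.0
        if score > best_score:
            best_emotion, best_score = emotion, score
    return best_emotion, best_score
```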
Step 132: compare and analyze the feature information against the feature information of the corresponding facial feature part acquired in the previous period, to obtain an analysis result.
In this step, the feature information of the facial feature parts acquired in step 120 is compared, in chronological order, with the feature information of the corresponding facial feature parts acquired in the previous period, and the corresponding analysis result is obtained, revealing how the feature information has changed.
Step 133: determine the user's emotional state according to the matching result and the analysis result.
In this step, by combining the matching result and the analysis result, the matching degree between the user's facial feature parts and each piece of template feature information, together with the changes in the feature information, can be known, and the user's emotional state can thereby be determined.
Taking the user image information at a certain time point as an example, after the current mouth feature information is acquired in step 120, the matching result obtained in step 131 shows that the user's current mouth feature information has the highest matching degree with the laughing emotional state. However, while a person laughs, the radian with which the mouth corners are raised first keeps increasing and then keeps decreasing. Therefore, by analyzing the trend of the mouth-corner radian, the change in the user's emotional state can be determined: if the radian stays the same or keeps increasing, the user is laughing; if it is found to be decreasing, the laughing emotional state is about to end. In the analysis, besides the mouth-corner radian, the change in the height difference between the upper- and lower-lip feature points can also be used to carry out the comparative analysis more precisely; for example, as the mouth opens and closes, the height difference between the upper- and lower-lip feature points changes continuously. With this embodiment, not only the type of the user's emotional state but also the stage of change of the emotional state can be determined, thereby improving the accuracy of the information processing to a greater extent.
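The trend analysis described above could be sketched as follows: the current mouth-corner radian is compared with the value from the previous period to decide whether the expression is building up or fading. The epsilon threshold and the phase labels are illustrative assumptions.

```python
def analyse_trend(current_radian, previous_radian, epsilon=0.02):
    """Step 132: classify the change of the mouth-corner radian between two periods."""
    delta = current_radian - previous_radian
    if delta > epsilon:
        return "intensifying"  # e.g. the smile is still widening
    if delta < -epsilon:
        return "fading"        # the expression is ending
    return "steady"

def decide_state(matched_emotion, trend):
    """Step 133: combine the static match with the dynamic trend."""
    if trend == "fading":
        return f"{matched_emotion} (ending)"
    return matched_emotion
```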
Optionally, considering the adaptability of the mobile terminal to the user, the information processing method of the embodiments of the present invention may further include: saving the feature information for which the user's emotional state has been determined into the template feature information of the corresponding facial feature part.
Through this step, the mobile terminal performs self-learning: it stores the feature information of the facial feature part after the user's emotional state has been determined, adding to the template feature information of that facial feature part in the database. In this way, personalized data is built up for the user, so that a higher matching degree is obtained in subsequent information processing and recognition becomes faster and more accurate.
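The self-learning step could be as simple as widening the per-emotion threshold ranges so that a confirmed user sample matches faster next time. The update rule below is a hypothetical illustration operating on the template structure sketched earlier; a real system might instead keep per-user statistics.

```python
def learn_sample(template_db, emotion, corner_radian, lip_height_gap):
    """Widen the stored ranges for `emotion` to absorb a confirmed user sample."""
    tpl = template_db[emotion]
    lo, hi = tpl.radian_range
    tpl.radian_range = (min(lo, corner_radian), max(hi, corner_radian))
    lo, hi = tpl.lip_gap_range
    tpl.lip_gap_range = (min(lo, lip_height_gap), max(hi, lip_height_gap))
```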
Optionally, in the information processing method of the embodiments of the present invention, after detecting that the user performs an operation on the mobile terminal, the method may further include the following steps:
collecting user voice information;
determining the user's voice emotion according to the user voice information and preset emotional sound templates.
Through the above steps, in addition to analyzing emotion through the face, the embodiments of the present invention can also use voice-assisted analysis. After detecting that the user performs an operation on the mobile terminal, the collection of user voice information is started; the user voice information can be collected by turning on the call microphone and then recognized against the emotional sound templates of sound information for different emotional states pre-stored on the mobile terminal (for example laughter, crying, shouting, and other emotionally charged sounds), so as to determine the user's voice emotion.
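A rough sketch of the voice-assisted branch follows. It assumes the collected sound has already been reduced to a small feature vector (for example energy and pitch statistics) and that each emotional sound template stores a reference vector; both assumptions, as well as the nearest-template rule and the numbers below, are illustrative and not fixed by the patent.

```python
def classify_voice(feature_vec, sound_templates):
    """Return the emotion whose template vector is nearest to the extracted features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(sound_templates, key=lambda emo: dist(feature_vec, sound_templates[emo]))

# Hypothetical reference vectors: (mean energy, pitch variance)
SOUND_TEMPLATES = {"laugh": (0.8, 0.6), "cry": (0.5, 0.9), "calm": (0.2, 0.1)}
```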
Optionally, the embodiments of the present invention may further include: verifying the user's emotional state in combination with the voice emotion.
The emotional state determined from the feature information of the current user's facial feature parts is verified against the recognized voice emotion, and the user's real emotional state is identified through comprehensive analysis.
Although the mobile terminal has become an important part of people's daily lives, it remains merely a physical tool, an "entrance" for accessing external information, and can be quite dull. Optionally, in the embodiments of the present invention, after the user's emotional state is determined, the method may further include:
acquiring a preset correspondence between emotional states and session templates;
selecting, according to the correspondence, the session template corresponding to the determined emotional state of the user;
initiating a session with the user according to the session template.
Through the above steps, after accurately determining the user's emotional state, the mobile terminal can also select a suitable session template according to the user's emotional state and automatically start the voice function to actively initiate a dialogue with the user, giving the user a better experience; the mobile terminal is then not merely a physical tool but can also be a "friend" to chat with.
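The correspondence between emotional states and session templates could be held in a simple mapping, as in the sketch below; the template texts are placeholders invented for illustration, and `speak` stands for whatever text-to-speech callback the terminal provides.

```python
SESSION_TEMPLATES = {
    "laugh": "You look happy today. Want me to share the good mood with your friends?",
    "cry":   "You seem a bit down. Would you like me to play something relaxing?",
    "angry": "Take a deep breath. Shall I put on some calming music?",
}

def start_session(emotional_state, speak):
    """Pick the session template for the detected state and start the voice dialogue."""
    template = SESSION_TEMPLATES.get(emotional_state)
    if template is not None:
        speak(template)  # `speak` is an assumed text-to-speech callback
```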
In practical applications, after the user's emotional state is determined, other functions, such as playing music, may also be performed in addition to starting an adapted voice dialogue, which will not be enumerated here one by one.
Optionally, as shown in FIG. 5, which is a flowchart of yet another information processing method according to an embodiment of the present invention, this embodiment provides an application example of a mobile terminal; the method may include the following steps, namely step 501 to step 511:
Step 501: in the normal standby/sleep state of the mobile terminal, detect whether the user performs an operation on the mobile terminal; if so, perform step 502; if not, continue detecting;
Step 502: automatically start the front camera and the microphone in the background, collect user image information and voice information respectively, and add the collected information to a collected-information queue, where the user image information is collected according to a preset period, for example 0.5 s;
Step 503: acquire the current user image information from the collected-information queue, determine the face region through face detection, and acquire the user's facial feature information;
Step 504: extract the feature information of the mouth region from the facial feature information using face recognition;
Step 505: match the feature information of the mouth region one by one against the template feature information of the mouth region pre-stored on the mobile terminal;
Step 506: find the emotional state with the highest matching degree according to the matching results;
Step 507: compare the feature information of the current mouth region with the feature information of the mouth region from the previous period's user image information; if the emotional states corresponding to the two differ, perform step 511; if they are the same, perform step 508;
Step 508: preliminarily determine the user's emotional state according to the current mouth-corner radian and upper/lower-lip feature point height difference and their changes relative to the previous period;
Step 509: perform auxiliary analysis based on the collected voice information to determine the user's current emotional state;
Step 510: start the voice function and begin a session adapted to the user's current emotional state;
Step 511: save the feature information for which the emotional state has been determined, as new template feature information, into the template feature information of the corresponding feature part.
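Tying the application example together, a highly simplified main loop over the helpers sketched in the earlier examples might look like the following. It is an illustration of the control flow of steps 503 to 511 only, not an implementation of the patent; in particular, `extract_mouth_points` is an assumed helper that would return the four mouth feature points, and the voice-verification step 509 is omitted.

```python
def process_queue(previous_radian=None):
    """Consume queued frames, determine the emotional state, and react (steps 503-511)."""
    while capture_queue:
        _, frame = capture_queue.popleft()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face = locate_face(gray)                                       # step 503
        if face is None:
            continue
        # Steps 504-506: mouth feature points (detector omitted here), geometry, matching.
        left, right, upper, lower = extract_mouth_points(frame, face)  # assumed helper
        radian, gap = mouth_features(left, right, upper, lower)
        emotion, _ = match_emotion(radian, gap, TEMPLATE_DB)
        if previous_radian is not None:                                # steps 507-508
            trend = analyse_trend(radian, previous_radian)
            emotion = decide_state(emotion, trend)
        previous_radian = radian
        learn_sample(TEMPLATE_DB, emotion.split()[0], radian, gap)     # step 511
        start_session(emotion.split()[0], speak=print)                 # step 510
```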
In summary, with the information processing method provided by the embodiments of the present invention, upon detecting that the user performs an operation on the mobile terminal, that is, when the user is using the mobile terminal, the mobile terminal automatically starts the camera (typically the front camera) to collect user image information according to the preset period; then, feature information of the user's facial feature parts is acquired from the user image information. Because the camera collects dynamically, user image information at different time points can be captured, so the acquired feature information of the user's facial feature parts also corresponds to different time points. The feature information can then be subjected to continuous comparative analysis and matched against the pre-stored template feature information of the corresponding facial feature parts to determine the user's emotional state. The embodiments of the present invention not only perform static processing that matches the acquired feature information of facial feature parts against the pre-stored template feature information, but also process the feature information dynamically, so that changes in the feature information of facial feature parts are used to analyze the user's emotional state more accurately, giving the user a better experience.
As shown in FIG. 6, which is a schematic structural diagram of an information processing device according to an embodiment of the present invention, the information processing device provided by this embodiment may include: a first processing module 10, a first acquisition module 20, and a second processing module 30.
The first processing module 10 is configured to: upon detecting that the user performs an operation on the mobile terminal, start the camera to dynamically collect user image information according to a preset period;
the first acquisition module 20 is configured to: acquire feature information of the user's facial feature parts according to the user image information collected by the first processing module 10;
the second processing module 30 is configured to: perform continuous comparative analysis on the feature information acquired by the first acquisition module 20, and match it against pre-stored template feature information of the corresponding facial feature parts, to determine the user's emotional state.
Optionally, as shown in FIG. 7, which is a schematic structural diagram of another information processing device according to an embodiment of the present invention, on the basis of the structure of the device shown in FIG. 6, the first acquisition module 20 in this embodiment may include: a determination unit 21, a collection unit 22, and an extraction unit 23.
The determination unit 21 is configured to: determine the face region, through face detection, in the user image information collected by the first processing module 10;
the collection unit 22 is configured to: collect facial feature information in the face region determined by the determination unit 21;
the extraction unit 23 is configured to: distinguish the user's facial feature parts according to the facial feature information collected by the collection unit 22, and extract the feature information corresponding to the facial feature parts.
Optionally, in the embodiments of the present invention, the extraction unit 23 may include:
a distinguishing subunit, configured to: distinguish the user's mouth region according to the facial feature information collected by the collection unit 22;
an acquisition subunit, configured to: acquire the feature point positions of the mouth region obtained by the distinguishing subunit;
a processing subunit, configured to: obtain the feature information of the user's mouth according to the feature point positions acquired by the acquisition subunit.
Optionally, as shown in FIG. 8, which is a schematic structural diagram of still another information processing device according to an embodiment of the present invention, on the basis of the above embodiments, the second processing module 30 in this embodiment may include: a matching unit 31, an analysis unit 32, and a processing unit 33; the embodiment shown in FIG. 8 is illustrated by way of example on the basis of the device of the embodiment shown in FIG. 7.
The matching unit 31 is configured to: match the feature information acquired by the first acquisition module 20 against the pre-stored template feature information of the corresponding facial feature part to obtain a matching result;
the analysis unit 32 is configured to: compare and analyze the feature information acquired by the first acquisition module 20 against the feature information of the corresponding facial feature part acquired in the previous period to obtain an analysis result;
the processing unit 33 is configured to: determine the user's emotional state according to the matching result obtained by the matching unit 31 and the analysis result obtained by the analysis unit 32.
Optionally, in the embodiments of the present invention, the information processing device may further include:
a collection module, configured to: collect user voice information;
a third processing module, configured to: determine the user's voice emotion according to the user voice information collected by the collection module and preset emotional sound templates.
Optionally, in the embodiments of the present invention, the information processing device may further include:
a verification module, configured to: verify the user's emotional state in combination with the voice emotion determined by the third processing module.
Optionally, in the embodiments of the present invention, the information processing device may further include:
a fourth processing module, configured to: save the feature information for which the user's emotional state has been determined into the template feature information of the corresponding facial feature part.
Optionally, in the embodiments of the present invention, the information processing device may further include:
a second acquisition module, configured to: acquire a preset correspondence between emotional states and session templates;
a selection module, configured to: select, according to the correspondence acquired by the second acquisition module, the session template corresponding to the determined emotional state of the user;
a session initiation module, configured to: initiate a session with the user according to the session template selected by the selection module.
With the information processing device provided by the embodiments of the present invention, upon detecting that the user performs an operation on the mobile terminal, that is, when the user is using the mobile terminal, the first processing module automatically starts the camera (typically the front camera) to collect user image information according to a preset period; then, the first acquisition module acquires feature information of the user's facial feature parts according to the user image information. Because the camera collects dynamically, user image information at different time points can be captured, so the acquired feature information of the user's facial feature parts also corresponds to different time points. The second processing module can then subject the feature information to continuous comparative analysis and match it against the pre-stored template feature information of the corresponding facial feature parts to determine the user's emotional state. The embodiments of the present invention not only perform static processing that matches the acquired feature information of facial feature parts against the pre-stored template feature information, but also process the feature information dynamically, so that changes in the feature information of facial feature parts are used to analyze the user's emotional state more accurately, giving the user a better experience.
The information processing device provided by the embodiments of the present invention is a device to which the above information processing method is applied; the implementations of the embodiments of the above information processing method are applicable to the device provided by the embodiments of the present invention and can achieve the same technical effects.
An embodiment of the present invention further provides a mobile terminal, including the information processing device provided by any of the above embodiments.
When the mobile terminal provided by this embodiment detects that the user performs an operation on it, that is, when it is being used by the user, the mobile terminal automatically starts the camera (typically the front camera) to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired from the user image information. Because the camera collects dynamically, user image information at different time points can be captured, so the acquired feature information of the user's facial feature parts also corresponds to different time points. The feature information can then be subjected to continuous comparative analysis and matched against the pre-stored template feature information of the corresponding facial feature parts to determine the user's emotional state. The embodiments of the present invention not only perform static processing that matches the acquired feature information of facial feature parts against the pre-stored template feature information, but also process the feature information dynamically, so that changes in the feature information of facial feature parts are used to analyze the user's emotional state more accurately, giving the user a better experience.
The mobile terminal provided by the embodiments of the present invention is a mobile terminal to which the above information processing method is applied; the implementations of the embodiments of the above information processing method are applicable to the mobile terminal and can achieve the same technical effects.
It should be noted that the mobile terminals described in this specification include, but are not limited to, smartphones, tablet computers, and the like, and that many of the functional components described are referred to as modules in order to emphasize more particularly the independence of their implementation.
In the embodiments of the present invention, a module may be implemented in software so as to be executed by various types of processors. For example, an identified executable code module may comprise one or more physical or logical blocks of computer instructions, which may, for instance, be constructed as an object, a procedure, or a function. Nevertheless, the executable code of an identified module need not be physically located together, but may comprise different instructions stored at different locations which, when logically combined, constitute the module and achieve the stated purpose of the module.
In practical applications, the executable code module may be a single instruction or many instructions, and may even be distributed over several different code segments, distributed among different programs, and distributed across multiple memory devices. Likewise, operational data may be identified within modules, embodied in any suitable form, and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations, for example on different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
Since a module can be implemented in software, and in view of the level of hardware technology in the related art, a module that can be implemented in software can also, cost permitting, be implemented by those skilled in the art by building corresponding hardware circuitry to achieve the corresponding function; the hardware circuitry includes conventional Very Large Scale Integration (VLSI) circuits or gate arrays and semiconductors such as logic chips and transistors, or other discrete components. The modules may also be implemented with programmable hardware devices, such as field programmable gate arrays, programmable array logic, programmable logic devices, and the like.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by a computer program flow; the computer program may be stored in a computer-readable storage medium and executed on a corresponding hardware platform (in a system, apparatus, device, component, or the like), and when executed, includes one or a combination of the steps of the method embodiments.
Optionally, all or part of the steps of the above embodiments may also be implemented using integrated circuits; these steps may be fabricated as individual integrated circuit modules, or several of the modules or steps may be fabricated as a single integrated circuit module.
The devices/functional modules/functional units in the above embodiments may be implemented by general-purpose computing devices; they may be centralized on a single computing device or distributed over a network formed by multiple computing devices.
When the devices/functional modules/functional units in the above embodiments are implemented in the form of software functional modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. The computer-readable storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
INDUSTRIAL APPLICABILITY
In the embodiments of the present invention, upon detecting that the user performs an operation on the mobile terminal, the camera is automatically started to collect user image information according to a preset period; then, feature information of the user's facial feature parts is acquired from the user image information. Because the camera collects dynamically, user image information at different time points can be captured, so the acquired feature information of the user's facial feature parts also corresponds to different time points. The feature information can then be subjected to continuous comparative analysis and matched against the pre-stored template feature information of the corresponding facial feature parts to determine the user's emotional state. The embodiments of the present invention not only perform static processing that matches the acquired feature information of facial feature parts against the pre-stored template feature information, but also process the feature information dynamically, so that changes in the feature information of facial feature parts are used to analyze the user's emotional state more accurately, giving the user a better experience.

Claims (11)

  1. An information processing method, comprising:
    upon detecting that a user performs an operation on a mobile terminal, starting a camera to dynamically collect user image information according to a preset period;
    acquiring feature information of facial feature parts of the user according to the user image information; and
    performing continuous comparative analysis on the feature information, and matching the feature information against pre-stored template feature information of the corresponding facial feature parts, to determine an emotional state of the user.
  2. The information processing method according to claim 1, wherein acquiring the feature information of the facial feature parts of the user according to the user image information comprises:
    determining a face region in the user image information through face detection;
    collecting facial feature information in the face region; and
    distinguishing the facial feature parts of the user according to the facial feature information, and extracting the feature information corresponding to the facial feature parts.
  3. The information processing method according to claim 2, wherein distinguishing the facial feature parts of the user according to the facial feature information and extracting the feature information corresponding to the facial feature parts comprises:
    distinguishing a mouth region of the user according to the facial feature information;
    acquiring feature point positions of the mouth region; and
    obtaining feature information of the mouth of the user according to the feature point positions.
  4. The information processing method according to claim 1, wherein performing continuous comparative analysis on the feature information and matching the feature information against the pre-stored template feature information of the corresponding facial feature parts to determine the emotional state of the user comprises:
    matching the feature information against the pre-stored template feature information of the corresponding facial feature parts to obtain a matching result;
    comparing and analyzing the feature information against feature information of the corresponding facial feature parts acquired in a previous period to obtain an analysis result; and
    determining the emotional state of the user according to the matching result and the analysis result.
  5. The information processing method according to claim 1, wherein after detecting that the user performs an operation on the mobile terminal, the method further comprises:
    collecting user voice information; and
    determining a voice emotion of the user according to the user voice information and a preset emotional sound template.
  6. The information processing method according to claim 5, further comprising:
    verifying the emotional state of the user in combination with the voice emotion.
  7. The information processing method according to claim 1, further comprising:
    saving feature information for which the emotional state of the user has been determined into the template feature information of the corresponding facial feature part.
  8. The information processing method according to claim 1, further comprising:
    acquiring a preset correspondence between emotional states and session templates;
    selecting, according to the correspondence, a session template corresponding to the determined emotional state of the user; and
    initiating a session with the user according to the session template.
  9. An information processing device, comprising:
    a first processing module, configured to: upon detecting that a user performs an operation on a mobile terminal, start a camera to dynamically collect user image information according to a preset period;
    a first acquisition module, configured to: acquire feature information of facial feature parts of the user according to the user image information collected by the first processing module; and
    a second processing module, configured to: perform continuous comparative analysis on the feature information acquired by the first acquisition module, and match the feature information against pre-stored template feature information of the corresponding facial feature parts, to determine an emotional state of the user.
  10. The information processing device according to claim 9, wherein the first acquisition module comprises:
    a determination unit, configured to: determine a face region, through face detection, in the user image information collected by the first processing module;
    a collection unit, configured to: collect facial feature information in the face region determined by the determination unit; and
    an extraction unit, configured to: distinguish the facial feature parts of the user according to the facial feature information collected by the collection unit, and extract the feature information corresponding to the facial feature parts.
  11. A mobile terminal, comprising the information processing device according to claim 9 or 10.
PCT/CN2016/093112 2016-06-21 2016-08-03 Information processing method and device, and mobile terminal WO2017219450A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610448961.8A CN107526994A (en) 2016-06-21 2016-06-21 A kind of information processing method, device and mobile terminal
CN201610448961.8 2016-06-21

Publications (1)

Publication Number Publication Date
WO2017219450A1 true WO2017219450A1 (en) 2017-12-28

Family

ID=60734797

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/093112 WO2017219450A1 (en) 2016-06-21 2016-08-03 Information processing method and device, and mobile terminal

Country Status (2)

Country Link
CN (1) CN107526994A (en)
WO (1) WO2017219450A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773676A (en) * 2020-07-23 2020-10-16 网易(杭州)网络有限公司 Method and device for determining virtual role action
CN114125145A (en) * 2021-10-19 2022-03-01 华为技术有限公司 Method and equipment for unlocking display screen

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509041A (en) * 2018-03-29 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for executing operation
CN108804893A (en) * 2018-03-30 2018-11-13 百度在线网络技术(北京)有限公司 A kind of control method, device and server based on recognition of face
CN108830265A (en) * 2018-08-29 2018-11-16 奇酷互联网络科技(深圳)有限公司 Method, communication terminal and the storage device that mood in internet exchange is reminded
CN109343919A (en) * 2018-08-30 2019-02-15 深圳市口袋网络科技有限公司 A kind of rendering method and terminal device, storage medium of bubble of chatting
CN109192050A (en) * 2018-10-25 2019-01-11 重庆鲁班机器人技术研究院有限公司 Experience type language teaching method, device and educational robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789990A (en) * 2009-12-23 2010-07-28 宇龙计算机通信科技(深圳)有限公司 Method and mobile terminal for judging emotion of opposite party in conservation process
CN103309449A (en) * 2012-12-17 2013-09-18 广东欧珀移动通信有限公司 Mobile terminal and method for automatically switching wall paper based on facial expression recognition
CN104091153A (en) * 2014-07-03 2014-10-08 苏州工业职业技术学院 Emotion judgment method applied to chatting robot
US20140341473A1 (en) * 2011-12-06 2014-11-20 Kyungpook National University Industry-Academic Cooperation Foundation Apparatus and method for enhancing user recognition
CN104900007A (en) * 2015-06-19 2015-09-09 四川分享微联科技有限公司 Monitoring watch triggering wireless alarm based on voice
CN105549841A (en) * 2015-12-02 2016-05-04 小天才科技有限公司 Voice interaction method, device and equipment


Also Published As

Publication number Publication date
CN107526994A (en) 2017-12-29

Similar Documents

Publication Publication Date Title
WO2017219450A1 (en) Information processing method and device, and mobile terminal
US10616475B2 (en) Photo-taking prompting method and apparatus, an apparatus and non-volatile computer storage medium
CN108280332B (en) Biological characteristic authentication, identification and detection method, device and equipment of mobile terminal
WO2016150001A1 (en) Speech recognition method, device and computer storage medium
WO2016172872A1 (en) Method and device for verifying real human face, and computer program product
TWI473080B (en) The use of phonological emotions or excitement to assist in resolving the gender or age of speech signals
WO2017045564A1 (en) Environmentally adaptive identity authentication method and terminal
US20140379351A1 (en) Speech detection based upon facial movements
US9299350B1 (en) Systems and methods for identifying users of devices and customizing devices to users
WO2021135685A1 (en) Identity authentication method and device
TW201741921A (en) Identity authentication method and apparatus
TW201606760A (en) Real-time emotion recognition from audio signals
CN104508597A (en) Method and apparatus for controlling augmented reality
CN109558788B (en) Silence voice input identification method, computing device and computer readable medium
WO2020147256A1 (en) Conference content distinguishing method and apparatus, and computer device and storage medium
US20150228278A1 (en) Apparatus and method for voice based user enrollment with video assistance
TW202006630A (en) Payment method, apparatus, and system
US11062126B1 (en) Human face detection method
WO2016168982A1 (en) Method, apparatus and terminal device for setting interrupt threshold for fingerprint identification device
WO2016197389A1 (en) Method and device for detecting living object, and mobile terminal
WO2017113407A1 (en) Gesture recognition method and apparatus, and electronic device
WO2021082045A1 (en) Smile expression detection method and apparatus, and computer device and storage medium
CN112286364A (en) Man-machine interaction method and device
Beton et al. Biometric secret path for mobile user authentication: A preliminary study
CN109065026B (en) Recording control method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16905990

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16905990

Country of ref document: EP

Kind code of ref document: A1