CN109542230B - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number
CN109542230B
CN109542230B (application CN201811440635.8A)
Authority
CN
China
Prior art keywords: user, state, interactive, parameters, interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811440635.8A
Other languages
Chinese (zh)
Other versions
CN109542230A (en)
Inventor
李广 (Li Guang)
赵铠枫 (Zhao Kaifeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201811440635.8A
Publication of CN109542230A
Application granted
Publication of CN109542230B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition

Abstract

The embodiments of the application provide an image processing method and device, electronic equipment and a storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring a facial image of a user while the user participates in an interactive activity; obtaining a mental state parameter of the user based on the facial image; judging, based on the mental state parameter, the interaction state of the user with respect to the next operation of the interactive activity; and generating and outputting prompt information corresponding to the interaction state. Because the interaction state of the user with respect to the next operation can be determined from the user's mental state parameters, a prompt can be generated and output for the user whenever the interaction state indicates that the user's decision at that moment needs improvement, helping the user quickly raise his or her decision level in the interactive activity.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, a user can participate in many interactive activities, such as chess or card activities. During such activities, however, it is difficult for the user to actively discover which of his or her decisions need improvement, so the user easily hits a bottleneck and cannot keep raising his or her decision level in the interactive activity.
Disclosure of Invention
The application aims to provide an image processing method, an image processing device, electronic equipment and a storage medium, so as to provide the user with a prompt for a decision that needs improvement and help the user quickly raise his or her decision level in interactive activities.
In order to achieve the above object, the embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring a facial image of a user in the process of the user participating in an interactive activity;
obtaining a mental state parameter of the user based on the facial image;
judging, based on the mental state parameter, the interaction state of the user with respect to the next operation of the interactive activity;
and generating and outputting prompt information corresponding to the interactive state according to the interactive state.
With reference to the first aspect, in some possible implementations, the facial images include M facial images obtained within a first preset time period before the current time, the mental state parameters include M mental state parameters corresponding to the M facial images, where M is an integer greater than 1, and judging, based on the mental state parameters, the interaction state of the user with respect to the next operation of the interactive activity includes:
obtaining M sight focuses of a user on an interactive interface of the interactive activity based on the M mental state parameters;
and judging whether the interaction state of the user for the next operation of the interaction activity is in a sight focusing state or not based on the M sight focuses.
With reference to the first aspect, in some possible implementation manners, determining, based on the M gaze focuses, whether an interaction state of a next operation of the interactive activity by the user is in a gaze focused state includes:
judging whether the number of sight line focuses in the same area of at least two areas on the interactive interface in the M sight line focuses is larger than or equal to a first preset number or not;
and if so, indicating that the interaction state of the user for the next operation of the interaction activity is in a sight focusing state.
With reference to the first aspect, in some possible implementations, obtaining M gaze focuses of the user on an interactive interface of the interactive activity based on the M mental state parameters includes:
determining every two sight directions of the user corresponding to each mental state parameter in the M mental state parameters;
and determining sight focuses formed by every two sight directions on the interactive interface of the interactive activity, and determining the M sight focuses.
With reference to the first aspect, in some possible implementations, the facial images further include N facial images obtained within a second preset time period before the current time, where N is an integer greater than 1, and the mental state parameters include N mental state parameters corresponding to the N facial images; after the interaction state of the user with respect to the next operation of the interactive activity is judged based on the mental state parameters, and before the prompt information corresponding to the interaction state is generated and output according to the interaction state, the method further includes:
after the user is determined to be in the sight focusing state, determining an emotion type corresponding to each of the N mental state parameters based on the N mental state parameters, obtaining N emotion types in total;
determining whether the interaction state of the user is in a non-positive emotion state based on the N emotion types;
if yes, executing the following steps: and generating and outputting prompt information corresponding to the interactive state according to the interactive state.
With reference to the first aspect, in some possible implementation manners, determining whether the interaction state of the user is in a non-positive emotion state based on the N emotion types includes:
judging whether the number of non-positive emotions in the N emotion types is larger than or equal to a second preset number or not, wherein the fact that the number of the non-positive emotions is larger than or equal to the second preset number indicates that the interaction state of the user is in a non-positive emotion state.
With reference to the first aspect, in some possible implementations, determining, based on the N mental state parameters, the emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total, includes:
analyzing each of the N mental state parameters through a face emotion analysis model to obtain, for each mental state parameter, the probability output by the face emotion analysis model that it belongs to each of a plurality of emotion types to be determined;
and determining, according to the probability of each emotion type to be determined, the emotion type to be determined with the highest probability among the plurality of emotion types to be determined, the emotion type to be determined with the highest probability for each mental state parameter being the emotion type corresponding to that mental state parameter.
With reference to the first aspect, in some possible implementation manners, generating and outputting prompt information corresponding to the interaction state according to the interaction state includes:
determining an object in the interactive activity contained in the same area according to the condition that the interactive state of the user is in the sight focusing state;
and generating prompt information of the next operation according to the object, and outputting the prompt information.
With reference to the first aspect, in some possible implementation manners, generating prompt information about the next operation according to the object, and outputting the prompt information includes:
judging whether the object is an entity in the interactive activity or a background in the interactive activity according to the object;
if the object is an entity in the interactive activity, the weight used for calculating the entity in the current evaluation function is increased from a first value to a second value to obtain a current adjusted evaluation function, and prompt information of the next operation related to the entity is generated based on the current adjusted evaluation function; and if the object is the background in the interactive activity, generating prompt information of the next operation based on the current evaluation function.
With reference to the first aspect, in some possible implementations, the facial images include M facial images obtained within a first preset time period before the current time and N facial images obtained within a second preset time period before the current time, and the mental state parameters include M mental state parameters corresponding to the M facial images and N mental state parameters corresponding to the N facial images, where M and N are integers greater than 1; judging, based on the mental state parameters, the interaction state of the user with respect to the next operation of the interactive activity includes:
obtaining M sight focuses of the user on an interactive interface of the interactive activity based on the M mental state parameters, and determining an emotion type corresponding to each of the N mental state parameters based on the N mental state parameters, obtaining N emotion types in total;
judging whether the number of sight line focuses in the same area of at least two areas on the interactive interface is larger than or equal to a first preset number or not in the M sight line focuses; judging whether the number of non-positive emotions in the N emotion types is greater than or equal to a second preset number or not on the basis of the N emotion types;
when the number of the sight line focuses in the same area is judged to meet the first preset number, determining that the interaction state of the user for the next operation of the interaction activity is in a sight line focusing state; and when the number of the non-positive emotions in the N emotion types is judged to meet the second preset number, determining that the interaction state of the user is in a non-positive emotion state.
With reference to the first aspect, in some possible implementations, the method further includes:
generating and outputting an image capture angle adjustment prompt upon determining that at least some of the user's facial features are not included in the facial image, where the user's facial features include the five facial features (eyes, eyebrows, nose, mouth and ears) of the user.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
and the image obtaining module is used for obtaining the facial image of the user in the process of the user participating in the interactive activity.
And the expression obtaining module is used for obtaining expression parameters of the user based on the face image.
And the operation judgment module is used for judging the interaction state of the user on the next operation of the interaction activity based on the expression parameter.
And the prompt output module is used for generating and outputting prompt information corresponding to the interactive state according to the interactive state.
With reference to the second aspect, in some optional implementations, the facial images include M facial images obtained within a first preset time period before the current time, the mental state parameters include M mental state parameters corresponding to the M facial images, and M is an integer greater than 1,
the operation judgment module is further used for obtaining M sight focuses of the user on the interaction interface of the interaction activity based on the M mental state parameters; and judging whether the interaction state of the user for the next operation of the interaction activity is in a sight focusing state or not based on the M sight focuses.
In combination with the second aspect, in some alternative implementations,
the operation judgment module is further configured to judge whether the number of the sight focuses located in the same area of at least two areas on the interactive interface in the M sight focuses is greater than or equal to a first preset number; and if so, indicating that the interaction state of the user for the next operation of the interaction activity is in a sight focusing state.
In combination with the second aspect, in some alternative implementations,
the operation judgment module is further configured to determine every two sight directions of the user corresponding to each of the M mental state parameters; and determine the sight focuses formed by every two sight directions on the interactive interface of the interactive activity, obtaining the M sight focuses.
With reference to the second aspect, in some optional implementations, the facial images further include N facial images obtained within a second preset time period before the current time, where N is an integer greater than 1, the mental state parameters include N mental state parameters corresponding to the N facial images, and the apparatus further includes:
the emotion determining module is used for determining an emotion type corresponding to each of the N mental state parameters based on the N mental state parameters after the user is determined to be in the sight focusing state, wherein the number of the emotion types is N;
a prompt determination module for determining whether the interaction state of the user is in a non-positive emotion state based on the N emotion types;
and the prompt execution module is used for executing the following steps if yes: and generating and outputting prompt information corresponding to the interactive state according to the interactive state.
In combination with the second aspect, in some alternative implementations,
the prompt determining module is further configured to determine whether the number of non-positive emotions in the N emotion types is greater than or equal to a second preset number, where the number of non-positive emotions is greater than or equal to the second preset number and indicates that the interaction state of the user is in a non-positive emotion state.
In combination with the second aspect, in some alternative implementations,
the emotion determining module is further configured to analyze each of the N mental state parameters through the face emotion analysis model to obtain, for each mental state parameter, the probability output by the face emotion analysis model that it belongs to each of a plurality of emotion types to be determined; and to determine, according to these probabilities, the emotion type to be determined with the highest probability, the emotion type to be determined with the highest probability for each mental state parameter being the emotion type corresponding to that mental state parameter.
In combination with the second aspect, in some alternative implementations,
the prompt output module is further configured to determine, according to the interaction state of the user being in the gaze focusing state, an object in the interaction activity included in the same region; and generating prompt information of the next operation according to the object, and outputting the prompt information.
In combination with the second aspect, in some alternative implementations,
the prompt output module is further used for judging whether the object is an entity in the interactive activity or a background in the interactive activity according to the object; if the object is the entity in the interactive activity, the weight used for calculating the entity in the current evaluation function is increased from a first value to a second value, the current adjusted evaluation function is obtained, and prompt information of the next operation related to the entity is generated based on the current adjusted evaluation function; and if the object is the background in the interactive activity, generating prompt information of the next operation based on the current evaluation function.
With reference to the second aspect, in some optional implementations, the facial images include M facial images obtained within a first preset time period before the current time and N facial images obtained within a second preset time period before the current time, and the mental state parameters include M mental state parameters corresponding to the M facial images and N mental state parameters corresponding to the N facial images, where M and N are integers greater than 1,
the operation judgment module is further configured to obtain M sight focuses of the user on an interactive interface of the interactive activity based on the M mental state parameters, and determine an emotion type corresponding to each of the N mental state parameters based on the N mental state parameters, obtaining N emotion types in total; judge, based on the M sight focuses, whether the number of sight focuses located in the same one of at least two regions on the interactive interface is greater than or equal to a first preset number; judge, based on the N emotion types, whether the number of non-positive emotions among the N emotion types is greater than or equal to a second preset number; determine, when the number of sight focuses in the same region meets the first preset number, that the interaction state of the user with respect to the next operation of the interactive activity is a sight focusing state; and determine, when the number of non-positive emotions among the N emotion types meets the second preset number, that the interaction state of the user is a non-positive emotion state.
With reference to the second aspect, in some optional implementations, the apparatus further includes:
an angle prompt module, configured to generate and output an image capture angle adjustment prompt when it is determined that the facial image does not include at least some of the user's facial features, where the user's facial features include the five facial features (eyes, eyebrows, nose, mouth and ears) of the user.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes: a processor, a memory, a bus and a communication interface; the processor, the communication interface and the memory are connected by the bus. The memory is used for storing programs. The processor is configured to execute the image processing method according to the first aspect and any of the implementation manners of the first aspect by calling a program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having non-volatile program code executable by a computer, where the program code causes the computer to execute the image processing method described in the first aspect and in any one of its implementations.
The beneficial effects of the embodiment of the application are that:
the interactive state of the user for the next operation of the interactive activity can be determined based on the mental state parameters of the user, so that when the decision of the user at the moment is determined to be improved according to the interactive state of the user, the prompt is generated and output for the user, and the user can be helped to quickly improve the decision level in the interactive activity.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can also obtain other related drawings from these drawings without inventive effort.
Fig. 1 shows a block diagram of an electronic device according to a first embodiment of the present application;
FIG. 2 is a first flowchart of an image processing method provided in a second embodiment of the present application;
fig. 3 illustrates a first sub-flowchart of step S130 in a first flowchart of an image processing method according to a second embodiment of the present application;
FIG. 4 is a second flowchart of an image processing method provided in a second embodiment of the present application;
fig. 5 shows a second sub-flowchart of step S130 in the first flowchart of an image processing method according to the second embodiment of the present application;
fig. 6 shows a block diagram of an image processing apparatus according to a third embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without inventive step, are within the scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. The terms "first," "second," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
First embodiment
Referring to fig. 1, an electronic device 10 is provided in an embodiment of the present application, where the electronic device 10 may be a terminal device or a server. The terminal device may be a Personal Computer (PC), a tablet PC, a smart phone, a Personal Digital Assistant (PDA), or the like; the server may be a web server, a database server, a cloud server, or a server assembly composed of a plurality of sub servers, etc.
In this embodiment, the electronic device 10 may include: a memory 11, a communication interface 11, a bus 13, and a processor 14. The processor 14, the communication interface 11, and the memory 11 are connected by the bus 13. The processor 14 is arranged to execute executable modules, such as computer programs, stored in the memory 11. The components and configuration of the electronic device 10 shown in FIG. 1 are exemplary rather than limiting, and the electronic device 10 may have other components and configurations as required.
The memory 11 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. In the present embodiment, the memory 11 stores the program necessary for executing the image processing method.
The bus 13 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 1, but this does not indicate only one bus or one type of bus.
The processor 14 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 14 or by instructions in the form of software. The processor 14 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps and logic blocks disclosed in the embodiments of the present application may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may reside in RAM, flash memory, ROM, PROM or EEPROM, registers, or other storage media well known in the art.
The methods and the steps performed by the apparatus disclosed in any embodiment of the present application may be applied to, or implemented by, the processor 14. After the processor 14 receives an execution instruction and calls the program stored in the memory 11 through the bus 13, the processor 14 controls the communication interface 11 through the bus 13 to execute the flow of the image processing method.
In addition, in some cases, if the electronic device 10 is a terminal device, the electronic device 10 may further have a camera 15, and the camera 15 may be a conventional high-definition camera. The camera 15 may be connected to the bus 13, and the camera 15 may be used to capture an image containing an object, such that the processor 14 of the electronic device 10 performs a method flow of the image processing method based on the bus 13 obtaining the image captured by the camera 15.
In other cases, if the electronic device 10 is a server, the electronic device 10 may obtain an image of the user with a terminal that captures the image of the user, so that the electronic device 10 may execute the method flow of the image processing method based on the obtained image.
Second embodiment
The present embodiment provides an image processing method, it should be noted that the steps shown in the flowchart of the figure may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different than here. The present embodiment will be described in detail below.
Referring to fig. 2, in the image processing method provided in this embodiment, the image processing method includes: step S110, step S120, step S130, and step S140.
Step S110: during the user's participation in the interactive activity, a facial image of the user is obtained.
Step S120: based on the facial image, obtaining a mental state parameter of the user.
Step S130: and judging the interaction state of the user on the next operation of the interaction activity based on the expression parameters.
Step S140: and generating and outputting prompt information corresponding to the interactive state according to the interactive state.
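Purely for illustration, the four steps can be sketched as one processing pass over a window of facial images; every name in the sketch below is a hypothetical placeholder, not an API defined by the patent.

```python
# Illustrative sketch of steps S110-S140 for one window of facial images.
# All callables are injected placeholders; none are defined by the patent.
def process_window(face_images, get_mental_state, judge_state, make_prompt, show_prompt):
    """face_images: the facial images obtained in step S110 for one time window."""
    params = [get_mental_state(img) for img in face_images]   # step S120
    state = judge_state(params)                                # step S130
    if state is not None:                                      # a prompt is warranted
        show_prompt(make_prompt(state))                        # step S140
```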
The steps of the present application will be described in detail below with reference to fig. 2 and 3.
Step S110: during the user's participation in the interactive activity, a facial image of the user is obtained.
The interactive activity may be provided by an interactive application. The interactive application may be, for example, a light application in which only a small part of the program is installed on the terminal device and most of it runs as a web-page-style cloud application, or a traditional application in which most of the program is installed and runs on the terminal device.
Under the condition that the electronic equipment is the terminal equipment, the electronic equipment can provide an interactive interface for the user by running the interactive application, so that the user can participate in the interactive activity of the interactive application based on the interactive interface. Optionally, the interactive activity may be a chess-card-like activity, for example: gobang, go, chinese chess, poker, mahjong, etc., but the present embodiment is not limited thereto.
In order to increase the real experience of the user after participating in the interactive activities, the interactive interface can show the real game interfaces of the chess and card activities, for example, the game interfaces of Chinese chess can be displayed on the interactive interface, and the game interfaces of playing cards can also be displayed on the interactive interface.
Further, after the user joins the interactive activity, the user's opponent in the interactive activity may be an AI (Artificial Intelligence) computer, or may be another user. For example, a user may play a game of Chinese chess against an AI computer on the electronic device, or against another user. In other words, although the opponent may differ, the game interface presented on the electronic device may be the same.
Furthermore, an option of whether to enable operation prompts can be provided to the user on the game interface of the electronic equipment. If the user chooses to enable operation prompts, the electronic device starts executing the image processing method in response to that selection; otherwise, execution of the image processing method is not started and the user participates in the interactive activity in the traditional way.
After the electronic device starts execution of the image processing method, the electronic device can shoot the face of the user based on the camera, and obtain a video stream of the user face shot by the camera. Since the video stream may be composed of a plurality of frames of consecutive face images of the user, obtaining the video of the face of the user by the electronic device may also be understood as obtaining the plurality of frames of face images of the user by the electronic device.
In the case where the electronic device is a server, the electronic device may implement the execution of the image processing method without directly interacting with the user. In this case, the user terminal used by the user may run the interactive application, and when the user terminal used by the user participates in the interactive activity of the interactive application, the electronic device may interact with the user terminal used by the user to implement the execution of the image processing method. Therefore, the electronic device can obtain a video stream of the face of the user from the user terminal, wherein the video stream is shot by the camera of the user terminal, and thus the electronic device also obtains a multi-frame face image of the user.
As an optional way of storing the video stream: because the electronic device continuously receives the video stream of the user's face from the moment it starts executing the image processing method until the user finishes the interactive activity, and because its storage space is limited, the electronic device may keep updating the stored video stream so that only a period of time before the current moment is retained. For example, by updating the stored video stream throughout the interactive activity, the electronic device can keep the most recent 1 to 5 minutes of video in real time.
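One simple way to realize this real-time storage is a rolling buffer that keeps only the most recent frames; the sketch below assumes a fixed camera frame rate, which the patent does not specify.

```python
from collections import deque

FPS = 25                  # assumed camera frame rate
BUFFER_SECONDS = 5 * 60   # keep at most the last 5 minutes of frames

class FrameBuffer:
    """Keeps only the most recent frames; older frames are dropped automatically."""
    def __init__(self, fps=FPS, seconds=BUFFER_SECONDS):
        self._frames = deque(maxlen=fps * seconds)

    def push(self, frame):
        self._frames.append(frame)

    def last(self, seconds, fps=FPS):
        """Return the frames captured within the last `seconds` seconds."""
        n = min(len(self._frames), int(seconds * fps))
        return list(self._frames)[-n:] if n else []
```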
After the electronic device obtains the multi-frame face images of the user, the electronic device may continue to perform step S120.
Step S120: based on the facial image, obtaining a mental parameter of the user.
It can be understood that, while participating in the interactive activity, if the user does not know or is unsure what the appropriate next operation is (for example, during a game of Chinese chess the user does not know where a certain piece should move), the user tends, as a physiological response, to enter a thinking state in which his or her gaze is focused on a particular spot so as to concentrate. The electronic device can therefore use this physiological response to determine whether the user is unsure of the appropriate next operation.
When the user is in a thinking state, focusing the gaze on a certain spot is a process rather than an instant; that is, the user's gaze stays on that spot for a period of time so that the user can concentrate. Therefore, among the multiple frames of the user's facial images, a single frame can hardly reflect the thinking state accurately, while several facial images spanning a period of time can. The electronic device may thus process a plurality of facial images to determine whether the user is in a thinking state, i.e. does not know the appropriate next operation of the interactive activity.
Alternatively, since the user's thinking state usually does not last very long, if the time span covered by the facial images processed by the electronic device is too long, the result may be inaccurate. For example, if the user is in a thinking state for the first 3 seconds and distracted for the following 10 seconds, processing facial images that cover the whole 13 seconds is very likely not to yield the result that the user is in a thinking state. Therefore, the electronic device may process only the M facial images obtained within a first preset time period before the current time to ensure the accuracy of the result, where M may be an integer greater than 1 and the first preset time period may be, but is not limited to, 1 to 3 seconds.
In this embodiment, as a way of obtaining M face images, the way of obtaining M face images by the electronic device may be that the electronic device extracts a recent video segment of 1 to 3 seconds from a stored video stream of 1 to 5 minutes, and takes a plurality of frame face images included in the video segment as the M face images.
As another way of obtaining the M facial images: at the level of individual frames, the user's facial appearance can hardly change abruptly between two adjacent frames. Based on this, the electronic device can extract only a subset of frames from the 1-3 second video segment as the M facial images, for example (but not limited to) keeping one frame out of every two or every three consecutive frames, which reduces the amount of computation while preserving the accuracy of the result.
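The frame-skipping idea can be sketched as follows; only the 1-3 second window and the keep-one-in-two stride come from the text, the rest is an assumption.

```python
def sample_face_images(frames, fps=25, window_seconds=3, stride=2):
    """frames: all buffered facial images, oldest first.
    Returns the M facial images: the last `window_seconds` of frames,
    keeping one frame out of every `stride` consecutive frames."""
    window = frames[-int(fps * window_seconds):]
    return window[::stride]
```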
It will also be appreciated that, also because the state of thought of the user may generally be embodied as a gaze focus of the user, the electronic device may determine whether the user is in a state of thought based on processing and analyzing images of eye portions of the facial image of the user.
In this embodiment, a trained face emotion analysis model is preset in the electronic device, so that the electronic device can call the face emotion analysis model, and can input each face image of M face images into the face emotion analysis model, so that the face emotion analysis model can perform matting processing on each face image based on a deep neural network, and determine images of both eyes on the face of a user from each face image. Therefore, the electronic equipment can obtain image parameters corresponding to the images of the two eye parts in each facial image output by the facial emotion analysis model, and M image parameters are obtained in total.
It can be understood that, in this embodiment, the two-eye image may be used to determine the user's expression, that is, to determine whether the user is in a thinking state; therefore, the image data corresponding to the two-eye image can serve as a mental state parameter of the user. In this way, by obtaining the M image parameters, the electronic device obtains M mental state parameters of the user.
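A minimal sketch of step S120, turning the M facial images into M mental state parameters, is given below; FaceEmotionModel and its crop_eyes method are hypothetical placeholders standing in for the trained face emotion analysis model, not an interface defined by the patent.

```python
class FaceEmotionModel:
    """Placeholder for the trained face emotion analysis model."""
    def crop_eyes(self, face_image):
        # In a real system this would run the deep network's matting step
        # and return the image region covering both eyes.
        raise NotImplementedError

def mental_state_parameters(model, face_images):
    """One mental state parameter (here: the both-eye image data) per facial image."""
    return [model.crop_eyes(img) for img in face_images]
```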
After obtaining the M mental state parameters, the electronic device may continue to perform step S130.
Step S130: and judging the interaction state of the user on the next operation of the interaction activity based on the expression parameters.
As shown in fig. 3, in this embodiment, the sub-process of step S130 may include: step S131 and step S132.
Step S131: and obtaining M sight focuses of the user on an interactive interface of the interactive activity based on the M mental state parameters.
Step S132: and judging whether the interaction state of the user for the next operation of the interaction activity is in a sight focusing state or not based on the M sight focuses.
The flow of step S131 and step S132 will be described in detail below.
Step S131: and obtaining M sight focuses of the user on an interactive interface of the interactive activity based on the M mental state parameters.
Since the electronic device needs to determine whether the user's gaze is focused, it can first determine the user's gaze focuses based on the M mental state parameters.
In detail, the electronic device may continue to call the face emotion analysis model and input the M mental state parameters into it. The face emotion analysis model calculates each of the M mental state parameters based on the deep neural network, so that the two gaze directions of the user corresponding to each mental state parameter can be determined.
It should be noted that a user generally watches with both eyes, and the direction of each eyeball corresponds to one gaze direction; because each mental state parameter corresponds to an image of both of the user's eyes, a pair of gaze directions can be obtained from each mental state parameter. When the user looks in different directions, the positions of the eyeballs in the eye images differ accordingly, so gazing at different positions yields different mental state parameters, from which different pairs of gaze directions can be determined.
In this embodiment, in order to determine where the user is gazing, after obtaining each pair of gaze directions the electronic device may input the pair into the face emotion analysis model for calculation; by calculating on the pair of gaze directions, the model predicts the gaze focus formed by that pair on the interactive interface of the interactive activity. In this way, the electronic device obtains a total of M gaze focuses formed on the interactive interface of the interactive activity.
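One illustrative way to reproduce "the gaze focus formed by each pair of gaze directions on the interactive interface" is to intersect each eye's gaze ray with the screen plane and average the two hit points. This is only an assumed geometric stand-in for the model's prediction, not the computation specified by the patent.

```python
import numpy as np

def gaze_focus(eye_origins, gaze_dirs, screen_z=0.0):
    """eye_origins: 2x3 array of left/right eye positions (camera coordinates).
    gaze_dirs:   2x3 array of the corresponding gaze direction vectors.
    Intersects each gaze ray with the screen plane z = screen_z (assuming the
    gaze is not parallel to the screen) and returns the mean (x, y) hit point."""
    hits = []
    for o, d in zip(np.asarray(eye_origins, float), np.asarray(gaze_dirs, float)):
        t = (screen_z - o[2]) / d[2]      # ray parameter at the screen plane
        hits.append(o[:2] + t * d[:2])
    return np.mean(hits, axis=0)
```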
Thus, the electronic device may continue to perform step S132 through the M gaze focuses.
Step S132: and judging whether the interaction state of the user for the next operation of the interaction activity is in a sight focusing state or not based on the M sight focuses.
In order to accurately determine where the user's gaze lands on the interactive interface, the electronic device may divide the interactive interface into at least two regions in advance, for example (but not limited to) dividing the interactive interface equally into 20 regions, and use these regions as the yardstick for measuring where the user gazes.
Since the nature of determining the M gaze focuses may be determining coordinates of each of the M gaze focuses on the interactive interface, the electronic device may determine which of the at least two regions each gaze focus is located in based on the coordinates of each gaze focus on the interactive interface. In this way, the electronic device can determine the number of line-of-sight focal points in the same one of the at least two regions.
In this embodiment, a larger number of gaze focuses in the same region indicates that the user has been watching that region for a longer time, i.e. that the user is in a gaze focusing state for thinking. Therefore, a first preset number can be preset in the electronic device; the first preset number represents the lower limit for judging that the user is in a gaze focusing state.
In this way, the electronic device may determine, based on the determined number of the line-of-sight focuses in the same area of the at least two areas, whether the number of the line-of-sight focuses located in the same area of the at least two areas on the interactive interface among the M line-of-sight focuses is greater than or equal to a first preset number. For example, the first preset number may be set to 30-60.
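The region division and the comparison against the first preset number can be sketched as follows; the 20-region grid and the threshold of 30 are the example figures from the text, while the helper names and screen dimensions are assumptions.

```python
from collections import Counter

def grid_region(point, cols=5, rows=4, width=1920, height=1080):
    """Maps an (x, y) point on the interface to one of cols*rows equal regions."""
    x, y = point
    col = min(int(x / width * cols), cols - 1)
    row = min(int(y / height * rows), rows - 1)
    return row * cols + col

def is_gaze_focused(gaze_points, region_of=grid_region, first_preset_number=30):
    """gaze_points: the M gaze focuses as (x, y) coordinates on the interface.
    True when at least `first_preset_number` focuses fall into one region."""
    counts = Counter(region_of(p) for p in gaze_points)
    return max(counts.values(), default=0) >= first_preset_number
```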
If the number of gaze focuses in the same region is not greater than or equal to the first preset number, the user's gaze focuses within the first preset time period are not concentrated, which indicates that the user's interaction state with respect to the next operation of the interactive activity within the first preset time period is not a gaze focusing state; that is, it can be determined that the user already knows the appropriate next operation of the interactive activity. In this case the electronic device may terminate the remaining steps of this round of the image processing method and wait for the next round of execution.
It should be noted that, from the moment the electronic device starts executing the image processing method until the user turns off operation prompts via the corresponding option or the interactive activity ends, the electronic device may run the image processing method in a polling manner. For example, the electronic device may start one processing flow based on a set of M facial images; when 5 newer facial images have been obtained, it may start an additional flow based on the M facial images updated with those 5 images, while the original flow continues to run.
If the number of gaze focuses in the same region is greater than or equal to the first preset number, the user's gaze focuses within the first preset time period are concentrated, which indicates that the user's interaction state with respect to the next operation of the interactive activity within the first preset time period is a gaze focusing state; it can then be determined that the user does not know the appropriate next operation of the interactive activity.
If it is determined that the user does not know the next suitable operation, the electronic device may continue to perform step S140.
Step S140: and generating and outputting prompt information corresponding to the interactive state according to the interactive state.
In order to improve the user experience and make the generated prompt information related to the unknown or uncertain next operation of the user as much as possible, in this embodiment, the electronic device may generate the prompt information based on an object focused by the user on the interactive interface.
Having determined that the number of gaze focuses in the same region reaches the first preset number, i.e. that the user's interaction state is a gaze focusing state, the electronic device may further analyze the image of that region of the interactive interface to determine the object of the interactive activity contained in it, where the object may be an entity in the interactive activity or the background of the interactive activity. Taking Chinese chess as an example, an entity may be a chess piece, and the background may be the areas of the interactive interface other than the pieces; taking playing cards as an example, an entity may be a card face belonging to the user, and the background may be the areas other than the user's cards.
It can be understood that, in order to ensure the accuracy of determining whether the object is an entity or background, the regions are divided such that the content of each of the at least two regions is either an entity of the interactive activity or background of the interactive activity; a single region should not contain both.
Therefore, the electronic device determines the object in the interactive activity included in the same area, and the electronic device can determine whether the object is an entity in the interactive activity or a background in the interactive activity.
If it is determined that the object is an entity in the interactive activity, it can be determined that what the user is gazing at is, for example, a chess piece or a card.
As an optional manner, the manner in which the electronic device generates the prompt message may be: the electronic device is preset with a trained interactive activity evaluation model, the interactive activity evaluation model may include an evaluation function, and a parameter of each entity in the interactive activity in the evaluation function may be determined by a situation of each entity in the interactive activity, for example, the evaluation function may determine a parameter of each chess piece based on a position of each chess piece in a current game situation. Therefore, the interactive activity evaluation model calculates the evaluation function based on the parameters of each entity, and prompt information according with the current situation of the game can be determined.
Therefore, after the electronic device determines that the object is an entity in the interactive activity, the electronic device may increase the weight used for that entity in the current evaluation function from a first value to a second value to obtain the current adjusted evaluation function, so that the entity has a greater influence on the final result. The interactive activity evaluation model is then computed based on the current adjusted evaluation function to generate prompt information for the next operation related to the entity.
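As a sketch of the weight adjustment (under the assumption of a simple linear evaluation function, which the patent does not prescribe), the entity the user is gazing at has its weight raised from the first value to the second value before the prompt is computed:

```python
def adjusted_evaluation(weights, focused_entity, second_value):
    """Return a copy of the evaluation weights in which the focused entity's
    weight is raised from its first (current) value to `second_value`."""
    adjusted = dict(weights)
    adjusted[focused_entity] = second_value
    return adjusted

def evaluate_position(position, weights):
    """Toy linear evaluation: sum of weight * situational score per entity."""
    return sum(weights[entity] * score for entity, score in position.items())

# Example: raise the weight of the "cannon" piece before generating the hint.
current_weights = {"cannon": 1.0, "horse": 1.0, "rook": 2.0}
position_scores = {"cannon": 0.4, "horse": 0.1, "rook": 0.3}
hint_weights = adjusted_evaluation(current_weights, "cannon", 1.5)
score = evaluate_position(position_scores, hint_weights)
```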
After the prompt information is calculated, in the case that the electronic device is a server, the electronic device may output the prompt information to a user terminal used by a user, so that the user terminal displays the prompt information. And in the case that the electronic device is a terminal device, the electronic device may display the prompt information in the form of animation or text. Correspondingly, the user can receive prompt information corresponding to uncertain or unknown operation, so that the user can have better user experience.
Taking Chinese chess as an example, if the object is determined to be the "cannon" piece, this indicates that the user wants to make the next move with the "cannon" but is unsure what the best move for it is in the current position. The electronic device therefore increases the weight of the entity "cannon" in the evaluation function before computing the prompt information, so that the computed prompt relates to an operation of the "cannon". Based on this, the prompt information displayed by the electronic device may be a specific move of the "cannon", for example advancing the cannon on file five.
If it is determined that the object is the background in the interactive activity, it can be determined that what the user is gazing at is, for example, the board grid.
Therefore, the electronic equipment does not adjust the weight of each entity in the current evaluation function, so that the interactive evaluation model can be calculated based on the current evaluation function, and further prompt information of next operation related to the current local situation can be generated. Therefore, the electronic device can also output the prompt information to the user terminal or display the prompt information.
As an optional way to avoid false prompts, after the electronic device finishes step S130 and before it starts step S140, the electronic device may analyze the image of the interactive interface to determine whether the user has already performed the next operation. If so, the electronic device may terminate the remaining steps of this round of the image processing method; otherwise, the electronic device continues with step S140.
Referring to fig. 4, as an optional implementation manner in this embodiment, after the electronic device completes step S130, and before the electronic device starts to execute step S140, the electronic device may further execute step S101 and step S102.
Step S101: after the user is determined to be in the gaze focusing state, determining, based on the N mental state parameters, the emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total.
Step S102: and judging whether the interaction state of the user is in a non-positive emotion state or not based on the N emotion types.
The flow of step S101 and step S102 will be described in detail below.
It can be understood that, even though it has been determined that the user does not know the appropriate next operation of the interactive activity, the prompt information need not be generated and pushed to the user immediately; to improve the user experience, the user's emotion can be detected continuously, and the prompt is given only when the user's emotion is detected to have been in a non-positive state for a period of time.
Therefore, before executing step S101, in this embodiment, the electronic device may also correspondingly extract the face image required for detecting the emotion from the stored multiple frames of face images corresponding to the video stream, and the face image required for detecting the emotion may be N face images obtained by the electronic device within a second preset time period before the current time, where the second preset time period may be 10 to 30 seconds.
In this embodiment, as a way to obtain N face images, the electronic device may extract a last video segment of 10 to 30 seconds from a stored video stream of 1 to 5 minutes, and use a plurality of frame face images included in the video segment as the N face images.
As another way of obtaining the N facial images: at the level of individual frames, the facial appearance can hardly change abruptly between two adjacent frames, so the electronic device can extract only a subset of frames from the 10-30 second video segment as the N facial images, for example (but not limited to) keeping one frame out of every five or every six consecutive frames, which reduces the amount of computation while preserving the accuracy of the result.
The electronic device can also call the preset face emotion analysis model and input each of the N facial images into it, so that the face emotion analysis model performs matting processing on each facial image based on the deep neural network and removes the background other than the user's face from each facial image, thereby obtaining a face-only image. The electronic device can thus obtain the image parameter corresponding to each face-only image output by the face emotion analysis model, obtaining N image parameters in total.
It is understood that, in this embodiment, the purpose of the face-only image may also be to determine the expression of the user, that is, to determine whether the interaction state of the user is in a non-positive emotion state; therefore, the image data of each face-only image may also serve as a mental state parameter reflecting the user's expression. Thus, by obtaining N face-only images of the user, the electronic device can obtain N mental state parameters of the user.
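The sketch below illustrates how a face-only image could be cut out of a frame. It does not reproduce the deep-neural-network matting described above; an OpenCV Haar-cascade face detector is used here purely as a stand-in, and the detector parameters are assumptions.

import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_only_image(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face found in this frame
    # Keep the largest detected face and discard the background around it.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return frame[y:y + h, x:x + w]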
After obtaining the N mental state parameters, the electronic device may execute step S101.
Step S101: after determining that the user is in the sight focusing state, determining, based on the N mental state parameters, the emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total.
The electronic device may process and analyze the N mental state parameters based on the face emotion analysis model; that is, the electronic device may again call the face emotion analysis model and input each of the N mental state parameters into it. A plurality of emotion types are preset in the face emotion analysis model, which may include, for example: happy, optimistic, neutral, anxious and sad.
In this way, the face emotion analysis model processes and analyzes each mental state parameter against the plurality of emotion types and can determine, for each mental state parameter, the probability of each emotion type to be determined. According to these probabilities, the electronic device can obtain, for each mental state parameter, the emotion type to be determined with the highest probability, and that emotion type is the emotion type corresponding to the mental state parameter. For example, for a certain mental state parameter, if the probability that the emotion type to be determined is happy is 0.05, the probability that it is optimistic is 0.05, the probability that it is neutral is 0.1, the probability that it is anxious is 0.7, and the probability that it is sad is 0.1, the electronic device may determine that the emotion type corresponding to this mental state parameter is anxious.
In this way, the electronic device can determine the emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total.
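A minimal sketch of this per-parameter classification follows. The emotion_model callable stands in for the face emotion analysis model and is an assumption; it is taken here to return one probability for each preset emotion type.

EMOTION_TYPES = ["happy", "optimistic", "neutral", "anxious", "sad"]

def emotion_types_for(mental_state_params, emotion_model):
    emotion_types = []
    for param in mental_state_params:
        probs = list(emotion_model(param))   # e.g. [0.05, 0.05, 0.1, 0.7, 0.1]
        # The emotion type to be determined with the highest probability is
        # taken as the emotion type corresponding to this mental state parameter.
        emotion_types.append(EMOTION_TYPES[probs.index(max(probs))])
    return emotion_types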
Step S102: based on the N emotion types, judging whether the interaction state of the user is in a non-positive emotion state.
In this embodiment, the electronic device may classify the plurality of emotion types as positive emotions and non-positive emotions, for example, happy and optimistic may be classified as positive emotions, and neutral, anxious and sad may be classified as non-positive emotions.
Then, among the N emotion types, a larger number of non-positive emotions indicates that the user has been in a non-positive emotional state for a longer time. Therefore, a second preset number may be preset in the electronic device; the second preset number is the lower limit on the number of non-positive emotions above which the interaction state of the user is considered to be in a non-positive emotion state.
In this way, the electronic device may also determine whether the number of non-positive emotions in the N emotion types is greater than or equal to a second preset number based on the determined N emotion types. For example, the second preset number may be set to 70-90.
If the number of non-positive emotions among the N emotion types is smaller than the second preset number, the interaction state of the user has not been in the non-positive emotion state within the second preset time period, that is, it can be judged that the user does not currently need to be prompted. In this case, the electronic device may terminate the remaining flow of this execution of the image processing method and wait for the next round of execution.
If the number of non-positive emotions among the N emotion types is greater than or equal to the second preset number, the interaction state of the user has been in the non-positive emotion state within the second preset time period, that is, it can be judged that the user is currently rather anxious and may need to be prompted about the next operation.
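A minimal sketch of this counting check in step S102 is shown below. The grouping of emotion types into non-positive emotions follows the example above, while the default threshold of 80 is an assumption chosen within the 70-90 range mentioned.

NON_POSITIVE_EMOTIONS = {"neutral", "anxious", "sad"}

def in_non_positive_state(emotion_types, second_preset_number=80):
    # Count how many of the N emotion types are non-positive and compare the
    # count against the second preset number (the lower limit for this state).
    non_positive_count = sum(1 for t in emotion_types if t in NON_POSITIVE_EMOTIONS)
    return non_positive_count >= second_preset_number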
Then, after determining that a prompt about the next operation is required, and when the electronic device determines, based on the analysis of the image of the interactive interface, that the user has not yet performed the next operation, the electronic device may execute step S140 to prompt the user.
As some optional manners in this embodiment, the electronic device may further analyze the face image; if it is determined that the face image does not include at least some of the facial features of the user, where the facial features of the user include the five sense organs of the user, the electronic device may generate and output an image acquisition angle adjustment prompt, so that the user can adjust his or her posture based on the prompt and the face of the user can be located within the acquisition range of the camera.
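A rough sketch of such a completeness check is given below. Testing only for a visible face and two eyes with OpenCV Haar cascades is an assumption made for illustration; the text above refers to all five sense organs, and the detector parameters are likewise assumptions.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def needs_angle_adjustment(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return True                     # no face within the acquisition range
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
    return len(eyes) < 2                # part of the face is likely cut off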
As shown in fig. 5, as another possible implementation manner of step S130 in this embodiment, step S130 may include: step S1301, step S1302, and step S1303.
Step S1301: obtaining M sight focuses of the user on the interactive interface of the interactive activity based on the M mental state parameters, and determining, based on the N mental state parameters, the emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total.
Step S1302: judging whether the number of sight focuses, among the M sight focuses, that fall in the same one of at least two areas on the interactive interface is greater than or equal to a first preset number; and judging, based on the N emotion types, whether the number of non-positive emotions in the N emotion types is greater than or equal to a second preset number.
Step S1303: when the number of the sight line focuses in the same area is judged to meet the first preset number, determining that the interaction state of the user for the next operation of the interaction activity is in a sight line focusing state; and when the number of the non-positive emotions in the N emotion types is judged to meet the second preset number, determining that the interaction state of the user is in a non-positive emotion state.
That is, the electronic device may use both the sight focusing state and the non-positive emotion state as conditions for determining whether the user needs to be prompted; then, when the user is determined to be in either of these two states, the electronic device may determine that the user needs to be prompted.
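For illustration only, the sketch below evaluates the two conditions of steps S1301 to S1303 in parallel and treats either satisfied condition as sufficient, following the description above; the region labels, the thresholds and the emotion grouping are assumptions rather than values from this application.

def should_prompt(focus_regions, emotion_types,
                  first_preset_number=60, second_preset_number=80):
    # Step S1302: count the sight focuses per interface area and keep the busiest area.
    counts = {}
    for region in focus_regions:        # e.g. ["area_a", "area_a", "area_b", ...]
        counts[region] = counts.get(region, 0) + 1
    sight_focusing = max(counts.values(), default=0) >= first_preset_number
    # Step S1302 (continued): count the non-positive emotion types.
    non_positive = sum(t in {"neutral", "anxious", "sad"} for t in emotion_types)
    non_positive_state = non_positive >= second_preset_number
    # Step S1303: the user is prompted when either state is determined.
    return sight_focusing or non_positive_state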
It can be understood that the detailed implementation flows of step S1301, step S1302 and step S1303 may refer to the foregoing implementation manners, and will not be described in detail here.
Third embodiment
Referring to fig. 6, an embodiment of the present application provides an image processing apparatus 100, where the image processing apparatus 100 may be applied to an electronic device, and the image processing apparatus 100 includes:
the image obtaining module 110 is configured to obtain a facial image of the user during the user's participation in the interactive activity.
A mental state obtaining module 120, configured to obtain mental state parameters of the user based on the facial image.
An operation determining module 130, configured to determine, based on the mental state parameter, an interaction state of the user for a next operation of the interaction activity.
And a prompt output module 140, configured to generate and output prompt information corresponding to the interaction state according to the interaction state.
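As a purely illustrative sketch, the four modules above could be wired together as follows; the method names and module internals are assumptions, and only the division into the four modules follows the text.

class ImageProcessingApparatus:
    def __init__(self, image_obtaining, mental_state_obtaining,
                 operation_determining, prompt_output):
        self.image_obtaining = image_obtaining                  # module 110
        self.mental_state_obtaining = mental_state_obtaining    # module 120
        self.operation_determining = operation_determining      # module 130
        self.prompt_output = prompt_output                      # module 140

    def run_once(self):
        # One pass of the method: face images -> mental state parameters ->
        # interaction state -> prompt information.
        face_images = self.image_obtaining.get_face_images()
        params = self.mental_state_obtaining.get_mental_state_parameters(face_images)
        state = self.operation_determining.determine_interaction_state(params)
        return self.prompt_output.generate_and_output_prompt(state)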
It should be noted that, as those skilled in the art can clearly understand, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
In summary, the embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a facial image of a user in the process of the user participating in an interactive activity; acquiring a mental state parameter of the user based on the facial image; judging the interaction state of the user for the next operation of the interactive activity based on the mental state parameter; and generating and outputting prompt information corresponding to the interaction state according to the interaction state.
The interaction state of the user for the next operation of the interactive activity can be determined based on the mental state parameters of the user, so that when it is determined from the interaction state that the user's decision-making at this moment can be improved, a prompt is generated and output for the user, helping the user quickly improve the decision level in the interactive activity.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. An image processing method, characterized in that the method comprises:
acquiring a facial image of a user in the process of the user participating in an interactive activity;
acquiring a mental state parameter of the user based on the face image, wherein the mental state parameter is image data corresponding to the images of the two eyes of the user;
judging the interaction state of the user for the next operation of the interactive activity based on the mental state parameters;
generating and outputting prompt information corresponding to the interaction state according to the interaction state; wherein the facial image comprises M facial images obtained within a first preset time period before the current time, the mental state parameters comprise M mental state parameters corresponding to the M facial images, M is an integer greater than 1, and judging the interaction state of the user for the next operation of the interactive activity based on the mental state parameters comprises:
obtaining M sight focuses of a user on an interactive interface of the interactive activity based on the M mental state parameters;
and judging whether the interaction state of the user for the next operation of the interaction activity is in a sight focusing state or not based on the M sight focuses.
2. The image processing method according to claim 1, wherein determining whether an interaction state of the user for a next operation of the interaction activity is in a gaze focusing state based on the M gaze focuses comprises:
judging whether the number of sight focuses, among the M sight focuses, that fall in the same one of at least two areas on the interactive interface is greater than or equal to a first preset number;
and if so, indicating that the interaction state of the user for the next operation of the interaction activity is in a sight focusing state.
3. The image processing method according to claim 2, wherein obtaining M gaze foci of a user on an interactive interface of the interactive activity based on the M mental state parameters comprises:
determining, for each of the M mental state parameters, the two sight directions of the user corresponding to that mental state parameter;
and determining the sight focus formed on the interactive interface of the interactive activity by each pair of sight directions, thereby determining the M sight focuses.
4. The image processing method according to claim 1, wherein the facial images further comprise N facial images obtained within a second preset time period before the current time, N is an integer greater than 1, and the mental state parameters comprise N mental state parameters corresponding to the N facial images; after judging the interaction state of the user for the next operation of the interactive activity based on the mental state parameters, and before generating and outputting prompt information corresponding to the interaction state according to the interaction state, the method further comprises:
after determining that the user is in the sight focusing state, determining, based on the N mental state parameters, an emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total;
based on the N emotion types, judging whether the interaction state of the user is in a non-positive emotion state;
if yes, executing the step of generating and outputting prompt information corresponding to the interaction state according to the interaction state.
5. The image processing method according to claim 4, wherein determining whether the interaction state of the user is in a non-positive emotion state based on the N emotion types comprises:
judging whether the number of non-positive emotions in the N emotion types is larger than or equal to a second preset number or not, wherein the fact that the number of the non-positive emotions is larger than or equal to the second preset number indicates that the interaction state of the user is in a non-positive emotion state.
6. The image processing method according to claim 4, wherein determining, based on the N mental state parameters, an emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total, comprises:
analyzing each of the N mental state parameters through a face emotion analysis model to obtain, as output by the face emotion analysis model, the probability that each mental state parameter corresponds to each of a plurality of emotion types to be determined;
and determining, according to the probability of each emotion type to be determined, the emotion type to be determined with the highest probability among the plurality of emotion types to be determined, wherein for each mental state parameter, the emotion type to be determined with the highest probability is the emotion type corresponding to that mental state parameter.
7. The image processing method according to claim 2, wherein generating and outputting a prompt message corresponding to the interaction state according to the interaction state includes:
when the interaction state of the user is in the sight focusing state, determining an object in the interactive activity contained in the same area;
and generating prompt information of the next operation according to the object, and outputting the prompt information.
8. The image processing method according to claim 7, wherein generating a prompt message for the next operation based on the object and outputting the prompt message includes:
judging whether the object is an entity in the interactive activity or a background in the interactive activity according to the object;
if the object is an entity in the interactive activity, increasing the weight used for calculating the entity in a current evaluation function from a first value to a second value to obtain a currently adjusted evaluation function, and generating prompt information of the next operation related to the entity based on the currently adjusted evaluation function; and if the object is the background in the interactive activity, generating prompt information of the next operation based on the current evaluation function.
9. The image processing method according to claim 1, wherein the facial images comprise M facial images obtained within a first preset time period before the current time and N facial images obtained within a second preset time period before the current time, the mental state parameters comprise M mental state parameters corresponding to the M facial images and N mental state parameters corresponding to the N facial images, M and N are integers greater than 1, and judging the interaction state of the user for the next operation of the interactive activity based on the mental state parameters comprises:
obtaining M sight focuses of the user on an interactive interface of the interactive activity based on the M mental state parameters, and determining, based on the N mental state parameters, an emotion type corresponding to each of the N mental state parameters, obtaining N emotion types in total;
judging whether the number of sight focuses, among the M sight focuses, that fall in the same one of at least two areas on the interactive interface is greater than or equal to a first preset number; and judging, based on the N emotion types, whether the number of non-positive emotions among the N emotion types is greater than or equal to a second preset number;
when the number of the sight line focuses in the same area is judged to meet the first preset number, determining that the interaction state of the user for the next operation of the interaction activity is in a sight line focusing state; and when the number of the non-positive emotions in the N emotion types is judged to meet the second preset number, determining that the interaction state of the user is in a non-positive emotion state.
10. The image processing method according to any one of claims 1 to 9, further comprising:
generating and outputting an image capture angle adjustment prompt upon determining that at least some of the facial features of the user are not included in the facial image, wherein the facial features of the user include the five sense organs of the user.
11. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a facial image of a user in the process that the user participates in the interactive activity;
a mental state obtaining module, configured to obtain mental state parameters of the user based on the facial image, where the mental state parameters are image data corresponding to images of both eyes of the user;
the operation judgment module is used for judging the interaction state of the user for the next operation of the interactive activity based on the mental state parameters;
the prompt output module is used for generating and outputting prompt information corresponding to the interaction state according to the interaction state;
wherein the facial image comprises M facial images obtained within a first preset time period before the current time, the mental state parameters comprise M mental state parameters corresponding to the M facial images, M is an integer greater than 1, and the operation judgment module is used for obtaining M sight focuses of the user on an interactive interface of the interactive activity based on the M mental state parameters;
and judging whether the interaction state of the user for the next operation of the interaction activity is in a sight focusing state or not based on the M sight focuses.
12. An electronic device, characterized in that the electronic device comprises: a processor, a memory, a bus and a communication interface; the processor, the communication interface and the memory are connected through the bus;
the memory is used for storing programs;
the processor is configured to execute the image processing method according to any one of claims 1 to 10 by calling the program stored in the memory.
13. A computer-readable storage medium having computer-executable non-volatile program code, wherein the program code causes the computer to perform the image processing method according to any one of claims 1 to 10.
CN201811440635.8A 2018-11-28 2018-11-28 Image processing method, image processing device, electronic equipment and storage medium Active CN109542230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811440635.8A CN109542230B (en) 2018-11-28 2018-11-28 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811440635.8A CN109542230B (en) 2018-11-28 2018-11-28 Image processing method, image processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109542230A CN109542230A (en) 2019-03-29
CN109542230B true CN109542230B (en) 2022-09-27

Family

ID=65851075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811440635.8A Active CN109542230B (en) 2018-11-28 2018-11-28 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109542230B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI769419B (en) * 2019-12-10 2022-07-01 中華電信股份有限公司 System and method for public opinion sentiment analysis

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8237807B2 (en) * 2008-07-24 2012-08-07 Apple Inc. Image capturing device with touch screen for adjusting camera settings
US20150186912A1 (en) * 2010-06-07 2015-07-02 Affectiva, Inc. Analysis in response to mental state expression requests
CN106708257A (en) * 2016-11-23 2017-05-24 网易(杭州)网络有限公司 Game interaction method and device
US10558701B2 (en) * 2017-02-08 2020-02-11 International Business Machines Corporation Method and system to recommend images in a social application
CN107784281B (en) * 2017-10-23 2019-10-11 北京旷视科技有限公司 Method for detecting human face, device, equipment and computer-readable medium
CN108197533A (en) * 2017-12-19 2018-06-22 迈巨(深圳)科技有限公司 A kind of man-machine interaction method based on user's expression, electronic equipment and storage medium
CN108434757A (en) * 2018-05-25 2018-08-24 深圳市零度智控科技有限公司 intelligent toy control method, intelligent toy and computer readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101071252A (en) * 2006-05-10 2007-11-14 佳能株式会社 Focus adjustment method, focus adjustment apparatus, and control method thereof
JP2016081319A (en) * 2014-10-17 2016-05-16 キヤノン株式会社 Edition support device for image material, album creation method and program for layout
CN107635147A (en) * 2017-09-30 2018-01-26 上海交通大学 Health information management TV based on multi-modal man-machine interaction
CN108388889A (en) * 2018-03-23 2018-08-10 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing facial image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A joint attention method for a class of expression robots with the same structure; 王巍 et al.; 《机器人》; 2012-05-15 (No. 03); full text *
A semiotic analysis of TV program hosts and TV brand programs; 黄雨水; 《中国广播电视学刊》; 2009-05-20 (No. 05); full text *

Also Published As

Publication number Publication date
CN109542230A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
JP6467965B2 (en) Emotion estimation device and emotion estimation method
US20160023116A1 (en) Electronically mediated reaction game
CN109191802B (en) Method, device, system and storage medium for eyesight protection prompt
US11132544B2 (en) Visual fatigue recognition method, visual fatigue recognition device, virtual reality apparatus and storage medium
CN113422977B (en) Live broadcast method and device, computer equipment and storage medium
CN113453034B (en) Data display method, device, electronic equipment and computer readable storage medium
US11758285B2 (en) Picture selection method and related device
CN107563325B (en) Method and device for testing fatigue degree and terminal equipment
US10877555B2 (en) Information processing device and information processing method for controlling user immersion degree in a virtual reality environment
JP2015220574A (en) Information processing system, storage medium, and content acquisition method
US20210312167A1 (en) Server device, terminal device, and display method for controlling facial expressions of a virtual character
JP2020098992A (en) Video distribution system, video distribution method, and video distribution program
CN111580665B (en) Method and device for predicting fixation point, mobile terminal and storage medium
CN110547756A (en) Vision test method, device and system
CN109634422B (en) Recitation monitoring method and learning equipment based on eye movement recognition
CN109542230B (en) Image processing method, image processing device, electronic equipment and storage medium
KR101498593B1 (en) Apparatus and method for evaluating game or internet addiction
JP2020099090A (en) Video distribution system, video distribution method, and video distribution program
US11800975B2 (en) Eye fatigue prediction based on calculated blood vessel density score
CN105138950B (en) A kind of photographic method and user terminal
CN111773693A (en) Method and device for processing view in game and electronic equipment
US9669297B1 (en) Using biometrics to alter game content
CN115116088A (en) Myopia prediction method, apparatus, storage medium, and program product
US11543884B2 (en) Headset signals to determine emotional states
CN113297960A (en) False face detection method, terminal device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant