CN111221406B - Information interaction method and device
- Publication number: CN111221406B (application CN201811409375.8A)
- Authority: CN (China)
- Prior art keywords: image, identified, preset, gesture information, user
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The embodiment of the invention provides an information interaction method, which comprises the following steps: acquiring a first image to be identified that is collected for a user; identifying the first image to be identified to obtain first gesture information in the first image to be identified; and determining and executing a trigger event corresponding to the first gesture information so as to interact with the user. In this way, the electronic device can determine and execute the corresponding trigger event by recognizing the user's gesture; that is, the user can exchange information with the electronic device through gestures, so with this scheme the electronic device can provide a richer interaction experience for the user.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an information interaction method and apparatus.
Background
In some scenarios, an electronic device needs to interact with a user. For example, the user may send a voice command to a smart speaker, and after receiving the voice command, the smart speaker may play a corresponding audio file; alternatively, the user may send an instruction to a robot via a touch interface, and the robot may perform a corresponding action after receiving the instruction, and so on.
Generally, information interaction between an electronic device and a user is based on a voice signal uttered by the user or an operation instruction issued by the user to the electronic device; the application scenario is therefore limited and cannot provide a richer interaction experience.
Disclosure of Invention
The embodiment of the invention aims to provide an information interaction method that offers a richer interaction experience. The specific technical scheme is as follows:
the embodiment of the invention provides an information interaction method which is applied to electronic equipment and comprises the following steps:
acquiring a first image to be identified, which is acquired for a user;
identifying the first image to be identified to obtain first gesture information in the first image to be identified;
and determining and executing a trigger event corresponding to the first gesture information so as to interact with the user.
Optionally, the identifying the first image to be identified to obtain first gesture information in the first image to be identified includes:
performing template matching between the first image to be identified and a plurality of preset images, and determining the preset image matched with the first image to be identified as a matching image;
and taking gesture information corresponding to the matched image as first gesture information of the first image to be identified.
Optionally, the performing template matching between the first image to be identified and a plurality of preset images and determining the preset image matched with the first image to be identified as a matching image includes:
determining a plurality of areas to be detected from the first image to be identified according to a preset traversal rule;
for each preset image, calculating a difference value between each region to be detected and the preset image, and determining, according to the difference values, the region to be detected corresponding to the preset image as a target region;
and taking the preset image whose target region has the minimum difference value as the preset image matched with the first image to be identified.
Optionally, after the determining and executing the trigger event corresponding to the first gesture information to interact with the user, the method further includes:
acquiring a second image to be identified acquired for a user;
identifying the second image to be identified to obtain second gesture information in the second image to be identified;
judging whether the second gesture information meets a preset condition or not;
if yes, determining and executing a triggering event corresponding to the second gesture information; if not, acquiring a control instruction, and determining and executing a trigger event corresponding to the control instruction;
and returning to the step of acquiring the second image to be identified acquired for the user.
Optionally, the determining whether the second gesture information meets a preset condition includes:
judging whether the confidence of the trigger event corresponding to the second gesture information is greater than a preset threshold;
if yes, judging that the second gesture information meets a preset condition; if not, judging that the second gesture information does not meet the preset condition.
Optionally, the following steps are adopted to determine the confidence level of the triggering event corresponding to the second gesture information:
acquiring a plurality of candidate trigger events and initial confidence coefficients; the initial confidence of each candidate trigger event is equal;
randomly selecting a target trigger event from the plurality of candidate trigger events, taking the target trigger event as a trigger event corresponding to the gesture information, and executing the trigger event;
if the interaction instruction of the user is received after the trigger event is executed, the confidence of the target trigger event is increased according to a preset updating rule;
and if the interaction instruction of the user is not received after the trigger event is executed, reducing the confidence coefficient of the target trigger event according to the preset updating rule.
Optionally, the determining whether the second gesture information meets a preset condition includes:
judging whether the time difference between the moment of identifying the second gesture information and the moment of executing any trigger event last time is larger than a preset threshold value or not;
if not, judging that the second gesture information meets a preset condition; if yes, judging that the second gesture information does not meet a preset condition.
The embodiment of the invention also provides an information interaction device which is applied to the electronic equipment, and the device comprises:
the image acquisition module is used for acquiring a first image to be identified acquired for a user;
the processor is used for identifying the first image to be identified, obtaining first gesture information in the first image to be identified, and determining and executing a triggering event corresponding to the first gesture information;
and the communication module is used for interacting with the user.
Optionally, the processor is specifically configured to perform template matching on the first image to be identified and a plurality of preset images, and determine the preset image matched with the first image to be identified as a matching image; and take gesture information corresponding to the matching image as the first gesture information of the first image to be identified.
Optionally, the processor is specifically configured to:
determining a plurality of areas to be detected from the first image to be identified according to a preset traversal rule;
for each preset image, calculating a difference value between each region to be detected and the preset image, and determining, according to the difference values, the region to be detected corresponding to the preset image as a target region;
and taking the preset image whose target region has the minimum difference value as the preset image matched with the first image to be identified.
Optionally, the image acquisition module is further configured to acquire a second image to be identified acquired for the user;
the processor is further used for identifying the second image to be identified to obtain second gesture information in the second image to be identified; judging whether the second gesture information meets a preset condition or not; if yes, determining and executing a triggering event corresponding to the second gesture information; if not, triggering the communication module;
the communication module is used for acquiring a control instruction;
the processor is further used for determining and executing a trigger event corresponding to the control instruction; triggering the image acquisition module.
Optionally, the processor is further configured to determine whether a confidence level of the trigger event corresponding to the second gesture information is greater than a preset threshold; if yes, judging that the second gesture information meets a preset condition; if not, judging that the second gesture information does not meet the preset condition.
Optionally, the following steps are adopted to determine the confidence level of the triggering event corresponding to the second gesture information:
acquiring a plurality of candidate trigger events and initial confidence coefficients; the initial confidence of each candidate trigger event is equal;
randomly selecting a target trigger event from the plurality of candidate trigger events, taking the target trigger event as a trigger event corresponding to the gesture information, and executing the trigger event;
if the interaction instruction of the user is received after the trigger event is executed, the confidence of the target trigger event is increased according to a preset updating rule;
and if the interaction instruction of the user is not received after the trigger event is executed, reducing the confidence coefficient of the target trigger event according to the preset updating rule.
Optionally, the processor is further configured to determine whether a time difference between the time when the second gesture information is identified and the time when any trigger event is executed last time is greater than a preset threshold; if not, judging that the second gesture information meets a preset condition; if yes, judging that the second gesture information does not meet a preset condition.
The embodiment of the invention also provides an electronic device, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any one of the information interaction methods when executing the programs stored in the memory.
The embodiment of the invention also provides a computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program realizes any one of the information interaction methods when being executed by a processor.
According to the information interaction method and device provided by the embodiment of the invention, a first image to be identified collected for a user is acquired and identified to obtain first gesture information in the first image to be identified, and then a trigger event corresponding to the first gesture information is determined and executed to interact with the user. In this way, the electronic device can determine and execute the corresponding trigger event by recognizing the user's gesture; that is, the user can exchange information with the electronic device through gestures, so with this scheme the electronic device can provide a richer interaction experience for the user. Of course, it is not necessary for any one product or method embodying the invention to achieve all of the advantages set forth above at the same time.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the embodiments or in the description of the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of an information interaction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a predetermined image;
FIG. 3 is a schematic diagram of determining a plurality of regions to be detected from a first image to be identified according to a preset traversal rule;
fig. 4 is a schematic flow chart of another information interaction method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an information interaction device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In some scenarios, an electronic device needs to interact with a user. For example, the user may send a voice command to a smart speaker, and after receiving the voice command, the smart speaker may play a corresponding audio file; alternatively, the user may send an instruction to a robot via a touch interface, and the robot may perform a corresponding action after receiving the instruction, and so on.
Generally, information interaction between an electronic device and a user is based on a voice signal uttered by the user or an operation instruction issued by the user to the electronic device, so the application scenario is limited. In particular, for infants with weak language ability or people with limited language ability, their gesture actions cannot be identified, and a more comprehensive and rich interaction experience cannot be provided.
In order to solve the above technical problems, the present invention provides an information interaction method, which can be applied to electronic devices such as smart cameras, mobile terminals, robots, and the like; the embodiments of the present invention do not limit this.
The information interaction method provided by the embodiment of the invention is generally described below.
In one implementation manner, the information interaction method includes:
Acquiring a first image to be identified, which is acquired for a user;
identifying the first image to be identified to obtain first gesture information in the first image to be identified;
and determining and executing a trigger event corresponding to the first gesture information so as to interact with the user.
From the above, it can be seen that, by applying the information interaction method provided by the embodiment of the present invention, the electronic device can determine and execute the corresponding trigger event by identifying the user's gesture; that is, the user can interact with the electronic device through gestures, so with this scheme the electronic device can provide a richer interaction experience for the user.
The information interaction method provided by the embodiment of the invention is described in detail below through a specific embodiment.
As shown in fig. 1, a flow chart of an information interaction method provided by an embodiment of the present invention includes the following steps:
S101: A first image to be identified, collected for a user, is acquired.
The first image to be identified is an image collected for the user; that is, the first image to be identified contains image information of the user.
For example, the first image to be identified may be collected by the electronic device (the execution subject) in real time, or may be collected by the electronic device after receiving a trigger instruction from the user; it may be a single image collected by the electronic device, or a frame of a video collected by the electronic device, which is not specifically limited.
S102: and identifying the first image to be identified to obtain first gesture information in the first image to be identified.
By identifying the first image to be identified, the first gesture information of the user in the first image to be identified can be determined. It can be understood that the first gesture information is the gesture made by the user at the moment the first image to be identified was collected, that is, an interaction instruction issued by the user to the electronic device (the execution subject).
For example, the first image to be identified may be identified by means of template matching to obtain the first gesture information in the first image to be identified: template matching is performed between the first image to be identified and a plurality of preset images, the preset image matched with the first image to be identified is determined as the matching image, and then the gesture information corresponding to the matching image is taken as the first gesture information of the first image to be identified. For example, fig. 2 shows a preset image in which the gesture information is the number "3".
The template matching can be performed on the first image to be identified and a plurality of preset images in the following manner:
In the first step, a plurality of regions to be detected may be determined from the first image to be identified according to a preset traversal rule.
For example, fig. 3 shows one case of determining a plurality of regions to be detected from the first image to be identified according to a preset traversal rule. The size of the first image to be identified is 1920×1080 pixels, and the gray area is a sliding window of 128×128 pixels; each position of the sliding window is a region to be detected. The first image to be identified can be traversed starting from the upper left corner, with the window moving by one pixel at each step.
In the second step, for each preset image, a difference value between each region to be detected and the preset image may be calculated.
For example, the difference value between each region to be detected and the preset image can be calculated by the following formula:

D(xx, yy) = Σ_{i=1}^{M} Σ_{j=1}^{N} | template(xx + i, yy + j) − current(i, j) |

wherein template(i, j) represents the first image to be identified, current(i, j) represents the preset image, xx and yy represent the abscissa and ordinate of the region to be detected in the first image to be identified, M×N represents the size of the preset image, and D(xx, yy) represents the difference value between the region to be detected and the preset image.
In the third step, the region to be detected corresponding to the preset image is determined according to the difference value and taken as the target region.
For example, the region to be detected with the lowest difference value may be taken directly as the target region, so that each preset image has a corresponding target region; alternatively, a region to be detected whose difference value falls within a preset value interval may be taken as the target region. For example, in one implementation, the region to be detected whose difference value is lowest and less than 50 may be taken as the target region.
In the fourth step, the preset image whose target region has the minimum difference value is taken as the preset image matched with the first image to be identified.
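To make the above four steps concrete, the following is a minimal Python sketch of this sliding-window template matching. It is illustrative only and not optimized; the function name is hypothetical, the image sizes, one-pixel step and the threshold of 50 follow the example values above, and the difference value is taken as a mean absolute difference (an assumed normalization, so a single threshold applies regardless of window size):

```python
import numpy as np

def match_gesture(image, preset_images, diff_threshold=50):
    """Return the index of the preset image that best matches `image`,
    or None if no region to be detected is similar enough.

    image:         2-D grayscale array (e.g. 1920x1080 as in the example)
    preset_images: list of 2-D grayscale arrays (the preset gesture images)
    """
    best_idx, best_diff = None, float("inf")
    for idx, preset in enumerate(preset_images):
        m, n = preset.shape
        target_diff = float("inf")
        # Step 1: traverse from the upper-left corner, one pixel per step;
        # each window position is a region to be detected.
        for yy in range(image.shape[0] - m + 1):
            for xx in range(image.shape[1] - n + 1):
                region = image[yy:yy + m, xx:xx + n].astype(np.int32)
                # Step 2: difference value between the region to be detected
                # and the preset image (mean absolute difference, assumed).
                diff = np.abs(region - preset.astype(np.int32)).mean()
                # Step 3: the target region is the one with the lowest difference.
                target_diff = min(target_diff, diff)
        # Step 4: keep the preset image whose target region has the minimum
        # difference value, subject to the example threshold of 50.
        if target_diff < diff_threshold and target_diff < best_diff:
            best_idx, best_diff = idx, target_diff
    return best_idx
```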
Alternatively, the first image to be identified may be input into a neural network model pre-trained for gesture information extraction to obtain the first gesture information in the first image to be identified; this is not specifically limited.
S103: and determining and executing a trigger event corresponding to the first gesture information so as to interact with the user.
For example, if the first gesture information is the number "3", the trigger event corresponding to the first gesture information may be playing song No. 3 or a story related to the number 3; alternatively, if the first gesture information is a bird-shaped gesture, the trigger event corresponding to the first gesture information may be playing audio content related to birds, such as an audio introduction to birds or bird calls.
That is, the electronic device (the execution subject) gives feedback on the user's gesture by executing the trigger event corresponding to the first gesture information, thereby realizing information interaction with the user.
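As a minimal illustration of determining and executing a trigger event from recognized gesture information, a lookup-table dispatch could look like the sketch below; the gesture labels, file paths and the `player` object are hypothetical stand-ins for the examples in the preceding paragraph:

```python
def make_trigger_table(player):
    # Hypothetical mapping from gesture labels to trigger events, mirroring
    # the examples above: the digit "3" plays song No. 3, and a bird-shaped
    # gesture plays an audio introduction to birds.
    return {
        "digit_3": lambda: player.play("songs/no_3.mp3"),
        "bird":    lambda: player.play("audio/bird_intro.mp3"),
    }

def execute_trigger_event(gesture_label, trigger_table):
    event = trigger_table.get(gesture_label)
    if event is None:
        return False      # no trigger event corresponds to this gesture
    event()                # feedback on the user's gesture
    return True
```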
After determining and executing the triggering event corresponding to the first gesture information, the information interaction process can be ended, or interaction with the user can be continued on the basis of the information interaction.
For example, continuing the above example, if the first gesture information is the number "3" and the trigger event corresponding to the first gesture information is playing song No. 3, the electronic device may further interact with the user to exchange content related to song No. 3.
In this step, the process of further interacting with the user may be:
First, a second image to be identified collected for the user is acquired and identified to obtain second gesture information in the second image to be identified. Then, whether the second gesture information meets a preset condition is judged: if yes, a trigger event corresponding to the second gesture information is determined and executed; if not, a control instruction is acquired, and a trigger event corresponding to the control instruction is determined and executed. Finally, the process returns to the step of acquiring a second image to be identified collected for the user, so as to continue the information interaction with the user.
In the case where the second gesture information does not meet the preset condition, the control instruction can be obtained by sending help-seeking information to a preset user, where the preset user may be the user currently interacting with the electronic device (the execution subject), or a trusted third-party user such as an administrator or a guardian. For example, if the user currently interacting with the electronic device is a child, the preset user may be the child's guardian. In this way, the entire interaction process can be adjusted in real time by a trusted third-party user, which helps maintain the continuity of the interaction.
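A schematic sketch of this second-image loop is given below. All callables are supplied by the caller and their names are hypothetical; the preset condition is left abstract here because its two concrete variants are described in the following paragraphs:

```python
def second_image_loop(capture, identify, meets_preset_condition,
                      execute_trigger_event, request_control_instruction,
                      execute_control_instruction):
    """Schematic control flow for the second-image interaction stage; this
    function only encodes the decision logic described in the text."""
    while True:
        image = capture()                      # second image to be identified
        gesture = identify(image)              # second gesture information
        if meets_preset_condition(gesture):
            execute_trigger_event(gesture)     # preset condition met
        else:
            # Preset condition not met: send help-seeking information to the
            # preset user and execute the event for the returned instruction.
            instruction = request_control_instruction(gesture)
            execute_control_instruction(instruction)
```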
It can be understood that the manner of identifying the second image to be identified to obtain the second gesture information in the second image to be identified may be the same as the manner of identifying the first image to be identified to obtain the first gesture information in the first image to be identified.
In one case, whether the second gesture information meets the preset condition can be judged by the confidence of the trigger event corresponding to the second gesture information: if the confidence of the trigger event corresponding to the second gesture information is greater than a preset threshold, it is judged that the second gesture information meets the preset condition; otherwise, it is judged that the second gesture information does not meet the preset condition.
The following steps may be adopted to determine the confidence level of the triggering event corresponding to the second gesture information:
First, a plurality of candidate trigger events and their initial confidences are acquired, where the initial confidences of the candidate trigger events are equal. Then, a target trigger event is randomly selected from the plurality of candidate trigger events, taken as the trigger event corresponding to the gesture information, and executed. If an interaction instruction from the user is received after the trigger event is executed, the confidence of the target trigger event is increased according to a preset update rule; if no interaction instruction from the user is received after the trigger event is executed, the confidence of the target trigger event is reduced according to the preset update rule.
In this way, the electronic device (the execution subject) can automatically analyze whether it is capable of giving feedback on the user's second gesture information: when the confidence of the trigger event corresponding to the second gesture information is lower than a certain value, the probability that the trigger event can effectively respond to the second gesture information is low, so it is determined that the second gesture information does not meet the preset condition. Meanwhile, adjusting the confidence of the trigger event corresponding to the second gesture information can improve the learning capability of the electronic device (the execution subject).
Alternatively, in another case, whether the second gesture information meets the preset condition can be judged by the time difference between the moment when the second gesture information is identified and the moment when any trigger event was last executed: if the time difference is not greater than a preset threshold, it is judged that the second gesture information meets the preset condition; if the time difference is greater than the preset threshold, it is judged that the second gesture information does not meet the preset condition. In this way, errors caused by interaction timeout can be reduced, further improving the user's interaction experience.
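A one-function sketch of this timeout variant follows, assuming timestamps in seconds and an arbitrary 10-second threshold (both assumed values):

```python
import time

def meets_timeout_condition(last_event_time, threshold_s=10.0):
    """Judge the preset condition from the time difference between the moment
    the second gesture information is identified (taken as now) and the
    moment any trigger event was last executed."""
    return (time.time() - last_event_time) <= threshold_s
```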
Alternatively, in this step, the further interaction with the user may also be based on a voice command from the user, an operation instruction on a touch screen, or the like, which is not specifically limited.
As can be seen from the above, the information interaction method provided by the embodiment of the present invention acquires a first image to be identified collected for a user, identifies it to obtain first gesture information in the first image to be identified, and then determines and executes a trigger event corresponding to the first gesture information to interact with the user. In this way, the electronic device can determine and execute the corresponding trigger event by recognizing the user's gesture; that is, the user can exchange information with the electronic device through gestures, so with this scheme the electronic device can provide a richer interaction experience for the user.
As shown in fig. 4, a flow chart of another information interaction method provided by the embodiment of the invention includes the following steps:
S401: A first image to be identified, collected for a user, is acquired.
The first image to be identified is an image collected for the user; that is, the first image to be identified contains image information of the user.
For example, the first image to be identified may be collected by the electronic device (the execution subject) in real time, or may be collected by the electronic device after receiving a trigger instruction from the user; it may be a single image collected by the electronic device, or a frame of a video collected by the electronic device, which is not specifically limited.
S402: and identifying the first image to be identified to obtain first gesture information in the first image to be identified.
By identifying the first image to be identified, the first gesture information of the user in the first image to be identified can be determined. It can be understood that the first gesture information is the gesture made by the user at the moment the first image to be identified was collected, that is, an interaction instruction issued by the user to the electronic device (the execution subject).
For example, the first image to be identified may be identified by means of template matching to obtain the first gesture information in the first image to be identified: template matching is performed between the first image to be identified and a plurality of preset images, the preset image matched with the first image to be identified is determined as the matching image, and then the gesture information corresponding to the matching image is taken as the first gesture information of the first image to be identified.
The template matching can be performed on the first image to be identified and a plurality of preset images in the following manner:
In the first step, a plurality of regions to be detected may be determined from the first image to be identified according to a preset traversal rule.
In the second step, for each preset image, a difference value between each region to be detected and the preset image may be calculated.
For example, the difference value between each region to be detected and the preset image can be calculated by the following formula:

D(xx, yy) = Σ_{i=1}^{M} Σ_{j=1}^{N} | template(xx + i, yy + j) − current(i, j) |

wherein template(i, j) represents the first image to be identified, current(i, j) represents the preset image, xx and yy represent the abscissa and ordinate of the region to be detected in the first image to be identified, M×N represents the size of the preset image, and D(xx, yy) represents the difference value between the region to be detected and the preset image.
In the third step, the region to be detected corresponding to the preset image is determined according to the difference value and taken as the target region.
For example, the region to be detected with the lowest difference value may be taken directly as the target region, so that each preset image has a corresponding target region; alternatively, a region to be detected whose difference value falls within a preset value interval may be taken as the target region. For example, in one implementation, the region to be detected whose difference value is lowest and less than 50 may be taken as the target region.
In the fourth step, the preset image whose target region has the minimum difference value is taken as the preset image matched with the first image to be identified.
S403: and determining and executing a trigger event corresponding to the first gesture information so as to interact with the user.
For example, if the first gesture information is the number "3", the trigger event corresponding to the first gesture information may be playing song No. 3 or a story related to the number 3; alternatively, if the first gesture information is a bird-shaped gesture, the trigger event corresponding to the first gesture information may be playing audio content related to birds, such as an audio introduction to birds or bird calls.
That is, the electronic device (the execution subject) gives feedback on the user's gesture by executing the trigger event corresponding to the first gesture information, thereby realizing information interaction with the user.
S404: acquiring a second image to be identified collected for the user, and identifying the second image to be identified to obtain second gesture information in the second image to be identified.
For example, continuing the above example, if the first gesture information is the number "3" and the trigger event corresponding to the first gesture information is playing song No. 3, a second image to be identified may be collected for the user and identified to obtain the second gesture information in the second image to be identified, so as to further interact with the user and exchange content related to song No. 3.
It can be understood that, in S404, the manner of acquiring the second image to be identified collected for the user and identifying it to obtain the second gesture information may be the same as the manner of identifying the first image to be identified and obtaining the first gesture information in S401 and S402.
S405: judging whether the second gesture information meets a preset condition or not; if yes, determining and executing a triggering event corresponding to the second gesture information; if not, acquiring a control instruction, and determining and executing a trigger event corresponding to the control instruction; returning to S404.
For example, in one case, whether the second gesture information meets the preset condition may be judged by the confidence of the trigger event corresponding to the second gesture information: if the confidence of the trigger event corresponding to the second gesture information is greater than a preset threshold, it is judged that the second gesture information meets the preset condition; otherwise, it is judged that the second gesture information does not meet the preset condition.
The following steps may be adopted to determine the confidence level of the triggering event corresponding to the second gesture information:
First, a plurality of candidate trigger events and their initial confidences are acquired, where the initial confidences of the candidate trigger events are equal. Then, a target trigger event is randomly selected from the plurality of candidate trigger events, taken as the trigger event corresponding to the gesture information, and executed. If an interaction instruction from the user is received after the trigger event is executed, the confidence of the target trigger event is increased according to a preset update rule; if no interaction instruction from the user is received after the trigger event is executed, the confidence of the target trigger event is reduced according to the preset update rule.
For example, the initial confidence of all candidate trigger events [A1, A2, A3, … An] may be set to 0.5. If, after trigger event A5 is executed, the user is willing to communicate further, the confidence of A5 is increased by 10%, i.e. the confidence of A5 becomes 0.5 × 1.1 = 0.55; if the user does not want to communicate further after A5 is executed, the confidence of A5 is reduced by 10%, i.e. the confidence of A5 becomes 0.5 × 0.9 = 0.45.
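The following is a minimal Python sketch of this confidence mechanism, using the concrete numbers from the example (initial confidence 0.5, ±10% multiplicative updates); the class and method names are assumptions:

```python
import random

class TriggerEventConfidence:
    """Tracks the confidence of candidate trigger events A1..An."""

    def __init__(self, candidate_events, initial_confidence=0.5):
        # Every candidate trigger event starts with the same confidence.
        self.confidence = {e: initial_confidence for e in candidate_events}

    def pick_target_event(self):
        # Randomly select a target trigger event among the candidates.
        return random.choice(list(self.confidence))

    def update(self, event, received_interaction, rate=0.10):
        # Preset update rule: +10% if an interaction instruction is received
        # after the event is executed, -10% otherwise.
        factor = 1.0 + rate if received_interaction else 1.0 - rate
        self.confidence[event] *= factor

# Starting from 0.5, one positive update yields 0.55 and one negative
# update yields 0.45, matching the numbers in the example above.
```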
In this way, the electronic device (the execution subject) can automatically analyze whether it is capable of giving feedback on the user's second gesture information: when the confidence of the trigger event corresponding to the second gesture information is lower than a certain value, the probability that the trigger event can effectively respond to the second gesture information is low, so it is determined that the second gesture information does not meet the preset condition. Meanwhile, adjusting the confidence of the trigger event corresponding to the second gesture information can improve the learning capability of the electronic device (the execution subject).
Alternatively, in another case, whether the second gesture information meets the preset condition can be judged by the time difference between the moment when the second gesture information is identified and the moment when any trigger event was last executed: if the time difference is not greater than a preset threshold, it is judged that the second gesture information meets the preset condition; if the time difference is greater than the preset threshold, it is judged that the second gesture information does not meet the preset condition. In this way, errors caused by interaction timeout can be reduced, further improving the user's interaction experience.
In the case where the second gesture information does not meet the preset condition, the control instruction can be obtained by sending help-seeking information to a preset user, where the preset user may be the user currently interacting with the electronic device (the execution subject), or a trusted third-party user such as an administrator or a guardian. For example, if the user currently interacting with the electronic device is a child, the preset user may be the child's guardian. In this way, the entire interaction process can be adjusted in real time by a trusted third-party user, which helps maintain the continuity of the interaction.
As can be seen from the above, the information interaction method provided by the embodiment of the present invention acquires a first image to be identified collected for a user, identifies it to obtain first gesture information in the first image to be identified, and then determines and executes a trigger event corresponding to the first gesture information to interact with the user. In this way, the electronic device can determine and execute the corresponding trigger event by recognizing the user's gesture; that is, the user can exchange information with the electronic device through gestures, so with this scheme the electronic device can provide a richer interaction experience for the user.
Corresponding to the information interaction method, the embodiment of the invention also provides an information interaction device.
Fig. 5 is a schematic structural diagram of an information interaction device according to an embodiment of the present invention, which is applied to an electronic device, and the device includes:
the image acquisition module 501 is configured to acquire a first image to be identified acquired for a user;
the processor 502 is configured to identify the first image to be identified, obtain first gesture information in the first image to be identified, and determine and execute a trigger event corresponding to the first gesture information;
And a communication module 503, configured to interact with the user.
In an implementation manner, the processor 502 is specifically configured to perform template matching on the first image to be identified and a plurality of preset images, and determine the preset image matched with the first image to be identified as a matching image; and take gesture information corresponding to the matching image as the first gesture information of the first image to be identified.
In one implementation, the processor 502 is specifically configured to:
determining a plurality of areas to be detected from the first image to be identified according to a preset traversal rule;
for each preset image, calculating a difference value between each region to be detected and the preset image, and determining, according to the difference values, the region to be detected corresponding to the preset image as a target region;
and taking the preset image whose target region has the minimum difference value as the preset image matched with the first image to be identified.
In one implementation, the image acquisition module 501 is further configured to acquire a second image to be identified acquired for the user;
the processor 502 is further configured to identify the second image to be identified to obtain second gesture information in the second image to be identified; judge whether the second gesture information meets a preset condition; if yes, determine and execute a trigger event corresponding to the second gesture information; if not, trigger the communication module;
The communication module is used for acquiring a control instruction;
the processor is further used for determining and executing a trigger event corresponding to the control instruction; triggering the image acquisition module.
In an implementation manner, the processor 502 is further configured to determine whether a confidence level of the trigger event corresponding to the second gesture information is greater than a preset threshold; if yes, judging that the second gesture information meets a preset condition; if not, judging that the second gesture information does not meet the preset condition.
In one implementation, the following steps are adopted to determine the confidence level of the triggering event corresponding to the second gesture information:
acquiring a plurality of candidate trigger events and initial confidence coefficients; the initial confidence of each candidate trigger event is equal;
randomly selecting a target trigger event from the plurality of candidate trigger events, taking the target trigger event as a trigger event corresponding to the gesture information, and executing the trigger event;
if the interaction instruction of the user is received after the trigger event is executed, the confidence of the target trigger event is increased according to a preset updating rule;
and if the interaction instruction of the user is not received after the trigger event is executed, reducing the confidence coefficient of the target trigger event according to the preset updating rule.
In one implementation, the processor 502 is further configured to determine whether a time difference between the time when the second gesture information is identified and the time when any of the trigger events is executed last is greater than a preset threshold; if not, judging that the second gesture information meets a preset condition; if yes, judging that the second gesture information does not meet a preset condition.
Therefore, with this information interaction device, the electronic device can determine and execute the corresponding trigger event by identifying the user's gesture; that is, the user can exchange information with the electronic device through gestures, so the electronic device can provide a richer interaction experience for the user.
The embodiment of the invention also provides an electronic device, as shown in fig. 6, which comprises a processor 601, a communication interface 602, a memory 603 and a communication bus 604, wherein the processor 601, the communication interface 602 and the memory 603 communicate with each other through the communication bus 604,
a memory 603 for storing a computer program;
the processor 601 is configured to execute the program stored in the memory 603, and implement the following steps:
acquiring a first image to be identified, which is acquired for a user;
identifying the first image to be identified to obtain first gesture information in the first image to be identified;
and determining and executing a trigger event corresponding to the first gesture information so as to interact with the user.
The communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
From the above, it can be seen that, by using the information interaction method provided by the embodiment of the present application, the electronic device can determine and execute the corresponding trigger event by identifying the gesture of the user, that is, the user can interact with the electronic device through the gesture, so that the electronic device can provide a richer interaction experience for the user.
In yet another embodiment of the present application, a computer readable storage medium is provided, in which instructions are stored, which when run on a computer, cause the computer to perform the information interaction method according to any of the above embodiments.
In a further embodiment of the present application, a computer program product comprising instructions which, when run on a computer, cause the computer to perform the information interaction method of any of the above embodiments is also provided.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Disk (SSD)), etc.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus embodiment, the electronic device embodiment, and the storage medium embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and reference is made to the partial description of the method embodiment for relevant points.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.
Claims (8)
1. An information interaction method applied to electronic equipment is characterized by comprising the following steps:
acquiring a first image to be identified, which is acquired for a user;
identifying the first image to be identified to obtain first gesture information in the first image to be identified;
determining and executing a trigger event corresponding to the first gesture information so as to interact with the user;
acquiring a second image to be identified acquired for a user;
identifying the second image to be identified to obtain second gesture information in the second image to be identified;
judging whether the second gesture information meets a preset condition or not;
if yes, determining and executing a trigger event corresponding to the second gesture information; if not, sending help-seeking information to a preset user, acquiring a control instruction sent by the preset user in response to the help-seeking information, and determining and executing a trigger event corresponding to the control instruction;
returning to the step of acquiring the second image to be identified acquired for the user;
the judging whether the second gesture information meets the preset condition comprises the following steps:
judging whether the confidence of the trigger event corresponding to the second gesture information is greater than a preset threshold, wherein the confidence of the trigger event corresponding to the second gesture information is determined by the following steps: acquiring a plurality of candidate trigger events and initial confidences, the initial confidences of the candidate trigger events being equal; randomly selecting a target trigger event from the plurality of candidate trigger events, taking the target trigger event as the trigger event corresponding to the gesture information, and executing the trigger event; if an interaction instruction of the user is received after the trigger event is executed, increasing the confidence of the target trigger event according to a preset update rule; and if no interaction instruction of the user is received after the trigger event is executed, reducing the confidence of the target trigger event according to the preset update rule;
if yes, judging that the second gesture information meets a preset condition; if not, judging that the second gesture information does not meet the preset condition.
2. The method according to claim 1, wherein the identifying the first image to be identified to obtain the first gesture information in the first image to be identified includes:
performing template matching between the first image to be identified and a plurality of preset images, and determining the preset image matched with the first image to be identified as a matching image;
and taking gesture information corresponding to the matched image as first gesture information of the first image to be identified.
3. The method according to claim 2, wherein the performing template matching between the first image to be identified and a plurality of preset images and determining the preset image matched with the first image to be identified as a matching image includes:
determining a plurality of areas to be detected from the first image to be identified according to a preset traversal rule;
for each preset image, calculating a difference value between each region to be detected and the preset image, and determining, according to the difference values, the region to be detected corresponding to the preset image as a target region;
and taking the preset image whose target region has the minimum difference value as the preset image matched with the first image to be identified.
4. An information interaction device applied to an electronic device, the device comprising:
an image acquisition module, used for acquiring a first image to be identified captured of a user;
a processor, used for identifying the first image to be identified to obtain first gesture information in the first image to be identified, and for determining and executing a trigger event corresponding to the first gesture information;
a communication module, used for interacting with the user;
the image acquisition module is further used for acquiring a second image to be identified captured of the user;
the processor is further used for identifying the second image to be identified to obtain second gesture information in the second image to be identified; judging whether the second gesture information meets a preset condition; if yes, determining and executing a trigger event corresponding to the second gesture information; if not, sending help-seeking information to a preset user and triggering the communication module;
the communication module is further used for obtaining a control instruction sent by the preset user in response to the help-seeking information;
the processor is further used for determining and executing a trigger event corresponding to the control instruction, and for triggering the image acquisition module;
the processor is further configured to judge whether the confidence of the trigger event corresponding to the second gesture information is greater than a preset threshold, wherein the confidence of the trigger event corresponding to the second gesture information is determined by: acquiring a plurality of candidate trigger events and their initial confidences, the initial confidence of each candidate trigger event being equal; randomly selecting a target trigger event from the plurality of candidate trigger events, taking the target trigger event as the trigger event corresponding to the gesture information, and executing the trigger event; if an interaction instruction of the user is received after the trigger event is executed, increasing the confidence of the target trigger event according to a preset updating rule; if no interaction instruction of the user is received after the trigger event is executed, reducing the confidence of the target trigger event according to the preset updating rule;
if yes, determining that the second gesture information meets the preset condition; if not, determining that the second gesture information does not meet the preset condition.
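The interplay of the claimed modules can be pictured as a simple control loop. The sketch below is purely illustrative: the callables (`acquire_image`, `recognize_gesture`, `send_help`, and so on) are hypothetical stand-ins for the claimed image acquisition module, processor, and communication module, and `selector` refers to the `TriggerEventSelector` sketch shown after claim 1.

```python
def interaction_loop(acquire_image, recognize_gesture, execute_event,
                     selector, send_help, receive_instruction):
    """Illustrative device-side loop mirroring the flow of claim 4.

    All callables are hypothetical module interfaces: `acquire_image`
    stands in for the image acquisition module; `send_help` and
    `receive_instruction` for the communication module; `selector`
    for the confidence bookkeeping sketched after claim 1.
    """
    while True:
        image = acquire_image()                  # image acquisition module
        gesture, event = recognize_gesture(image)
        if selector.meets_condition(event):      # preset-condition check
            execute_event(event)                 # high confidence: act directly
        else:
            # Low confidence: ask the preset user for help and execute the
            # trigger event named in the returned control instruction.
            send_help(gesture)
            execute_event(receive_instruction())
```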
5. The apparatus according to claim 4, wherein
the processor is specifically configured to perform template matching between the first image to be identified and a plurality of preset images, determine the preset image that matches the first image to be identified as a matching image, and take gesture information corresponding to the matching image as the first gesture information of the first image to be identified.
6. The apparatus of claim 5, wherein the processor is configured to:
determining a plurality of regions to be detected from the first image to be identified according to a preset traversal rule;
for each preset image, calculating a difference value between each region to be detected and the preset image, and determining, according to the difference values, the region to be detected corresponding to the preset image as a target region;
and taking the preset image whose target region has the minimum difference value as the preset image matching the first image to be identified.
7. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is used for storing a computer program;
the processor is used for implementing the method steps of any one of claims 1-3 when executing the program stored in the memory.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811409375.8A CN111221406B (en) | 2018-11-23 | 2018-11-23 | Information interaction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111221406A (en) | 2020-06-02 |
CN111221406B (en) | 2023-10-13 |
Family
ID=70808521
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811409375.8A Active CN111221406B (en) | 2018-11-23 | 2018-11-23 | Information interaction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111221406B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113869296A (en) * | 2020-06-30 | 2021-12-31 | Hangzhou Joyoung Small Household Appliances Co., Ltd. | Terminal equipment and automatic control method thereof |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104020843A (en) * | 2013-03-01 | 2014-09-03 | 联想(北京)有限公司 | Information processing method and electronic device |
CN106339067A (en) * | 2015-07-06 | 2017-01-18 | 联想(北京)有限公司 | Control method and electronic equipment |
CN106527674A (en) * | 2015-09-14 | 2017-03-22 | 上海羽视澄蓝信息科技有限公司 | Human-computer interaction method, equipment and system for vehicle-mounted monocular camera |
WO2018033154A1 (en) * | 2016-08-19 | 2018-02-22 | 北京市商汤科技开发有限公司 | Gesture control method, device, and electronic apparatus |
CN107831995A (en) * | 2017-09-28 | 2018-03-23 | 努比亚技术有限公司 | A kind of terminal operation control method, terminal and computer-readable recording medium |
CN107831987A (en) * | 2017-11-22 | 2018-03-23 | 出门问问信息科技有限公司 | The error touch control method and device of anti-gesture operation |
CN107967061A (en) * | 2017-12-21 | 2018-04-27 | 北京华捷艾米科技有限公司 | Man-machine interaction method and device |
CN108446073A (en) * | 2018-03-12 | 2018-08-24 | 阿里巴巴集团控股有限公司 | A kind of method, apparatus and terminal for simulating mouse action using gesture |
CN108594995A (en) * | 2018-04-13 | 2018-09-28 | 广东小天才科技有限公司 | Electronic equipment operation method based on gesture recognition and electronic equipment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8194921B2 (en) * | 2008-06-27 | 2012-06-05 | Nokia Corporation | Method, apparatus and computer program product for providing gesture analysis |
JP2015095164A (en) * | 2013-11-13 | 2015-05-18 | オムロン株式会社 | Gesture recognition device and control method for gesture recognition device |
CN105892641A (en) * | 2015-12-09 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Click response processing method and device for somatosensory control, and system |
2018-11-23: application CN201811409375.8A filed in China; granted as CN111221406B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112417439B (en) | Account detection method, device, server and storage medium | |
CN109523017B (en) | Gesture detection method, device, equipment and storage medium | |
CN110837758B (en) | Keyword input method and device and electronic equipment | |
US20200065471A1 (en) | Security verification method and relevant device | |
CN112183166B (en) | Method and device for determining training samples and electronic equipment | |
CN110969045B (en) | Behavior detection method and device, electronic equipment and storage medium | |
CN107527059A (en) | Character recognition method, device and terminal | |
CN110290280B (en) | Terminal state identification method and device and storage medium | |
WO2020125229A1 (en) | Feature fusion method and apparatus, and electronic device and storage medium | |
CN110321863A (en) | Age recognition methods and device, storage medium | |
CN112084959B (en) | Crowd image processing method and device | |
CN112232506A (en) | Network model training method, image target recognition method, device and electronic equipment | |
CN113010139B (en) | Screen projection method and device and electronic equipment | |
CN111027428A (en) | Training method and device of multi-task model and electronic equipment | |
WO2023011470A1 (en) | Machine learning system and model training method | |
CN115661917A (en) | Gesture recognition method and related product | |
CN112150457A (en) | Video detection method, device and computer readable storage medium | |
CN113194281A (en) | Video analysis method and device, computer equipment and storage medium | |
CN115953643A (en) | Knowledge distillation-based model training method and device and electronic equipment | |
CN111221406B (en) | Information interaction method and device | |
CN112259122B (en) | Audio type identification method, device and storage medium | |
CN112434717A (en) | Model training method and device | |
CN110335626A (en) | Age recognition methods and device, storage medium based on audio | |
CN113342170A (en) | Gesture control method, device, terminal and storage medium | |
CN109255016A (en) | Answer method, device and computer readable storage medium based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||