CN117971045A - Intelligent man-machine interaction method, device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN117971045A
Authority
CN
China
Prior art keywords
human body, state, face, human, positioning information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410149772.5A
Other languages
Chinese (zh)
Inventor
宋海霞
傅海华
冼嘉琪
古道宇
黄明昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Kaide Intelligent Technology Co ltd
Original Assignee
Guangdong Kaide Intelligent Technology Co ltd
Application filed by Guangdong Kaide Intelligent Technology Co ltd filed Critical Guangdong Kaide Intelligent Technology Co ltd
Priority to CN202410149772.5A priority Critical patent/CN117971045A/en
Publication of CN117971045A publication Critical patent/CN117971045A/en
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the field of electronically controlled human-machine interaction and provides an intelligent human-machine interaction method, device, terminal device and storage medium. The intelligent human-machine interaction method comprises the following steps: acquiring human body positioning information in a monitoring area; acquiring a face image based on the human body positioning information; determining a human body state based on the human body positioning information and/or the face image; and outputting interaction information based on the human body state and the machine state. The application strengthens human-machine interaction and improves user experience.

Description

Intelligent man-machine interaction method, device, terminal equipment and storage medium
Technical Field
The application belongs to the field of electronic control man-machine interaction, and particularly relates to an intelligent man-machine interaction method and device.
Background
With advances in technology, people's expectations for household appliances keep rising, and the home-appliance field is developing toward greater intelligence. However, most existing household appliances can only operate according to factory-set rules, and their display screens show only fixed, preset content that users quickly tire of.
The prior art therefore has the following problem: an existing home-appliance display screen can generally show only preset content, and a user operation is required to trigger any change in the displayed content, so the growing demand of users for intelligent appliances cannot be met and the user experience is poor.
Disclosure of Invention
Embodiments of the application provide an intelligent human-machine interaction method and device that strengthen human-machine interaction and improve user experience, solving the prior-art problem that an existing appliance display screen can generally show only preset content and requires user operation to trigger a change in the displayed content, so that the growing demand of users for intelligent appliances cannot be met and the user experience is poor.
In a first aspect, an embodiment of the present application provides an intelligent human-computer interaction method, including:
acquiring human body positioning information in a monitoring area; acquiring a face image based on the human body positioning information; determining a human body state based on the human body positioning information and/or the human face image; and outputting interaction information based on the human body state and the machine state.
In a second aspect, an embodiment of the present application provides an intelligent human-computer interaction device, including:
a human body positioning information acquisition module, configured to acquire human body positioning information in a monitoring area; a face image acquisition module, configured to acquire a face image based on the human body positioning information; a human body state determining module, configured to determine a human body state based on the human body positioning information and/or the face image; and an interaction information output module, configured to output interaction information based on the human body state and a machine state.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any one of the above first aspects when the computer program is executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising: the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of the above first aspects.
In a fifth aspect, an embodiment of the application provides a computer program product for, when run on a terminal device, causing the terminal device to perform the method of any of the first aspects described above.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Compared with the prior art, the embodiment of the application has the beneficial effects that:
compared with the prior art, the intelligent human-machine interaction method provided by the application strengthens human-machine interaction and improves user experience, solving the prior-art problem that an existing appliance display screen can generally show only preset content and requires user operation to trigger a change in the displayed content, so that the growing demand of users for intelligent appliances cannot be met and the user experience is poor.
Drawings
To illustrate the technical solutions in the embodiments of the application more clearly, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application, and a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of an implementation of the method provided by the first embodiment of the present application;
FIG. 2 is a flow chart of an implementation of the method provided by the second embodiment of the present application;
FIG. 3 is a flow chart of an implementation of the method provided by the third embodiment of the present application;
FIG. 4 is a flow chart of an implementation of the method provided by the fourth embodiment of the present application;
FIG. 5 is a schematic diagram of an interactive information expression according to an embodiment of the present application;
FIG. 6 is a flow chart of an implementation of the method provided by the fifth embodiment of the present application;
FIG. 7 is a schematic diagram of an interactive information expression according to another embodiment of the present application;
FIG. 8 is a schematic diagram of a smart device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to a determination" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the embodiments of the application, the execution subject of the flow is a terminal device. The terminal device is preferably a home appliance comprising a human body sensor, a camera and a display, which may be connected locally or remotely. Preferably, the terminal device is able to acquire its machine state. Fig. 1 shows a flowchart of the implementation of the method provided in the first embodiment of the application, described in detail below:
in S101, human body positioning information in a monitoring area is acquired.
In this embodiment, the monitoring area is a predetermined area; generally, the human body positioning information in the monitoring area is acquired by a human body sensor. The human body positioning information may include the position and contour information of a human body within the monitoring area.
In S102, a face image is acquired based on the body positioning information.
In this embodiment, the face image is generally acquired by a camera. Acquiring the face image based on the human body positioning information may specifically be: determining the face region of the human body according to the position and contour information of the human body within the monitoring area, and controlling the camera to photograph that face region to obtain the face image.
It should be understood that if the face region of the human body cannot be determined, that is, the face does not appear within the shooting range of the camera, a "face not present" flag is encapsulated into the human body positioning information.
In S103, a human body state is determined based on the human body positioning information and/or the human face image.
In this embodiment, the human body state describes the specific condition of a human body. Illustratively, it describes whether the human body is "sitting", "standing" or "lying"; further, it may also describe whether the human body is "resting" or "working"; and, by way of example, it may also describe whether the human body is "facing the device", "face side to the device" or "facing away from the device".
In one possible implementation, determining the human body state based on the human body positioning information and/or the face image may specifically be: determining that the human body is "sitting", "standing" or "lying" based on the human body positioning information; and/or determining that the human body is "facing the device", "face side to the device" or "facing away from the device" based on the face image.
In another possible implementation, determining the human body state based on the human body positioning information and/or the face image may further be: if the human body positioning information indicates that no face is present, the human body state includes "facing away from the device"; if the face image shows half a face, the human body state includes "face side to the device"; if the face image shows a whole face, specifically one containing all five sense organs, the human body state includes "facing the device".
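As a minimal sketch of the rule just described, the face-visibility-to-orientation mapping could be written as follows; the data-class fields and the particular set of facial parts taken to constitute a "whole face" are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class BodyPositioning:
    position: tuple                 # (x, y) location within the monitoring area
    contour: list                   # contour points of the detected human body
    face_present: bool = True       # False when no face appears in camera range

def orientation_state(positioning: BodyPositioning,
                      visible_parts: Optional[Set[str]] = None) -> str:
    """Map face visibility to an orientation state, following the rule above."""
    if not positioning.face_present:
        return "facing away from the device"
    # A "whole face" is taken here to mean all five sense organs are visible.
    whole_face = {"eyebrows", "eyes", "nose", "mouth", "ears"}
    if visible_parts and whole_face.issubset(visible_parts):
        return "facing the device"
    return "face side to the device"
```

For example, a positioning record with face_present=True and only {"eyes", "nose"} visible would yield "face side to the device".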
Further, the method S103 provided in this embodiment includes S1031 to S1032, which are specifically described as follows:
Illustratively, the human body state includes a human body pose and a human face pose; the determining a human body state based on the human body positioning information and/or the human face image comprises:
in S1031, the human body posture is determined based on the human body positioning information.
Specifically, the human body pose is determined based on the body contour in the human body positioning information and describes the specific posture of the human body. It may include, for example, postures a person commonly adopts, such as "sitting", "standing", "lying" or "bending down", and states such as "resting", "working", "dancing" or "walking".
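One possible way to turn the contour into such a posture label is a simple bounding-box heuristic. This is only a sketch under assumed thresholds; the disclosure does not specify how the contour is classified, and a real implementation would more likely use a trained pose model:

```python
def body_posture(contour: list) -> str:
    """Rough posture estimate from the body contour's bounding-box aspect ratio.

    The 2.0 and 0.6 thresholds are illustrative assumptions, not values from
    the disclosure.
    """
    xs = [x for x, _ in contour]
    ys = [y for _, y in contour]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    aspect = height / max(width, 1)   # tall and narrow vs. short and wide
    if aspect > 2.0:
        return "standing"
    if aspect < 0.6:
        return "lying"
    return "sitting"
```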
In S1032, the face pose is determined based on the face image.
The face pose may specifically be the emotion corresponding to the expression features of the face: if the expression feature of the face is a smiling face, the corresponding emotion is happy; if the expression feature is a crying face, the corresponding emotion is sad; if the expression feature is a frown, the corresponding emotion is unhappy.
Determining the human body pose from the human body positioning information and the face pose from the face image improves the accuracy and the information richness of the human body state.
In S104, interactive information is output based on the human body state and the machine state.
In this embodiment, the machine state describes the state of the machine at the current stage and may include state parameters, for example the operating state of the terminal device, its storage state or the current time. Illustratively, the terminal device is a storage-type home appliance, for example an ice chest, and the storage state may be the storage condition of the ice chest, such as the number of wine bottles stored in it.
In this embodiment, outputting interaction information based on the human body state and the machine state may specifically be: presetting an association table that uses the human body state and the machine state as joint indexes to associate preset interaction information; and, based on the association table, determining and outputting the preset interaction information that corresponds jointly to the human body state and the machine state. For example, if the human body state indicates that the person is "resting" and the machine state indicates that unfinished beverage bottles are stored in the terminal device, i.e. the machine is in a "beverage remaining" state, the preset interaction information "how about a drink in your leisure time" is output, because the "resting" state and the "beverage remaining" state are jointly associated with that preset interaction information. In particular, whether the terminal device stores unfinished beverage bottles may be determined by a weight sensor and/or a camera built into the machine.
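A minimal sketch of such an association table, keyed jointly by human body state and machine state, might look like this; the state names and messages are assumptions loosely following the example above, not text from the disclosure:

```python
# Preset interaction information indexed jointly by (human state, machine state).
ASSOCIATION_TABLE = {
    ("resting", "beverage remaining"): "How about a drink in your leisure time?",
    ("working", "beverage remaining"): "A cold drink is waiting for your break.",
    ("resting", "storage empty"):      "Time to restock your favourites.",
}

def lookup_interaction(human_state: str, machine_state: str):
    """Return the preset interaction information associated with both states."""
    return ASSOCIATION_TABLE.get((human_state, machine_state))

# Example: lookup_interaction("resting", "beverage remaining")
# -> "How about a drink in your leisure time?"
```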
It should be understood that the interaction information may include voice information and expression information; the voice information may be output through a speaker or the like, and the expression information may be output through a display screen or the like.
For example, the preset interaction information may include an expression library, and outputting the preset interaction information may be outputting, periodically and at random, any expression from the expression library; it should be understood that, within one period, expressions that have not yet been output in that period are output preferentially.
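The periodic, non-repeating random output described above could be sketched as a small helper; the period handling here (resetting once every expression has been shown) is an assumption:

```python
import random

class ExpressionLibrary:
    """Returns a random expression, preferring ones not yet shown this period."""

    def __init__(self, expressions):
        self.expressions = list(expressions)
        self._unshown = list(expressions)

    def next_expression(self) -> str:
        if not self._unshown:                    # a new period starts: reset
            self._unshown = list(self.expressions)
        choice = random.choice(self._unshown)
        self._unshown.remove(choice)
        return choice

# happy = ExpressionLibrary(["smile", "grin", "wink"])
# happy.next_expression()  # each expression appears once before any repeats
```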
It should be appreciated that the body positioning information and the face image may be continuously acquired and the interaction information may be continuously output.
In the embodiment, the human body state is determined through the human body positioning information and the human face image, and the interaction information is output based on the human body state and the machine state, so that the relevance of human-computer interaction can be enhanced, and the user experience is improved.
Fig. 2 shows a flow chart of an implementation of the method provided by the second embodiment of the application. Referring to fig. 2, with respect to the embodiment described in fig. 1, the method provided in this embodiment includes S201 to S203, which are specifically described as follows:
further, the acquiring the face image based on the human body positioning information includes:
In S201, a photographing direction is determined based on the body positioning information.
In this embodiment, the human body positioning information includes a location of a human body, and a shooting direction of the face image may be determined according to the location.
In S202, a heuristic image is acquired from the shooting direction.
In this embodiment, the heuristic image is a trial shot used to determine whether an image containing face features can be captured in the shooting direction.
In S203, preliminary face recognition is performed on the heuristic image, and if a face region is recognized in the heuristic image, the heuristic image is identified as the face image.
In this embodiment, by determining the shooting direction and acquiring a heuristic image in that direction, the workload of identifying the face image is reduced: face recognition does not have to be run on every image captured by the camera, which improves efficiency.
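Steps S201 to S203 could be sketched as below; the pan_to_direction callable and the OpenCV Haar-cascade detector are stand-ins chosen for illustration, since the disclosure does not name a specific camera API or face detector:

```python
import cv2

def acquire_face_image(camera, pan_to_direction):
    """Aim the camera at the located body, take a heuristic (trial) frame and
    keep it only if preliminary face recognition finds a face region."""
    pan_to_direction()                        # S201: point toward the person
    ok, frame = camera.read()                 # S202: heuristic image
    if not ok:
        return None
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return frame if len(faces) > 0 else None  # S203: keep only face images
```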
Fig. 3 shows a flow chart of an implementation of the method provided by the third embodiment of the application. Referring to fig. 3, relative to the embodiment described in fig. 1, step S1032 of the method provided in this embodiment includes S301, which is described in detail as follows:
further, the determining the face pose based on the face image includes:
in S301, the face image is imported into an emotion recognition model, and the face pose is output.
In this embodiment, the emotion recognition model may be a deep learning model that takes the face image as input and outputs the face pose; specifically, the emotion recognition model includes an emotion-type recognition function.
Further, the step S301 includes:
In S3011, face feature extraction is performed on the face image, so as to obtain a face feature image.
In this embodiment, performing face feature extraction on the face image to obtain a face feature image may specifically be: preprocessing the face image to obtain a preprocessed image, where the preprocessing includes image preprocessing steps such as denoising whose details are not repeated here; and performing feature extraction, specifically convolution, on the preprocessed image to obtain a face feature image containing the feature values of a plurality of channels.
In S3012, an emotion type of the face feature image is recognized based on an internal expression algorithm of the emotion recognition model.
In this embodiment, recognizing the emotion type based on the emotion recognition model may specifically be: based on the internal expression algorithm, matching the face feature image with the emotion type having the highest degree of association, that is, classifying and identifying the face feature image.
Specifically, the internal expression algorithm may quantify the facial-feature distribution in the face feature image, match it against the standard facial-feature distributions corresponding to preset emotion types, and identify the preset emotion type with the highest matching degree as the emotion type corresponding to the face feature image.
In S3013, the emotion type and the face feature image are encapsulated into the face pose.
In this embodiment, the emotion recognition model makes it possible to recognize the emotion type of the human body from the face image, so that interaction information can subsequently be output according to that emotion type.
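A toy version of the S3011 to S3013 pipeline is sketched below; the 3x3 averaging kernel, the cosine-similarity matching and the template format are illustrative assumptions standing in for the model's internal expression algorithm, which the disclosure does not spell out:

```python
import numpy as np

def recognize_face_pose(face_image: np.ndarray, templates: dict) -> dict:
    """face_image: 2-D grayscale array; templates: emotion name -> array of the
    same shape as the feature map (the 'standard' feature distributions)."""
    # S3011: denoise/extract features with a simple 3x3 averaging convolution.
    kernel = np.ones((3, 3)) / 9.0
    h, w = face_image.shape
    feature = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            feature[i, j] = np.sum(face_image[i:i + 3, j:j + 3] * kernel)

    # S3012: match the feature map against each emotion's template distribution
    # and take the emotion with the highest (cosine) matching degree.
    flat = feature.ravel()
    def match_degree(t: np.ndarray) -> float:
        t = t.ravel()
        return float(flat @ t / (np.linalg.norm(flat) * np.linalg.norm(t) + 1e-9))
    emotion_type = max(templates, key=lambda name: match_degree(templates[name]))

    # S3013: encapsulate the emotion type and feature image into the face pose.
    return {"emotion_type": emotion_type, "feature_image": feature}
```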
Fig. 4 shows a flowchart of an implementation of the method provided by the fourth embodiment of the application. Referring to fig. 4, relative to the embodiment described in fig. 1, the method provided in this embodiment includes S401 to S402, which are described in detail as follows:
further, the intelligent man-machine interaction method further comprises the following steps:
In S401, a preset rule and corresponding preset interaction information are obtained.
In this embodiment, the preset rule may refer to the preset association table of S104; it should be noted, however, that a preset rule associates preset interaction information with the human body state and/or the machine state, whereas the preset association table of S104 associates preset interaction information with the human body state and the machine state jointly.
It should be understood that the preset rule may be obtained from the cloud, that is, the preset rule may be user-defined.
In S402, if the human body state and/or the machine state satisfy the preset rule, outputting the preset interaction information corresponding to the preset rule.
For example, if the human body state is "dancing", corresponding preset interaction information is output according to the preset rule; specifically, a "dancing" expression may be output. If the human body state is "happy", corresponding preset interaction information is output according to the preset rule; specifically, it may be a "smiling face" expression. Referring to fig. 5, which shows a schematic diagram of an interaction-information expression provided by an embodiment of the application, the "smiling face" expression may be any expression from the happy expression library shown in fig. 5.
For example, if the machine state is that the storage margin is low, corresponding preset interaction information is output according to the preset rule; specifically, a "restock soon" expression may be output. If the machine state is that the cabinet door is not closed, corresponding preset interaction information is output according to the preset rule; specifically, it may be a "sad" expression. Referring to fig. 5, the "sad" expression may be any expression from the sad expression library shown in fig. 5.
In one possible embodiment, the machine state includes a machine time; the preset rule includes a preset time, and if the human body state and/or the machine state meet the preset rule, outputting the preset interaction information corresponding to the preset rule may specifically include:
And if the machine time is the same as the preset time, outputting the preset interaction information corresponding to the preset rule.
In this embodiment, the preset time may be a specific time of a holiday specified by a country, or may be a fixed time of day or a date customized by a user or defaulted by a system.
In this embodiment, preset rules and corresponding preset interaction information diversify the human-machine interaction information and improve user experience.
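A sketch of how such rules might be stored and matched is given below; the rule format, the month-day preset time and the example outputs are assumptions, since the disclosure only says that rules may be user-defined or fetched from the cloud:

```python
from datetime import datetime

PRESET_RULES = [
    {"human_state": "dancing", "output": "dancing expression"},
    {"machine_state": "cabinet door not closed", "output": "sad expression"},
    {"preset_time": "12-25", "output": "holiday greeting"},   # month-day
]

def matching_outputs(human_state=None, machine_state=None, now=None):
    """Return the preset interaction information of every rule that is met."""
    now = now or datetime.now()
    outputs = []
    for rule in PRESET_RULES:
        if rule.get("human_state") not in (None, human_state):
            continue
        if rule.get("machine_state") not in (None, machine_state):
            continue
        if "preset_time" in rule and now.strftime("%m-%d") != rule["preset_time"]:
            continue
        outputs.append(rule["output"])
    return outputs
```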
Fig. 6 shows a flowchart of an implementation of the method provided by the fifth embodiment of the application. Referring to fig. 6, with respect to the embodiment described in fig. 1, the method provided in this embodiment includes S601, which is specifically described as follows:
further, the outputting the interactive information based on the human body state and the machine state further includes:
In S601, an output direction of the interaction information is determined based on the human body state.
In this embodiment, determining the output direction of the interaction information based on the human body state may specifically be: if the human body state indicates "facing the device", the position of the human body described by the human body state is taken as the output direction of the interaction information.
The output direction of the interaction information may specifically be the output direction of an expression. Referring to fig. 7, which shows a schematic diagram of an interaction-information expression provided by another embodiment of the application: if the human body state describes that the human body is on the left side of the terminal device, a "looking left" expression is output and, illustratively, corresponding voice interaction information may also be output toward the left; if the human body state describes that the human body is directly in front of the terminal device, the distance between the position of the human body and the terminal device is determined, a "looking near" expression is output if that distance is smaller than or equal to a preset threshold, and a "looking far" expression is output if the distance is larger than the preset threshold.
In one possible implementation, if the human body state describes that the human body has been "facing the device" for longer than a preset duration, a watching mode is entered; in this mode, a "looking left" expression is output if the human body state describes that the person's eyes are aimed to the left, and a "looking right" expression is output if the eyes are aimed to the right.
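The direction selection described above could be sketched as follows; planar coordinates, the metre unit and the 1.5 m threshold are assumptions made only for illustration:

```python
def expression_direction(human_xy, device_xy, near_threshold=1.5):
    """Pick the expression to output from the person's position relative to the
    device: left/right of it, or near/far when directly in front of it.
    Intended to be called only when the state indicates 'facing the device'."""
    dx = human_xy[0] - device_xy[0]           # lateral offset (metres, assumed)
    dy = human_xy[1] - device_xy[1]           # forward offset
    if dx < 0:
        return "looking left"
    if dx > 0:
        return "looking right"
    distance = abs(dy)                        # directly in front of the device
    return "looking near" if distance <= near_threshold else "looking far"
```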
In this embodiment, by determining the output direction of the interaction information, the user can more easily notice the interaction information, and user experience is improved.
Corresponding to the method described in the above embodiments, fig. 8 shows a schematic structural diagram of an intelligent device according to an embodiment of the present application, and for convenience of explanation, only the portions related to the embodiments of the present application are shown.
Referring to fig. 8, the smart device includes: the human body positioning information acquisition module is used for acquiring human body positioning information in the monitoring area; the human face image acquisition module is used for acquiring a human face image based on the human body positioning information; the human body state determining module is used for determining the human body state based on the human body positioning information and/or the human face image; and the interaction information output module is used for outputting interaction information based on the human body state and the machine state.
It should be noted that, since the information interaction between the above modules and their execution processes are based on the same concept as the method embodiments of the application, their specific functions and technical effects can be found in the method embodiment sections and are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 9, the terminal device 9 of this embodiment includes: at least one processor 90 (only one processor is shown in fig. 9), a memory 91 and a computer program 92 stored in the memory 91 and executable on the at least one processor 90, the processor 90 implementing the steps in any of the various method embodiments described above when executing the computer program 92.
The terminal device 9 may be a home appliance such as a refrigerator, a freezer, etc. The terminal device may include, but is not limited to, a processor 90, a memory 91. It will be appreciated by those skilled in the art that fig. 9 is merely an example of the terminal device 9 and is not meant to be limiting as to the terminal device 9, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 90 may be a central processing unit (CPU); the processor 90 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may, in some embodiments, be an internal storage unit of the terminal device 9, such as a hard disk or a memory of the terminal device 9. In other embodiments, the memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device 9. Further, the memory 91 may include both an internal storage unit and an external storage device of the terminal device 9. The memory 91 is used for storing an operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program. The memory 91 may also be used for temporarily storing data that has been output or is to be output.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the application may implement all or part of the flow of the methods of the above embodiments by instructing the related hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or described in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A human-computer interaction method, comprising:
Acquiring human body positioning information in a monitoring area;
Acquiring a face image based on the human body positioning information;
Determining a human body state based on the human body positioning information and/or the human face image;
and outputting interaction information based on the human body state and the machine state.
2. The method of claim 1, wherein the acquiring a face image based on the body positioning information comprises:
Determining a shooting direction based on the human body positioning information;
Acquiring a heuristic image from the shooting direction;
and carrying out preliminary face recognition on the heuristic image, and if the heuristic image is recognized that a face area exists, recognizing the heuristic image as a face image.
3. The method of claim 1, wherein the human body state comprises a human body pose and a human face pose; the determining a human body state based on the human body positioning information and/or the human face image comprises:
Determining the human body pose based on the human body positioning information;
and determining the face pose based on the face image.
4. The method of claim 3, wherein the determining the face pose based on the face image comprises:
importing the face image into an emotion recognition model and outputting the face pose, which comprises:
extracting the facial features of the facial images to obtain facial feature images;
Identifying the emotion type of the facial feature image based on an internal expression algorithm of the emotion identification model;
and packaging the emotion type and the face feature image into the face pose.
5. The method of claim 1, wherein the intelligent human-machine interaction method further comprises:
acquiring preset rules and corresponding preset interaction information;
And if the human body state and/or the machine state meet the preset rule, outputting the preset interaction information corresponding to the preset rule.
6. The method of claim 5, wherein the machine state comprises a machine time, and the preset rule comprises a preset time; and if the human body state and/or the machine state meet the preset rule, outputting the preset interaction information corresponding to the preset rule, including:
And if the machine time is the same as the preset time, outputting the preset interaction information corresponding to the preset rule.
7. The method of claim 1, wherein, before the outputting of interaction information based on the human body state and the machine state, the method further comprises:
and determining the output direction of the interaction information based on the human body state.
8. An intelligent human-machine interaction device, comprising:
The human body positioning information acquisition module is used for acquiring human body positioning information in the monitoring area;
the human face image acquisition module is used for acquiring a human face image based on the human body positioning information;
The human body state determining module is used for determining the human body state based on the human body positioning information and/or the human face image;
And the interaction information output module is used for outputting interaction information based on the human body state and the machine state.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 7.
CN202410149772.5A 2024-02-02 2024-02-02 Intelligent man-machine interaction method, device, terminal equipment and storage medium Pending CN117971045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410149772.5A CN117971045A (en) 2024-02-02 2024-02-02 Intelligent man-machine interaction method, device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410149772.5A CN117971045A (en) 2024-02-02 2024-02-02 Intelligent man-machine interaction method, device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117971045A 2024-05-03

Family

ID=90854442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410149772.5A Pending CN117971045A (en) 2024-02-02 2024-02-02 Intelligent man-machine interaction method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117971045A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017031860A1 (en) * 2015-08-24 2017-03-02 百度在线网络技术(北京)有限公司 Artificial intelligence-based control method and system for intelligent interaction device
CN108592514A (en) * 2018-05-11 2018-09-28 青岛海尔股份有限公司 Intelligent refrigerator and its interaction control method
CN113012656A (en) * 2019-12-20 2021-06-22 佛山市云米电器科技有限公司 Display screen control method, intelligent household appliance and storage medium
CN113158707A (en) * 2020-01-22 2021-07-23 青岛海尔电冰箱有限公司 Refrigerator interaction control method, refrigerator and computer readable storage medium
CN114120292A (en) * 2021-11-29 2022-03-01 阿维塔科技(重庆)有限公司 In-vehicle intelligent interaction method, device, equipment and storage medium
CN114327062A (en) * 2021-12-28 2022-04-12 深圳Tcl新技术有限公司 Man-machine interaction method, device, electronic equipment, storage medium and program product
CN114428553A (en) * 2022-01-18 2022-05-03 上海商汤临港智能科技有限公司 Interaction method, system, device and computer readable storage medium
CN116466827A (en) * 2023-05-08 2023-07-21 武昌理工学院 Intelligent man-machine interaction system and method thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination