CN112860068A - Man-machine interaction method, device, electronic equipment, medium and computer program product - Google Patents


Info

Publication number
CN112860068A
CN112860068A
Authority
CN
China
Prior art keywords
target user, digital person, information, reconfiguring, digital
Prior art date
Legal status
Pending
Application number
CN202110185423.5A
Other languages
Chinese (zh)
Inventor
刘佳敏
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110185423.5A
Publication of CN112860068A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/445 — Program loading or initiating
    • G06F 9/44505 — Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/4451 — User profiles; Roaming

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A human-computer interaction method, a human-computer interaction device, an electronic device, a computer-readable storage medium, and a computer program product are provided, relating to the field of artificial intelligence, and in particular to cloud computing and intelligent customer service within cloud computing. The method comprises the following steps: acquiring attribute information of a target user; reconfiguring the interaction information of a digital person according to the attribute information; and executing an interactive operation of the digital person according to the reconfigured interaction information.

Description

Man-machine interaction method, device, electronic equipment, medium and computer program product
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, particularly to the field of cloud computing and the field of intelligent customer service in cloud computing, and in particular to a human-computer interaction method, a human-computer interaction device, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline that studies how to make computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), spanning both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
With the development of artificial intelligence, human-computer interaction devices are increasingly widely used. However, for the digital persons on current human-computer interaction devices, the character design is monotonous and interaction with users lacks personalization, resulting in low interaction efficiency.
Disclosure of Invention
The present disclosure provides a human-computer interaction method, a human-computer interaction apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a human-computer interaction method, including: acquiring attribute information of a target user; reconfiguring the interactive information of the digital person according to the attribute information; and executing interactive operation of the digital person according to the reconfigured interactive information.
According to another aspect of the present disclosure, there is provided a human-computer interaction device, including: the acquiring unit is used for acquiring the attribute information of the target user; the configuration unit is configured to reconfigure the interaction information of the digital person according to the attribute information; and the control unit is used for executing the interactive operation of the digital person according to the reconfigured interactive information.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above human-computer interaction method.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to execute the above human-computer interaction method.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above human-computer interaction method.
According to one or more embodiments of the disclosure, the interaction between the user and the digital person can be more targeted, and the interaction efficiency is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a flow diagram of a human-machine interaction method according to some embodiments of the present disclosure;
FIG. 2 illustrates a schematic diagram of an example scenario of an example process of relevant reconfiguration of a digital person according to relevant information of a user, according to some embodiments of the present disclosure;
FIG. 3A illustrates a schematic diagram of an example scenario of an example process of reconfiguring persona information of a digital person according to a target user's eye position, age, and clothing, in accordance with some embodiments of the present disclosure;
FIG. 3B is a diagram illustrating an example scenario of an example process for reconfiguring persona information for a digital person based on a target user's eye position, age, and clothing, according to further embodiments of the present disclosure;
FIG. 4A illustrates a schematic diagram of an example scenario of an example process of reconfiguring service output information of a digital person according to an eye position and age of a target user, in accordance with some embodiments of the present disclosure;
FIG. 4B is a schematic diagram illustrating an example scenario of an example process for reconfiguring service output information for a digital person based on eye position and age of a target user according to further embodiments of the present disclosure;
FIG. 5 shows a block diagram of a human-computer interaction device, according to an embodiment of the disclosure;
FIG. 6 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. It should be noted that the collection and use of user identity information in accordance with embodiments of the present disclosure should follow relevant privacy policies and industry conventions that are generally considered to at least meet the requirements of relevant legal regulations for maintaining user privacy.
According to an embodiment of the present disclosure, a human-computer interaction method is provided.
FIG. 1 shows a flow diagram of a human-machine interaction method 100 according to an embodiment of the disclosure. The human-computer interaction method 100 is described in detail below in conjunction with FIG. 1.
In step S101, attribute information of the target user is acquired.
According to some embodiments, the attribute information of the target user may be acquired in various ways. For example, it may be captured by a camera or video camera, or detected by a distance detector.
According to some embodiments, the target user may be a user interacting with the digital person. There may be one or more users in front of and around the digital person. In some examples, when there are multiple users in front of and around the digital person, the target user may be determined from among them in a predetermined manner. Illustratively, a predetermined number of users may first be selected as those closest to the digital person among the plurality of users. A target user is then determined from this predetermined number of users based on a predetermined rule that considers their ages. According to some embodiments, the predetermined number may be determined, for example, by the maximum number of users that the interaction range of the interaction device can accommodate. It will be appreciated that when there is only one user in front of and around the digital person, that user may be determined to be the target user.
Illustratively, the target user may be determined based on the age group of each of the predetermined number of users and the number of users in the same age group. For example, it may first be determined which age group each user belongs to, and then how many users belong to the same age group. Based on these two factors and a certain priority order, a target user is determined. For example, assume the predetermined number of users is 3. If there is a middle-aged person among the 3 users, the middle-aged person is determined to be the target user to interact with the digital person. If there is no middle-aged person, it is further determined whether there is an elderly person; if so, the elderly person is determined to be the target user. If none of the 3 users is middle-aged or elderly, it is determined whether there is a young person among them. When more than one user falls in the highest-priority age group present, the user in that age group who is closest to the digital person is determined to be the target user. For example, if the 3 users include 2 middle-aged people and 1 elderly person, the middle-aged age group takes priority, and the middle-aged person closer to the digital person is determined to be the target user to interact with the digital person.
It should be noted that the above predetermined rule is only an illustrative example and is not a limitation of the present disclosure. Different rules may be selected to determine the target user to interact with the digital human, depending on the particular application and/or needs. Determining a target user to interact with the digital person through a predetermined rule may prevent a conflict caused by a plurality of persons participating in the operation of the digital person.
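The selection rule described above can be sketched in Python. This is an illustrative sketch, not code from the patent: the age-bracket boundaries, the priority order (middle-aged, then elderly, then young), and all identifier names are assumptions drawn from the 3-user example.

```python
from dataclasses import dataclass

@dataclass
class User:
    age: int
    distance_m: float  # distance from the digital person

def age_group(age: int) -> str:
    # Illustrative bracket boundaries; the patent does not fix exact ages.
    if age < 45:
        return "young"
    if age < 60:
        return "middle-aged"
    return "elderly"

# Priority order taken from the example: middle-aged > elderly > young.
PRIORITY = ["middle-aged", "elderly", "young"]

def select_target_user(users, predetermined_number=3):
    # Step 1: keep the predetermined number of users closest to the digital person.
    nearest = sorted(users, key=lambda u: u.distance_m)[:predetermined_number]
    # Step 2: within the highest-priority age group present, break ties
    # by picking the user closest to the digital person.
    for group in PRIORITY:
        candidates = [u for u in nearest if age_group(u.age) == group]
        if candidates:
            return min(candidates, key=lambda u: u.distance_m)
    return None  # no users in front of or around the digital person
```

For example, with 2 middle-aged users and 1 elderly user, the function returns the middle-aged user nearer to the digital person, matching the example in the text.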
In step S102, the interaction information of the digital person is reconfigured according to the attribute information.
In some embodiments, the digital person is an intelligent assistant with an avatar that provides services by interacting with the person. The digital person can be suitable for mobile phones, pads, smart televisions, intelligent voice assistants with screens, large screens under lines and the like. The digital person can also be applied to the fields of medical treatment, finance, communication and the like. The digital person can replace the manual work, and information and services can be provided for the person efficiently.
In step S103, the interactive operation of the digital person is performed according to the reconfigured interactive information.
According to the man-machine interaction method, the attribute information of the target user is obtained, the interaction information of the digital person is reconfigured according to the attribute information of the target user, and the interaction operation of the digital person is executed according to the reconfigured interaction information. Therefore, the interaction between the user and the digital person can be more targeted, and the interaction efficiency is improved.
In some embodiments, reconfiguring the interaction information of the digital person according to the attribute information may include: in response to determining that the distance between the target user and the digital person is less than a first predetermined distance, reconfiguring the interaction information of the digital person according to the attribute information. The interaction information includes at least one of: image information and service output information.
According to some embodiments, the first predetermined distance may be derived from, for example, an analysis of human behavior habits during human-computer interaction. According to some embodiments, the first predetermined distance may be, for example, 3 m. Therefore, when the user is close to the digital person, at least one of the image information and the service output information of the digital person is reconfigured, thereby improving the configuration efficiency.
According to some embodiments, it is understood that when the distance between the user and the digital person is not less than the first predetermined distance, the digital person is not reconfigured because the user is far away. In this case, the digital person may display only its preset character and service information.
According to some embodiments, after the distance between the user and the digital person is less than the first predetermined distance, the digital person may be further reconfigured according to the state or behavior of the user. In some embodiments, the interaction information may include character information and service output information. Reconfiguring the interaction information of the digital person according to the attribute information may include: reconfiguring the character information according to the attribute information in response to determining that the distance between the target user and the digital person is less than the first predetermined distance and greater than or equal to the second predetermined distance, the motion state of the target user with respect to the digital person is moving toward the digital person, and the digital person is located within the sight line of the target user; and in response to determining that the distance between the target user and the digital person is less than the second predetermined distance, the moving state of the target user relative to the digital person is a stationary state and the digital person is within a line of sight of the target user for a predetermined time, reconfiguring the service output information according to the attribute information.
Therefore, when the target user is close to the digital person (i.e., the distance between them is less than the first predetermined distance and greater than or equal to the second predetermined distance), is moving toward the digital person, and is looking at the digital person, it is determined that the target user is willing to interact, and the character information of the digital person is reconfigured according to the target user's attribute information, further attracting the user to interact and improving interaction efficiency. When the target user is closer to the digital person (i.e., the distance between them is less than the second predetermined distance), is stationary relative to the digital person, and keeps the digital person within the line of sight for a predetermined time, it is determined that the target user has a strong intention to interact, and the service output information of the digital person is reconfigured according to the target user's attribute information. By adjusting the character information and the service output information of the digital person separately according to the target user's state, the wasted processing that would result from adjusting both simultaneously for a user who ultimately does not interact with the digital person can be avoided, thereby improving processing efficiency.
According to some embodiments, reconfiguring the interaction information of the digital person according to the attribute information may include: in response to determining that the distance between the target user and the digital person is less than the first predetermined distance and greater than or equal to the second predetermined distance, the target user is moving toward the digital person, and the digital person is located within the sight line of the target user, reconfiguring both the character information and the service output information according to the attribute information.
Thus, when the target user is close to the digital person (i.e., the distance between the target user and the digital person is less than the first predetermined distance and greater than or equal to the second predetermined distance), the target user moves toward the digital person, and the eyes of the target user look at the digital person, at which time the character information and the service output information of the digital person are reconfigured according to the attribute information of the target user. By simultaneously adjusting the image information and the service output information of the digital person according to the relevant information of the target user, the reconfiguration of the image information and the service output information of the digital person can be completed in one-time processing, and the problem of inconvenient interaction caused by untimely reconfiguration is avoided.
According to some embodiments, the second predetermined distance may be, for example, 1.2 m. The first predetermined distance and the second predetermined distance may be determined according to practical situations, and the present disclosure is not limited thereto. In some examples, whether the target user is moving in a direction toward the digital person may be determined by determining whether the target user's distance from the digital person is continuously decreasing.
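The distance- and gaze-gated logic of the preceding paragraphs can be summarized as a small decision function. This is a hedged sketch: the 3 m and 1.2 m thresholds come from the text, while the gaze dwell threshold and all identifier names are illustrative assumptions.

```python
FIRST_DISTANCE_M = 3.0    # "first predetermined distance" from the text
SECOND_DISTANCE_M = 1.2   # "second predetermined distance" from the text

def decide_reconfiguration(distance_m, moving_toward, stationary,
                           in_sight, gaze_dwell_s, dwell_threshold_s=2.0):
    """Return which interaction information to reconfigure, if any.

    distance_m: current distance of the target user from the digital person.
    moving_toward: True if the distance is continuously decreasing.
    stationary: True if the distance is constant.
    in_sight: True if the digital person lies within the user's line of sight.
    gaze_dwell_s: how long the gaze has stayed on the digital person.
    """
    if distance_m >= FIRST_DISTANCE_M:
        return None  # user is far away: keep the preset character and services
    if SECOND_DISTANCE_M <= distance_m < FIRST_DISTANCE_M:
        # Close: approaching and looking implies willingness to interact.
        if moving_toward and in_sight:
            return "character"
    elif stationary and in_sight and gaze_dwell_s >= dwell_threshold_s:
        # Very close, stopped, and gazing for the predetermined time:
        # strong intention to interact.
        return "service_output"
    return None
```

The variant in which both the character information and the service output information are reconfigured at once would simply return both labels from the first branch.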
According to some embodiments, whether the digital person is located within the sight line of the target user may be determined by an attention recognition technique from offline face acquisition technology. Attention recognition can identify the open and closed states of the eyes. For example, if the target user's eyes are not closed while attending to the area where the digital person is located, it may be determined that the user's line of sight falls on that area, that is, the digital person is located within the line of sight of the target user.
According to some embodiments, whether the target user is in a stationary state with respect to the digital person may be determined by the distance detector detecting whether the distance of the user with respect to the digital person is constant.
According to some embodiments, it may be determined whether the user's gaze stays within the area of the digital person for a predetermined time by eye tracking techniques to determine whether the digital person is within the gaze of the target user for a predetermined time. The eyeball tracking technology can identify and track the eyeball fixation point of the user and estimate the sight direction and the eye fixation point position.
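The dwell-time check can be illustrated without reference to any particular eye-tracking SDK: given a stream of timestamped gaze points estimated by eyeball tracking, decide whether the gaze has stayed continuously inside the digital person's screen region for the predetermined time. All names here are illustrative assumptions.

```python
def gaze_held(samples, region, hold_s):
    """samples: list of (timestamp_s, x, y) gaze fixation points;
    region: (x0, y0, x1, y1) bounding box of the digital person's area;
    hold_s: the predetermined dwell time in seconds."""
    x0, y0, x1, y1 = region
    start = None  # timestamp when the gaze entered the region
    for t, x, y in samples:
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if inside:
            if start is None:
                start = t
            if t - start >= hold_s:
                return True
        else:
            start = None  # gaze left the region: reset the dwell timer
    return False
```

Leaving the region resets the timer, so only an uninterrupted stay of at least `hold_s` counts as the gaze remaining within the area for the predetermined time.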
Fig. 2 illustrates a schematic diagram of an example scenario of an example process of performing relevant reconfiguration of a digital person according to relevant information of a user according to an embodiment of the present disclosure. In the scenario 200 illustrated in fig. 2, when it is determined that the distance of the user 201 from the digital person is greater than or equal to the first predetermined distance, without reconfiguring the digital person, only the digital person and the service information of the digital person may be displayed on the interactive interface 202. When the distance of the user 201 relative to the digital person is less than the first predetermined distance and greater than or equal to the second predetermined distance, the user 201 moves towards the direction of the digital person, and the digital person is located within the sight line of the user 201, the character information and/or the service output information of the digital person are reconfigured. When the distance between the user 201 and the digital person is smaller than a second preset distance, the user 201 is in a static state relative to the digital person, and the digital person is located in the sight line of the user 201 and stays for a preset time, the service output information of the digital person is reconfigured. According to some embodiments, the attribute information of the target user may be intrinsic information of the target user (e.g., eye position, age, etc. of the target user) or extrinsic information of the target user (e.g., clothing, etc. of the target user). 
It is noted that the type of attribute information obtained according to the present disclosure should be directly associated with the business function of the product or service, and the information should be collected at the lowest frequency and in the smallest amount necessary. The acquired attribute information is not directly related to the user's identity information, which prevents a specific individual from being precisely located through it. Further, the storage time of the acquired user attribute information should not exceed the minimum time required to implement the business function of the product or service according to the present disclosure.
According to some embodiments, reconfiguring the interaction information of the digital person according to the attribute information may include at least one of: in response to determining that the attribute information includes the eye position of the target user, reconfiguring the display position of the digital person according to the eye position of the target user so that the head region of the digital person is in the central region of the line of sight of the target user; in response to determining that the attribute information includes the age of the target user, reconfiguring the displayed image of the digital person according to the age of the target user; and in response to determining that the attribute information includes the clothing of the target user, reconfiguring the displayed clothing of the digital person according to the clothing of the target user. Therefore, by customizing the position, the image and the clothes of the digital person, the interactivity between the user and the digital person is enhanced, the proximity between the user and the digital person is enhanced, and the user experience is improved.
According to some embodiments, the eye position of the user may be obtained by human body keypoint recognition techniques. The user's eye position may include spatial coordinates of the user's eyes relative to the interactive interface. The human body key point identification technology can identify and output a plurality of key points of each human body, including each main part, and simultaneously outputs coordinate information and the number of the key points.
According to some embodiments, the age of the user includes the age stage in which the user is. The age and clothing of the user can be obtained through human body detection and attribute identification technology. Human detection and attribute identification techniques may identify various types of general attributes of a human body, such as age, clothing, and the like.
According to some embodiments, a user's clothing may include long sleeves, short sleeves, trousers, shorts, a long skirt, a skirt, a shirt, a suit, a sweater, a down jacket, and the like.
According to some embodiments, the display position of the digital person on the interactive interface can be reconfigured according to the eye position of the target user, so that the head area of the digital person is in the central area of the sight line of the target user. Therefore, the user can more conveniently look at the digital person to form an equal communication scene, and the interaction efficiency is enhanced.
According to some embodiments, the displayed image of the digital person may be reconfigured according to the age of the target user. Illustratively, the displayed image may be reconfigured so that the apparent age of the digital person matches the age bracket of the user. For example, when the target user is a teenager, the displayed character of the digital person may be reconfigured as a cartoon character. When the target user is a middle-aged person, the displayed image of the digital person may be reconfigured as a realistic 3D middle-aged character. When the user is an elderly person, the displayed image of the digital person may be reconfigured as a realistic 3D mature middle-aged character. By matching the displayed image of the digital person with the age of the user, the closeness between the digital person and the user can be enhanced, strengthening the user's willingness to interact.
According to some embodiments, the displayed clothing of the digital person may be reconfigured according to the clothing of the target user. Illustratively, the digital person's clothing may be reconfigured so that its style is similar to that of the target user. For example, the seasons of the digital person's clothing may include spring and autumn, summer, and winter. According to some embodiments, the categories of the digital person's clothing include T-shirts, suits, sweaters, jackets, down jackets, windbreakers, coats, and the like. This rich clothing selection makes the clothing matching at initialization more accurate and widens the selection range. By keeping the clothing styles of the digital person and the user consistent, the accuracy of the change in the digital person's interactive image is enhanced and the interactive experience is enriched.
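The three persona reconfigurations described above (display position from eye position, displayed image from age, clothing from clothing) can be sketched as a single dispatch function. The dictionary keys and image labels below are illustrative assumptions, not part of the patent.

```python
def reconfigure_persona(digital_person: dict, attributes: dict) -> dict:
    if "eye_position" in attributes:
        # Place the head region of the digital person at the centre of the
        # target user's line of sight.
        digital_person["head_position"] = tuple(attributes["eye_position"])
    if "age" in attributes:
        # Match the apparent age of the displayed image to the user's
        # age bracket, per the examples in the text.
        age = attributes["age"]
        if age < 18:
            digital_person["image"] = "cartoon"
        elif age < 60:
            digital_person["image"] = "3d_middle_aged"
        else:
            digital_person["image"] = "3d_mature_middle_aged"
    if "clothing" in attributes:
        # Keep the digital person's outfit style consistent with the user's.
        digital_person["clothing"] = attributes["clothing"]
    return digital_person
```

Because each branch is guarded by the presence of the corresponding attribute, the function also covers the "at least one of" wording: any subset of the three attributes triggers only the matching reconfigurations.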
Fig. 3A illustrates a schematic diagram of an example scenario of an example process of reconfiguring the persona information of a digital person according to the eye position, age, and clothing of a target user, according to some embodiments of the present disclosure. In the example scenario 300A illustrated in fig. 3A, the eye position, age, and clothing of the target user 301A may be obtained. According to the eye position of the target user 301A, the head of the digital person 302A may be reconfigured to lie in the central area of the target user's line of sight. Since the target user 301A is a young adult, the image age of the digital person 302A may be reconfigured to the same bracket; and since the target user 301A wears a one-piece dress, the clothing of the digital person 302A may be reconfigured to a one-piece dress as well.
Fig. 3B is a diagram illustrating an example scenario of an example process of reconfiguring the persona information of a digital person based on the eye position, age, and clothing of a target user, according to further embodiments of the present disclosure. In the example scenario 300B illustrated in fig. 3B, the eye position, age, and clothing of the target user 301B may be obtained. According to the eye position of the target user 301B, the head of the digital person 302B may be reconfigured to lie in the central area of the target user's line of sight. Since the target user 301B is middle-aged, the figure age of the digital person 302B may be reconfigured to middle age as well; and since the clothing of the target user 301B is a suit, the clothing of the digital person 302B may be reconfigured to a suit as well.
It should be noted that the example scenarios 300A and 300B illustrated in FIGS. 3A and 3B are exemplary and not limiting of the present disclosure.
According to some embodiments, reconfiguring interaction information of the digital person according to the attribute information includes at least one of: in response to determining that the attribute information includes the eye position of the target user, reconfiguring a display position of the service output information of the digital person according to the eye position of the target user; and in response to determining that the attribute information includes the age of the target user, reconfiguring the service output information of the digital person according to the age of the target user.
When the interaction information of the digital person includes service output information, the display position of the service output information may be reconfigured according to the eye position of the target user. For example, the display position may be reconfigured so that the service output information lies in the central area of the user's line of sight on the interactive interface. Reconfiguring where the digital person's output information is displayed on the interactive interface can improve the efficiency with which the user acquires that information. The service output information of the digital person can likewise be reconfigured according to the age of the target user.
According to some embodiments, the service output information of the digital person may be voice output information, text output information, or a combination of the two. In some examples, where the service output information includes voice output information, reconfiguring the service output information of the digital person according to the age of the target user may include: reconfiguring at least one of the timbre, volume, and speech rate of the digital person's voice output information according to the age of the target user.
The characteristics of the digital person's voice output can be reconfigured according to the age bracket of the target user, so that the timbre, volume, and speech rate of the voice output match that bracket. Reconfiguring the voice output characteristics according to the user's age can improve the accuracy with which the user receives the voice information and increase acceptance of the interaction.
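Illustratively, such an age-bracket-to-voice mapping can be sketched as follows. The bracket boundaries and the concrete timbre/volume/speech-rate values are hypothetical assumptions, not values given in the disclosure:

```python
def configure_voice(age):
    """Map the target user's age bracket to timbre, volume and speech-rate
    settings for the digital person's voice output."""
    if age >= 60:
        # Elderly users: louder and slower speech for intelligibility.
        return {"timbre": "warm", "volume": 0.9, "speech_rate": 0.8}
    if age < 18:
        return {"timbre": "bright", "volume": 0.7, "speech_rate": 1.0}
    return {"timbre": "neutral", "volume": 0.7, "speech_rate": 1.1}
```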
In some examples, when the service output information includes text output information, reconfiguring the service output information of the digital person according to the age of the target user includes: reconfiguring at least one of the font size and line spacing of the digital person's text output information according to the age of the target user.
The font size and line spacing of the text output information can be reconfigured according to the age bracket of the target user. Setting the text output characteristics of the digital person in this way can improve the accuracy with which the user acquires the text information and increase acceptance of the interaction.
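Illustratively, the age-dependent text settings can be sketched as below, following the large/wide versus small/narrow choices of the example scenarios in Figs. 4A-4B; the bracket boundary, point sizes, and spacing multipliers are assumptions:

```python
def configure_text(age):
    """Choose a font size (pt) and line-spacing multiplier for the digital
    person's text output according to the target user's age bracket."""
    if age < 40:
        # Young users: large font size, wide line spacing.
        return {"font_size_pt": 18, "line_spacing": 1.5}
    # Middle-aged users: small font size, narrow line spacing.
    return {"font_size_pt": 12, "line_spacing": 1.15}
```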
Fig. 4A illustrates a schematic diagram of an example scenario of an example process of reconfiguring the service output information of a digital person according to the eye position and age of a target user, according to some embodiments of the present disclosure. In the scenario 400A illustrated in fig. 4A, the eye position and age of the target user 401A may be obtained. According to the eye position of the target user 401A, the information output position of the digital person 402A may be reconfigured to lie in the central area of the target user's line of sight. Since the target user 401A is a young adult, the font size of the text output of the digital person 402A is reconfigured to a large size with wide line spacing. Illustratively, the timbre, volume, and speech rate of the voice output of the digital person 402A may further be reconfigured to match a young adult, according to the age of the target user 401A.
Fig. 4B is a schematic diagram illustrating an example scenario of an example process of reconfiguring the service output information of a digital person based on the eye position and age of a target user, according to further embodiments of the present disclosure. In the scenario 400B illustrated in fig. 4B, the eye position and age of the target user 401B may be obtained. According to the eye position of the target user 401B, the information output position of the digital person 402B may be reconfigured to lie in the central area of the target user's line of sight. Since the target user 401B is middle-aged, the timbre, volume, and speech rate of the voice output of the digital person 402B may be reconfigured to match middle age, and the font size of the text output of the digital person 402B may be reconfigured to a small size with narrow line spacing.
It should be noted that the example scenarios 400A and 400B illustrated in FIGS. 4A and 4B are exemplary and not limiting of the present disclosure.
Fig. 5 shows a block diagram of a human-computer interaction device 500 according to an embodiment of the present disclosure. The human-computer interaction device 500 is described in detail below with reference to fig. 5.
According to some embodiments, as shown in fig. 5, the human-computer interaction device 500 includes an acquisition unit 501, a configuration unit 502, and a control unit 503. The acquisition unit 501 is configured to acquire attribute information of a target user. The configuration unit 502 is configured to reconfigure interaction information of the digital person according to the attribute information. The control unit 503 is configured to perform interactive operations of the digital person according to the reconfigured interactive information.
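Illustratively, the cooperation of the three units can be sketched as a thin wrapper class. The callables standing in for the units are hypothetical placeholders, not the actual implementation of device 500:

```python
class HumanComputerInteractionDevice:
    """Sketch of device 500: an acquisition unit, a configuration unit and
    a control unit chained together."""

    def __init__(self, acquire, configure, control):
        self.acquire = acquire      # acquisition unit 501
        self.configure = configure  # configuration unit 502
        self.control = control      # control unit 503

    def interact(self, target_user):
        attributes = self.acquire(target_user)         # attribute information
        interaction_info = self.configure(attributes)  # reconfigured info
        return self.control(interaction_info)          # interactive operation
```

For example, wiring in trivial stand-in units (such as `lambda user: {"age": user}` for acquisition) yields a device whose `interact` call runs the three steps in order.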
There is also provided, in accordance with an embodiment of the present disclosure, an electronic device, a computer-readable storage medium, and a computer program product.
According to an embodiment of the present disclosure, there is provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the human-computer interaction method described above, such as method 100 and variations thereof.
According to an embodiment of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described human-computer interaction method, such as the method 100 and its various variations.
According to an embodiment of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, enables the computer to perform the above-described human-computer interaction method, such as the method 100 and variations thereof.
FIG. 6 illustrates a block diagram of an exemplary electronic device 600 that can be used to implement embodiments of the present disclosure. The electronic device 600 is described in detail below in conjunction with fig. 6.
Referring to fig. 6, the electronic device 600 is an example of a hardware device to which aspects of the present disclosure may be applied. The electronic device is intended to represent various forms of digital electronic computer devices. According to some exemplary embodiments, the electronic device 600 may be, for example, an offline vertical-screen digital-human interaction device. The components shown here, their connections and relationships, and their functions are meant as examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store the various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, the storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the device 600; it may receive input numeric or character information and generate key-signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 608 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver, and/or a chipset, such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the method 100 and its various variants. For example, in some embodiments, the method 100 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method 100 described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method 100 in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure, and various elements in the embodiments or examples may be combined in various ways. It should be noted that, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (12)

1. A human-computer interaction method, comprising:
acquiring attribute information of a target user;
reconfiguring the interactive information of the digital person according to the attribute information; and
executing the interactive operation of the digital person according to the reconfigured interactive information.
2. The human-computer interaction method according to claim 1, wherein the reconfiguring the interaction information of the digital human according to the attribute information comprises:
reconfiguring interaction information of the digital person according to the attribute information in response to determining that the distance between the target user and the digital person is less than a first predetermined distance;
wherein the interaction information comprises at least one of: avatar information, service output information.
3. The human-computer interaction method of claim 2, wherein the interaction information includes avatar information and service output information, and wherein the reconfiguring of the interaction information of the digital person according to the attribute information includes:
reconfiguring the avatar information according to the attribute information in response to determining that the distance between the target user and the digital person is less than the first predetermined distance and greater than or equal to a second predetermined distance, the motion state of the target user relative to the digital person is moving toward the digital person, and the digital person is located within a line of sight of the target user; and
reconfiguring the service output information according to the attribute information in response to determining that the distance between the target user and the digital person is less than the second predetermined distance, the motion state of the target user relative to the digital person is a stationary state, and the digital person has been located within the line of sight of the target user for a predetermined time.
4. The human-computer interaction method according to claim 2, wherein the interaction information includes avatar information and service output information, and the reconfiguring of the interaction information of the digital person according to the attribute information includes:
reconfiguring the avatar information and the service output information according to the attribute information in response to determining that the distance between the target user and the digital person is less than the first predetermined distance and greater than or equal to a second predetermined distance, the motion state of the target user relative to the digital person is moving toward the digital person, and the digital person is located within a line of sight of the target user.
5. The human-computer interaction method according to claim 2, wherein the interaction information includes the avatar information; and
wherein the reconfiguring of the interaction information of the digital person according to the attribute information comprises at least one of the following steps:
in response to determining that the attribute information includes the eye position of the target user, reconfiguring a display position of the digital person according to the eye position of the target user so that a head region of the digital person is in a central region of a line of sight of the target user;
in response to determining that the attribute information includes the age of the target user, reconfiguring the displayed image of the digital person according to the age of the target user; and
in response to determining that the attribute information includes the clothing of the target user, reconfiguring a display clothing of the digital person according to the clothing of the target user.
6. The human-computer interaction method of claim 2, wherein the interaction information comprises the service output information; and
wherein the reconfiguring of the interaction information of the digital person according to the attribute information comprises at least one of the following steps:
in response to determining that the attribute information includes the target user's eye position, reconfiguring a display position of the service output information of the digital person in accordance with the target user's eye position; and
reconfiguring the service output information of the digital person according to the age of the target user in response to determining that the attribute information includes the age of the target user.
7. The method of claim 6, wherein the service output information comprises voice output information,
wherein reconfiguring the service output information of the digital person according to the age of the target user comprises:
reconfiguring at least one of timbre, volume and speech rate of the speech output information of the digital person according to the age of the target user.
8. The method of claim 6, wherein the service output information comprises text output information,
wherein reconfiguring the service output information of the digital person according to the age of the target user comprises:
reconfiguring at least one of a text font size and a text line spacing of the text output information of the digital person according to the age of the target user.
9. A human-computer interaction device, comprising:
an acquisition unit configured to acquire attribute information of a target user;
a configuration unit configured to reconfigure interaction information of a digital person according to the attribute information; and
a control unit configured to execute an interactive operation of the digital person according to the reconfigured interaction information.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
12. A computer program product comprising a computer program, wherein the computer program realizes the method of any one of claims 1-8 when executed by a processor.
CN202110185423.5A 2021-02-10 2021-02-10 Man-machine interaction method, device, electronic equipment, medium and computer program product Pending CN112860068A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110185423.5A CN112860068A (en) 2021-02-10 2021-02-10 Man-machine interaction method, device, electronic equipment, medium and computer program product


Publications (1)

Publication Number Publication Date
CN112860068A true CN112860068A (en) 2021-05-28

Family

ID=75988432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110185423.5A Pending CN112860068A (en) 2021-02-10 2021-02-10 Man-machine interaction method, device, electronic equipment, medium and computer program product

Country Status (1)

Country Link
CN (1) CN112860068A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499253A (en) * 2008-01-28 2009-08-05 宏达国际电子股份有限公司 Output picture regulation method and apparatus
CN105058393A (en) * 2015-08-17 2015-11-18 李泉生 Guest greeting robot
CN108510917A (en) * 2017-02-27 2018-09-07 北京康得新创科技股份有限公司 Event-handling method based on explaining device and explaining device
CN110339570A (en) * 2019-07-17 2019-10-18 网易(杭州)网络有限公司 Exchange method, device, storage medium and the electronic device of information
CN110609620A (en) * 2019-09-05 2019-12-24 深圳追一科技有限公司 Human-computer interaction method and device based on virtual image and electronic equipment
CN110822641A (en) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 Air conditioner, control method and device thereof and readable storage medium
CN110822647A (en) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 Control method of air conditioner, air conditioner and storage medium
CN110822648A (en) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 Air conditioner, control method thereof, and computer-readable storage medium
CN110822649A (en) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 Control method of air conditioner, air conditioner and storage medium
CN111339938A (en) * 2020-02-26 2020-06-26 广州腾讯科技有限公司 Information interaction method, device, equipment and storage medium
CN111857335A (en) * 2020-07-09 2020-10-30 北京市商汤科技开发有限公司 Virtual object driving method and device, display equipment and storage medium
CN111986775A (en) * 2020-08-03 2020-11-24 深圳追一科技有限公司 Body-building coach guiding method and device for digital person, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI796880B (en) * 2021-12-20 2023-03-21 賴綺珊 Product problem analysis system, method and storage medium assisted by artificial intelligence
CN114378850A (en) * 2022-03-23 2022-04-22 北京优全智汇信息技术有限公司 Interaction method and system of customer service robot, electronic equipment and storage medium
CN114378850B (en) * 2022-03-23 2022-07-01 北京优全智汇信息技术有限公司 Interaction method and system of customer service robot, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108009521B (en) Face image matching method, device, terminal and storage medium
US20200412975A1 (en) Content capture with audio input feedback
KR102347336B1 (en) Gaze point determination method and apparatus, electronic device and computer storage medium
CN105868827B (en) A kind of multi-modal exchange method of intelligent robot and intelligent robot
US8723796B2 (en) Multi-user interactive display system
US10019779B2 (en) Browsing interface for item counterparts having different scales and lengths
US9349131B2 (en) Interactive digital advertising system
US20130201105A1 (en) Method for controlling interactive display system
CN106897659B (en) The recognition methods of blink movement and device
CN113656582B (en) Training method of neural network model, image retrieval method, device and medium
US20160086020A1 (en) Apparatus and method of user interaction
KR20190030140A (en) Method for eye-tracking and user terminal for executing the same
CN106202316A (en) Merchandise news acquisition methods based on video and device
JP2014225288A (en) User interface method and system based on natural gesture
KR102595790B1 (en) Electronic apparatus and controlling method thereof
CN109614925A (en) Dress ornament attribute recognition approach and device, electronic equipment, storage medium
CN112860068A (en) Man-machine interaction method, device, electronic equipment, medium and computer program product
US20200412864A1 (en) Modular camera interface
US10949654B2 (en) Terminal and server for providing video call service
KR20190072066A (en) Terminal and server providing a video call service
US10026176B2 (en) Browsing interface for item counterparts having different scales and lengths
CN112857268A (en) Object area measuring method, device, electronic device and storage medium
CN112925412A (en) Control method and device of intelligent mirror and storage medium
KR20200016629A (en) Server and method for generating similar user cluster of characteristics
CN114998963A (en) Image detection method and method for training image detection model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination