CN114762981B - Interaction method and related device - Google Patents

Interaction method and related device

Info

Publication number
CN114762981B
Authority
CN
China
Prior art keywords: growth, information, target, sub, category
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011642650.8A
Other languages
Chinese (zh)
Other versions
CN114762981A (en)
Inventor
傅峰峰 (Fu Fengfeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Fugang Life Intelligent Technology Co Ltd
Original Assignee
Guangzhou Fugang Life Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Fugang Life Intelligent Technology Co Ltd filed Critical Guangzhou Fugang Life Intelligent Technology Co Ltd
Priority to CN202011642650.8A
Publication of CN114762981A
Application granted
Publication of CN114762981B


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/08 Programme-controlled manipulators characterised by modular constructions
    • B25J 9/16 Programme controls
    • B25J 9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • B25J 11/00 Manipulators not otherwise provided for
    • B25J 11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J 19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 Sensing devices
    • B25J 19/021 Optical sensing devices
    • B25J 19/023 Optical sensing devices including video camera means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)

Abstract

The embodiment of the application provides an interaction method and a related device. The interaction method is applied to a robot system and includes the following steps: acquiring a target image of a target user; determining action information of the target user according to the target image; sending the action information to a server; receiving growth information sent by the server, wherein the growth information is determined by the server according to the action information; and performing growth adjustment according to the growth information to obtain an adjusted robot system. The interaction method and the related device can make the robot more engaging.

Description

Interaction method and related device
Technical Field
The application relates to the technical field of data processing, in particular to an interaction method and a related device.
Background
An intelligent robot can understand human language, converse with an operator in human language, and form in its own "consciousness" a detailed working model of the external environment that enables it to "survive". It can assess its situation and adjust its own actions to meet all the operator's requirements, and it can plan desired actions and carry them out even when information is insufficient and the environment changes rapidly. Current robot systems, however, serve a single purpose and usually support only routine actions, so they offer the user little engagement.
Disclosure of Invention
The embodiment of the application provides an interaction method and a related device, which can make a robot more engaging.
A first aspect of an embodiment of the present application provides an interaction method, applied to a robot system, where the method includes:
acquiring a target image of a target user;
determining action information of the target user according to the target image;
transmitting the action information to a server;
receiving growth information sent by the server, wherein the growth information is determined by the server according to the action information;
and performing growth adjustment according to the growth information to obtain an adjusted robot system.
With reference to the first aspect, in one possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and performing growth adjustment according to the growth information to obtain an adjusted robot system includes:
obtaining a growth category of the robot system, and obtaining at least one reference growth category;
determining a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, determining a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and determining a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
the first target growth category is adjusted according to the first sub-growth information to obtain an adjusted first target growth category, the second target growth category is adjusted according to the second sub-growth information to obtain an adjusted second target growth category, and the third target growth category is adjusted according to the third sub-growth information to obtain an adjusted third target growth category.
With reference to the first aspect, in one possible implementation manner, the method further includes:
sending the adjusted target growth category to a target terminal, and instructing the target terminal to display the target growth category, wherein the target terminal is a device associated with the target user.
A second aspect of the embodiments of the present application provides an interaction method, where the method is applied to a server, and the method includes:
receiving action information sent by a robot system;
determining growth information of the robot system according to the action information;
and sending the growth information to the robot system.
With reference to the second aspect, in one possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and the determining the growth information of the robot system according to the action information includes:
acquiring facial motion information, hand motion information, and leg motion information from the action information;
determining the mood value of the target user according to the facial motion information;
determining the first sub-growth information according to the mood value;
determining the second sub-growth information according to the hand motion information and the leg motion information;
and determining the third sub-growth information according to the first sub-growth information and the second sub-growth information.
A third aspect of the embodiments of the present application provides an interaction device, applied to a robot system, the device comprising:
an acquisition unit configured to acquire a target image of a target user;
a determining unit, configured to determine action information of the target user according to the target image;
a sending unit, configured to send the action information to a server;
a receiving unit, configured to receive the growth information sent by the server, wherein the growth information is determined by the server according to the action information;
an adjusting unit, configured to perform growth adjustment according to the growth information to obtain an adjusted robot system.
With reference to the third aspect, in one possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and the adjustment unit is configured to:
obtaining a growth category of the robot system, and obtaining at least one reference growth category;
determining a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, determining a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and determining a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
the first target growth category is adjusted according to the first sub-growth information to obtain an adjusted first target growth category, the second target growth category is adjusted according to the second sub-growth information to obtain an adjusted second target growth category, and the third target growth category is adjusted according to the third sub-growth information to obtain an adjusted third target growth category.
With reference to the third aspect, in one possible implementation manner, the apparatus is further configured to:
sending the adjusted target growth category to a target terminal, and instructing the target terminal to display the target growth category, wherein the target terminal is a device associated with the target user.
A fourth aspect of the embodiments of the present application provides an interaction device, where the device is applied to a server, the device includes:
a receiving unit, configured to receive the action information sent by the robot system;
a determining unit, configured to determine growth information of the robot system according to the action information;
a sending unit, configured to send the growth information to the robot system.
With reference to the fourth aspect, in one possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and the determining unit is configured to:
acquiring facial motion information, hand motion information, and leg motion information from the action information;
determining the mood value of the target user according to the facial motion information;
determining the first sub-growth information according to the mood value;
determining the second sub-growth information according to the hand motion information and the leg motion information;
and determining the third sub-growth information according to the first sub-growth information and the second sub-growth information.
A fifth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
A sixth aspect of the embodiments of the present application provides a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The implementation of the embodiment of the application has at least the following beneficial effects:
A target image of a target user is acquired; action information of the target user is determined according to the target image; the action information is sent to a server; growth information sent by the server, determined by the server according to the action information, is received; and growth adjustment is performed according to the growth information to obtain an adjusted robot system. The robot system can thus be adjusted according to the action information of the target user, which makes the robot system more engaging and more convenient to use.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of an interaction method according to an embodiment of the present application;
FIG. 2 is an interaction schematic diagram of an interaction method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of another interaction method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an interaction device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an interaction device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of an interaction method according to an embodiment of the present application. As shown in fig. 1, the robot system may include an image pickup device, which may be a camera, a Kinect sensor, a depth camera, or the like, and which may be used to recognize the user's actions. The robot system acquires a target image of a target user through the image pickup device, determines action information of the target user according to the target image, and sends the action information to the server. After receiving the action information, the server determines growth information for the robot system according to it and sends the growth information back to the robot system. After receiving the growth information, the robot system performs growth adjustment according to it to obtain an adjusted robot system. The robot system can therefore be adjusted according to the action information of the target user, making it more engaging and more convenient to use.
Referring to fig. 2, fig. 2 is an interaction schematic diagram of an interaction method according to an embodiment of the present application. As shown in fig. 2, the method is applied to a robot system and a server, and specifically comprises the following steps:
201. The robot system acquires a target image of a target user.
The robot system may be a system including a physical robot, such as an intelligent robot or an AGV cart carrying a robotic arm. The robot system may also include a virtual robot configured identically to the physical robot; the virtual robot presents all of the robot system's features.
The robot system may acquire the target image through the image pickup device. It may also acquire the target image by other means, for example from an image input by the user through an electronic device, or from a target image received from the server.
202. The robot system determines the action information of the target user according to the target image.
The robot system can perform action recognition on the target image to determine the action information of the target user. The action information may include facial motion information, hand motion information, leg motion information, and the like.
203. The robot system transmits the action information to a server.
The robot system may transmit the action information to the server through a wired network or a wireless network.
204. The server receives the action information sent by the robot system.
The server may receive the action information transmitted by the robot system through a wired network or a wireless network.
205. The server determines the growth information of the robot system according to the action information.
The growth information may include a plurality of sub-growth information, for example, including first sub-growth information, second sub-growth information, and third sub-growth information. Different sub-growth information may correspond to different growth categories.
206. The server sends the growth information to the robot system.
207. The robot system receives the growth information sent by the server.
208. The robot system performs growth adjustment according to the growth information to obtain an adjusted robot system.
The robot system may determine a target growth category corresponding to the sub-growth information in the growth information, and adjust the target growth category according to the sub-growth information to obtain an adjusted robot system.
The target growth category may be a system element of the robot system, for example the system level, use level, game level, or mental level of the robot system.
The server may perform simulation training on the motion information transmitted by the robot system to obtain motions related to it; for example, if the motion information describes drinking tea, simulation training may yield a tea-making motion, a tea-pouring motion, and the like. This enriches the action information, enlarges its sample space, and makes the system still more engaging.
In this example, the target image of the target user is acquired, the action information of the target user is determined according to the target image and sent to the server, the growth information determined by the server according to the action information is received, and growth adjustment is performed according to the growth information to obtain an adjusted robot system.
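To make the exchange in steps 201 to 208 concrete, the following is a minimal sketch of the robot-side loop and of a server-side counterpart, written in Python. It is an illustration under assumptions rather than the disclosed implementation: the HTTP transport, the /growth endpoint, the JSON field names, and the robot interface (camera.capture_image, recognize_actions, apply_growth_adjustment) are all invented for the example, since the embodiment only requires some wired or wireless network. The helper determine_growth_info is sketched later, under steps A1 to A5.

    import requests  # assumed transport; the embodiment only requires a wired or wireless network

    SERVER_URL = "http://server.example/growth"  # hypothetical endpoint

    def interaction_step(robot):
        # Step 201: acquire a target image of the target user from the image pickup device.
        target_image = robot.camera.capture_image()
        # Step 202: determine the action information (facial, hand and leg motions) from the image.
        action_info = robot.recognize_actions(target_image)
        # Step 203: send the action information to the server.
        response = requests.post(SERVER_URL, json={"action_info": action_info})
        # Step 207: receive the growth information the server determined from the action information.
        growth_info = response.json()["growth_info"]
        # Step 208: perform growth adjustment to obtain the adjusted robot system.
        robot.apply_growth_adjustment(growth_info)

    # Server-side counterpart (steps 204 to 206), written here as an assumed Flask handler:
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/growth", methods=["POST"])
    def handle_action_info():
        action_info = request.get_json()["action_info"]   # step 204: receive the action information
        growth_info = determine_growth_info(action_info)  # step 205: determine the growth information
        return jsonify({"growth_info": growth_info})      # step 206: send it to the robot system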
In one possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information. In this example, each piece of growth information may be a growth value; for example, the first sub-growth information may be a first sub-growth value. One possible way for the server to determine the growth information of the robot system according to the action information includes:
A1, acquiring facial motion information, hand motion information, and leg motion information from the action information;
A2, determining the mood value of the target user according to the facial motion information;
A3, determining the first sub-growth information according to the mood value;
A4, determining the second sub-growth information according to the hand motion information and the leg motion information;
A5, determining the third sub-growth information according to the first sub-growth information and the second sub-growth information.
The categories corresponding to the first, second, and third sub-growth information are different. The category corresponding to the third sub-growth information may characterize the growth of the categories corresponding to the first and second sub-growth information. For example, the category corresponding to the third sub-growth information may be a major category, with the categories of the first and second sub-growth information as subcategories within it; specifically, the category of the third sub-growth information may be the robot system as a whole, the category of the second sub-growth information may be the motion system within the robot system, and the category of the first sub-growth information may be the intelligence system within the robot system.
Facial motion information, hand motion information, and leg motion information are associated with different information categories within the motion information, so each can be extracted according to its information category.
The expression information of the target user is determined from the facial motion information, and the mood value of the target user is determined from the expression information.
Different facial motion information corresponds to different expression information, and different expression information corresponds to different mood values. For example, if the facial motion information is a squinting or eye-closing motion, the expression information may indicate sadness. The expression information includes sadness, happiness, a neutral expression, and the like, where a neutral expression is the user's expression when it shows no fluctuation. The mood value for sadness is lower than the mood value for a neutral expression, which in turn is lower than the mood value for happiness. The higher the mood value, the higher the growth value; the lower the mood value, the lower the growth value. Sadness can have different grades: the higher the grade of sadness, the lower the mood value. Happiness can likewise have different grades: the higher the grade, the higher the mood value. The mood value for sadness is negative, the mood value for a neutral expression is 0, and the mood value for happiness is positive.
The first sub-growth information may be determined from the mood value as follows: the larger the absolute value of the difference between the mood value and a preset mood value, the higher the first sub-growth information; the smaller that absolute value, the lower the first sub-growth information. For example, if the preset mood value is the mood value of a neutral expression, then the lower (more negative) a sadness mood value is, the higher the first sub-growth information becomes.
The corresponding second sub-growth information can be determined according to the mapping relation between the hand motion information, the leg motion information, and the second sub-growth information.
The average of the first sub-growth information and the second sub-growth information may be taken as the third sub-growth information, which characterizes, for example, the system growth value of the robot system.
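Taken together, steps A1 to A5 amount to the server-side computation sketched below. This is a hedged illustration only: the concrete mood values, the preset mood value of 0, and the two stand-in lookup functions are assumptions chosen to match the description, not values fixed by the embodiment.

    # Assumed mood values: negative for sadness, 0 for a neutral expression, positive for
    # happiness, with the magnitude following the expression's grade, as described above.
    MOOD_VALUES = {"sad": -2.0, "neutral": 0.0, "happy": 2.0}
    PRESET_MOOD_VALUE = 0.0  # assumed to be the neutral-expression mood value

    def classify_expression(facial_info):
        # Stand-in for the expression recognition described in the embodiment.
        return "happy" if "smile" in facial_info else "neutral"

    def lookup_motion_growth(hand_info, leg_info):
        # Stand-in for the stored mapping from hand and leg motion information to a growth value.
        return float(len(hand_info) + len(leg_info))

    def determine_growth_info(action_info):
        # A1: split the action information by information category.
        facial, hands, legs = action_info["facial"], action_info["hands"], action_info["legs"]
        # A2: facial motion information -> expression information -> mood value.
        mood = MOOD_VALUES[classify_expression(facial)]
        # A3: the first sub-growth value grows with |mood value - preset mood value|.
        first = abs(mood - PRESET_MOOD_VALUE)
        # A4: the second sub-growth value comes from the hand/leg mapping relation.
        second = lookup_motion_growth(hands, legs)
        # A5: the third sub-growth value is the mean of the first two, characterising
        # the system growth value of the robot system.
        third = (first + second) / 2.0
        return {"first": first, "second": second, "third": third}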
In one possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and one possible way for the robot system to perform growth adjustment and obtain an adjusted robot system includes:
B1, obtaining a growth category of the robot system, and obtaining at least one reference growth category;
B2, determining a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, determining a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and determining a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
B3, adjusting the first target growth category according to the first sub-growth information to obtain an adjusted first target growth category, adjusting the second target growth category according to the second sub-growth information to obtain an adjusted second target growth category, and adjusting the third target growth category according to the third sub-growth information to obtain an adjusted third target growth category.
The robot system may include a plurality of reference growth categories, and different reference growth categories correspond to different kinds of growth, for example a system growth category, a module growth category, a function growth category, or a skill growth category of the robot system. A skill growth category can be understood as covering the machine actions of the robot system; different categories have different machine actions.
The sub-growth information previously accumulated for a target growth category may be superimposed with the corresponding new sub-growth information to obtain the adjusted target growth category. For example, the sub-growth information previously accumulated for the first target growth category is superimposed with the first sub-growth information to obtain the superimposed sub-growth information of the first target growth category. Superposition may be performed by adding the growth values.
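Because the description states that superposition may be performed by adding the growth values, one hedged reading of steps B1 to B3 is the accumulator below. The category names and the dictionary representation are assumptions for illustration, following the earlier example in which the first, second, and third sub-growth information correspond to the intelligence system, the motion system, and the robot system as a whole.

    # B1: assumed reference growth categories kept by the robot system.
    growth_categories = {"intelligence": 0.0, "motion": 0.0, "system": 0.0}

    # B2: assumed mapping from each piece of sub-growth information to its target growth category.
    TARGET_CATEGORY = {"first": "intelligence", "second": "motion", "third": "system"}

    def apply_growth_adjustment(growth_info):
        for sub_name, growth_value in growth_info.items():
            category = TARGET_CATEGORY[sub_name]         # B2: find the target growth category
            growth_categories[category] += growth_value  # B3: superimpose by adding the growth values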
In one possible implementation manner, the robot system may further push the target growth category to a terminal device of the target user, specifically as follows:
sending the adjusted target growth category to a target terminal, and instructing the target terminal to display the target growth category, wherein the target terminal is a device associated with the target user.
The device associated with the target user may be, for example, the target user's cell phone, tablet computer, smart watch, etc.
By displaying the target category, the target terminal can show the target user the growth value of the corresponding target category, which improves the user experience.
In one possible implementation manner, the robot system may further include a projection device, through which content that the robot can perform may be demonstrated. The projection device may be a projector or the like; the projector includes a rotating device, and a focusing operation can be carried out by controlling the rotating device, so that the sharpness of the projection is adjusted. One possible projection procedure is:
C1, acquiring a first projection sharpness when the rotating device is adjusted from the initial position to a first position, and acquiring a second projection sharpness when the rotating device is adjusted from the initial position to a second position, wherein the second rotation step from the initial position to the second position is longer than the first rotation step from the initial position to the first position;
C2, acquiring a sharpness deviation value between the first projection sharpness and the second projection sharpness;
C3, determining a first target position according to the sharpness deviation value and the initial position;
C4, adjusting the rotating device to the first target position;
C5, acquiring a third projection sharpness at the first target position;
C6, if the third projection sharpness is lower than a preset sharpness threshold, acquiring a first distance value and a first angle value between the projection device and a projection curtain when the rotating device is adjusted from the initial position to the first position, and acquiring a second distance value and a second angle value between the projection device and the projection curtain when the rotating device is adjusted from the initial position to the second position;
C7, determining a first distance deviation value according to the first distance value and the second distance value, and determining a first angle deviation value according to the first angle value and the second angle value;
C8, determining a first reference correction parameter according to the first distance deviation value, and determining a second reference correction parameter according to the first angle deviation value;
C9, determining a first position offset according to the first reference correction parameter and the second reference correction parameter;
C10, acquiring expression information of the target user;
C11, determining a position satisfaction according to the expression information;
C12, determining a second position offset according to the position satisfaction and the first position offset;
C13, determining a second target position according to the second position offset and the first target position;
C14, adjusting the rotating device to the second target position.
The first projection sharpness for the rotating device at the first position may be acquired from a server or from an instruction input by the user; the second projection sharpness is acquired in the same way.
The first rotation step is smaller than the second rotation step, and the difference between them is smaller than a preset threshold, for example smaller than 10 steps. Because the difference between the second rotation step and the first rotation step is smaller than the preset threshold, the first target position can be determined more precisely, improving its accuracy.
The first projection sharpness may be obtained from the server as follows: an electronic device with a camera, such as a mobile phone, photographs the projected image while the rotating device is at the first position; the electronic device uploads the image to the server, and the server processes it to obtain the first projection sharpness. Of course, the server may also acquire the projected image through other channels, for example from a monitoring camera within a preset range of the projection device, which is not limited herein. The preset range is set from empirical values or historical data.
The first target position may be determined from the deviation value between the first projection sharpness and the second projection sharpness, together with the initial position.
The rotating device may be adjusted to the first target position from the initial position, from the first position, from the second position, or from any other position, which is not particularly limited herein.
The first target position may be the position of the rotating device when the projection device projects after auto-focusing.
The third projection sharpness at the first target position may be obtained in the same way as the first projection sharpness in the foregoing embodiment, which is not repeated here. The preset sharpness threshold may be set from empirical values or historical data.
The first distance value may be the distance between the center of the projection lens of the projection device and the plane of the projection curtain, and the first angle value may be the included angle between the plane of the lens's mirror surface and the plane of the projection curtain.
The first distance value can be obtained with a distance-measuring device, and the first angle value can be obtained by extending the two planes until they intersect.
The first distance deviation value is understood to be the fluctuation value between the second distance value and the first distance value, and the deviation value can be a positive value or a negative value.
The method for determining the first reference correction parameter according to the first distance deviation value may be: different distance deviation values correspond to different reference correction parameters, and then a first reference correction parameter can be determined according to the first distance deviation value. Different angle deviation values correspond to different reference correction parameters, and a second reference correction parameter can be determined according to the first angle deviation value. The reference correction parameter may be used to determine the positional offset.
A mean of the first reference correction parameter and the second reference correction parameter may be computed, and the first position offset determined from that mean. Different reference correction values correspond to different position offsets: the larger the reference correction value, the larger the first position offset, and the smaller the reference correction value, the smaller the offset. Determining the first position offset from the mean of the first and second reference correction parameters improves its accuracy.
The method for obtaining the expression information of the target user may be that a face image of the target user is obtained through an electronic device of the user, and the expression information of the user is determined according to the face image. The target user may be a user who debugs the projection device, or may be a user who views the projection screen using the projection device, or may be another related user, which is merely illustrative.
Different expression information has different satisfaction values. The expression indicated by the expression information may be a smiling face, a crying face, or a neutral face, where a neutral face is one showing no expression fluctuation. The satisfaction for a smiling face is higher than for a neutral face, and the satisfaction for a neutral face is higher than for a crying face.
A correction value for the first position offset may be determined from the satisfaction: the higher the satisfaction, the smaller the correction value, and the lower the satisfaction, the larger the correction value. The product of the first position offset and the correction value is taken as the second position offset; the first position offset may also be corrected with the correction value in other ways, for example by taking the sum of the first position offset and the correction value.
The first target position may be shifted by the second position offset to obtain the second target position: if the second position offset is positive, it is added to the first target position, and if it is negative, it is subtracted. Equivalently, the second position offset may be added to or subtracted from the position offset between the first target position and the initial position.
In this example, when the projection sharpness at the first target position fails to reach the preset standard, the projection device is adjusted a second time, which improves the accuracy of the adjustment.
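In outline, steps C1 to C14 form a two-pass focusing procedure: a sharpness-based pass followed by a geometric correction scaled by the user's satisfaction. The sketch below is a hedged reading of those steps. The linear rule for the first target position, the lookup stand-ins, and every constant are assumptions, since the description only says the position is determined according to the sharpness deviation value and the initial position; the projector interface (sharpness_at, rotate_to, distance_and_angle_at, capture_user_expression) is likewise invented for the example.

    # Stand-ins for the stored mapping relations (assumed forms only):
    def correction_from_distance(distance_deviation):
        return 0.05 * distance_deviation   # C8: distance deviation -> first reference correction parameter

    def correction_from_angle(angle_deviation):
        return 0.03 * angle_deviation      # C8: angle deviation -> second reference correction parameter

    def satisfaction_from_expression(expression):
        # C11: smiling > neutral > crying, in the order the description gives.
        return {"smile": 1.0, "neutral": 0.5, "cry": 0.0}.get(expression, 0.5)

    def correction_value(satisfaction):
        return 1.0 - satisfaction          # C12: higher satisfaction -> smaller correction value

    def focus_projector(proj, init_pos, step1, step2, sharpness_threshold):
        # C1-C2: probe two positions (step2 > step1) and take the sharpness deviation.
        s1 = proj.sharpness_at(init_pos + step1)
        s2 = proj.sharpness_at(init_pos + step2)
        deviation = s2 - s1
        # C3-C5: derive a first target position from the deviation and the initial
        # position (a linear rule is assumed here) and measure its sharpness.
        target1 = init_pos + step1 + deviation * (step2 - step1)
        proj.rotate_to(target1)
        if proj.sharpness_at(target1) >= sharpness_threshold:
            return target1
        # C6-C7: sharpness is still too low, so measure the distance and angle to the
        # projection curtain at both probe positions and take the deviations.
        d1, a1 = proj.distance_and_angle_at(init_pos + step1)
        d2, a2 = proj.distance_and_angle_at(init_pos + step2)
        # C8-C9: turn the deviations into reference correction parameters and use their
        # mean as the first position offset (a larger mean gives a larger offset).
        offset1 = (correction_from_distance(d2 - d1) + correction_from_angle(a2 - a1)) / 2.0
        # C10-C12: scale the offset by the correction value derived from the target
        # user's expression to obtain the second position offset.
        satisfaction = satisfaction_from_expression(proj.capture_user_expression())
        offset2 = offset1 * correction_value(satisfaction)
        # C13-C14: shift the first target position by the signed offset and move there.
        target2 = target1 + offset2
        proj.rotate_to(target2)
        return target2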
In one possible implementation manner, the motion amplitude value of the target user may be determined from the hand motion information and the leg motion information as follows:
D1, determining a plurality of keywords according to the hand motion information and the leg motion information;
D2, determining the criticality of each keyword in the plurality of keywords to obtain a plurality of criticalities;
D3, determining a motion amplitude value according to the plurality of criticalities.
The keyword extraction can be performed on the hand motion information and the leg motion information to obtain a plurality of keywords.
The motion amplitude can be determined from the average of the criticalities: the larger the average, the larger the motion amplitude; the smaller the average, the smaller the motion amplitude.
One possible determination of the criticality of each of the plurality of keywords, resulting in a plurality of criticalities, may include the steps of:
E1, determining a target position of a keyword a in the text information and a reference keyword level of the keyword a, wherein the keyword a is any keyword in the plurality of keywords;
E2, determining a reference criticality corresponding to the reference keyword level according to a mapping relation between preset keyword levels and criticality;
E3, determining a target first optimization factor corresponding to the target position according to a mapping relation between preset positions and first optimization factors;
E4, acquiring a volume parameter of the keyword a;
E5, determining a target second optimization factor corresponding to the volume parameter of the keyword a according to a mapping relation between preset volume parameters and second optimization factors;
E6, optimizing the reference criticality according to the target first optimization factor and the target second optimization factor to obtain the criticality of the keyword a.
The preset threshold can be set by the user or be a system default, and it may be an empirical value. The mapping relation between preset keyword levels and criticality, the mapping relation between preset positions and first optimization factors, and the mapping relation between preset volume parameters and second optimization factors can all be stored in the intelligent robot in advance.
In a specific implementation, taking keyword a as an example (keyword a being any one of the plurality of keywords), the intelligent robot can determine the target position of keyword a in the text information and the reference keyword level of keyword a, determine the reference criticality corresponding to that level according to the mapping relation between preset keyword levels and criticality, and determine the target first optimization factor corresponding to the target position according to the mapping relation between preset positions and first optimization factors. The value range of the first optimization factor may be -1 to 1, for example -0.08 to 0.08.
Further, differences in a keyword's volume parameter indicate how much importance the user attaches to it; the volume parameter may be volume or pitch. The target second optimization factor corresponding to the volume parameter of keyword a is determined according to the mapping relation between preset volume parameters and second optimization factors; the value range of the second optimization factor may be -1 to 1, for example -0.032 to 0.032. The reference criticality is then adjusted according to the target first optimization factor and the target second optimization factor to obtain the criticality of keyword a, using the following formula:
criticality of keyword a = reference criticality of keyword a × (1 + target first optimization factor) × (1 + target second optimization factor)
In this way, the criticality of a keyword can be determined accurately from its position and volume parameters, which can improve the accuracy of command recognition.
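With the formula above restored to its multiplicative form, steps E1 to E6 and the amplitude rule of steps D1 to D3 reduce to a few lines. The mapping table and factor functions below are invented placeholders that stay inside the value ranges the description gives (first optimization factor within -1 to 1, for example -0.08 to 0.08; second within -1 to 1, for example -0.032 to 0.032).

    # E2: assumed mapping from reference keyword level to reference criticality.
    REFERENCE_CRITICALITY = {"high": 0.9, "medium": 0.6, "low": 0.3}

    def first_factor(position):
        # E3: assumed position mapping; position is the keyword's relative place in the
        # text (0.0 = start, 1.0 = end), clamped to the example range -0.08..0.08.
        return max(-0.08, min(0.08, 0.08 * (1.0 - 2.0 * position)))

    def second_factor(volume):
        # E5: assumed volume mapping; louder keywords get a larger factor, clamped to
        # the example range -0.032..0.032 (volume normalized to 0.0..1.0).
        return max(-0.032, min(0.032, 0.064 * (volume - 0.5)))

    def keyword_criticality(level, position, volume):
        # E6: optimize the reference criticality with both factors (multiplicative form).
        return REFERENCE_CRITICALITY[level] * (1 + first_factor(position)) * (1 + second_factor(volume))

    def motion_amplitude(keywords):
        # D2-D3: the motion amplitude follows the mean criticality of the keywords.
        crits = [keyword_criticality(k["level"], k["position"], k["volume"]) for k in keywords]
        return sum(crits) / len(crits)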
Referring to fig. 3, fig. 3 is a flow chart of another interaction method according to an embodiment of the present application. As shown in fig. 3, the method is applied to a robot system, and the method includes:
301. acquiring a target image of a target user;
302. determining action information of the target user according to the target image;
303. transmitting the action information to a server;
304. receiving growth information sent by the server, wherein the growth information is determined by the server according to the action information;
the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information.
305. obtaining a growth category of the robot system, and obtaining at least one reference growth category;
306. determining a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, determining a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and determining a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
307. the first target growth category is adjusted according to the first sub-growth information to obtain an adjusted first target growth category, the second target growth category is adjusted according to the second sub-growth information to obtain an adjusted second target growth category, and the third target growth category is adjusted according to the third sub-growth information to obtain an adjusted third target growth category.
In this example, by adjusting each target growth category according to the corresponding sub-growth information, the adjusted target growth categories are obtained, and the accuracy of adjusting the target categories can be improved.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that, in order to achieve the above-mentioned functions, the terminal includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application may divide the functional units of the terminal according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
In accordance with the foregoing, referring to fig. 4, fig. 4 is a schematic structural diagram of an interaction device according to an embodiment of the present application. As shown in fig. 4, the apparatus is applied to a robot system, and the apparatus includes:
an acquisition unit 401 for acquiring a target image of a target user;
a determining unit 402, configured to determine action information of the target user according to the target image;
a transmitting unit 403, configured to transmit the action information to a server;
a receiving unit 404, configured to receive growth information sent by the server, where the growth information is determined by the server according to the action information;
and an adjusting unit 405 for performing growth adjustment according to the growth information to obtain an adjusted robot system.
In one possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and the adjusting unit 405 is configured to:
obtaining a growth category of the robot system, and obtaining at least one reference growth category;
determining a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, determining a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and determining a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
the first target growth category is adjusted according to the first sub-growth information to obtain an adjusted first target growth category, the second target growth category is adjusted according to the second sub-growth information to obtain an adjusted second target growth category, and the third target growth category is adjusted according to the third sub-growth information to obtain an adjusted third target growth category.
In one possible implementation, the apparatus is further configured to:
sending the adjusted target growth category to a target terminal, and instructing the target terminal to display the target growth category, wherein the target terminal is a device associated with the target user.
In accordance with the foregoing, referring to fig. 5, fig. 5 is a schematic structural diagram of an interaction device according to an embodiment of the present application. As shown in fig. 5, the apparatus is applied to a server, and the apparatus includes:
a receiving unit 501, configured to receive action information sent by a robot system;
a determining unit 502, configured to determine growth information of the robot system according to the action information;
a transmitting unit 503 configured to transmit the growth information to the robot system.
In a possible implementation manner, the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and the determining unit 502 is configured to:
acquiring facial motion information, hand motion information, and leg motion information from the action information;
determining the mood value of the target user according to the facial motion information;
determining the first sub-growth information according to the mood value;
determining the second sub-growth information according to the hand motion information and the leg motion information;
and determining the third sub-growth information according to the first sub-growth information and the second sub-growth information.
The embodiment of the application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, where the computer program causes a computer to execute part or all of the steps of any one of the interaction methods described in the method embodiments above.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program that causes a computer to perform some or all of the steps of any one of the interaction methods described in the method embodiments above.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, such as the division of the units, merely a logical function division, and there may be additional manners of dividing the actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software program modules.
The integrated units, if implemented in the form of software program modules and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method described in the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be implemented by a program that instructs associated hardware, and the program may be stored in a computer readable memory, which may include: flash disk, read-only memory, random access memory, magnetic or optical disk, etc.
The embodiments of the present application have been described above in detail, and specific examples have been used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only meant to help understand the method of the present application and its core idea. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (8)

1. An interaction method, applied to a robotic system, the method comprising:
acquiring a target image of a target user;
determining action information of the target user according to the target image;
transmitting the action information to a server;
receiving growth information sent by the server, wherein the growth information is determined by the server according to the action information;
performing growth adjustment according to the growth information to obtain an adjusted robot system;
wherein the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and the performing growth adjustment according to the growth information to obtain an adjusted robot system includes:
acquiring growth categories of the robot system to obtain at least one reference growth category;
determining a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, determining a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and determining a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
adjusting the first target growth category according to the first sub-growth information to obtain an adjusted first target growth category, adjusting the second target growth category according to the second sub-growth information to obtain an adjusted second target growth category, and adjusting the third target growth category according to the third sub-growth information to obtain an adjusted third target growth category.
2. The method according to claim 1, wherein the method further comprises:
sending the adjusted target growth category to a target terminal, and instructing the target terminal to display the adjusted target growth category, wherein the target terminal is a device associated with the target user.
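For concreteness, the following is a minimal Python sketch of the growth-adjustment step recited in claims 1 and 2. All identifiers, the category names ("mood", "motion", "overall"), and the (category, increment) message shape are invented for illustration and are not part of the claims.

```python
# A minimal, self-contained sketch of the growth-adjustment step in
# claims 1 and 2. The category names and the (category, increment)
# message shape are assumptions made for this example.
from dataclasses import dataclass, field


@dataclass
class RobotSystem:
    # Current growth state of the robot, keyed by growth category.
    growth: dict = field(
        default_factory=lambda: {"mood": 0, "motion": 0, "overall": 0}
    )
    # The robot's own growth categories serve as the reference categories.
    reference_categories: tuple = ("mood", "motion", "overall")

    def adjust_growth(self, growth_info: dict) -> None:
        """growth_info maps the sub-growth keys "first", "second" and
        "third" to (category, increment) pairs received from the server."""
        for sub_key in ("first", "second", "third"):
            category, increment = growth_info[sub_key]
            # Determine the target growth category corresponding to this
            # sub-growth information from the reference growth categories.
            if category not in self.reference_categories:
                continue
            # Adjust the target growth category according to the
            # sub-growth information.
            self.growth[category] += increment


robot = RobotSystem()
robot.adjust_growth(
    {"first": ("mood", 2), "second": ("motion", 1), "third": ("overall", 3)}
)
print(robot.growth)  # {'mood': 2, 'motion': 1, 'overall': 3}
```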
3. An interaction method, wherein the method is applied to a server, the method comprising:
receiving action information sent by a robot system;
determining growth information of the robot system according to the action information, wherein the growth information comprises first sub-growth information, second sub-growth information, and third sub-growth information;
transmitting the growth information to the robot system so that the robot system performs growth adjustment according to the growth information to obtain an adjusted robot system;
wherein the robot system performs growth adjustment according to the growth information to obtain an adjusted robot system, comprising:
acquiring, by the robot system, growth categories of the robot system to obtain at least one reference growth category;
determining, by the robot system, a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
adjusting, by the robot system, the first target growth category according to the first sub-growth information to obtain an adjusted first target growth category, the second target growth category according to the second sub-growth information to obtain an adjusted second target growth category, and the third target growth category according to the third sub-growth information to obtain an adjusted third target growth category.
4. The method according to claim 3, wherein the determining growth information of the robot system according to the action information comprises:
acquiring facial motion information, hand motion information, and leg motion information from the action information;
determining a mood value of the target user according to the facial motion information;
determining the first sub-growth information according to the mood value;
determining the second sub-growth information according to the hand motion information and the leg motion information;
and determining the third sub-growth information according to the first sub-growth information and the second sub-growth information.
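By way of illustration, a minimal Python sketch of the server-side derivation recited in claims 3 and 4 follows. The mood table, the 0-10 motion levels, and the averaging and summing rules are assumptions made for this example; the claims do not prescribe any particular formula.

```python
# A minimal sketch of the server-side derivation in claims 3 and 4.
# The mood table and all numeric rules below are invented for
# illustration only.
MOOD_TABLE = {"smile": 5, "neutral": 0, "frown": -3}  # assumed mapping


def determine_growth_info(action_info: dict) -> dict:
    face = action_info["face"]        # facial motion information
    hand = action_info["hand_level"]  # hand motion information (0-10)
    leg = action_info["leg_level"]    # leg motion information (0-10)

    # First sub-growth information from the mood value of the target user.
    mood_value = MOOD_TABLE.get(face, 0)
    first = ("mood", mood_value)

    # Second sub-growth information from the hand and leg motion information.
    second = ("motion", (hand + leg) // 2)

    # Third sub-growth information combines the first and the second.
    third = ("overall", first[1] + second[1])

    return {"first": first, "second": second, "third": third}


info = determine_growth_info({"face": "smile", "hand_level": 6, "leg_level": 2})
print(info)  # {'first': ('mood', 5), 'second': ('motion', 4), 'third': ('overall', 9)}
```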
5. An interaction device, applied to a robot system, the device comprising:
an acquisition unit, configured to acquire a target image of a target user;
a determining unit, configured to determine action information of the target user according to the target image;
a sending unit, configured to send the action information to a server;
a receiving unit, configured to receive growth information sent by the server, wherein the growth information is determined by the server according to the action information;
an adjusting unit, configured to perform growth adjustment according to the growth information to obtain an adjusted robot system;
wherein the growth information includes first sub-growth information, second sub-growth information, and third sub-growth information, and the manner in which the adjusting unit performs growth adjustment according to the growth information to obtain an adjusted robot system specifically includes:
acquiring growth categories of the robot system to obtain at least one reference growth category;
determining a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, determining a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and determining a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
adjusting the first target growth category according to the first sub-growth information to obtain an adjusted first target growth category, adjusting the second target growth category according to the second sub-growth information to obtain an adjusted second target growth category, and adjusting the third target growth category according to the third sub-growth information to obtain an adjusted third target growth category.
6. An interaction device, applied to a server, the device comprising:
a receiving unit, configured to receive action information sent by a robot system;
a determining unit, configured to determine growth information of the robot system according to the action information, wherein the growth information comprises first sub-growth information, second sub-growth information, and third sub-growth information;
a sending unit, configured to send the growth information to the robot system, so that the robot system performs growth adjustment according to the growth information to obtain an adjusted robot system;
wherein the manner in which the robot system performs growth adjustment according to the growth information to obtain an adjusted robot system specifically includes:
acquiring, by the robot system, growth categories of the robot system to obtain at least one reference growth category;
determining, by the robot system, a first target growth category corresponding to the first sub-growth information from the at least one reference growth category, a second target growth category corresponding to the second sub-growth information from the at least one reference growth category, and a third target growth category corresponding to the third sub-growth information from the at least one reference growth category;
adjusting, by the robot system, the first target growth category according to the first sub-growth information to obtain an adjusted first target growth category, the second target growth category according to the second sub-growth information to obtain an adjusted second target growth category, and the third target growth category according to the third sub-growth information to obtain an adjusted third target growth category.
7. The device according to claim 6, wherein the determining unit is configured to:
acquire facial motion information, hand motion information, and leg motion information from the action information;
determine a mood value of the target user according to the facial motion information;
determine the first sub-growth information according to the mood value;
determine a motion amplitude value of the target user according to the hand motion information and the leg motion information;
determine the second sub-growth information according to the motion amplitude value;
and determine the third sub-growth information according to the first sub-growth information and the second sub-growth information.
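Claim 7 differs from claim 4 in that the hand and leg motion information are first reduced to a single motion amplitude value, from which the second sub-growth information is derived. A one-function sketch follows; taking the maximum of the two levels is an invented aggregation rule used only for illustration.

```python
# Claim 7's refinement of claim 4: derive the second sub-growth
# information from a single motion amplitude value. The max() rule
# below is an assumption, not part of the claims.
def motion_amplitude(hand_level: int, leg_level: int) -> int:
    # Motion amplitude value of the target user, from hand and leg motion.
    return max(hand_level, leg_level)


def second_sub_growth(hand_level: int, leg_level: int) -> tuple:
    amplitude = motion_amplitude(hand_level, leg_level)
    return ("motion", amplitude)


print(second_sub_growth(6, 2))  # ('motion', 6)
```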
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-4.
CN202011642650.8A 2020-12-30 2020-12-30 Interaction method and related device Active CN114762981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011642650.8A CN114762981B (en) 2020-12-30 2020-12-30 Interaction method and related device

Publications (2)

Publication Number Publication Date
CN114762981A (en) 2022-07-19
CN114762981B (en) 2024-03-15

Family

ID=82363541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011642650.8A Active CN114762981B (en) 2020-12-30 2020-12-30 Interaction method and related device

Country Status (1)

Country Link
CN (1) CN114762981B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004170756A (en) * 2002-11-21 2004-06-17 Sony Corp Unit and method for robot control, recording medium, and program
CN104951077A (en) * 2015-06-24 2015-09-30 百度在线网络技术(北京)有限公司 Man-machine interaction method and device based on artificial intelligence and terminal equipment
CN106695839A (en) * 2017-03-02 2017-05-24 青岛中公联信息科技有限公司 Bionic intelligent robot for toddler education
CN107030691A (en) * 2017-03-24 2017-08-11 华为技术有限公司 A kind of data processing method and device for nursing robot
CN207824902U (en) * 2018-02-05 2018-09-07 深圳市金大精密制造有限公司 Multifunctional children growth robot
CN109446303A * 2018-10-09 2019-03-08 深圳市三宝创新智能有限公司 Robot interaction method and apparatus, computer device, and readable storage medium

Similar Documents

Publication Publication Date Title
US20210383586A1 (en) Artificial intelligence-based animation character drive method and related apparatus
CN109361865B (en) Shooting method and terminal
CN103942021B Content presentation method, method for pushing content presentation modes, and intelligent terminal
US10486312B2 (en) Robot, robot control method, and robot system
CN110263213B (en) Video pushing method, device, computer equipment and storage medium
WO2018040785A1 (en) Method and device for controlling photography of unmanned aerial vehicle, and wearable device
CN108848313B (en) Multi-person photographing method, terminal and storage medium
EP3973453A1 (en) Real-world object recognition for computing device
CN107832836A Model-free deep reinforcement learning exploration method and device
CN110741377A (en) Face image processing method and device, storage medium and electronic equipment
CN108683850B (en) Shooting prompting method and mobile terminal
CN106507178A (en) Video broadcasting method and device
CN109752951B (en) Control system processing method and device, storage medium and electronic device
CN111308707B (en) Picture display adjusting method and device, storage medium and augmented reality display equipment
CN107948743A (en) Video pushing method and its device, storage medium
CN117083641A (en) Real-time experience real-size eye wear device
CN110478903B (en) Control method and device for virtual camera
KR20230157494A (en) Actual size eyewear in real time
KR102229034B1 (en) Apparatus and method for creating information related to facial expression and apparatus for creating facial expression
CN112533070B (en) Video sound and picture adjusting method, terminal and computer readable storage medium
CN114762981B (en) Interaction method and related device
CN113342229A (en) Method and device for adjusting font size, terminal equipment and computer readable storage medium
CN109788227A Image processing method and device, equipment and storage medium
US10922825B2 (en) Image data processing method and electronic device
CN112712175B (en) Robot learning method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant