CN114789470A - Method and device for adjusting simulation robot - Google Patents

Method and device for adjusting simulation robot

Info

Publication number
CN114789470A
CN114789470A
Authority
CN
China
Prior art keywords
simulation robot
key points
robot
simulation
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210087499.9A
Other languages
Chinese (zh)
Inventor
Wang Hongguang (王红光)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mengtebo Intelligent Robot Technology Co ltd
Original Assignee
Beijing Mengtebo Intelligent Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mengtebo Intelligent Robot Technology Co ltd filed Critical Beijing Mengtebo Intelligent Robot Technology Co ltd
Priority to CN202210087499.9A
Publication of CN114789470A
Legal status: Pending (current)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00: Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/0095: Means or methods for testing manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the disclosure provide a method and a device for adjusting a simulation robot. The method comprises the following steps: subjecting the simulation robot and a tester to the same test items; performing image recognition on the test processes of the simulation robot and the tester according to the test items; comparing the image recognition results of the simulation robot and the tester to obtain difference information; and adjusting the simulation robot according to the difference information, so as to reduce the difference from humans, improve the simulation effect of the simulation robot, and optimize the user experience.

Description

Method and device for adjusting simulation robot
Technical Field
The disclosure relates to the field of robots, and in particular, to a method and an apparatus for adjusting a simulation robot.
Background
At present, various simulation robots keep emerging. A current simulation robot, such as a super-simulation robot, is an intelligent robot with the same appearance and functions as a human: it not only has five sense organs and four limbs, but can also be highly realistic, with body movements and postures barely distinguishable from those of a real person.
However, such a simulation robot can only complete corresponding actions according to specific instructions and cannot autonomously respond to various external signals, so its actions are rigid and its simulation effect needs to be improved. Therefore, how to improve the simulation effect has become an urgent problem to be solved.
Disclosure of Invention
The present disclosure provides a method and an apparatus for adjusting a simulation robot, which can improve the simulation effect of the simulation robot.
In a first aspect, an embodiment of the present disclosure provides a method for adjusting a simulation robot, where the method includes:
subjecting the simulation robot and a tester to the same test items;
respectively carrying out image recognition on the test processes of the simulation robot and the tester according to the test items;
comparing the image recognition results of the simulation robot and the tester to obtain difference information;
and adjusting the simulation robot according to the difference information.
In some implementations of the first aspect, subjecting the simulation robot and the tester to the same test items includes:
sending the test items to the simulation robot for the simulation robot to execute the test items; and
displaying the test items to the tester for the tester to execute the test items.
In some implementations of the first aspect, the test items include a human motion test and/or a facial expression test; and
performing image recognition on the test processes of the simulation robot and the tester according to the test items includes:
performing target detection on the videos of the simulation robot and the tester during the test to obtain human body images and/or human face images of the simulation robot and the tester; and
performing human body motion recognition and/or facial expression recognition on the simulation robot and the tester according to the human body images and/or the human face images of the simulation robot and the tester.
In some implementations of the first aspect, the image recognition result includes human body motion data at multiple moments in a motion process and/or facial expression data at multiple moments in an expression process, where the human body motion data includes motion data of multiple human body key points, and the facial expression data includes motion data of multiple human face key points; and
comparing the image recognition results of the simulation robot and the tester to obtain difference information includes:
comparing the motion data of the human body key points and/or the motion data of the human face key points of the simulation robot and the tester point by point and moment by moment to obtain the difference information.
In some implementations of the first aspect, adjusting the simulation robot according to the difference information includes:
analyzing the difference information to determine the cause of the difference; and
adjusting the simulation robot according to the cause of the difference.
In some implementations of the first aspect, adjusting the simulation robot according to the cause of the difference includes:
if the cause of the difference is that the simulation robot lacks a response execution mechanism for the human body key points and/or the human face key points, adding a response execution mechanism for the human body key points and/or the human face key points to the simulation robot.
In some implementations of the first aspect, the human body key points include multiple levels of human body key points, and the human face key points include multiple levels of human face key points; and
adding a response execution mechanism for the human body key points and/or the human face key points to the simulation robot includes:
adding response execution mechanisms for the human body key points and/or the human face key points to the simulation robot level by level according to the levels of the human body key points and/or the human face key points.
In some implementations of the first aspect, adjusting the simulation robot according to the cause of the difference includes:
if the cause of the difference is that no corresponding response is set for a trigger signal received by the simulation robot, setting a response corresponding to the trigger signal for the simulation robot; and
if the cause of the difference is that the response corresponding to a trigger signal received by the simulation robot contains an error, adjusting the response corresponding to the trigger signal.
In some implementations of the first aspect, the method further includes:
subjectively scoring the test process in which the simulation robot executes the test items; and
determining whether to stop the adjustment according to the scoring result.
In a second aspect, an embodiment of the present disclosure provides an adjusting apparatus for a simulation robot, including:
the test module is used for subjecting the simulation robot and a tester to the same test items;
the recognition module is used for performing image recognition on the test processes of the simulation robot and the tester according to the test items;
the comparison module is used for comparing the image recognition results of the simulation robot and the tester to obtain difference information; and
the adjusting module is used for adjusting the simulation robot according to the difference information.
In the present disclosure, the simulation robot and a tester can be subjected to the same test items; image recognition is then performed on the test processes of the simulation robot and the tester according to the test items; the image recognition results of the simulation robot and the tester are compared to obtain difference information; and the simulation robot is then adjusted according to the difference information to reduce the difference from humans and improve the simulation effect of the simulation robot.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of the present disclosure, and are not intended to limit the disclosure thereto, and the same or similar reference numerals will be used to indicate the same or similar elements, where:
FIG. 1 illustrates a schematic diagram of an exemplary operating environment in which embodiments of the present disclosure can be implemented;
FIG. 2 shows a flowchart of an adjustment method of a simulation robot according to an embodiment of the present disclosure;
FIG. 3 is a structural diagram illustrating an adjusting apparatus of a simulation robot according to an embodiment of the present disclosure;
FIG. 4 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art based on the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following associated objects.
To solve the problems described in the background art, the embodiments of the present disclosure provide a method and an apparatus for adjusting a simulation robot. Specifically, the simulation robot and a tester can be subjected to the same test items; image recognition is then performed on the test processes of the simulation robot and the tester according to the test items; the image recognition results of the simulation robot and the tester are compared to obtain difference information; and the simulation robot is then adjusted according to the difference information to reduce the difference between the simulation robot and humans and improve the simulation effect of the simulation robot.
The following describes in detail an adjustment method and an adjustment device for a simulation robot according to an embodiment of the present disclosure with reference to the accompanying drawings.
FIG. 1 illustrates a schematic diagram of an exemplary operating environment 100 in which embodiments of the present disclosure can be implemented. As shown in FIG. 1, the operating environment 100 may include an electronic device 110 and a simulation robot 120.
The electronic device 110 may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a tablet computer, a notebook computer, a palmtop computer, an ultra-mobile personal computer (UMPC), or the like, and the non-mobile electronic device may be a personal computer (PC), a server, or the like. The simulation robot 120 looks almost the same as a human: besides having a human appearance, it can move like a human, make human-like facial expressions, and respond appropriately to situations it encounters.
As an example, the electronic device 110 may subject the simulation robot 120 and a tester to the same test items, perform image recognition on the test processes of the simulation robot 120 and the tester according to the test items, compare the image recognition results of the simulation robot 120 and the tester to obtain difference information, and adjust the simulation robot 120 according to the difference information. In this way, the simulation robot 120 can be continuously adjusted according to the differences between it and the tester, so as to reduce the difference from humans and improve the simulation effect of the simulation robot.
The adjusting method of the simulation robot according to the embodiments of the present disclosure is described in detail below; the execution subject of the adjusting method may be the electronic device 110 shown in FIG. 1.
FIG. 2 shows a flowchart of an adjusting method of a simulation robot according to an embodiment of the present disclosure. As shown in FIG. 2, the adjusting method 200 may include the following steps:
and S210, testing the unified test items of the simulation robot and the tester.
In some embodiments, the test items may be sent to the simulation robot for the simulation robot to execute. Meanwhile, the test items may be displayed to the tester for the tester to execute.
Illustratively, the test items may be preset and include a human motion test and/or a facial expression test, where the human motion test may be, for example, a knee-jerk reflex, a startle reaction, or a grip reaction, and the facial expression test may be, for example, joy, anger, sorrow, or the gaze following a moving object.
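As a concrete illustration only, such preset test items could be encoded as small data records. The sketch below is a minimal Python assumption; the TestItem class and its fields are illustrative and are not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class TestItem:
    """One preset test item, executed by both the simulation robot and the tester."""
    item_id: str
    category: str      # "human_motion" or "facial_expression" (assumed labels)
    description: str

# Example items matching the ones named above.
TEST_ITEMS = [
    TestItem("M1", "human_motion", "knee-jerk reflex"),
    TestItem("M2", "human_motion", "startle reaction"),
    TestItem("M3", "human_motion", "grip reaction"),
    TestItem("E1", "facial_expression", "joy"),
    TestItem("E2", "facial_expression", "gaze following a moving object"),
]
```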
S220: performing image recognition on the test processes of the simulation robot and the tester according to the test items.
Specifically, when the simulation robot and the tester execute the test items, each is photographed by a camera to obtain a video of the simulation robot and a video of the tester during the test.
Target detection is then performed on the videos of the simulation robot and the tester to obtain human body images and/or human face images of the simulation robot and the tester.
Human body motion recognition and/or facial expression recognition are then performed on the simulation robot and the tester according to their human body images and/or human face images, quickly yielding the image recognition results of the simulation robot and the tester.
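The patent does not name a particular detector or recognizer. As one hedged sketch of the key point extraction step, an off-the-shelf pose estimator such as MediaPipe could supply per-frame human body key points from each video; the library choice and the function name extract_body_keypoints are assumptions, not the patent's method.

```python
import cv2
import mediapipe as mp

def extract_body_keypoints(video_path: str):
    """Return a list of (timestamp_s, [(x, y, z), ...]) entries, one per frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0  # frame time in seconds
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:  # skip frames where no body was detected
                pts = [(lm.x, lm.y, lm.z) for lm in result.pose_landmarks.landmark]
                frames.append((t, pts))
    cap.release()
    return frames
```

The same routine would be run once on the robot's video and once on the tester's video, yielding two key point streams to compare.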
For example, the image recognition result may include human body motion data at multiple moments during a motion and/or facial expression data at multiple moments during an expression, where the human body motion data may include motion data of multiple human body key points (e.g., skeletal key points), and the facial expression data may include motion data of multiple human face key points (e.g., facial feature key points).
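One possible in-memory layout for such a recognition result is sketched below; the patent fixes only what the data contains, not how it is stored, so this schema is an illustrative assumption.

```python
from typing import Dict, List, Tuple

Point3 = Tuple[float, float, float]   # (x, y, z) position of one key point
KeypointFrame = Dict[str, Point3]     # key point name -> position

class RecognitionResult:
    """Recognition result for one subject (robot or tester)."""
    def __init__(self) -> None:
        # Each entry pairs a moment in time with the key point data at that moment.
        self.body_motion: List[Tuple[float, KeypointFrame]] = []
        self.face_motion: List[Tuple[float, KeypointFrame]] = []
```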
S230: comparing the image recognition results of the simulation robot and the tester to obtain difference information.
Specifically, the difference information can be obtained by comparing the motion data of the human body key points and/or the human face key points of the simulation robot and the tester point by point and moment by moment. That is, the motion data of the simulation robot and the tester at the same moment and the same key point can be compared continuously and in sequence to obtain the difference information (for example, skeletal key point trajectory differences, facial feature key point trajectory differences, skeletal key point velocity differences, facial feature key point velocity differences, skeletal key point acceleration differences, facial feature key point acceleration differences, and the like), which improves the accuracy of the difference information.
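Assuming the two recordings are time-aligned and share the same key point set, the point-by-point, moment-by-moment comparison could be sketched as below. The trajectory, velocity and acceleration differences correspond to the kinds of difference information listed above; using numpy finite differences is an implementation choice, not something the patent prescribes.

```python
import numpy as np

def keypoint_differences(robot: np.ndarray, human: np.ndarray, dt: float):
    """Compare key point trajectories of shape (T, K, 3): T moments, K key points.

    Returns per-moment, per-key-point trajectory, velocity and acceleration
    differences between the simulation robot and the human tester.
    """
    r_vel = np.gradient(robot, dt, axis=0)   # velocity via finite differences
    h_vel = np.gradient(human, dt, axis=0)
    traj_diff = robot - human                # trajectory difference
    vel_diff = r_vel - h_vel                 # velocity (speed) difference
    acc_diff = np.gradient(r_vel, dt, axis=0) - np.gradient(h_vel, dt, axis=0)
    return traj_diff, vel_diff, acc_diff
```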
S240: adjusting the simulation robot according to the difference information.
In some embodiments, the difference information may be analyzed to determine the cause of the difference.
For example, the difference information may be analyzed manually, or by a pre-trained information analysis model, to determine the cause of the difference.
The simulation robot is then adjusted precisely according to the cause of the difference, so as to reduce the difference from humans.
For example, if the cause of the difference is that the simulation robot lacks a response execution mechanism for certain human body key points and/or human face key points, such a response execution mechanism is added to the simulation robot. In this way, the simulation robot can be adjusted precisely by continuously enriching its response execution mechanisms.
It can be understood that the human body key points may include multiple levels of human body key points, and the human face key points may include multiple levels of human face key points; that is, the key points have different levels: human body key points at different levels are associated with human body motions at different levels, and human face key points at different levels are associated with facial expressions at different levels. For example, first-level human body key points may correspond to basic motions and second-level human body key points to subtle motions; likewise, first-level human face key points may correspond to basic expressions and second-level human face key points to subtle expressions. Correspondingly, response execution mechanisms for the human body key points and/or the human face key points can be added to the simulation robot level by level according to the levels of the key points, which further improves the adjustment efficiency of the simulation robot.
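A hypothetical sketch of the level-by-level scheme follows; the robot interface add_response_mechanism is an assumed placeholder, not an API from the patent.

```python
def add_mechanisms_level_by_level(robot, missing):
    """missing: iterable of (level, key_point_name) pairs identified by the
    difference analysis, where level 1 means basic motions/expressions."""
    for level, key_point in sorted(missing):     # lower (more basic) levels first
        robot.add_response_mechanism(key_point)  # assumed robot interface
        print(f"level {level}: added response execution mechanism for {key_point}")
```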
In addition, if the cause of the difference is that no corresponding response is set for a trigger signal received by the simulation robot, a response corresponding to the trigger signal is set for the simulation robot.
If the cause of the difference is that the response corresponding to a trigger signal received by the simulation robot contains an error, the response corresponding to the trigger signal is adjusted. In this way, the simulation robot can be adjusted precisely by continuously enriching its responses.
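Both causes amount to editing a mapping from trigger signals to responses. A toy sketch, with invented signal and response names for illustration:

```python
# Trigger signal -> response routine name.
response_table = {
    "loud_noise": "freeze",  # suppose this response was found to be erroneous
}

def set_response(trigger: str, response: str) -> None:
    """Set a missing response, or overwrite an erroneous one."""
    response_table[trigger] = response

set_response("patellar_tap", "knee_jerk")     # cause: no response was set
set_response("loud_noise", "startle_flinch")  # cause: the response was wrong
```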
According to the embodiments of the present disclosure, the simulation robot and a tester can be subjected to the same test items, image recognition is performed on the test processes of the simulation robot and the tester according to the test items, the image recognition results of the two are compared to obtain difference information, and the simulation robot is then adjusted according to the difference information to reduce the difference from humans, thereby improving the simulation effect of the simulation robot and optimizing the user experience.
In some embodiments, the test process in which the simulation robot executes the test items may be subjectively scored, and whether to stop the adjustment may be determined according to the scoring result. In this way, the similarity between the simulation robot and a human can be evaluated in real time: the adjustment ends when the difference from the human is small and continues when the difference is still large, which reduces resource consumption while improving the simulation effect of the simulation robot.
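Combined with this scoring rule, the overall adjustment could run in a loop with a stop criterion, as in the sketch below; run_and_score and adjust_once stand in for the test-plus-subjective-scoring step and one difference-driven adjustment pass, and the threshold value is an assumption.

```python
def tune_until_human_like(robot, run_and_score, adjust_once,
                          threshold: float = 0.95, max_rounds: int = 20):
    """run_and_score(robot) -> subjective similarity score in [0, 1];
    adjust_once(robot) applies one round of difference-driven adjustment."""
    for _ in range(max_rounds):
        if run_and_score(robot) >= threshold:
            break             # difference from the human is small: stop adjusting
        adjust_once(robot)    # difference is still large: keep adjusting
    return robot
```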
It should be noted that for simplicity of description, the above-mentioned method embodiments are described as a series of acts, but those skilled in the art should understand that the present disclosure is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present disclosure. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are described below to further illustrate the aspects of the disclosure.
FIG. 3 illustrates a structural diagram of an adjusting apparatus of a simulation robot according to an embodiment of the present disclosure. As shown in FIG. 3, the adjusting apparatus 300 may include:
and the test module 310 is used for testing the simulation robot and the tester for the unified test project.
And the recognition module 320 is used for respectively carrying out image recognition on the simulation robot and the testing process of the tester according to the testing items.
And the comparison module 330 is configured to compare the image recognition results of the simulation robot and the tester to obtain difference information.
And an adjusting module 340, configured to adjust the simulation robot according to the difference information.
In some embodiments, the test module 310 is specifically configured to:
and sending the test items to the simulation robot for the simulation robot to execute the test items.
The test items are displayed to the tester for the tester to execute the test items.
In some embodiments, the test items include a human motion test and/or a facial expression test.
The identification module 320 is specifically configured to:
and carrying out target detection on the video of the simulation robot and the testing personnel during the test to obtain human body images and/or human face images of the simulation robot and the testing personnel.
And respectively carrying out human body action recognition and/or human face expression recognition on the simulation robot and the testing personnel according to the human body images and/or the human face images of the simulation robot and the testing personnel.
In some embodiments, the image recognition result includes human body motion data at multiple moments in a motion process and/or facial expression data at multiple moments in an expression process, wherein the human body motion data includes motion data of multiple human body key points, and the facial expression data includes motion data of multiple human face key points.
The alignment module 330 is specifically configured to:
and comparing the action data of the human key points of the simulation robot and the testing personnel and/or the action data of the human face key points point by point moment by moment to obtain difference information.
In some embodiments, the adjusting module 340 is specifically configured to:
and carrying out information analysis on the difference information to determine the difference generation reason.
And adjusting the simulation robot according to the difference generation reason.
In some embodiments, the adjusting module 340 is specifically configured to:
if the difference generation reason is that the simulation robot lacks a response execution mechanism of the human body key points and/or the human face key points, the response execution mechanism of the human body key points and/or the human face key points is added to the simulation robot.
In some embodiments, the human keypoints comprise multiple levels of human keypoints and the face keypoints comprise multiple levels of face keypoints.
The adjusting module 340 is specifically configured to:
and gradually increasing response execution mechanisms of the human key points and/or the human face key points for the simulation robot according to the levels of the human key points and/or the human face key points.
In some embodiments, the adjusting module 340 is specifically configured to:
and if the difference generation reason is that a corresponding response is not set for the trigger signal received by the simulation robot, setting a response corresponding to the trigger signal for the simulation robot.
And if the difference generation reason is that an error exists in the response corresponding to the trigger signal received by the simulation robot, adjusting the response corresponding to the trigger signal.
In some embodiments, the adjustment apparatus 300 further comprises:
and the scoring module is used for subjectively scoring the test process of the test item executed by the simulation robot.
And the determining module is used for determining whether to stop adjusting according to the grading result.
It can be understood that each module/unit in the adjusting apparatus 300 shown in FIG. 3 has the function of implementing a corresponding step in the adjusting method 200 provided by the embodiments of the disclosure and can achieve the corresponding technical effect; for brevity, details are not repeated here.
FIG. 4 illustrates a block diagram of an electronic device that may be used to implement embodiments of the present disclosure. The electronic device 400 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device 400 may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 4, the electronic device 400 may include a computing unit 401, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 402 or a computer program loaded from a storage unit 408 into a random access memory (RAM) 403. The RAM 403 can also store various programs and data necessary for the operation of the electronic device 400. The computing unit 401, the ROM 402 and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
A number of components in the electronic device 400 are connected to the I/O interface 405, including: an input unit 406, such as a keyboard or a mouse; an output unit 407, such as various types of displays and speakers; a storage unit 408, such as a magnetic disk or an optical disk; and a communication unit 409, such as a network card, a modem, or a wireless communication transceiver. The communication unit 409 allows the electronic device 400 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.
The computing unit 401 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 401 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer program product, including a computer program tangibly embodied in a computer-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the method 200 by any other suitable means (e.g., by means of firmware).
The various embodiments described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a computer-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the present disclosure also provides a non-transitory computer readable storage medium storing computer instructions, where the computer instructions are used to enable a computer to execute the method 200 and achieve the corresponding technical effects achieved by the method according to the embodiments of the present disclosure, and for brevity, the detailed description is omitted here.
Additionally, the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the method 200.
To provide for interaction with a user, the above-described embodiments may be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The embodiments described above may be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user may interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders; this is not limited herein, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (10)

1. A method for adjusting a simulation robot, the method comprising:
subjecting the simulation robot and a tester to the same test items;
performing image recognition on the test processes of the simulation robot and the tester according to the test items;
comparing the image recognition results of the simulation robot and the tester to obtain difference information; and
adjusting the simulation robot according to the difference information.
2. The method of claim 1, wherein subjecting the simulation robot and the tester to the same test items comprises:
sending the test items to the simulation robot for the simulation robot to execute the test items; and
displaying the test items to the tester for the tester to execute the test items.
3. The method of claim 1, wherein the test items comprise a human motion test and/or a facial expression test; and
performing image recognition on the test processes of the simulation robot and the tester according to the test items comprises:
performing target detection on the videos of the simulation robot and the tester during the test to obtain human body images and/or human face images of the simulation robot and the tester; and
performing human body motion recognition and/or facial expression recognition on the simulation robot and the tester according to the human body images and/or the human face images of the simulation robot and the tester.
4. The method of claim 3, wherein the image recognition result comprises human body motion data at a plurality of moments in a motion process and/or facial expression data at a plurality of moments in an expression process, wherein the human body motion data comprises motion data of a plurality of human body key points, and the facial expression data comprises motion data of a plurality of human face key points; and
comparing the image recognition results of the simulation robot and the tester to obtain the difference information comprises:
comparing the motion data of the human body key points and/or the motion data of the human face key points of the simulation robot and the tester point by point and moment by moment to obtain the difference information.
5. The method of claim 4, wherein adjusting the simulation robot according to the difference information comprises:
analyzing the difference information to determine a cause of the difference; and
adjusting the simulation robot according to the cause of the difference.
6. The method of claim 5, wherein adjusting the simulation robot according to the cause of the difference comprises:
if the cause of the difference is that the simulation robot lacks a response execution mechanism for the human body key points and/or the human face key points, adding a response execution mechanism for the human body key points and/or the human face key points to the simulation robot.
7. The method of claim 6, wherein the human body key points comprise multiple levels of human body key points, and the human face key points comprise multiple levels of human face key points; and
adding a response execution mechanism for the human body key points and/or the human face key points to the simulation robot comprises:
adding response execution mechanisms for the human body key points and/or the human face key points to the simulation robot level by level according to the levels of the human body key points and/or the human face key points.
8. The method of claim 5, wherein adjusting the simulation robot according to the cause of the difference comprises:
if the cause of the difference is that no corresponding response is set for a trigger signal received by the simulation robot, setting a response corresponding to the trigger signal for the simulation robot; and
if the cause of the difference is that the response corresponding to a trigger signal received by the simulation robot contains an error, adjusting the response corresponding to the trigger signal.
9. The method according to any one of claims 1-8, further comprising:
subjectively scoring the test process in which the simulation robot executes the test items; and
determining whether to stop the adjustment according to the scoring result.
10. An adjustment device for a simulation robot, the device comprising:
the test module is used for subjecting the simulation robot and a tester to the same test items;
the recognition module is used for performing image recognition on the test processes of the simulation robot and the tester according to the test items;
the comparison module is used for comparing the image recognition results of the simulation robot and the tester to obtain difference information; and
the adjusting module is used for adjusting the simulation robot according to the difference information.
CN202210087499.9A 2022-01-25 2022-01-25 Method and device for adjusting simulation robot Pending CN114789470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210087499.9A CN114789470A (en) 2022-01-25 2022-01-25 Method and device for adjusting simulation robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210087499.9A CN114789470A (en) 2022-01-25 2022-01-25 Method and device for adjusting simulation robot

Publications (1)

Publication Number Publication Date
CN114789470A (en) 2022-07-26

Family

ID=82459830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210087499.9A Pending CN114789470A (en) 2022-01-25 2022-01-25 Method and device for adjusting simulation robot

Country Status (1)

Country Link
CN (1) CN114789470A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326980A (en) * 2016-08-31 2017-01-11 北京光年无限科技有限公司 Robot and method for simulating human facial movements by robot
CN106650555A (en) * 2015-11-02 2017-05-10 苏宁云商集团股份有限公司 Real person verifying method and system based on machine learning
CN106919899A (en) * 2017-01-18 2017-07-04 北京光年无限科技有限公司 The method and system for imitating human face expression output based on intelligent robot
CN109521927A (en) * 2017-09-20 2019-03-26 阿里巴巴集团控股有限公司 Robot interactive approach and equipment
CN109676583A (en) * 2018-12-03 2019-04-26 深圳市越疆科技有限公司 Based on targeted attitude deep learning vision collecting method, learning system and storage medium
CN109773807A (en) * 2019-03-04 2019-05-21 昆山塔米机器人有限公司 Motion control method, robot
CN110559639A (en) * 2019-08-02 2019-12-13 焦作大学 Robot teaching method for gymnastics movement
US20200290203A1 (en) * 2019-03-13 2020-09-17 Sony Interactive Entertainment Inc. Motion Transfer of Highly Dimensional Movements to Lower Dimensional Robot Movements
CN112454390A (en) * 2020-11-27 2021-03-09 中国科学技术大学 Humanoid robot facial expression simulation method based on deep reinforcement learning
CN113156892A (en) * 2021-04-16 2021-07-23 西湖大学 Four-footed robot simulated motion control method based on deep reinforcement learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Shi Xuan

Inventor after: Shao Ming

Inventor after: Wang Hongguang

Inventor before: Wang Hongguang