WO2021016807A1 - Context awareness device simulation method, device, and system - Google Patents


Info

Publication number
WO2021016807A1
WO2021016807A1 (PCT/CN2019/098188)
Authority
WO
WIPO (PCT)
Prior art keywords
simulation
context
test
environment
target
Prior art date
Application number
PCT/CN2019/098188
Other languages
French (fr)
Chinese (zh)
Inventor
李婧
徐蔚峰
李明
卢超
Original Assignee
Siemens AG (西门子股份公司)
Siemens Ltd., China (西门子(中国)有限公司)
Priority date
Filing date
Publication date
Application filed by Siemens AG (西门子股份公司) and Siemens Ltd., China (西门子(中国)有限公司)
Priority to PCT/CN2019/098188 (WO2021016807A1)
Priority to CN201980096782.4A (CN113874844A)
Publication of WO2021016807A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Definitions

  • the present invention relates to the field of simulation, in particular to a simulation method, device and system of a context sensing device.
  • Context sensing devices are also called autonomous systems, such as smart vision devices or smart robots.
  • the context sensing device can perceive the environment or the situation, recognize the target that needs to be interacted, and then perform actions accordingly.
  • More and more context-aware devices are used in the industry, such as intelligent transportation, advanced manufacturing or building technology.
  • context sensing devices need to be evaluated in many different contexts, and it is difficult to manually generate all possible contexts in a simulator.
  • in addition, generating test cases in an ad hoc way can leave simulation blind spots or unexplored extreme cases.
  • current simulators need to add features to model the equipment context and to automatically generate test cases.
  • the prior art provides two solutions.
  • the first solution provides a simulation platform for the autonomous driving system, which is limited to the evaluation of the autonomous driving field.
  • the UML language is used to model the system scenario, but the raw synthetic sensor data is generated by a custom algorithm, which cannot be reused for other cases.
  • the second solution provides robot training in a simulation environment. It obtains comprehensive packing demonstrations to form an algorithm in the simulator, learns strategies from synthetic data generated by the simulator, and trains a robot controller on the data acquired from the simulation. Although this solution can simulate intelligent robots and obtain simulation data, it does not address simulators that can automatically evaluate and verify intelligent robots.
  • the first aspect of the present invention provides a simulation method of a context sensing device, which includes the following steps: S1, modeling the environment of the context sensing device based on customer needs, and traversing the environment model to generate test cases; S2, performing action planning and action simulation on the context-aware device based on the test cases.
  • step S1 also includes the following steps: S11, describing the basic objects and their relationships in the environment of the context-aware device using an ontology to model the environment, and annotating the environment model based on customer requirements; S12, traversing the environment model based on the test case, and extracting data from the environment model to generate a test case.
  • the environmental model includes objects, and the objects include scenario targets, a system under test, and test cases.
  • the scene target also includes target attributes and noise, and the target attributes include the material, color, shape, and position of the target.
  • the context perception device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.
  • the context sensing device is a grasping robot
  • the step S2 includes the following steps: step S22, planning a motion trajectory for the grasping robot for the scene target that the grasping robot needs to grasp, wherein the motion trajectory includes a posture of the grasping robot, and the posture includes the rotation angles of multiple joints of the robot arm of the grasping robot in a time sequence.
  • in step S23, the grasping robot is driven to move along the motion trajectory planned in step S22, and a simulation result is returned based on the simulation requirements.
  • step S2 includes step S21: mapping the data of the test instance to the simulation scene, and identifying the scene target object and its position information in the simulation scene based on a visual algorithm.
  • the method further includes a step S3: performing evaluation of the context sensing device based on the test case.
  • the second aspect of the present invention provides a simulation device of a context perception device, which includes: an environment management device that models the environment of the context perception device based on customer needs and traverses the environment model to generate test cases; and a test simulation device that performs action planning and action simulation on the context-aware device based on the test cases.
  • the environment management device further includes: a modeling device, which describes the basic objects and their relationships in the environment of the context perception device using an ontology to model the environment, and annotates the environment model based on customer requirements; and a test data generation device, which traverses the environment model based on the test case and extracts data from the environment model to generate a test case.
  • the environmental model includes objects, and the objects include scenario targets, a system under test, and test cases.
  • the scene target also includes target attributes and noise, and the target attributes include the material, color, shape, and position of the target.
  • the context perception device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.
  • the context sensing device is a grasping robot
  • the test simulation device further includes: a motion planning module, which plans a motion trajectory for the grasping robot according to the scene target that the grasping robot needs to grasp, wherein the motion trajectory includes the posture of the grasping robot, and the posture includes the rotation angles of the multiple joints of the robot arm of the grasping robot in a time series; and a motion simulation module, which drives the grasping robot to move along the motion trajectory and returns simulation results based on simulation requirements.
  • the test simulation device further includes a perception module, which maps the data of the test case to the simulation scene and recognizes the scene target object and its position information in the simulation scene based on a visual algorithm.
  • the simulation device of the context sensing device further includes a test evaluation device, which evaluates the simulation of the context sensing device based on the test case.
  • the third aspect of the present invention provides a simulation system of a context-aware device, which includes: a processor; and a memory coupled with the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to execute actions including: S1, modeling the environment of the context-aware device based on customer needs and traversing the environment model to generate test cases; S2, performing action planning and action simulation on the context-aware device based on the test cases.
  • the action S1 also includes: S11, describing the basic objects and their relationships in the environment of the context sensing device using an ontology to model the environment, and annotating the environment model based on customer needs; S12, traversing the environment model based on the test case and extracting data from the environment model to generate a test case.
  • the environmental model includes objects, and the objects include scenario targets, a system under test, and test cases.
  • the scene target also includes target attributes and noise, and the target attributes include the material, color, shape, and position of the target.
  • the context perception device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.
  • the context sensing device is a grasping robot
  • the action S2 includes the following actions: S22, planning a motion trajectory for the grasping robot according to the scene target that the grasping robot needs to grasp, wherein the motion trajectory includes a posture of the grasping robot, and the posture includes the rotation angles of multiple joints of the robotic arm of the grasping robot in a time series.
  • S23, driving the grasping robot to move along the motion trajectory and returning a simulation result based on simulation requirements.
  • the action S2 also includes an action S21 before the action S22: mapping the data of the test case to the simulation scene, and identifying the scene target object and its position information in the simulation scene based on a visual algorithm.
  • an action S3 is further included: performing evaluation of the context sensing device based on the test case.
  • the fourth aspect of the present invention provides a computer program product, which is tangibly stored on a computer-readable medium and includes computer-executable instructions which, when executed, cause at least one processor to execute the method described in the first aspect of the present invention.
  • the fifth aspect of the present invention provides a computer-readable medium on which computer-executable instructions are stored, and when executed, the computer-executable instructions cause at least one processor to perform the method according to the first aspect of the present invention.
  • the present invention can better support the simulation of the context sensing device.
  • the modeling process provided by the present invention categorizes and describes the environment of the context sensing device and the target objects in the environment and their relationships and attributes.
  • the modeling process provided by the present invention is simpler, and the established semantic model provides support for subsequent test scenario generation.
  • the present invention can also traverse the graphical structure of the environment model and generate a test case for each traversal. In this process, the customer requirements for the context-aware system are converted into a list of test cases.
  • the present invention can also map test data to a simulator that interacts with the context sensing device, and can automatically generate a simulated stereo scene.
  • Figure 1 is a system framework diagram of a simulation device of a context sensing device according to a specific embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of the environment model of the context sensing device in the simulation mechanism of the context sensing device according to a specific embodiment of the present invention
  • FIG. 3 is a schematic diagram of the target attribute structure of the contextual target of the simulation mechanism of the contextual sensing device according to a specific embodiment of the present invention
  • Fig. 4 is a schematic diagram of a simulation of a grabbing robot grabbing parts from a box according to the simulation mechanism of the context sensing device according to a specific embodiment of the present invention
  • Fig. 5 is a graph showing the posture of the robot arm of the grasping robot B of the simulation mechanism of the context sensing device according to a specific embodiment of the present invention.
  • the invention provides a simulation mechanism for a context-aware device; the simulation function is extended, the simulation is more efficient and accurate, and it does not rely on manual effort.
  • the present invention provides environmental management functions and test scenario generation functions.
  • the present invention is suitable for context-aware devices, such as autonomous driving systems, intelligent robots, and unmanned aerial vehicles.
  • the present invention is also especially suitable for visual grasping robots.
  • the simulation device of the context perception device includes an environment management device 100 and a simulation device 200, wherein the simulation device 200 further includes a scenario generation device 210, a test simulation device 220 and a test evaluation device 230.
  • the environmental management device 100 generates a test case based on the input customer requirements, and sends the test case to the scenario generation device 210.
  • the test simulation device 220 is used to map the test case data to a simulation scene in the simulation device 200, for example, to generate 3D objects and environments in a virtual scene.
  • the test evaluation device 230 simulates the context sensing device based on the input scene.
  • the first aspect of the present invention provides a simulation method of a context-aware device.
  • step S1 is performed.
  • the environment management device 100 models the environment of the context-aware device based on customer requirements, and traverses the environment model to generate test cases. Among them, the environment management device 100 is used to generate context data for the context perception device based on customer needs.
  • the environment management device 100 includes an environment modeling device 110 and a test data generating device 120, which can automatically generate test cases using parameter values provided by the simulation device 200, and then execute and evaluate them.
  • the environmental management device 100 can also generate noise factors for different modules of the system under test (SUT, System Under Test).
  • customer requirement A is to verify the ability of visual grasping robot B to grasp multiple targets in a box, where the number of multiple targets is 10 to 200, the probability of generating a cylinder among the multiple targets is m, and the probability of generating a cube among the multiple targets is n.
  • the environment modeling device 110 is used to describe the environment of the context sensing device in a formal language/method.
  • the environment refers to the basic objects that the context sensing device may "see".
  • in this example, it refers to the basic objects that the visual grasping robot B can "see". That is, the environment modeling device 110 is used to describe, in a formal language/method, the basic objects that can be "seen" by the visual grasping robot B and their relationships.
  • step S1 further includes sub-step S11 and sub-step S12.
  • the environment modeling device 110 uses the ontology to describe the basic objects and their relationships in the environment of the context perception device, so as to model the environment of the context perception device, and mark all objects based on customer needs.
  • the environmental model is described in sub-step S11.
  • the environment modeling device 110 describes the basic objects and their relationships in the environment of the visual grasping robot B using an ontology.
  • the environmental model includes objects, and the objects include scenario targets, systems under test, and test cases.
  • the scene target also includes target attributes and noise, and the target attributes include the material, color, shape, and location of the target.
  • the scene target includes boxes and parts.
  • the box includes a cube box, which contains parts. Parts include cube parts, cylindrical parts, and spherical parts. The relationship between the box and a part is that the part's position is inside the box.
  • Noise includes visual noise and mechanical noise.
  • the context perception device includes a grasping robot
  • the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper
  • the vision system includes visual noise
  • the robotic arm includes mechanical noise.
  • the relationship between the visual system and visual noise is that the visual system has noise
  • the relationship between the mechanical arm and mechanical noise is that the mechanical arm has noise.
  • test case 1 is the test case corresponding to customer requirement A.
  • the environmental model shown in Figure 2 is annotated; for example, the quantity 10 to 200 is labeled on the part, the generation probability m is labeled on the cylindrical part, and the generation probability n is labeled on the spherical part.
  • Each category in the model shown in Figure 2 also includes multiple attributes and has a data range.
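As a rough, non-normative sketch, the Figure 2 environment model could be captured with plain Python dictionaries; the category names, attribute names, and value ranges below are illustrative assumptions, since the patent describes the model only at the ontology level:

```python
# Illustrative sketch of the Figure 2 environment model. The real
# model is an ontology; these names and probability labels are assumptions.
environment_model = {
    "SceneTarget": {
        "Box": {"attributes": {"material": "paper", "shape": "cube"},
                "relations": {"contains": "Part"}},
        "Part": {
            "subclasses": {
                "CubePart": {"generation_probability": "n"},
                "CylinderPart": {"generation_probability": "m"},
                "SpherePart": {"generation_probability": None},
            },
            "quantity_range": (10, 200),  # from customer requirement A
        },
    },
    "SystemUnderTest": {
        "VisionSystem": {"noise": "visual"},
        "RoboticArm": {"noise": "mechanical"},
        "Gripper": {},
    },
}

def subclasses_of(model, category):
    """List the sub-categories of a scene-target category."""
    return list(model["SceneTarget"][category].get("subclasses", {}))
```

Each category carries its attributes and data ranges, so a traversal algorithm can walk the structure and instantiate test data from it.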
  • the test data generating device 120 traverses the environmental model based on the test case, and extracts data from the environmental model to generate a test case. Specifically, the test data generating device 120 automatically generates the test case 1 based on the environmental model shown in FIG. 2, and the environmental model uses ontology knowledge and serves as the input of the test data generating device 120. Moreover, in order to derive details from the test scenario and data, the test data generating device 120 provides an algorithm based on specific rules to traverse the ontology of the environment model shown in FIG. 2. The test data generating device 120 can also generate noise factors for different environmental models.
  • the test data generating device 120 implements the traversal process through algorithms, and outputs test cases and saves the test cases in JSON format for further analysis.
  • customer requirement A is to verify the ability of the visual grasping robot B to grasp multiple targets in a box, where the number of multiple targets is 10 to 200, the generation probability of a cylinder among the multiple targets is m, and the generation probability of a cube among the multiple targets is n.
  • the test data generating device 120 first reads the types in the test case generated from customer requirement A, then detects all connected types, and for each type randomly initializes its attributes (for example, color, size, and location) within the value ranges defined by the ontology. After that, it determines whether each type meets the quantity requirements.
  • the quantity requirement here includes "the number of multiple targets is 10 to 200".
  • for each secondary type (sub-category), it is checked whether it has a generation-probability attribute. If it does, an instance of the sub-category is generated according to the generation probability; if it does not, an instance of the sub-category is generated directly. It is then judged whether the quantity requirement of "the number of multiple targets is 10 to 200" is met.
  • next, it is judged whether the test instance satisfies the constraint relationships, for example, whether the position generated for a target part is inside the box. If the constraint relationship rules are satisfied, the test instance is output; if not, part of the instance values are reset and the constraints are checked again until they are satisfied.
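The generate-check-reset loop described in the bullets above can be sketched in Python; the geometry, attribute ranges, and JSON layout are assumptions for illustration, not the patented algorithm:

```python
import json
import random

BOX = {"x": (0.0, 1.0), "y": (0.0, 1.0)}  # assumed box footprint
random.seed(0)                            # reproducible sketch

def generate_part(kind):
    # Random position; may fall outside the box and be reset later.
    return {"type": kind,
            "x": random.uniform(-0.2, 1.2),
            "y": random.uniform(-0.2, 1.2)}

def inside_box(part):
    # Constraint: the position generated for a target part is inside the box.
    return (BOX["x"][0] <= part["x"] <= BOX["x"][1]
            and BOX["y"][0] <= part["y"] <= BOX["y"][1])

def generate_test_case(m=0.4, n=0.4, count_range=(10, 200)):
    count = random.randint(*count_range)  # quantity requirement
    parts = []
    for _ in range(count):
        r = random.random()
        kind = "cylinder" if r < m else ("cube" if r < m + n else "sphere")
        part = generate_part(kind)
        while not inside_box(part):       # reset until constraints hold
            part = generate_part(kind)
        parts.append(part)
    return {"parts": parts}

case = generate_test_case()
json_text = json.dumps(case)              # test cases are saved as JSON
```

The final `json.dumps` mirrors the text's note that test cases are saved in JSON format for further analysis.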
  • step S2 is executed, and the simulation device 200 performs action planning and action simulation on the context-aware device based on the test case.
  • the simulation device 200 includes a scene generation device 210, a test simulation device 220, and a test evaluation device 230.
  • the scenario generating device 210 is used to map test case data to a simulation scenario.
  • the input of the scenario generating device 210 is a test case script; it maps the test case data values to targets of a specific simulator to generate 3D targets in the simulation. The output of the scenario generating device 210 is the simulation scenario described by the test case script.
  • customer requirement A is: to verify the ability of the visual grasping robot B to grasp multiple targets in a box, where the number of multiple targets is 10 to 200, the generation probability of cylinders among the multiple targets is m, and the generation probability of cubes is n.
  • Fig. 3 is a schematic diagram of the target attribute structure of the context target of the simulation mechanism of the context sensing device according to a specific embodiment of the present invention.
  • the present invention defines the target attributes of each scene target, and the target attributes of the scene target include but are not limited to material, shape, position, color, center point, length, width, and height. As shown in Figure 3, specifically, the target attributes of the box include material, shape, color, length, width, and height.
  • the material of the box is paper, the shape of the box is a cube, the center point of the box is (x11, y11, z11), the color of the box is white, the length of the box is l1, the width of the box is w1, and the height of the box is h1.
  • the target attributes of the cube part include material, shape, position, color, length, width, and height. The material of the cube part is rubber, its shape is a cube, its center point is (x21, y21, z21), its color is blue, its length is l2, its width is w2, and its height is h2.
  • the target properties of the spherical part include material, shape, color, center point, radius, and height.
  • the material of the sphere part is rubber
  • the shape of the sphere part is a sphere
  • the color of the sphere part is red
  • the center point of the sphere part is (x32, y32, z32)
  • the radius of the sphere part is r3
  • the height of the sphere part is h3.
  • each cube part and each sphere part has target attributes.
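A minimal way to hold these target attributes in code is a dataclass; the field names follow the attribute lists above, while the concrete numeric values are placeholders, since the patent leaves them symbolic (e.g. (x11, y11, z11), l1, r3):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetAttributes:
    material: str
    shape: str
    color: str
    center: Tuple[float, float, float]   # center point (x, y, z)
    length: Optional[float] = None
    width: Optional[float] = None
    height: Optional[float] = None
    radius: Optional[float] = None       # used by spherical parts

# Placeholder values standing in for the symbolic ones in the text.
box = TargetAttributes("paper", "cube", "white", (0.0, 0.0, 0.0),
                       length=1.0, width=1.0, height=0.5)
sphere_part = TargetAttributes("rubber", "sphere", "red", (0.1, 0.2, 0.3),
                               radius=0.05, height=0.05)
```

A scene generator can then iterate over such records to instantiate 3D objects in the simulator.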
  • the scene generating device 210 maps the data of test case 1 to a simulation scene, such as a 3D scene, based on the target attributes of each cube part and each sphere part.
  • the test simulation device 220 is used to simulate the action of the context sensing device and provide an input scene.
  • the test simulation device 220 includes a perception module 221, a motion planning module 222, and a motion simulation module 223. Among them, the motion planning module 222 and the motion simulation module 223 are necessary modules, and the perception module 221 is an optional module.
  • the step S2 includes sub-step S21, sub-step S22 and sub-step S23.
  • the perception module 221 maps the data of the test instance to the simulation scene, and recognizes the scene target object and its position information in the simulation scene based on the visual algorithm.
  • the input of the perception module 221 is a synthetic scene. Whether the sensing module 221 is necessary depends on the specific implementation, and it may be integrated in the test simulation device 220 or independently exist outside the test simulation device 220.
  • the perception module 221 uses the image of the synthesized scene for analysis, that is, the 3D scene output above.
  • the perception module 221 uses multiple vision algorithms to recognize the target objects in the scene, namely, boxes, cube parts and sphere parts. Among them, vision algorithms include traditional vision algorithms or vision algorithms based on deep learning. In this embodiment, for example, the scene target object is a cube part in a box, and the grasping robot B grasps the cube part in a box.
  • the output of the perception module 221 is the recognized part together with its posture information, for example in the format (x, y, z, roll, pitch, yaw), where x, y, and z are the position coordinates of the part, roll is the angle of the part about the x axis, pitch is the angle about the y axis, and yaw is the angle about the z axis.
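The recognised-part output described above could be represented as a 6-DoF pose record; the names below are illustrative assumptions, not the patent's actual data format:

```python
from dataclasses import dataclass

@dataclass
class PartPose:
    x: float      # position coordinates of the part
    y: float
    z: float
    roll: float   # angle about the x axis
    pitch: float  # angle about the y axis
    yaw: float    # angle about the z axis

def perception_output(detections):
    """Format detected parts as (label, pose) records."""
    return [(label, PartPose(*pose)) for label, pose in detections]

# One hypothetical detection of a cube part in the box.
records = perception_output([("cube_part", (0.3, 0.1, 0.05, 0.0, 0.0, 1.57))])
```

Each record can then be handed to the motion planning module as the grasp target.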
  • the sensing module 221 is not a necessary module, and step S21 is not a necessary step.
  • the sensing module 221 and step S21 are optional.
  • the scene generating device 210 can also output one or more scene target objects and their posture information for the grabbing robot B to grab in a box. The difference is that the scene target object and its posture information are used as the input of the motion planning module 222.
  • the motion planning module 222 plans a motion trajectory for the grasping robot according to the scene target that it needs to grasp, where the motion trajectory includes the posture of the grasping robot, and the posture includes the rotation angles of the multiple joints of the robot arm in a time sequence.
  • the motion planning module 222 automatically generates a motion trajectory for the identified scene target, planning a posture for the scene target and specifying a starting position for the robot arm of the grasping robot B. In the process of generating the motion trajectory, the motion planning module 222 uses a planning algorithm to define several waypoints.
  • the output of the motion planning module 222 is the joint angle matrix M_j of the robotic arm of the grasping robot B, which can be written as M_j = (j_{n,m}), where j represents a posture (joint rotation angle), m indexes the time series, and n is a natural number indexing the joints. That is, the entry j_{n,m} represents the angle of joint n of the robot arm of the grasping robot B at time point m.
  • the grasping robot B shown in FIG. 4 has three joints, namely a first joint j1, a second joint j2, and a third joint j3. Therefore, the motion planning module 222 can plan the rotation angles of the first joint j1, the second joint j2, and the third joint j3 at specific time points, and output the joint angle matrix M_j of the robot arm of the grasping robot B.
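A joint angle matrix of this shape can be sketched with simple linear interpolation standing in for the (unspecified) planning algorithm; the joint count, angle values, and step count below are assumed for illustration:

```python
# M_j: one row per joint n, one column per time point m.
def plan_trajectory(start, goal, steps):
    """Linearly interpolate each joint angle from start to goal."""
    return [[s + (g - s) * t / (steps - 1) for t in range(steps)]
            for s, g in zip(start, goal)]

# Three joints j1..j3, as in the Figure 4 embodiment; angles in degrees.
M_j = plan_trajectory(start=[0.0, 0.0, 0.0], goal=[90.0, 45.0, -30.0], steps=5)
```

A real planner would also clamp each joint to its rotation range, matching the note below that each joint's capability bounds the simulation.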
  • Fig. 5 is a graph showing the posture of the manipulator arm of the grasping robot B of the simulation mechanism of the context sensing device according to a specific embodiment of the present invention, wherein the abscissa represents time and the ordinate represents angle.
  • the robotic arm has 8 joints, where j1 represents the posture of the first joint, j2 the second, j3 the third, j4 the fourth, j5 the fifth, j6 the sixth, j7 the seventh, and j8 the eighth. Therefore, from the curves shown in FIG. 5, the present invention can obtain the postures of the above eight joints in a time series for simulating the grasping robot B. It should be noted that each joint has its own rotation capability, that is, the rotation angle has a numerical range, so the simulation of the above joints should be based on the capability of each joint.
  • the motion simulation module 223 drives the grasping robot to move along the motion trajectory planned by the motion planning module 222 and returns a simulation result based on the simulation requirements.
  • the motion simulation module 223 drives the context sensing device to move along the path planned by the motion planning module 222 and returns simulation results based on simulation requirements.
  • FIG. 4 is a schematic diagram of a simulation of the grasping robot grasping parts from a box according to the simulation mechanism of the context sensing device according to a specific embodiment of the present invention. The process of the motion simulation module 223 driving the grasping robot B along the path planned by the motion planning module 222, rotating the first joint j1, the second joint j2, and the third joint j3 to grab multiple parts, including the first part p1, the second part p2, and the third part p3, from the box B', is simulated in multiple possible 3D scenarios.
  • the results returned by the motion simulation module 223 may include grasping success and failure results, simulation time, collision detection, energy loss in various stages, and so on.
  • the simulation time at each stage is also returned as part of the simulation result.
  • step S3 is executed, and the test evaluation device 230 evaluates the simulation of the context sensing device based on the test case.
  • the test evaluation device 230 is used to summarize the simulation results based on the test cases; its input is the simulation results, and it generates evaluation results in different aspects, such as reliability, safety, efficiency, and robustness.
  • reliability is related to success rate
  • the average success rate can be calculated as follows:
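The formula itself did not survive extraction; a standard definition, assumed here, is the number of successful grasps divided by the number of attempts:

```python
def average_success_rate(results):
    """results: one boolean per simulated grasp attempt."""
    return sum(results) / len(results)
```

For example, three successes out of four attempts would yield a rate of 0.75.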
  • if the simulator uses a physics engine, the execution of the simulation itself can determine whether the target object has been grasped and successfully moved to the predetermined position. If the simulator uses a geometry engine, this metric cannot be evaluated.
  • the collision check result can be expressed as a record (Time, Object_1, Object_2, Distance), where Time refers to the time at which the simulated collision occurs, Object_1 and Object_2 represent the two target objects that collide with each other (their distance is less than the threshold), and Distance refers to the distance between Object_1 and Object_2. If Distance is negative (less than 0), the two target objects have collided.
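A collision record of the form described above might be checked as follows; the field names and the threshold default are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class CollisionRecord:
    time: float       # simulation time of the collision event
    object_1: str
    object_2: str
    distance: float   # signed distance between the two objects

def collided(record, threshold=0.0):
    # A negative distance means the two objects interpenetrate.
    return record.distance < threshold

rec = CollisionRecord(time=2.4, object_1="part_p1", object_2="box", distance=-0.003)
```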
  • efficiency represents the number of grasps per hour of the robot system and the average time and energy consumption of each grasp when performing a movement.
  • the grasping efficiency per hour can be calculated as follows:
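The concrete formula is again not reproduced above; the following sketch illustrates one plausible computation of the per-hour grasping efficiency together with the average time and energy per grasp (names and units are assumptions):

```python
# Illustrative efficiency metrics: grasps per hour, average time per grasp,
# and average energy per grasp. Not the patent's exact formula.
def efficiency_metrics(grasp_times_s, grasp_energies_j):
    """grasp_times_s: seconds per grasp; grasp_energies_j: joules per grasp."""
    n = len(grasp_times_s)
    avg_time = sum(grasp_times_s) / n
    return {
        "grasps_per_hour": 3600.0 / avg_time,
        "avg_time_s": avg_time,
        "avg_energy_j": sum(grasp_energies_j) / n,
    }

m = efficiency_metrics([2.0, 4.0], [10.0, 14.0])
print(m["grasps_per_hour"], m["avg_time_s"], m["avg_energy_j"])  # 1200.0 3.0 12.0
```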
  • the platform can also provide test coverage measurement methods, which represent the comprehensiveness of the coverage of automatically generated test cases.
  • the test coverage metric can be evaluated based on the retrieval rules applied in the test data generating device 120. For example, scenario-related coverage measures whether the parts of the context model and their combinations are covered by the tests.
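As an illustrative sketch of such a scenario-related coverage measure (the attribute grid and names are invented for the example), coverage can be computed as the fraction of attribute-value combinations of the context model that appear in at least one generated test instance:

```python
from itertools import product

# Toy scenario-coverage measure over attribute-value combinations.
def scenario_coverage(model_attributes, test_instances):
    """model_attributes: dict of attribute -> possible values;
    test_instances: list of dicts mapping attribute -> chosen value."""
    all_combos = set(product(*model_attributes.values()))
    keys = list(model_attributes)
    covered = {tuple(t[k] for k in keys) for t in test_instances}
    return len(covered & all_combos) / len(all_combos)

attrs = {"shape": ("cube", "cylinder"), "material": ("metal", "plastic")}
tests = [{"shape": "cube", "material": "metal"},
         {"shape": "cylinder", "material": "metal"}]
print(scenario_coverage(attrs, tests))  # 0.5
```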
  • compared with the prior art, the present invention provides a more comprehensive, objective, and quantifiable method for automatically generating test cases.
  • the second aspect of the present invention provides a simulation device of a context sensing device, which includes:
  • an environment management device 100, which models the environment of the context-aware device based on customer requirements and traverses the environment model to generate test instances;
  • a test simulation device 220, which performs motion planning and motion simulation on the context-aware device based on the test instances.
  • the environmental management device 100 further includes:
  • Modeling device 110 which describes basic objects and their relationships in the environment of the context-aware device using ontology to model the environment of the context-aware device, and annotates the environment model based on customer needs;
  • a test data generating device 120, which traverses the environment model based on the test case and extracts data from the environment model to generate test instances.
  • the environmental model includes objects, and the objects include scenario targets, a system under test, and test cases.
  • the scene target also includes target attributes and noise, and the target attributes include the target's material, color, shape, position, generation probability, and the like.
  • the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.
  • the context sensing device is a grasping robot, wherein the test simulation device 220 further includes:
  • a motion planning module 222, which plans a motion trajectory for the grasping robot according to the scenario target that the grasping robot needs to grasp, wherein the motion trajectory includes postures of the grasping robot, and each posture includes the rotation angles of the multiple joints of the robot arm in a time series;
  • the motion simulation module 223 drives the grasping robot to move along the motion trajectory and returns simulation results based on simulation requirements.
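The trajectory representation described in the two items above, a time series of postures each holding the joint rotation angles (e.g. j1, j2, j3 from FIG. 4), can be sketched as follows; the data layout is an assumption, not the patent's format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Posture:
    t: float                         # timestamp in the sequence, seconds
    joint_angles: Tuple[float, ...]  # rotation angle of each joint, radians

Trajectory = List[Posture]

trajectory: Trajectory = [
    Posture(0.0, (0.00, 0.00, 0.00)),
    Posture(0.5, (0.30, -0.10, 0.25)),
    Posture(1.0, (0.60, -0.20, 0.50)),   # pose at which the gripper closes
]
# A motion simulation module would step through these postures in order,
# rotating joints j1..j3 to each angle at the given time.
print(len(trajectory))  # 3
```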
  • test simulation device 220 further includes a perception module 221, which maps the data of the test instance to the simulation scene, and recognizes the scene target object and its position information in the simulation scene based on a visual algorithm.
  • the simulation device of the context-aware device further includes a test evaluation device 230, which performs a simulation of the context-aware device based on the test instance.
  • a third aspect provides a simulation system of a context-aware device, which includes: a processor; and a memory coupled with the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions,
  • the actions including: S1, modeling the environment of the context-aware device based on customer requirements, and traversing the environment model to generate test instances; S2, performing motion planning, motion simulation, and system evaluation on the context-aware device based on the test instances.
  • the action S1 also includes: S11, describing the basic objects in the environment of the context-aware device and their relationships using an ontology to model the environment, and annotating the environment model based on customer requirements; S12, traversing the environment model based on the test case, and extracting data from the environment model to generate test instances.
  • the environmental model includes objects, and the objects include scenario targets, a system under test, and test cases.
  • the scene target also includes target attributes and noise, and the target attributes include the target's material, color, shape, position, and generation probability.
  • the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.
  • the context sensing device is a grasping robot
  • the action S2 includes the following actions: S22, planning a motion trajectory for the grasping robot according to the scenario target that the grasping robot needs to grasp, wherein the motion trajectory includes postures of the grasping robot, and each posture includes the rotation angles of the multiple joints of the grasping robot's arm in a time series;
  • S23, driving the grasping robot to move along the motion trajectory and returning a simulation result based on simulation requirements.
  • an action S21 is further included before the action S22: mapping the data of the test instance to the simulation scene, and identifying the scenario target objects and their position information in the simulation scene based on a visual algorithm.
  • after the action S2, an action S3 is further included: performing a simulation of the context-aware device based on the test instance.
  • the fourth aspect of the present invention provides a computer program product, which is tangibly stored on a computer-readable medium and includes computer-executable instructions which, when executed, cause at least one processor to perform the method described in the first aspect of the present invention.
  • the fifth aspect of the present invention provides a computer-readable medium on which computer-executable instructions are stored, and when executed, the computer-executable instructions cause at least one processor to perform the method according to the first aspect of the present invention.
  • the present invention can better support the simulation of the context sensing device.
  • the modeling process provided by the present invention classifies and describes the environment of the context sensing device and the target objects in the environment and their relationships and attributes.
  • the modeling process provided by the present invention is clearer and more accurate, avoids ambiguity, and, by establishing a semantic model, provides support for the subsequent generation of test scenarios.
  • the present invention can also traverse the graph structure of the environment model and generate a test instance for each traversal. In this process, the context-aware system is converted into a list of test cases that fulfills the customer requirements.
  • the present invention can also map the test data to a simulator that interacts with the context-aware device, and can automatically generate a simulated 3D scene.

Abstract

Provided are a context-aware device simulation method, device, and system, comprising the following steps: S1, modeling the environment of a context-aware device on the basis of customer requirements, and traversing the environment model to generate test instances; S2, performing motion planning and motion simulation on the context-aware device on the basis of the test instances. By means of the described method, device, and system, the simulation of a context-aware device can be better supported. In the described modeling process, the environment of the context-aware device, the target objects in the environment, and their relationships and attributes are classified and described; the modeling process is clearer and more precise, and the semantic model it establishes supports the subsequent generation of test scenarios. It is also possible to traverse the graph structure of the environment model and generate a test instance for each traversal, the context-aware system being transformed into a list of test cases that fulfills the customer requirements. The test data can be mapped to a simulator that interacts with the context-aware device to automatically generate a simulated 3D scene.

Description

Simulation Method, Device and System of a Context-Aware Device

Technical Field

The present invention relates to the field of simulation, and in particular to a simulation method, device, and system of a context-aware device.

Background
Context-aware devices, also called autonomous systems, include for example smart vision devices and intelligent robots. A context-aware device can perceive its environment or context, recognize the targets it needs to interact with, and then act accordingly. More and more context-aware devices are being adopted in industry, for example in intelligent transportation, advanced manufacturing, and building technology.

Unlike traditional automation systems, the large and complex input space of context-aware devices makes them difficult to verify. Current simulation tools are good at evaluating event-driven, deterministic devices. For example, some industrial simulation software allows users to define events and 3D scenes to simulate the inputs and actions of equipment in production.

However, context-aware devices need to be evaluated under many different scenarios, and manually generating all possible scenarios in a simulator is impractical. On the other hand, generating test cases in an ad-hoc manner leads to simulation blind spots or missed extreme cases. To better support the simulation of context-aware devices, current simulators need features for modeling the device context and for automatically generating a sufficient number of test cases.

To solve the above problems, the prior art offers two solutions. The first provides a simulation platform for autonomous driving systems, which is limited to evaluation in the autonomous driving field. It models the system context in UML, but its synthetic raw sensor data is generated by a custom algorithm and cannot be reused for other cases.

The second solution provides robot training in a simulation environment. It collects comprehensive demonstrations of bin packing to form an algorithmic pipeline in the simulator, learns a policy from the synthetic data produced by the simulator, and also trains a robot controller on data acquired in the simulator. Although this solution can simulate intelligent robots and obtain simulation data, it does not address simulators capable of automatically evaluating and verifying intelligent robots.
Summary of the Invention

The first aspect of the present invention provides a simulation method of a context-aware device, including the following steps: S1, modeling the environment of the context-aware device based on customer requirements, and traversing the environment model to generate test instances; S2, performing motion planning and motion simulation on the context-aware device based on the test instances.
Further, step S1 includes the following steps: S11, describing the basic objects in the environment of the context-aware device and their relationships using an ontology, so as to model the environment, and annotating the environment model based on customer requirements; S12, traversing the environment model based on the test case, and extracting data from the environment model to generate test instances.

Further, the environment model includes things, and the things include scenario targets, the system under test, and test cases.

Further, a scenario target further includes target attributes and noise, the target attributes including the target's material, color, shape, and position.

Further, the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.

Further, the context-aware device is a grasping robot, and step S2 includes the following steps: step S22, planning a motion trajectory for the grasping robot according to the scenario target that the grasping robot needs to grasp, wherein the motion trajectory includes postures of the grasping robot, and each posture includes the rotation angles of the multiple joints of the grasping robot's arm in a time series; step S23, driving the grasping robot to move along the motion trajectory planned by the motion planning module 222 and returning a simulation result based on the simulation requirements.

Further, step S2 includes step S21: mapping the data of the test instance to the simulation scene, and identifying the scenario target objects and their position information in the simulation scene based on a visual algorithm.

Further, after step S2, the method further includes step S3: performing a simulation of the context-aware device based on the test instance.
The second aspect of the present invention provides a simulation device of a context-aware device, including: an environment management device, which models the environment of the context-aware device based on customer requirements and traverses the environment model to generate test instances; and a test simulation device, which performs motion planning and motion simulation on the context-aware device based on the test instances.

Further, the environment management device further includes: a modeling device, which describes the basic objects in the environment of the context-aware device and their relationships using an ontology, so as to model the environment, and annotates the environment model based on customer requirements; and a test data generation device, which traverses the environment model based on the test case and extracts data from the environment model to generate test instances.

Further, the environment model includes things, and the things include scenario targets, the system under test, and test cases.

Further, a scenario target further includes target attributes and noise, the target attributes including the target's material, color, shape, and position.

Further, the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.

Further, the context-aware device is a grasping robot, and the test simulation device further includes: a motion planning module, which plans a motion trajectory for the grasping robot according to the scenario target that the grasping robot needs to grasp, wherein the motion trajectory includes postures of the grasping robot, and each posture includes the rotation angles of the multiple joints of the grasping robot's arm in a time series; and a motion simulation module, which drives the grasping robot to move along the motion trajectory and returns a simulation result based on the simulation requirements.

Further, the test simulation device further includes a perception module, which maps the data of the test instance to the simulation scene and identifies the scenario target objects and their position information in the simulation scene based on a visual algorithm.

Further, the simulation device of the context-aware device further includes a test evaluation device, which performs a simulation of the context-aware device based on the test instance.
The third aspect provides a simulation system of a context-aware device, including: a processor; and a memory coupled with the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions, the actions including: S1, modeling the environment of the context-aware device based on customer requirements, and traversing the environment model to generate test instances; S2, performing motion planning and motion simulation on the context-aware device based on the test instances.

Further, the action S1 further includes: S11, describing the basic objects in the environment of the context-aware device and their relationships using an ontology, so as to model the environment, and annotating the environment model based on customer requirements; S12, traversing the environment model based on the test case, and extracting data from the environment model to generate test instances.

Further, the environment model includes things, and the things include scenario targets, the system under test, and test cases.

Further, a scenario target further includes target attributes and noise, the target attributes including the target's material, color, shape, and position.

Further, the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise.

Further, the context-aware device is a grasping robot, and the action S2 includes the following actions: S22, planning a motion trajectory for the grasping robot according to the scenario target that the grasping robot needs to grasp, wherein the motion trajectory includes postures of the grasping robot, and each posture includes the rotation angles of the multiple joints of the grasping robot's arm in a time series; S23, driving the grasping robot to move along the motion trajectory and returning a simulation result based on the simulation requirements.

Further, an action S21 is included before the action S22: mapping the data of the test instance to the simulation scene, and identifying the scenario target objects and their position information in the simulation scene based on a visual algorithm.

Further, after the action S2, an action S3 is further included: performing a simulation of the context-aware device based on the test instance.

The fourth aspect of the present invention provides a computer program product, tangibly stored on a computer-readable medium and including computer-executable instructions which, when executed, cause at least one processor to perform the method according to the first aspect of the present invention.

The fifth aspect of the present invention provides a computer-readable medium having computer-executable instructions stored thereon which, when executed, cause at least one processor to perform the method according to the first aspect of the present invention.
The present invention better supports the simulation of context-aware devices. The modeling process provided by the present invention classifies and describes the environment of the context-aware device, the target objects in that environment, and their relationships and attributes; the modeling process is simpler, and the semantic model it establishes supports the subsequent generation of test scenarios. The present invention can also traverse the graph structure of the environment model and generate a test instance for each traversal; in this process, the context-aware system is converted into a list of test cases that fulfills the customer requirements. The present invention can further map the test data into a simulator that interacts with the context-aware device and automatically generate a simulated 3D scene.
Brief Description of the Drawings

FIG. 1 is a system framework diagram of a simulation device of a context-aware device according to a specific embodiment of the present invention;

FIG. 2 is a schematic structural diagram of the environment model of the context-aware device in the simulation mechanism of the context-aware device according to a specific embodiment of the present invention;

FIG. 3 is a schematic diagram of the target attribute structure of a scenario target in the simulation mechanism of the context-aware device according to a specific embodiment of the present invention;

FIG. 4 is a schematic diagram of a simulation in which a grasping robot grasps parts from a box, in the simulation mechanism of the context-aware device according to a specific embodiment of the present invention;

FIG. 5 is a graph of the robot-arm postures of the grasping robot B in the simulation mechanism of the context-aware device according to a specific embodiment of the present invention.
Detailed Description

The specific embodiments of the present invention are described below with reference to the accompanying drawings.
The present invention provides a simulation mechanism for context-aware devices that extends the simulation functionality, makes simulation more efficient and more accurate, and does not rely on manual effort. The present invention provides an environment management function and a test scenario generation function. The present invention is applicable to context-aware devices in fields such as autonomous driving, intelligent robots, and unmanned aerial vehicles, and is particularly suitable for vision-based grasping robots.
As shown in FIG. 1, the simulation apparatus of the context-aware device provided by the present invention includes an environment management device 100 and a simulation device 200, where the simulation device 200 further includes a scene generation device 210, a test simulation device 220, and a test evaluation device 230. The environment management device 100 generates test cases based on the input customer requirements and sends the test cases to the scene generation device 210. The test simulation device 220 maps the test case data to the simulation scene in the simulation device 200, for example generating 3D objects and the environment in a virtual scene. Finally, the test evaluation device 230 simulates the context-aware device based on the input scene.
The first aspect of the present invention provides a simulation method of a context-aware device.
First, step S1 is executed: the environment management device 100 models the environment of the context-aware device based on customer requirements and traverses the environment model to generate test instances. The environment management device 100 is used to generate context data for the context-aware device based on customer requirements. Specifically, the environment management device 100 includes an environment modeling device 110 and a test data generation device 120, which can automatically generate test cases using parameter values provided by the simulation device 200, and then execute and evaluate them. The environment management device 100 can also generate noise factors for the different modules of the system under test (SUT).
Specifically, as shown in FIG. 1, in this embodiment customer requirement A is: verify the ability of the vision-based grasping robot B to grasp multiple targets in a box, where the number of targets is 10 to 200, the generation probability of cylinders among the targets is m, and the generation probability of cubes among the targets is n.
The environment modeling device 110 is used to describe the environment of the context-aware device in a formal language or method. The environment refers to the basic objects the context-aware device may "see"; in this embodiment, the basic objects that the vision-based grasping robot B can "see". That is, the environment modeling device 110 describes these basic objects and their relationships in a formal language or method.
Further, step S1 includes sub-step S11 and sub-step S12.

In sub-step S11, the environment modeling device 110 describes the basic objects in the environment of the context-aware device and their relationships using an ontology, so as to model the environment, and annotates the environment model based on customer requirements.
As shown in FIG. 2, according to a preferred embodiment of the present invention, the environment modeling device 110 describes the basic objects in the environment of the vision-based grasping robot B and their relationships using an ontology. The environment model includes things, and the things include scenario targets, the system under test, and test cases. A scenario target further includes target attributes and noise; the target attributes include the target's material, color, shape, and position. For example, in this embodiment, the scenario targets include a box and parts. The boxes include a cubic box, which has parts. The parts include cube parts, cylinder parts, and sphere parts. The relationship between the box and a part is that the part's position is inside the box. Noise includes visual noise and mechanical noise.

Furthermore, the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, wherein the vision system includes visual noise and the robotic arm includes mechanical noise. The relationship between the vision system and the visual noise is that the vision system has noise; the relationship between the robotic arm and the mechanical noise is that the robotic arm has noise.

In addition, as shown in FIG. 2, the test cases include test case 1, which corresponds to customer requirement A. Based on customer requirement A, the environment model shown in FIG. 2 is annotated: for example, the quantity range 10 to 200 is annotated on the parts, the generation probability m is annotated on the cylinder parts, and the generation probability n is annotated on the sphere parts.
It should be noted that modeling the environment of the context-aware device with an ontology is a preferred embodiment of the present invention and does not exclude other implementations. Each class in the model shown in FIG. 2 also includes multiple attributes, each with a data range.
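A minimal sketch of the FIG. 2 environment model as plain classes, with scenario targets carrying their attributes, the containment relation between box and parts, and a test case annotated with the quantities from customer requirement A, might look as follows; all class and field names, and the probability values, are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Part:
    shape: str                 # "cube", "cylinder" or "sphere"
    material: str
    color: str
    position: Tuple[float, float, float]
    generation_probability: float = 1.0

@dataclass
class Box:
    size: Tuple[float, float, float]
    parts: List[Part] = field(default_factory=list)   # relation: part is inside box

@dataclass
class TestCase:
    # annotations from customer requirement A
    part_count_range: Tuple[int, int] = (10, 200)
    p_cylinder: float = 0.5   # generation probability m (value assumed)
    p_cube: float = 0.5       # generation probability n (value assumed)

box = Box(size=(0.4, 0.4, 0.2))
box.parts.append(Part("cylinder", "metal", "grey", (0.1, 0.1, 0.0), 0.5))
print(len(box.parts))  # 1
```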
In sub-step S12, the test data generation device 120 traverses the environment model based on the test case and extracts data from the environment model to generate test instances. Specifically, the test data generation device 120 automatically generates test case 1 based on the environment model shown in Fig. 2; the environment model uses the ontology and serves as the input of the test data generation device 120. To derive the details of the test scenarios and data, the test data generation device 120 provides a rule-based algorithm that traverses the ontology of the environment model shown in Fig. 2. The test data generation device 120 can also generate noise factors for different environment models.
The test data generation device 120 implements the traversal process by means of this algorithm, outputs the test instances, and stores them in JSON format for the next analysis step.
Specifically, in this embodiment, customer requirement A is to verify the ability of the visual grasping robot B to grasp multiple targets in a box, where the number of targets is 10 to 200, the generation probability of cylinders among the targets is m, and the generation probability of cubes among the targets is n. The test data generation device 120 first reads the types in the test case generated from customer requirement A, then detects all connected types, and for each type randomly initializes its attributes (for example, color, size, and position) within the value range defined by the ontology. It then determines whether each type meets the quantity requirement, here "the number of targets is 10 to 200". If, for example, only 5 instances have been produced, the quantity requirement is not met, and the device further determines whether the second-level type (a subclass) has a generation-probability attribute. If it does, an instance of that subclass is generated according to the generation probability; if it does not, an instance of the subclass is generated directly. This loop repeats until the quantity requirement "the number of targets is 10 to 200" is satisfied. Once it is, the device checks whether the test instance satisfies the constraint rules (for example, whether each generated target part is positioned inside the box). If the constraint rules are satisfied, the test instance is output; if not, some of the instance's values are reset and the constraint check is repeated until the constraints are satisfied.
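The traversal loop just described (quantity check, generation probability, constraint check, JSON output) might be sketched as follows; every name, range, and box dimension here is an illustrative assumption, not the patent's actual algorithm:

```python
import json
import random

# Hypothetical sketch of sub-step S12: generate part instances until the
# quantity requirement (10-200) holds and every instance satisfies the
# constraint rule "position inside the box". Box size is invented.
BOX = {"x": (0.0, 1.0), "y": (0.0, 1.0), "z": (0.0, 0.5)}

def inside_box(pos):
    # Constraint rule from the text: a part's position must lie inside the box.
    return all(BOX[a][0] <= pos[a] <= BOX[a][1] for a in BOX)

def random_position():
    return {a: random.uniform(*BOX[a]) for a in BOX}

def generate_instances(prob_cylinder, prob_sphere, lo=10, hi=200):
    count = random.randint(lo, hi)  # satisfy the quantity requirement
    instances = []
    for _ in range(count):
        r = random.random()  # choose a subclass by its generation probability
        if r < prob_cylinder:
            shape = "cylinder"
        elif r < prob_cylinder + prob_sphere:
            shape = "sphere"
        else:
            shape = "cube"
        inst = {"shape": shape, "position": random_position()}
        while not inside_box(inst["position"]):  # reset values until constraints hold
            inst["position"] = random_position()
        instances.append(inst)
    return instances

test_instance = generate_instances(prob_cylinder=0.3, prob_sphere=0.3)
print(json.dumps(test_instance[0]))  # stored as JSON for the next analysis step
```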
Finally, step S2 is executed: the simulation device 200 performs motion planning and motion simulation for the context-aware device based on the test instances. The simulation device 200 includes a scene generation device 210, a test simulation device 220, and a test evaluation device 230.
The scene generation device 210 maps the test case data to a simulation scene. Its input is a test case script; it maps the test case data values onto the targets of a specific simulator to generate and place the 3D targets in the simulation. Its output is the simulation scene described by the test case script.
In this embodiment, customer requirement A is to verify the ability of the visual grasping robot B to grasp multiple targets in a box, where the number of targets is 10 to 200, the generation probability of cylinders among the targets is m, and the generation probability of cubes among the targets is n. Fig. 3 is a schematic diagram of the target attribute structure of a scenario target in the simulation mechanism of the context-aware device according to a specific embodiment of the present invention. The present invention defines the target attributes of each scenario target; these include, but are not limited to, material, shape, position, color, center point, length, width, and height. As shown in Fig. 3, the target attributes of the box include material, shape, color, length, width, and height: the box's material is paper, its shape is a cube, its center point is (x11, y11, z11), its color is white, its length is l1, its width is w1, and its height is h1. The target attributes of the cube part include material, shape, position, color, length, width, and height: the cube part's material is rubber, its shape is a cube, its center point is (x21, y21, z21), its color is blue, its length is l2, its width is w2, and its height is h2. The target attributes of the sphere part include material, shape, color, center point, radius, and height: the sphere part's material is rubber, its shape is a sphere, its color is red, its center point is (x32, y32, z32), its radius is r3, and its height is h3.
It should be noted that customer requirement A concerns the ability of the grasping robot B to grasp multiple targets in a box, where the number of targets is 10 to 200, the generation probability of cylinders among the targets is m, and the generation probability of cubes is n. Every cube part and every sphere part therefore has target attributes. After the traversal of sub-step S12 has been executed and the scenario data including these target attributes has been collected, the scene generation device 210 maps the data of test case 1 to a simulation scene, for example a 3D scene, based on the target attributes of each cube part and each sphere part.
The test simulation device 220 simulates the actions of the context-aware device on a given input scene. It includes a perception module 221, a motion planning module 222, and a motion simulation module 223, of which the motion planning module 222 and the motion simulation module 223 are mandatory and the perception module 221 is optional.
Step S2 includes sub-step S21, sub-step S22, and sub-step S23.
In sub-step S21, the perception module 221 maps the data of the test instance to the simulation scene and identifies the scenario target objects and their position information in the simulation scene using vision algorithms.
To detect one or more scenario targets and their details, the input of the perception module 221 is the synthesized scene. Whether the perception module 221 is needed depends on the specific implementation; it can either be integrated in the test simulation device 220 or exist independently outside it. The perception module 221 analyzes images of the synthesized scene, i.e., the 3D scene output above, and uses several vision algorithms (traditional vision algorithms or deep-learning-based vision algorithms) to identify the scenario target objects, namely the box, the cube parts, and the sphere parts. In this embodiment, for example, the scenario target object is a cube part in a box, which the grasping robot B grasps from the box. The output of the perception module 221 is the recognized part, including its pose information, for example in the following format:
<pose>x, y, z, roll, pitch, yaw</pose>
where x, y, z are the position coordinates of the part, roll is the angle between the part and the x axis, pitch is the angle between the part and the y axis, and yaw is the angle between the part and the z axis.
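A minimal, hypothetical parser for this pose format could look like the following; the tag name and the numeric values are assumed for illustration:

```python
# Hypothetical parser for the perception module's output format
# "<pose>x, y, z, roll, pitch, yaw</pose>". Values are invented.
def parse_pose(text):
    inner = text.removeprefix("<pose>").removesuffix("</pose>")
    x, y, z, roll, pitch, yaw = (float(v) for v in inner.split(","))
    return {"x": x, "y": y, "z": z, "roll": roll, "pitch": pitch, "yaw": yaw}

pose = parse_pose("<pose>0.10, 0.20, 0.05, 0.0, 1.57, 0.0</pose>")
print(pose["pitch"])  # 1.57
```

Such a record gives the motion planning module both the position (x, y, z) and the orientation (roll, pitch, yaw) of the part to be grasped.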
As noted, the perception module 221 is not a mandatory module and step S21 is not a mandatory step; both are optional.
If there is no perception module 221, the scene generation device 210 can likewise output one or more scenario target objects and their pose information for the grasping robot B to grasp from a box. The difference is that this scenario target object and its pose information are then supplied directly as input to the motion planning module 222.
In sub-step S22, the motion planning module 222 plans a motion trajectory for the grasping robot with respect to the scenario target that the grasping robot needs to grasp, where the motion trajectory includes postures of the grasping robot, and the postures include the rotation angles of the joints of the grasping robot's robotic arm over a time sequence.
The motion planning module 222 automatically plans a motion trajectory for the identified scenario target: it automatically generates the trajectory, plans a pose for the scenario target, and specifies a start pose for the robotic arm of the grasping robot B. During trajectory generation, the motion planning module 222 uses a planning algorithm to define several waypoints. The output of the motion planning module 222 is the joint angle matrix of the robotic arm of the grasping robot B:
M_j =
⎡ j_{1,1} … j_{1,m} ⎤
⎢    ⋮    ⋱    ⋮    ⎥
⎣ j_{n,1} … j_{n,m} ⎦
where j denotes a posture, m denotes the time sequence, and n is a natural number; posture j denotes the angle through which the robotic arm of the grasping robot B rotates at time point m. For example, the grasping robot B shown in Fig. 4 has three joints: a first joint j1, a second joint j2, and a third joint j3. The motion planning module 222 can therefore plan the rotation angles of the first joint j1, the second joint j2, and the third joint j3 at specific time points and output a joint angle matrix M_j for the robotic arm of the grasping robot B.
Fig. 5 is a posture curve graph of the robotic arm of the grasping robot B in the simulation mechanism of the context-aware device according to a specific embodiment of the present invention, where the abscissa denotes time and the ordinate denotes angle. In the embodiment shown in Fig. 5, the robotic arm has eight joints, where j1 denotes the posture of the first joint, j2 that of the second joint, j3 that of the third joint, j4 that of the fourth joint, j5 that of the fifth joint, j6 that of the sixth joint, j7 that of the seventh joint, and j8 that of the eighth joint. From the curves shown in Fig. 5, the present invention can thus obtain the postures of these eight joints over the time sequence for simulating the grasping robot B. It should be noted that each joint has a limited capability to rotate, i.e., its rotation angle has a numerical range, so the simulation of the joints must respect each joint's capability.
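As an illustrative sketch only, a joint angle matrix of this kind, together with the per-joint capability check just mentioned, could be represented as follows; all angle values and joint limits are invented:

```python
# Hypothetical joint angle matrix M_j: rows are joints j1..jn, columns are
# time points t1..tm (angles in degrees; the values are made up).
joint_angles = [
    # t1    t2    t3    t4
    [ 0.0, 15.0, 30.0, 45.0],  # j1
    [10.0, 20.0, 25.0, 25.0],  # j2
    [ 5.0,  5.0, 10.0, 20.0],  # j3
]

# Each joint's rotation angle has a numerical range (its "capability");
# the limits below are assumptions for illustration.
JOINT_LIMITS = [(-90.0, 90.0)] * 3

def within_limits(matrix, limits):
    # True if every planned angle of every joint stays inside that joint's range.
    return all(lo <= angle <= hi
               for row, (lo, hi) in zip(matrix, limits)
               for angle in row)

print(within_limits(joint_angles, JOINT_LIMITS))  # True
```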
In sub-step S23, the motion simulation module 223 drives the grasping robot to move along the motion trajectory planned by the motion planning module 222 and returns simulation results based on the simulation requirements.
The motion simulation module 223 drives the context-aware device to move along the path planned by the motion planning module 222 and returns simulation results based on the simulation requirements. Fig. 4 is a simulation diagram of the grasping robot grasping parts from a box in the simulation mechanism of the context-aware device according to a specific embodiment of the present invention. The motion simulation module 223 thus simulates, in multiple possible 3D scenes, the process in which the grasping robot B moves along the path planned by the motion planning module 222, rotating the first joint j1, the second joint j2, and the third joint j3 to grasp multiple parts from the box B', including a first part p1, a second part p2, and a third part p3.
The results returned by the motion simulation module 223 may include grasping success and failure results, simulation time, collision detection, energy consumption at each stage, and so on. For example, the simulation time of the different stages is:
Simulation time = [t_0 … t_i]
where i indexes the different stages of the simulation.
Finally, step S3 is executed: the test evaluation device 230 evaluates the simulation of the context-aware device based on the test instances.
Specifically, the test evaluation device 230 summarizes the simulation results per test case; its output is the simulation results together with evaluation results for different aspects, such as reliability, safety, efficiency, and robustness. For example, reliability relates to the success rate, and the average success rate can be calculated as follows:
Average success rate = (number of successful grasps) / (total number of grasp attempts)
If the simulator uses a physics engine, the execution of the simulation itself can determine whether the target object was grasped and successfully moved to the predetermined position. If the simulator uses a geometry engine, this metric cannot be evaluated.
Safety concerns any geometric collision that may occur during the operation of the grasping robot, for example the gripper colliding with the side of the box. A collision check result can be expressed as follows:
Collision = (Time, Object_1, Object_2, Distance), where Distance < threshold
where Time is the simulation time at which the collision occurs, Object_1 and Object_2 are the two target objects that collide with each other, and their Distance is below the threshold, Distance being the distance between Object_1 and Object_2. If Distance is less than or equal to zero, the two target objects have collided.
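The collision check described here can be sketched as a simple record; the field names and the zero-distance threshold convention are assumptions for illustration:

```python
# Hypothetical collision check record: a collision is reported when the
# distance between two objects drops to the threshold (zero) or below.
def check_collision(time, obj1, obj2, distance, threshold=0.0):
    return {"time": time, "object_1": obj1, "object_2": obj2,
            "distance": distance, "collided": distance <= threshold}

event = check_collision(2.4, "gripper", "box_side", -0.003)
print(event["collided"])  # True
```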
Efficiency denotes, for the motors of the machine system executing the motion, the number of grasps per hour, the average time per grasp, and the energy consumption. For example, the grasping efficiency per hour can be calculated as follows:
Grasping efficiency = average number of grasp attempts × success rate
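One hypothetical reading of this formula, assuming the average number of attempts per hour is derived from the average time per grasp (all numbers invented):

```python
# Hypothetical efficiency metric: grasps per hour = attempts per hour
# (derived from average grasp time in seconds) times the success rate.
def grasps_per_hour(avg_attempt_time_s, success_rate):
    attempts_per_hour = 3600.0 / avg_attempt_time_s
    return attempts_per_hour * success_rate

print(grasps_per_hour(avg_attempt_time_s=12.0, success_rate=0.8))  # 240.0
```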
Robustness is the sensitivity of the performance criteria under extreme conditions and extreme values, where the extreme values are generated automatically by the algorithm. In addition, the content may include noise descriptions of system elements; the test can therefore evaluate the quantitative impact on the above performance criteria when noise values are applied to the system elements.
Besides evaluating the performance criteria, the platform can also provide test coverage metrics, which indicate how comprehensively the automatically generated test cases cover the model. The test coverage metrics can be evaluated based on the retrieval rules applied in the test data generation device 120. For example, scenario-related coverage measures whether the parts of the context model and their combinations are covered in the test.
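As an illustration of such a scenario-related coverage measure (all class names assumed), coverage can be expressed as the fraction of context-model classes that appear in the generated test instances:

```python
# Hypothetical scenario coverage: fraction of context-model classes that are
# exercised by at least one generated test instance.
def scenario_coverage(model_classes, covered_classes):
    covered = model_classes & covered_classes
    return len(covered) / len(model_classes)

model = {"Box", "CubePart", "CylinderPart", "SpherePart"}
print(scenario_coverage(model, {"Box", "CubePart", "CylinderPart"}))  # 0.75
```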
On this basis, the present invention therefore provides a more comprehensive, objective, and quantifiable way of automatically generating test cases.
A second aspect of the present invention provides a simulation device for a context-aware device, including:
an environment management device 100, which models the environment of the context-aware device based on customer requirements and traverses the environment model to generate test instances;
a test simulation device 220, which performs motion planning and motion simulation for the context-aware device based on the test instances.
Further, the environment management device 100 also includes:
a modeling device 110, which describes the basic objects in the environment of the context-aware device, and the relationships between them, using an ontology, so as to model the environment of the context-aware device, and annotates the environment model based on customer requirements;
a test data generation device 120, which traverses the environment model based on the test case and extracts data from the environment model to generate test instances.
Further, the environment model includes things, where the things include scenario targets, the system under test, and test cases.
Further, a scenario target also includes target attributes and noise, where the target attributes include the target's material, color, shape, position, generation probability, and the like.
Further, the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, where the vision system includes visual noise and the robotic arm includes mechanical noise.
Further, the context-aware device is a grasping robot, and the test simulation device 220 also includes:
a motion planning module 222, which plans a motion trajectory for the grasping robot with respect to the scenario target that the grasping robot needs to grasp, where the motion trajectory includes postures of the grasping robot, and the postures include the rotation angles of the joints of the grasping robot's robotic arm over a time sequence;
a motion simulation module 223, which drives the grasping robot to move along the motion trajectory and returns simulation results based on the simulation requirements.
Further, the test simulation device 220 also includes a perception module 221, which maps the data of the test instance to the simulation scene and identifies the scenario target objects and their position information in the simulation scene using vision algorithms.
Further, the simulation device for the context-aware device also includes a test evaluation device 230, which evaluates the simulation of the context-aware device based on the test instances.
A third aspect provides a simulation system for a context-aware device, including: a processor; and a memory coupled to the processor, the memory having instructions stored therein that, when executed by the processor, cause the electronic device to perform actions including: S1, modeling the environment of the context-aware device based on customer requirements and traversing the environment model to generate test instances; S2, performing motion planning, motion simulation, and system evaluation for the context-aware device based on the test instances.
Further, action S1 also includes: S11, describing the basic objects in the environment of the context-aware device, and the relationships between them, using an ontology, so as to model the environment of the context-aware device, and annotating the environment model based on customer requirements; S12, traversing the environment model based on the test case and extracting data from the environment model to generate test instances.
Further, the environment model includes things, where the things include scenario targets, the system under test, and test cases.
Further, a scenario target also includes target attributes and noise, where the target attributes include the target's material, color, shape, position, and generation probability.
Further, the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, where the vision system includes visual noise and the robotic arm includes mechanical noise.
Further, the context-aware device is a grasping robot, and action S2 includes the following actions: S22, planning a motion trajectory for the grasping robot with respect to the scenario target that the grasping robot needs to grasp, where the motion trajectory includes postures of the grasping robot, and the postures include the rotation angles of the joints of the grasping robot's robotic arm over a time sequence; S23, driving the grasping robot to move along the motion trajectory and returning simulation results based on the simulation requirements.
Further, action S22 is preceded by action S21: mapping the data of the test instance to the simulation scene and identifying the scenario target objects and their position information in the simulation scene using vision algorithms.
Further, action S2 is followed by action S3: evaluating the simulation of the context-aware device based on the test instances.
A fourth aspect of the present invention provides a computer program product tangibly stored on a computer-readable medium and including computer-executable instructions that, when executed, cause at least one processor to perform the method according to the first aspect of the present invention.
A fifth aspect of the present invention provides a computer-readable medium having computer-executable instructions stored thereon that, when executed, cause at least one processor to perform the method according to the first aspect of the present invention.
The present invention can better support the simulation of context-aware devices. The modeling process provided by the present invention classifies and describes the environment of the context-aware device, the target objects in that environment, and their relationships and attributes; it is clearer and more precise, avoids confusion, and, by establishing a semantic model, supports the generation of the subsequent test scenarios. The present invention can also retrieve the graph structure of the environment model and generate a test instance for each traversal; in this process, the context-aware system is converted into a list of test cases to realize the customer requirements. The present invention can further map the test data into a simulator that interacts with the context-aware device and automatically generate the simulated 3D scene.
Although the content of the present invention has been described in detail through the above preferred embodiments, it should be recognized that the above description should not be regarded as limiting the present invention; various modifications and substitutions will be apparent to those skilled in the art after reading the above content. The scope of protection of the present invention should therefore be defined by the appended claims. In addition, no reference sign in the claims should be regarded as limiting the claim concerned; the word "comprising" does not exclude devices or steps not listed in other claims or in the description; and words such as "first" and "second" are used only to denote names and do not denote any particular order.

Claims (19)

  1. A simulation method for a context-aware device, including the following steps:
    S1, modeling the environment of the context-aware device based on customer requirements and traversing the environment model to generate test instances;
    S2, performing motion planning and motion simulation for the context-aware device based on the test instances.
  2. The simulation method for a context-aware device according to claim 1, characterized in that step S1 also includes the following steps:
    S11, describing the basic objects in the environment of the context-aware device, and the relationships between them, using an ontology, so as to model the environment of the context-aware device, and annotating the environment model based on customer requirements;
    S12, traversing the environment model based on the test case and extracting data from the environment model to generate test instances.
  3. The simulation method for a context-aware device according to claim 2, characterized in that the environment model includes things, where the things include scenario targets, the system under test, and test cases.
  4. The simulation method for a context-aware device according to claim 3, characterized in that a scenario target also includes target attributes and noise, where the target attributes include the target's material, color, shape, and position.
  5. The simulation method for a context-aware device according to claim 3, characterized in that the context-aware device includes a grasping robot, and the system under test based on the grasping robot includes a vision system, a robotic arm, and a gripper, where the vision system includes visual noise and the robotic arm includes mechanical noise.
  6. The simulation method of the context-aware device according to claim 1, wherein the context-aware device is a grasping robot, and wherein step S2 comprises the following steps:
    S22, planning a motion trajectory for the grasping robot according to the scene target that the grasping robot needs to grasp, wherein the motion trajectory comprises a representation of the posture of the grasping robot, the posture comprising the rotation angles of the joints of the grasping robot's mechanical arm in a time series;
    S23, driving the grasping robot to move along the motion trajectory planned by the motion planning module (222) and returning a simulation result based on the simulation requirements.
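Claim 6 represents a trajectory as, for each step of a time series, one rotation angle per arm joint. As a minimal sketch (not the patent's planner), each joint can be linearly interpolated from a start angle to a goal angle; all angle values here are illustrative and in radians.

```python
def plan_trajectory(start, goal, steps=5):
    """Return a list of postures; each posture is one angle per joint.
    Posture 0 equals `start`, the final posture equals `goal`."""
    return [
        [s + (g - s) * t / (steps - 1) for s, g in zip(start, goal)]
        for t in range(steps)
    ]

start_posture = [0.0, 0.0, 0.0]    # three joints at rest (assumed arm)
goal_posture = [1.0, 0.5, -0.25]   # angles needed to reach the scene target
trajectory = plan_trajectory(start_posture, goal_posture)
print(trajectory[0], trajectory[-1])
```

A real motion planner would additionally respect joint limits, collisions, and the mechanical noise named in claim 5; linear interpolation just shows the time-series-of-joint-angles representation.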
  7. The simulation method of the context-aware device according to claim 6, wherein step S2 comprises a step S21: mapping the data of the test instance to a simulation scene, and identifying the scene target object and its position information in the simulation scene based on a vision algorithm.
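Step S21 has two parts: placing the test-instance data into a simulation scene, then locating the target with a vision algorithm. The toy colour-grid search below stands in for a real vision pipeline purely to show the data flow; the grid, colours, and function names are all assumptions.

```python
def place_target(scene, instance):
    """Map a test instance into the scene grid at its (row, col) position."""
    r, c = instance["position"]
    scene[r][c] = instance["color"]
    return scene

def locate_target(scene, color):
    """Toy 'vision algorithm': return the (row, col) of the first cell
    matching the target colour, or None if the target is not visible."""
    for r, row in enumerate(scene):
        for c, cell in enumerate(row):
            if cell == color:
                return (r, c)
    return None

scene = [[None] * 3 for _ in range(3)]
scene = place_target(scene, {"position": (1, 2), "color": "red"})
print(locate_target(scene, "red"))  # (1, 2)
```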
  8. The simulation method of the context-aware device according to claim 1, wherein after step S2 the method further comprises a step S3: performing a simulation on the context-aware device based on the test instances.
  9. A simulation device of a context-aware device, comprising:
    an environment management device (100), which models the environment of the context-aware device based on customer requirements, and traverses the environment model to generate test instances;
    a test simulation device (220), which performs action planning and action simulation on the context-aware device based on the test instances.
  10. The simulation device of the context-aware device according to claim 9, wherein the environment management device (100) further comprises:
    a modeling device (110), which describes the basic objects in the environment of the context-aware device and their relationships with an ontology, so as to model the environment of the context-aware device, and annotates the environment model based on customer requirements;
    a test data generating device (120), which traverses the environment model based on the test case, and extracts data from the environment model to generate test instances.
  11. The simulation device of the context-aware device according to claim 10, wherein the environment model comprises things, the things comprising a scene target, a system under test, and a test case.
  12. The simulation device of the context-aware device according to claim 11, wherein the scene target further comprises target attributes and noise, the target attributes comprising the material, color, shape, and position of the target.
  13. The simulation device of the context-aware device according to claim 11, wherein the context-aware device comprises a grasping robot, and the system under test based on the grasping robot comprises a vision system, a mechanical arm, and a gripper, wherein the vision system comprises visual noise and the mechanical arm comprises mechanical noise.
  14. The simulation device of the context-aware device according to claim 9, wherein the context-aware device is a grasping robot, and wherein the test simulation device (220) further comprises:
    a motion planning module (222), which plans a motion trajectory for the grasping robot according to the scene target that the grasping robot needs to grasp, wherein the motion trajectory comprises a representation of the posture of the grasping robot, the posture comprising the rotation angles of the joints of the grasping robot's mechanical arm in a time series;
    a motion simulation module (223), which drives the grasping robot to move along the motion trajectory and returns a simulation result based on the simulation requirements.
  15. The simulation device of the context-aware device according to claim 14, wherein the test simulation device (220) further comprises a perception module (221), which maps the data of the test instance to a simulation scene and identifies the scene target object and its position information in the simulation scene based on a vision algorithm.
  16. The simulation device of the context-aware device according to claim 9, wherein the simulation device of the context-aware device further comprises a test evaluation device (230), which performs a simulation on the context-aware device based on the test instances.
  17. A simulation system of a context-aware device, comprising:
    a processor; and
    a memory coupled to the processor, the memory having instructions stored therein which, when executed by the processor, cause the electronic device to perform actions comprising:
    S1, modeling the environment of the context-aware device based on customer requirements, and traversing the environment model to generate test instances;
    S2, performing action planning and action simulation on the context-aware device based on the test instances.
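The stored instructions of claim 17 chain S1 and S2 together. The end-to-end sketch below wires toy versions of both steps, just to show the data flow from environment model to planned actions; every function and value is illustrative, not from the patent.

```python
from itertools import product

def s1_generate_instances(model):
    """S1: traverse the attribute ranges of the environment model into
    concrete test instances (one per attribute combination)."""
    keys = list(model)
    return [dict(zip(keys, v)) for v in product(*(model[k] for k in keys))]

def s2_plan_action(instance):
    """S2 (toy planner): the planned action is simply a grasp of the
    instance's shape at its position."""
    return ("grasp", instance["shape"], instance["position"])

model = {"shape": ["cube", "ball"], "position": [(0, 0), (1, 1)]}
plans = [s2_plan_action(i) for i in s1_generate_instances(model)]
print(len(plans))  # 4 planned actions, one per test instance
```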
  18. A computer program product, tangibly stored on a computer-readable medium and comprising computer-executable instructions which, when executed, cause at least one processor to perform the method according to any one of claims 1 to 8.
  19. A computer-readable medium having computer-executable instructions stored thereon which, when executed, cause at least one processor to perform the method according to any one of claims 1 to 8.
PCT/CN2019/098188 2019-07-29 2019-07-29 Context awareness device simulation method, device, and system WO2021016807A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/098188 WO2021016807A1 (en) 2019-07-29 2019-07-29 Context awareness device simulation method, device, and system
CN201980096782.4A CN113874844A (en) 2019-07-29 2019-07-29 Simulation method, device and system of context awareness device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/098188 WO2021016807A1 (en) 2019-07-29 2019-07-29 Context awareness device simulation method, device, and system

Publications (1)

Publication Number Publication Date
WO2021016807A1 true WO2021016807A1 (en) 2021-02-04

Family

ID=74228357

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/098188 WO2021016807A1 (en) 2019-07-29 2019-07-29 Context awareness device simulation method, device, and system

Country Status (2)

Country Link
CN (1) CN113874844A (en)
WO (1) WO2021016807A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117301077B (en) * 2023-11-23 2024-03-26 深圳市信润富联数字科技有限公司 Mechanical arm track generation method and device, electronic equipment and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101168896B1 (en) * 2011-03-30 2012-08-06 홍익대학교 산학협력단 Method for automated test case generation and execution based on modeling and simulation, for the robot in a virtual environment
CN105446878A (en) * 2015-11-09 2016-03-30 上海爱数信息技术股份有限公司 Continuous program automated testing method
US9671777B1 (en) * 2016-06-21 2017-06-06 TruPhysics GmbH Training robots to execute actions in physics-based virtual environment
CN109446099A (en) * 2018-11-09 2019-03-08 贵州医渡云技术有限公司 Automatic test cases generation method, device, medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "How to Use Google SketchUp to Create Robot Modeling and Simulation Scenarios for LabVIEW Robotics", NI SUPPORT, 27 October 2014 (2014-10-27), pages 1 - 13, XP009525802 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113311724A (en) * 2021-04-23 2021-08-27 中电海康集团有限公司 Simulation system for robot AI algorithm training
CN113688496A (en) * 2021-07-05 2021-11-23 上海机器人产业技术研究院有限公司 Robot mapping algorithm precision simulation evaluation method
CN113688496B (en) * 2021-07-05 2024-04-12 上海机器人产业技术研究院有限公司 Precision simulation evaluation method for robot mapping algorithm

Also Published As

Publication number Publication date
CN113874844A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN109960880B (en) Industrial robot obstacle avoidance path planning method based on machine learning
WO2021016807A1 (en) Context awareness device simulation method, device, and system
CN109910018B (en) Robot virtual-real interaction operation execution system and method with visual semantic perception
CN109523629A (en) A kind of object semanteme and pose data set generation method based on physical simulation
Aleotti et al. Perception and grasping of object parts from active robot exploration
CN111383263A (en) System, method and device for grabbing object by robot
US20240033904A1 (en) Simulating multiple robots in virtual environments
Zhao et al. Towards robotic assembly by predicting robust, precise and task-oriented grasps
CN114051444A (en) Executing an application by means of at least one robot
WO2023123911A1 (en) Collision detection method and apparatus for robot, and electronic device and storage medium
Militaru et al. Object handling in cluttered indoor environment with a mobile manipulator
CN210115917U (en) Robot virtual-real interactive operation execution system with visual semantic perception
CN104715133B (en) A kind of kinematics parameters in-orbit identification method and apparatus of object to be identified
Shaw et al. Development of an AI-enabled AGV with robot manipulator
CN113436293B (en) Intelligent captured image generation method based on condition generation type countermeasure network
JP2001250122A (en) Method for determining position and posture of body and program recording medium for the same
US20220288782A1 (en) Controlling multiple simulated robots with a single robot controller
Lv et al. A deep safe reinforcement learning approach for mapless navigation
Patzelt et al. Conditional stylegan for grasp generation
CN115249333B (en) Grabbing network training method, grabbing network training system, electronic equipment and storage medium
Hong et al. Research of robotic arm control system based on deep learning and 3D point cloud target detection algorithm
Mazlan et al. Robot arm system based on augmented reality approach
KR102537633B1 (en) Method and device for generating robot task planning from assembly instruction
Riordan et al. Fusion of LiDAR and Computer Vision for Autonomous Navigation in Gazebo
Quan et al. Simulation Platform for Autonomous Aerial Manipulation in Dynamic Environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939268

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939268

Country of ref document: EP

Kind code of ref document: A1