CN114425155A - Data processing method and device, computer equipment and computer storage medium


Info

Publication number
CN114425155A
Authority
CN
China
Prior art keywords: target, position information, target operation, coordinate system, information
Prior art date
Legal status: Withdrawn
Application number
CN202210094470.3A
Other languages
Chinese (zh)
Inventor
周茗岩
杨阳
刘锦金
唐明勇
李若岱
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202210094470.3A
Publication of CN114425155A
Priority to PCT/CN2022/127674 (published as WO2023142555A1)

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 3/00: Board games; Raffle games
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00: Manipulators not otherwise provided for
    • B25J 11/0005: Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • B25J 11/003: Manipulators for entertainment

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The embodiment of the disclosure provides a data processing method and device, computer equipment and a storage medium, wherein the method comprises the following steps: acquiring first position information of at least one object in a first coordinate system; converting first position information of each object in a first coordinate system into second position information in a second coordinate system based on the first conversion parameters; determining target operation information based on the second position information of each object and a preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in a second coordinate system; and executing the target operation instruction based on the target operation object and the target position information.

Description

Data processing method and device, computer equipment and computer storage medium
Technical Field
The present disclosure relates to, but not limited to, the field of human-computer interaction, and in particular, to a data processing method and apparatus, a computer device, and a computer storage medium.
Background
With the development of computers and robotics, robots capable of interacting with humans are in great market demand. In the related art, a robot is affected by various factors during its motion, which easily causes deviations in the operation position (such as the grasping position or the drawing position).
Disclosure of Invention
The embodiment of the disclosure provides a data processing method and device, computer equipment and a computer storage medium.
The technical scheme of the embodiment of the disclosure is realized as follows:
the embodiment of the disclosure provides a data processing method, which comprises the following steps:
acquiring first position information of at least one object in a first coordinate system;
converting the first position information of each object in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter;
determining target operation information based on the second position information of each object and a preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in the second coordinate system;
and executing the target operation instruction based on the target operation object and the target position information.
In some embodiments, the obtaining first position information of the at least one object in the first coordinate system includes: acquiring image information acquired by a camera module; based on the image information, first position information of each object in a first coordinate system is obtained.
In some embodiments, the method further comprises: and calibrating the camera module, and determining the first conversion parameter.
In some embodiments, the executing the target operation instruction based on the target operation object and the target position information includes: converting the target position information into third position information in a third coordinate system based on a second conversion parameter; and executing the target operation instruction based on the target operation object and the third position information.
In some embodiments, the method is applied to a robot, the method further comprising: and calibrating the mechanical arm of the robot, and determining the second conversion parameter.
In some embodiments, the method is applied to a playing robot; in a case where the preset operation rule includes a playing rule, the object includes a chess piece, the target operation object includes a target chess piece, and the determining of the target operation information based on the second position information of each object and the preset operation rule includes: determining the target chess piece, the target operation instruction and the target position information in the second coordinate system based on the second position information of each chess piece and the playing rule.
In some embodiments, in a case where the target operation instruction includes a chess playing instruction, the executing of the target operation instruction based on the target operation object and the third position information includes: controlling a mechanical arm of the playing robot to place the target chess piece at the position indicated by the third position information.
In some embodiments, where the target operation instruction includes a chess-eating (capture) instruction, the target chess piece includes a first chess piece and a second chess piece, and the third position information includes a first sub-position of the first chess piece and a second sub-position of the second chess piece; the executing of the target operation instruction based on the target operation object and the third position information includes: controlling the mechanical arm to take the first chess piece away based on the first sub-position; and controlling the mechanical arm to grab the second chess piece based on the second sub-position and move the second chess piece to the first sub-position.
In some embodiments, the method is applied to a drawing robot; in a case where the preset operation rule includes a drawing rule, the object includes a graphic and the target operation object includes a target graphic; the determining of the target operation information based on the second position information of each object and the preset operation rule includes: determining the target graphic, the target operation instruction and the target position information in the second coordinate system based on the second position information of each graphic and the drawing rule.
In some embodiments, the executing the target operation instruction based on the target operation object and the third position information includes at least one of: under the condition that the target operation instruction comprises a drawing instruction, controlling a mechanical arm of the drawing robot to draw the target graph according to the target position information; controlling a mechanical arm of the drawing robot to erase the target graph at the target position under the condition that the target operation instruction comprises an erasing instruction; and under the condition that the target operation instruction comprises a moving instruction, acquiring a third sub-position and a fourth sub-position in the target position information, and controlling a mechanical arm of the drawing robot to move the target graph located at the third sub-position to the fourth sub-position.
The disclosed embodiment provides a data processing apparatus, the apparatus includes:
the first acquisition module is used for acquiring first position information of at least one object in a first coordinate system;
a first conversion module, configured to convert the first position information of each object in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter;
the first determining module is used for determining target operation information based on the second position information of each object and a preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in the second coordinate system;
and the execution module is used for executing the target operation instruction based on the target operation object and the target position information.
In some embodiments, the first obtaining module is further configured to: acquiring image information acquired by a camera module; based on the image information, first position information of each object in a first coordinate system is obtained.
In some embodiments, the apparatus further comprises: and the second determining module is used for calibrating the camera module and determining the first conversion parameter.
In some embodiments, the execution module is further configured to: converting the target position information into third position information in a third coordinate system based on a second conversion parameter; and executing the target operation instruction based on the target operation object and the third position information.
In some embodiments, the apparatus further comprises: and the third determination module is used for calibrating the mechanical arm of the robot and determining the second conversion parameter.
In some embodiments, in a case where the preset operation rule includes a playing rule, the object includes a chess piece, the target operation object includes a target chess piece, and the first determining module is further configured to: and determining target chessmen, target operation instructions and target position information in the second coordinate system based on the second position information of each chessman and the playing rules.
In some embodiments, in a case where the target operation instruction includes a chess playing instruction, the execution module is further configured to: control a mechanical arm of the playing robot to place the target chess piece at the position indicated by the third position information.
In some embodiments, in a case where the target operation instruction includes a chess-eating instruction, the target chess piece includes a first chess piece and a second chess piece, the third position information includes a first sub-position of the first chess piece and a second sub-position of the second chess piece, and the execution module is further configured to: control the mechanical arm to take the first chess piece away based on the first sub-position; and control the mechanical arm to grab the second chess piece based on the second sub-position and move it to the first sub-position.
In some embodiments, in a case where the preset operation rule includes a drawing rule, the object includes a graphic, and the target operation object includes a target graphic; the first determining module is further configured to: and determining target graphics, target operation instructions and target position information in the second coordinate system based on the second position information and the drawing rule of each graphic.
In some embodiments, the execution module is further configured to at least one of: under the condition that the target operation instruction comprises a drawing instruction, controlling a mechanical arm of the drawing robot to draw the target graph according to the target position information; controlling a mechanical arm of the drawing robot to erase the target graph at the target position under the condition that the target operation instruction comprises an erasing instruction; and under the condition that the target operation instruction comprises a moving instruction, acquiring a third sub-position and a fourth sub-position in the target position information, and controlling a mechanical arm of the drawing robot to move the target graph located at the third sub-position to the fourth sub-position.
The embodiment of the present disclosure provides a computer device, which includes a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor implements the data processing method when executing the computer program.
The disclosed embodiments provide a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the above-described data processing method.
In the embodiment of the disclosure, first position information of at least one object in a first coordinate system is obtained; converting the first position information of each object in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter; determining target operation information based on second position information and a preset operation rule of each object, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in the second coordinate system; and executing the target operation instruction based on the target operation object and the target position information. Therefore, the position in the first coordinate system can be converted into the position in the second coordinate system through the first conversion parameter, the unification of the positions of the objects in different coordinate systems is realized, more accurate target operation objects, target operation instructions and target position information are obtained, the execution accuracy of the target operation instructions is improved, and the human-computer interaction experience of users is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic flow chart illustrating an implementation of a data processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating an implementation of a data processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart illustrating an implementation of a data processing method according to an embodiment of the present disclosure;
FIG. 4A is a schematic diagram of a data processing system according to an embodiment of the present disclosure;
fig. 4B is a schematic diagram of electrical connections between components of a playing robot provided in the embodiment of the present disclosure;
fig. 4C is a top view of a playing robot provided in the embodiment of the present disclosure;
fig. 4D is a schematic view of an observation screen of a camera when a user is in front of the playing robot according to the embodiment of the present disclosure;
fig. 4E is a schematic flow chart of playing by the playing robot according to the embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, references to the terms "first", "second" and "third" are intended merely to distinguish similar objects and do not denote a particular order; it is understood that, where permissible, "first", "second" and "third" may be interchanged in a particular order or sequence, so that the embodiments of the disclosure described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein is for the purpose of describing embodiments of the disclosure only and is not intended to be limiting of the disclosure.
With the development of computers and robotics, robots capable of interacting with humans are in great market demand. Current interactive robots generally use an industrial robot as the mechanical body combined with vision, and through substantial development effort realize functions such as playing chess and drawing. However, an interactive robot built on an industrial robot body is extremely expensive, making it impractical to target consumers. Meanwhile, during human-computer interaction, the robot's motion is affected by various factors such as external interference and anomalies, which easily causes deviations in the grasping or drawing position of the mechanical arm; as these deviations accumulate, the user experience degrades.
The embodiment of the disclosure provides a data processing method, which can convert a position in a first coordinate system into a position in a second coordinate system through a first conversion parameter, thereby unifying the positions of objects in different coordinate systems and obtaining a more accurate target operation object, target operation instruction and target position information; this improves the execution precision of the target operation instruction and the human-computer interaction experience of users. An exemplary application of the computer device set forth in the embodiments of the present disclosure is described below. The computer device provided by the embodiments of the disclosure can be implemented as a mobile phone, a notebook computer, a tablet computer, an interactive robot (e.g., a playing robot, a drawing robot), an online game platform, a server, a desktop computer, a smart television, a vehicle-mounted device, an industrial device, and the like.
In the following, the technical solutions in the embodiments of the present disclosure will be clearly and completely described with reference to the drawings in the embodiments of the present disclosure.
Fig. 1 is a schematic flow chart of an implementation of a data processing method provided in an embodiment of the present disclosure, as shown in fig. 1, the method includes:
step S11, obtaining first position information of the at least one object in the first coordinate system.
Here, the object may be an object of any suitable scene. For example, in a playing scenario, the object may be a piece of gobang, a piece of chess, a piece of go, or the like. For another example, in drawing a scene, the object may be a brush, a graphic, an eraser, or the like.
In some embodiments, the first coordinate system may include, but is not limited to, a pixel coordinate system, an image coordinate system, a camera coordinate system, a world coordinate system, and the like. The pixel coordinate system takes the upper-left corner of the image as its origin, with the u-axis and v-axis parallel to the two sides of the image; a coordinate value is written as (u, v), where u is the column index and v is the row index of the pixel. The image coordinate system takes the center of the image as its origin, with the X-axis and Y-axis parallel to the two sides of the image; the coordinate value of an object may be written as (x, y). The camera coordinate system takes the optical center of the camera as its origin, with the optical axis of the camera perpendicular to the image plane. The world coordinate system is introduced to describe the position of the camera; a rotation in any dimension can be expressed as the product of a coordinate vector and a suitable square matrix. In practice, a person skilled in the art may select a suitable first coordinate system according to the actual situation, and the embodiments of the present disclosure are not limited in this respect.
For example, in a playing scenario, the first coordinate system may be a camera coordinate system.
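The relations among these coordinate systems can be made concrete with a short sketch. The snippet below is a minimal illustration only; the focal lengths and principal point are assumed example values, not parameters of this disclosure.

```python
# A minimal sketch of the pixel / image / camera coordinate relations
# described above. The intrinsics below are illustrative assumptions.
import numpy as np

fx, fy = 800.0, 800.0   # assumed focal lengths in pixels
cx, cy = 320.0, 240.0   # assumed principal point (image center)

def pixel_to_image(u: float, v: float) -> tuple[float, float]:
    """Map a pixel coordinate (u, v) to the image coordinate system,
    whose origin is the image center (the principal point)."""
    return (u - cx), (v - cy)

def pixel_to_camera_ray(u: float, v: float) -> np.ndarray:
    """Back-project a pixel to a unit direction in the camera coordinate
    system (optical center at the origin, z along the optical axis)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

print(pixel_to_image(400, 300))        # (80.0, 60.0)
print(pixel_to_camera_ray(400, 300))   # unit viewing direction
```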
In some embodiments, the first position information of each object may be acquired by a camera module, a sensor, infrared sensing, or the like.
In some embodiments, the robot acquires the first position information after receiving an operation instruction, which indicates that the robot should currently perform an operation. For example, the operation instruction may be triggered by the user pressing a first key, which may be a physical key or a virtual key and may be any suitable key, such as a start key, a finish key or a play key. As another example, the operation instruction may be a gesture operation, a voice operation, a motion-sensing operation or the like performed by the user.
Step S12, converting the first position information of each object in the first coordinate system into the second position information in the second coordinate system based on the first conversion parameter.
Here, the first conversion parameter may be preset, or may be calculated by calibration, comparison, or the like.
In some embodiments, the second coordinate system may include, but is not limited to, an image coordinate system, a pixel coordinate system, and the like. In practice, a person skilled in the art may select a suitable second coordinate system according to practical situations, and the embodiments of the present disclosure are not limited.
For example, in a playing scenario, the second coordinate system may be a chessboard coordinate system.
In some embodiments, the first conversion parameter may be stored in the computer device in advance, or may be stored in the cloud or other terminals. In the case where the first conversion parameter is stored in the computer apparatus, the computer apparatus converts the first location information into the second location information according to the first conversion parameter. In case that the first conversion parameter is stored in the other terminal, the computer device transmits the first location information to the other terminal, so that the other terminal converts the first location information into the second location information according to the first conversion parameter and returns the second location information to the computer device.
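As an illustration of this conversion step, the sketch below realizes the first conversion parameter as a planar homography mapping pixel coordinates to chessboard coordinates. The matrix values and point coordinates are assumptions for demonstration; the disclosure does not prescribe this particular form.

```python
# A hedged sketch of applying a first conversion parameter. For a planar
# chessboard, the parameter can be realized as a 3x3 homography H mapping
# pixel coordinates to board coordinates; H below is an assumed placeholder.
import numpy as np

H = np.array([[0.02, 0.0, -3.0],
              [0.0, 0.02, -2.0],
              [0.0, 0.0, 1.0]])   # assumed pixel -> board-cell homography

def to_second_coordinates(first_positions: np.ndarray) -> np.ndarray:
    """Convert Nx2 first-coordinate-system points into the second
    coordinate system (projective division included)."""
    pts = np.hstack([first_positions, np.ones((len(first_positions), 1))])
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]

pieces_px = np.array([[250.0, 180.0], [410.0, 330.0]])  # detected piece centers
print(to_second_coordinates(pieces_px))                 # board-cell coordinates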
Step S13, determining target operation information based on the second position information of each object and the preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in a second coordinate system.
Here, the preset operation rules may include, but are not limited to, playing rules, drawing rules, and the like. In implementation, the preset operation rule may be determined according to an actual interaction scenario, which is not limited in this disclosure.
In some implementations, scene information may be determined based on the first position information. For example, in a case where the first position information includes position information of gobang pieces, it may be determined that the scene is a gobang scene. As another example, in a case where the first position information includes a graphic position, it may be determined that the scene is a drawing scene.
In some embodiments, a mapping table of correspondence between the scene information and the preset operation rule may be established in advance, and the mapping table may be stored in the computer device or other terminals. In the case that the mapping table is stored in the computer device, the computer device determines a preset operation rule matching with the scene information in the mapping table according to the scene information. And under the condition that the mapping table is stored in other terminals, the computer equipment sends the scene information to other terminals, so that the other terminals determine a preset operation rule matched with the scene information in the mapping table according to the scene information, and the preset operation rule is returned to the computer equipment.
For example, in the case that the scene information is a gobang scene, the computer device searches the preset operation rule matched with the gobang scene in the mapping table according to the gobang scene as a gobang playing rule. For another example, when the scene information is a drawing scene, the computer device searches the mapping table for a preset operation rule matched with the drawing scene as a drawing rule according to the drawing scene.
The target operands may be objects of any suitable scene. For example, in a playing scenario, the target operation object may be a chess piece or the like. For another example, in the drawing scene, the target operation object may be a brush, a graphic, an eraser, or the like.
The target operation instructions may include, but are not limited to, chess playing instructions, chess eating instructions, drawing instructions, moving instructions, erasing instructions, and the like. In implementation, the target operation instruction may be determined according to an actual interaction scenario, which is not limited in this disclosure.
In some embodiments, in a case where the preset operation rule includes a playing rule, the object includes a chess piece and the target operation object includes a target chess piece, step S13 includes step S131, in which:
Step S131: determining the target chess piece, the target operation instruction and the target position information in the second coordinate system based on the second position information of each chess piece and the playing rule.
Here, the computer device may be a playing robot. The target operation instructions may include, but are not limited to, chess playing instructions, chess-eating instructions, and the like.
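The following toy sketch shows the shape of this determination step: second-coordinate piece positions go in, and a target piece, instruction and target position come out. The "rule" used (play on a free intersection adjacent to the opponent's last piece) is purely illustrative and is not the rule engine of this disclosure.

```python
# A toy stand-in for the playing-rule step: given second-coordinate positions
# of pieces, produce (target piece, instruction, target position).
from dataclasses import dataclass

@dataclass
class TargetOperation:
    target_object: str          # e.g. "black_piece"
    instruction: str            # e.g. "play" or "capture"
    position: tuple[int, int]   # target position in the second coordinate system

def decide(own: set, opp: set, last_opp: tuple, size: int = 15) -> TargetOperation:
    """Illustrative rule: take a free intersection next to the opponent's
    last piece. The actual engine of the disclosure is not specified."""
    occupied = own | opp
    for du, dv in [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, -1)]:
        cand = (last_opp[0] + du, last_opp[1] + dv)
        if cand not in occupied and 0 <= cand[0] < size and 0 <= cand[1] < size:
            return TargetOperation("black_piece", "play", cand)
    raise RuntimeError("no free adjacent intersection")

print(decide(own={(7, 7)}, opp={(7, 8)}, last_opp=(7, 8)))
```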
In some embodiments, in a case where the preset operation rule includes a drawing rule, the object includes a graphic and the target operation object includes a target graphic, step S13 includes step S132, in which:
Step S132: determining the target graphic, the target operation instruction and the target position information in the second coordinate system based on the second position information of each graphic and the drawing rule.
Here, the computer apparatus includes a drawing robot. The target operation instructions may include, but are not limited to, a draw operation, a move operation, an erase operation, and the like.
Step S14, based on the target operation object and the target position information, executes the target operation instruction.
Here, the computer device controls corresponding hardware such as a mechanical arm or a manipulator to execute the target operation instruction according to the application scenario. For example, in a playing scenario, the target operation object includes a target chess piece, and the computer device controls the mechanical arm to place the target chess piece at the target position. As another example, in a playing scenario, the target operation object includes a first chess piece and a second chess piece, and the computer device controls the mechanical arm to move the second chess piece to the position of the first chess piece. As another example, in a drawing scene, the target operation object includes a target graph, and the computer device controls the mechanical arm to draw the target graph with the target position as a starting point.
In the embodiment of the disclosure, first position information of at least one object in a first coordinate system is obtained; converting the first position information of each object in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter; determining target operation information based on second position information and a preset operation rule of each object, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in the second coordinate system; and executing the target operation instruction based on the target operation object and the target position information. Therefore, the position in the first coordinate system can be converted into the position in the second coordinate system through the first conversion parameter, the unification of the positions of the objects in different coordinate systems is realized, more accurate target operation objects, target operation instructions and target position information are obtained, the execution accuracy of the target operation instructions is improved, and the human-computer interaction experience of users is improved.
Fig. 2 is a schematic diagram of an implementation flow of a data processing method provided in an embodiment of the present disclosure, and as shown in fig. 2, the data processing method includes:
and step S21, acquiring image information collected by the camera module.
Here, the camera module may be built into the computer device or externally connected. It is used to collect image information of a target area, which may include, but is not limited to, a playing area of a virtual chessboard, a playing area of a physical chessboard, a virtual drawing area, a physical drawing area, and the like. The camera module may be any suitable type of device capable of capturing an image; in some embodiments, it may be an ordinary camera or video camera, or an ultra-wide-angle module.
The image information can be an unprocessed image directly acquired by the camera module, or can be an image acquired by the camera module and processed by computer equipment.
Step S22, based on the image information, obtains first position information of each object in the first coordinate system.
Here, the image information may be recognized using a suitable recognition algorithm to acquire the first position information of each object. For example, an Optical Character Recognition (OCR) algorithm, a deep learning based object detection algorithm, and the like. In implementation, a person skilled in the art may select a suitable recognition algorithm according to actual situations, and the embodiments of the present disclosure are not limited.
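As one hedged example of such a recognition step, the sketch below detects round pieces in a synthetic grayscale frame with OpenCV's Hough circle transform; the image and all detector thresholds are invented for illustration and do not represent the recognition algorithm of this disclosure.

```python
# A hedged sketch of one possible recognition step: detecting round chess
# pieces with the Hough circle transform. All thresholds are assumptions.
import cv2
import numpy as np

# Build a synthetic grayscale frame with two dark "pieces".
frame = np.full((480, 640), 200, dtype=np.uint8)
cv2.circle(frame, (250, 180), 14, 30, -1)
cv2.circle(frame, (410, 330), 14, 30, -1)

blurred = cv2.medianBlur(frame, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=80, param2=18, minRadius=8, maxRadius=25)

if circles is not None:
    # Each row is (u, v, radius): the (u, v) centers serve as the first
    # position information of the pieces in the pixel coordinate system.
    for u, v, r in circles[0]:
        print(f"piece at pixel ({u:.0f}, {v:.0f}), radius {r:.0f}")
```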
Step S23, converting the first position information of each object in the first coordinate system into the second position information in the second coordinate system based on the first conversion parameter.
Step S24, determining target operation information based on the second position information of each object and the preset operation rule, the target operation information including a target operation object, a target operation instruction and target position information in the second coordinate system.
Step S25, based on the target operation object and the target position information, executes the target operation instruction.
The steps S23 to S25 correspond to the steps S12 to S14, and in practice, reference may be made to specific embodiments of the steps S12 to S14.
In some embodiments, the method further comprises:
and step S26, calibrating the camera module and determining a first conversion parameter.
Here, the first transformation parameter may include, but is not limited to, an internal parameter, a distortion parameter, an external parameter, or other transformation matrix of the camera module, and the like. The internal parameter of the camera module is an inherent parameter of the camera module, and the inherent parameter can be obtained from a manufacturer or obtained by calibrating the camera module. The distortion parameter is used for correcting image distortion caused by the characteristics of the camera module. For example, the lens distortion is caused by the irregular refraction of the lens of the camera module due to the passing of the light.
In some embodiments, the camera module may be calibrated using a linear calibration method, a nonlinear optimization calibration method, Zhang's calibration method, a chessboard calibration method, a corner-point calibration method or another calibration method to obtain the first conversion parameter. The embodiments of the present disclosure do not limit the calibration method, as long as the first conversion parameter can be obtained.
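A minimal sketch of chessboard calibration in the style of Zhang's method, as implemented by OpenCV, is shown below. The image directory, board dimensions and square size are assumptions, and real calibration requires multiple actual captures.

```python
# A minimal sketch of calibrating the camera module with a chessboard
# (Zhang's method as implemented by OpenCV). Paths and sizes are assumed.
import cv2
import numpy as np
import glob

board_cols, board_rows = 9, 6   # inner corners of the calibration board
square = 0.025                  # assumed square size in meters

objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib/*.png"):   # assumed capture directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        img_size = gray.shape[::-1]

if not img_points:
    raise SystemExit("no calibration images found under calib/ (assumed path)")

# The intrinsics K and distortion coefficients are ingredients of one
# possible realization of the first conversion parameter mentioned above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("RMS reprojection error:", rms)
```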
In the embodiment of the disclosure, image information acquired by a camera module is acquired; acquiring first position information of each object in a first coordinate system based on the image information; converting first position information of each object in a first coordinate system into second position information in a second coordinate system based on the first conversion parameters; determining target operation information based on the second position information of each object and a preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in a second coordinate system; and executing the target operation instruction based on the target operation object and the target position information. Therefore, after the camera module is used for obtaining accurate first position information, the position in the first coordinate system is converted into the position in the second coordinate system through the first conversion parameter, the positions of the objects in different coordinate systems are unified, more accurate target operation objects, target operation instructions and target position information are obtained, and the execution accuracy of the target operation instructions is further improved.
Fig. 3 is a schematic flow chart of an implementation of a data processing method according to an embodiment of the present disclosure, as shown in fig. 3, the method includes steps S31 to S35:
step S31, obtaining first position information of the at least one object in the first coordinate system.
Step S32, converting the first position information of each object in the first coordinate system into the second position information in the second coordinate system based on the first conversion parameter.
Step S33, determining target operation information based on the second position information of each object and the preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in a second coordinate system.
The steps S31 to S33 correspond to the steps S11 to S13, and in practice, reference may be made to specific embodiments of the steps S11 to S13.
Step S34 is to convert the target position information into third position information in a third coordinate system based on the second conversion parameter.
Here, the second conversion parameter may be preset, or may be calculated by calibration, comparison, or the like. The second transformation parameters may include, but are not limited to, a transformation matrix, and the like. In some embodiments, calibration may be performed using a hand-eye calibration method to obtain the second conversion parameter. The calibration method is not limited in the embodiments of the present disclosure, as long as the second conversion parameter can be obtained. The third coordinate system may include, but is not limited to, a spatial coordinate system, an image coordinate system, and the like.
For example, in a playing scenario, the third coordinate system may be a mechanical arm coordinate system.
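One way to make this second conversion concrete in a planar setup is to jog the arm tip to a few known chessboard intersections, record the paired (board, arm) coordinates, and fit a 2D affine map. The sketch below uses invented sample points and stands in for, rather than reproduces, the hand-eye calibration of the disclosure.

```python
# A hedged sketch of a second conversion parameter for a planar setup:
# a 2D affine map fitted from paired (board, arm) points. Sample points
# are invented for illustration.
import numpy as np
import cv2

board_pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], np.float32)  # cells
arm_pts = np.array([[120.0, 40.0], [420.0, 42.0],
                    [118.0, 340.0], [421.0, 338.0]], np.float32)        # arm mm

A, inliers = cv2.estimateAffine2D(board_pts, arm_pts)   # 2x3 affine matrix

def to_third_coordinates(target_board_xy):
    """Map a target position from the second (board) coordinate system
    into the third (mechanical-arm) coordinate system."""
    p = np.array([*target_board_xy, 1.0])
    return A @ p

print(to_third_coordinates((5, 5)))   # arm-frame position of board cell (5, 5)
```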
And step S35, executing the target operation instruction based on the target operation object and the third position information.
Here, the target operation instruction may include, but is not limited to, a chess playing instruction, a chess eating instruction, a drawing instruction, an erasing instruction, a moving instruction, and the like.
In some embodiments, in a case where the target operation instruction includes a chess playing instruction, the third position information includes a third sub-position of the target chess piece, and step S35 includes step S351, in which:
Step S351: controlling the mechanical arm of the playing robot to place the target chess piece at the third sub-position.
Here, the computer device may be a playing robot including at least a mechanical arm. In implementation, the playing robot controls the mechanical arm to grab the target chess piece, move to the third sub-position and release the target chess piece, so that the target chess piece is placed there.
For example, when playing go (weiqi), the playing robot may control the mechanical arm to grab a black stone from the box containing the black stones, move to the third sub-position and release it.
In some embodiments, in a case where the target operation instruction includes a chess-eating instruction, the target chess piece includes a first chess piece and a second chess piece, the third position information includes a first sub-position of the first chess piece and a second sub-position of the second chess piece, and step S35 includes steps S361 to S362, in which:
Step S361: controlling the mechanical arm to take the first chess piece away based on the first sub-position.
Here, the computer device may be a playing robot that includes at least a mechanical arm. The first chess piece may be an opponent's piece.
Step S362: controlling the mechanical arm to grab the second chess piece based on the second sub-position and move it to the first sub-position.
Here, the second chess piece may be the robot's own piece.
In implementation, during the chess-eating process, the playing robot may first take away the opponent's chess piece and then place its own chess piece, or first place its own chess piece and then take away the opponent's piece.
For example, in a Chinese chess game, the playing robot may control the mechanical arm to remove the opponent's "cannon" (the first chess piece) from the first sub-position, then grab its own "horse" (the second chess piece) from the second sub-position, and move the "horse" to the first sub-position.
In some embodiments, step S35 includes at least one of steps S371 to S373:
Step S371: in a case where the target operation instruction includes a drawing instruction, controlling the mechanical arm of the drawing robot to draw the target graph according to the target position information.
Here, the computer device may be an image drawing robot including at least a robot arm. The drawing instructions may include, but are not limited to, drawing circles, drawing lines, drawing curves, drawing points, and the like. The target location information may include, but is not limited to, at least one of a start location, a center location, an end location, and the like.
For example, in the case where the drawing instruction is to draw a circle and the target position information is a start position, the drawing robot controls the robot arm to draw a circle at the start position. For another example, in a case where the drawing instruction is a line segment and the target position information is a start position and an end position, the drawing robot controls the robot arm to draw a line segment from the start position to the end position.
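To make the drawing step concrete, the sketch below turns a drawing instruction plus target position information into a dense list of waypoints for the arm to follow. The radius and sampling density are illustrative assumptions, not parameters of the disclosure.

```python
# An illustrative sketch of converting drawing instructions plus target
# positions into arm waypoints. All numeric values are assumptions.
import math

def circle_waypoints(center, radius=20.0, n=72):
    """Waypoints (in the arm's planar workspace) for drawing a circle."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n)) for k in range(n + 1)]

def line_waypoints(start, end, n=50):
    """Waypoints for drawing a line segment from start to end."""
    return [(start[0] + (end[0] - start[0]) * k / n,
             start[1] + (end[1] - start[1]) * k / n) for k in range(n + 1)]

print(len(circle_waypoints((100.0, 100.0))))   # 73 points tracing the circle
print(line_waypoints((0.0, 0.0), (10.0, 5.0), n=2))
```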
Step S372: in a case where the target operation instruction includes an erasing instruction, controlling the mechanical arm of the drawing robot to erase the target graph at the target position.
Here, the erase command may include, but is not limited to, an erase circle, an erase line segment, an erase curve, an erase point, and the like. The target location information may include, but is not limited to, at least one of a start location, a center location, an end location, and the like.
For example, in the case where the erasing command is an erasing curve and the target position information is a start position and an end position, the drawing robot controls the robot arm to erase the curve from the start position to the end position.
Step S373: in a case where the target operation instruction includes a movement instruction, acquiring a fourth sub-position and a fifth sub-position from the target position information, and controlling the mechanical arm of the drawing robot to move the target graph located at the fourth sub-position to the fifth sub-position.
Here, the movement instruction may include, but is not limited to, moving a circle, moving a line segment, moving a curve, moving a point, and the like. The target position information includes at least a fourth sub-position and a fifth sub-position, where the fourth sub-position may be the original position and the fifth sub-position may be the placement position. Each of the fourth and fifth sub-positions may be at least one of a start position, a center position, an end position, and the like.
For example, in the case where the movement command is to move a point, the fourth sub-position is the first start position, and the fifth sub-position is the second start position, the drawing robot controls the robot arm to move the point located at the first start position (fourth sub-position) to the second start position (fifth sub-position).
In some embodiments, the method further comprises:
and step S36, calibrating the mechanical arm of the robot and determining a second conversion parameter.
Here, the computer device may be a robot including at least a robot arm. The second transformation parameters may include, but are not limited to, a transformation matrix, and the like.
In some embodiments, the mechanical arm may be calibrated using a linear calibration method, a non-linear optimization calibration method, a hand-eye calibration method, or other calibration methods to obtain the second conversion parameter. The calibration method is not limited in the embodiments of the present disclosure, as long as the second conversion parameter can be obtained.
In the embodiment of the disclosure, first position information of at least one object in a first coordinate system is acquired; converting first position information of each object in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter; determining target operation information based on the second position information of each object and a preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in a second coordinate system; converting the target position information into third position information in a third coordinate system based on the second conversion parameter; and executing the target operation instruction based on the target operation object and the third position information. Therefore, after the second position information is obtained, the position in the second coordinate system is converted into the position in the third coordinate system through the second conversion parameter, the unification of the positions of the objects in different coordinate systems is realized, so that more accurate target operation objects, target operation instructions and target position information are obtained, and the execution accuracy of the target operation instructions is further improved.
Fig. 4A is a schematic diagram of a data processing system according to an embodiment of the disclosure, and as shown in fig. 4A, the data processing system 40 includes a calibration module 41, an acquisition module 42, a determination module 43, and an execution module 44.
The calibration module 41 is configured to determine a first conversion parameter and a second conversion parameter based on a calibration method.
The obtaining module 42 is configured to obtain first position information of at least one object in a first coordinate system.
The determining module 43 is configured to convert the first position information of each object in the first coordinate system into the second position information in the second coordinate system based on the first conversion parameter, and determine the target operation information based on the second position information of each object and the preset operation rule, where the target operation information includes the target operation object, the target operation instruction, and the target position information.
And the execution module 44 is configured to convert the target position information into third position information in a third coordinate system based on the second conversion parameter, and execute the target operation instruction based on the target operation object and the third position information.
In some embodiments, the calibration module 41 includes: the device comprises an initialization module and a conversion module. The initialization module is used for initializing a first coordinate system, a second coordinate system and a third coordinate system. The conversion module is used for determining a first conversion parameter between the first coordinate system and the second coordinate system, and a second conversion parameter of the second coordinate system and the third coordinate system based on the calibration method.
For example, in a game scenario, a visual module coordinate system (first coordinate system), a chessboard coordinate system (second coordinate system), and a mechanical arm coordinate system (third coordinate system) are respectively initialized, a first conversion parameter converted from the visual module coordinate system to the chessboard coordinate system is determined based on a hand-eye calibration method, and a second conversion parameter converted from the chessboard coordinate system to the mechanical arm coordinate system is determined. And based on the first conversion parameter and the second conversion parameter, realizing the unification of the coordinate system of the vision module and the coordinate system of the mechanical arm.
For example, the chessboard coordinate system is dimensioned with the width of a chessboard grid cell as the unit, and the chessboard is defined as a two-dimensional rectangular coordinate system. As another example, a three-axis mechanical arm is adopted, a position sensor is added for each degree of freedom of the arm, and calibration of the mechanical arm is realized through the forward and inverse kinematics of the three-axis arm.
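The forward and inverse kinematics mentioned above can be illustrated with the classic planar two-link case. The sketch below uses assumed link lengths; a real three-axis arm adds a vertical axis and calibration offsets.

```python
# A classic planar two-link forward/inverse kinematics sketch, standing in
# for the three-axis arm's forward and inverse solution. Lengths assumed.
import math

L1, L2 = 150.0, 120.0   # assumed link lengths in millimeters

def forward(theta1, theta2):
    """Tip position from joint angles (radians)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Joint angles reaching (x, y), elbow-down solution, via law of cosines."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = inverse(180.0, 90.0)
print(forward(t1, t2))   # ~ (180.0, 90.0): round-trip check
```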
In some embodiments, the obtaining module 42 is further configured to: acquiring image information acquired by a camera module; based on the image information, first position information of each object in a first coordinate system is acquired.
Fig. 4B is a schematic diagram of the electrical connections among the components of a playing robot 400 according to an embodiment of the present disclosure. As shown in fig. 4B, the playing robot includes at least a processing unit 401, a camera 402 and a mechanical arm 403. The processing unit 401, which may include but is not limited to a central processing unit, a controller and the like, is electrically connected to the camera 402; it receives the chessboard information and object information captured by the camera 402, and calculates and sends instructions or signals for controlling the movement of the mechanical arm 403 according to the received information. The processing unit 401 is also electrically connected to the mechanical arm 403 and can send instructions or signals to the corresponding driving unit of the mechanical arm 403 to operate it. The opposite target object of the playing robot refers to an object located opposite the playing robot that needs to interact with it; the object may be a user, a doll, another playing robot or the like, which is not limited in the embodiments of the present disclosure.
Fig. 4C is a top view of a playing robot provided in an embodiment of the present disclosure. As shown in fig. 4C, the playing robot includes at least a camera 402 and a mechanical arm 403; the mechanical arm 403 can move within the range of a chessboard 404 to grab or release a chess piece 405, and the camera 402 can capture information of the chessboard 404 and user information. In a case where the playing robot is provided with two mechanical arms 403, either one may be used to grab and release the chess pieces 405, while the other may perform actions for interacting with the user when idle.
Fig. 4D is a schematic view of the camera's observation picture when a user is in front of the playing robot according to an embodiment of the present disclosure. As shown in fig. 4D, the shooting area 406 of the camera 402 includes a first side 4061 and a second side 4062 that are perpendicular to each other; the length of the first side 4061 is greater than that of the second side 4062, and the second side 4062 is parallel to the horizontal direction. That is, the direction of the shooting area 406 along the first side 4061 is vertical and the direction along the second side 4062 is horizontal. Because the shooting area 406 is longer along the first side 4061, the long-distance shooting requirements of the chessboard 404 and the user 407 can be well met; correspondingly, the shooting area 406 is relatively short along the second side 4062, so that shooting resources are not wasted while the shooting requirement in the horizontal direction is still ensured.
In some embodiments, the execution module 44 is further configured to: converting the target position information into third position information in a third coordinate system based on a second conversion parameter; and executing the target operation instruction based on the target operation object and the third position information.
In some embodiments, in a case where the preset operation rule includes a playing rule, the object includes a chess piece, the target operation object includes a target chess piece, and the determining module 43 is further configured to: and determining the target chessman, the target operation instruction and the target position information in the second coordinate system based on the second position information of each chessman and the playing rules.
In some embodiments, in a case where the target operation instruction includes a chess playing instruction, the third position information includes a third sub-position of the target chess piece, and the execution module 44 is further configured to: control the mechanical arm of the playing robot to place the target chess piece at the third sub-position.
In some embodiments, in a case where the target operation instruction includes a chess-eating instruction, the target chess piece includes a first chess piece and a second chess piece, the third position information includes a first sub-position of the first chess piece and a second sub-position of the second chess piece, and the execution module 44 is further configured to: control the mechanical arm to take the first chess piece away based on the first sub-position; and control the mechanical arm to grab the second chess piece based on the second sub-position and move it to the first sub-position.
In some embodiments, in a case where the preset operation rule includes a drawing rule, the object includes a graphic, and the target operation object includes a target graphic; the determining module 43 is further configured to: and determining the target graph, the target operation instruction and the target position information in the second coordinate system based on the second position information and the drawing rule of each graph.
In some embodiments, the execution module 44 is further configured to at least one of: under the condition that the target operation instruction comprises a drawing instruction, controlling a mechanical arm of the drawing robot to draw a target graph according to the target position information; under the condition that the target operation instruction comprises an erasing instruction, controlling a mechanical arm of the drawing robot to erase a target graph at a target position; and under the condition that the target operation instruction comprises a moving instruction, acquiring a fourth sub-position and a fifth sub-position in the target position information, and controlling a mechanical arm of the drawing robot to move the target graph positioned at the fourth sub-position to the fifth sub-position.
In some embodiments, the data processing system comprises a cloud comprising a calibration module 41 and a determination module 43, and a playing robot comprising an acquisition module 42 and an execution module 44.
For example, first, the playing robot collects image information through the camera module and sends it to the cloud. Then, the cloud acquires the first position information of each chess piece in the camera-module coordinate system based on the image information; converts each piece of first position information into second position information in the chessboard coordinate system based on the first conversion parameter; determines the target chess piece, the target operation instruction and the target position information based on each piece of second position information and the playing rule; converts the target position information into third position information in the mechanical-arm coordinate system based on the second conversion parameter; and returns the target chess piece, the target operation instruction and the third position information to the playing robot. Finally, the playing robot executes the target operation instruction based on the target chess piece and the third position information.
For another example, first, the playing robot collects image information through the camera module and sends the image information to the cloud. Then, the cloud obtains first position information of each chess piece in the camera-module coordinate system based on the image information; converts each piece of first position information into second position information in the chessboard coordinate system based on the first conversion parameter; determines the target chess piece, the target operation instruction, and the target position information based on each piece of second position information and the playing rule; and returns the target chess piece, the target operation instruction, and the target position information to the playing robot. Finally, the playing robot converts the target position information into third position information in the mechanical-arm coordinate system based on the second conversion parameter, and executes the target operation instruction based on the target chess piece and the third position information.
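Both examples hinge on the same two coordinate conversions: camera coordinates to chessboard coordinates via the first conversion parameter, and chessboard coordinates to mechanical-arm coordinates via the second conversion parameter. The sketch below assumes each conversion parameter is a 4x4 homogeneous transform; the identity matrices are placeholders standing in for calibrated values.

    import numpy as np

    def convert(position, transform):
        """Apply a 4x4 homogeneous transform to an (x, y, z) position."""
        p = np.append(np.asarray(position, dtype=float), 1.0)  # homogeneous coordinates
        return (transform @ p)[:3]

    # Placeholder conversion parameters; in practice they come from calibration.
    first_conversion = np.eye(4)   # camera coordinate system -> chessboard coordinate system
    second_conversion = np.eye(4)  # chessboard coordinate system -> arm coordinate system

    first_position = (0.12, 0.34, 0.50)                           # camera coordinates
    second_position = convert(first_position, first_conversion)   # chessboard coordinates
    third_position = convert(second_position, second_conversion)  # arm coordinates
    print(second_position, third_position)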
Fig. 4E is a schematic diagram of a process of playing chess by a playing robot according to an embodiment of the present disclosure. As shown in Fig. 4E, the process includes the following steps (a schematic control-loop sketch is given after the step list):
step St1: a user makes a chess move;
step St2: the user presses the chess-playing key to send a chess-playing instruction to the playing robot;
step St3: the playing robot starts the camera module to collect image information and sends the image information to the cloud;
step St4: after performing image recognition, coordinate conversion, and analysis, the cloud returns operation information to the playing robot;
step St5: the playing robot parses the operation information to obtain a first chess piece and its first sub-position, a second chess piece and its second sub-position, and a chess-eating instruction;
step St6: the playing robot controls the mechanical arm to take away the first chess piece at the first sub-position;
step St7: the playing robot controls the mechanical arm to move to the second sub-position, take away the second chess piece, and place it at the first sub-position;
step St8: the playing robot resets, prompts the user to move by means of voice, an indicator light, a key, or the like, and execution continues from step St1.
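The schematic control-loop sketch referenced above is given here; all helper names (wait_for_key, capture_image, analyze, take_away, move_piece, prompt_user) are hypothetical stand-ins used only to make the St1-St8 flow concrete, not interfaces defined by the disclosure.

    # Schematic sketch of the St1-St8 loop. The robot and cloud objects are
    # duck-typed stand-ins; every method name is an assumption for this sketch.

    def play_loop(robot, cloud):
        while True:
            robot.wait_for_key()                          # St1-St2: user moves, presses key
            image = robot.capture_image()                 # St3: camera module collects image
            operation = cloud.analyze(image)              # St4: recognition, conversion, analysis
            first_piece, first_sub = operation["first"]   # St5: parse operation information
            second_piece, second_sub = operation["second"]
            robot.arm.take_away(first_sub)                # St6: remove the captured piece
            robot.arm.move_piece(second_sub, first_sub)   # St7: move the capturing piece
            robot.reset()                                 # St8: reset and prompt the user
            robot.prompt_user()                           # voice / indicator light / key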
In the embodiment of the present disclosure, during human–machine play, even if a person places a chess piece askew, the camera module, together with the defined chessboard coordinate system, enables the mechanical arm to grab the piece accurately, and the piece is then placed precisely at an intersection point of the chessboard grid. This achieves "grab askew, place straight" and improves the user's playing experience.
Based on the foregoing embodiments, an embodiment of the present disclosure provides a data processing apparatus. Fig. 5 is a schematic structural diagram of the data processing apparatus provided in the embodiment of the present disclosure. As shown in Fig. 5, the data processing apparatus 50 includes a first obtaining module 51, a first conversion module 52, a first determining module 53, and an execution module 54.
The first obtaining module 51 is configured to obtain first position information of at least one object in a first coordinate system;
the first conversion module 52 is configured to convert the first position information of each of the objects in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter;
the first determining module 53 is configured to determine target operation information based on the second position information of each object and a preset operation rule, where the target operation information includes a target operation object, a target operation instruction, and target position information in the second coordinate system;
the executing module 54 is configured to execute the target operation instruction based on the target operation object and the target location information.
In some embodiments, the first obtaining module 51 is further configured to: acquire image information collected by a camera module; and obtain, based on the image information, the first position information of each object in the first coordinate system.
In some embodiments, the apparatus further includes: a second determining module, configured to calibrate the camera module and determine the first conversion parameter.
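As one possible way to calibrate the camera module and obtain the first conversion parameter, the sketch below uses OpenCV's chessboard-corner detection and solvePnP, assuming the camera intrinsics (camera matrix and distortion coefficients) are already known; the pattern size and square length are example values, and this concrete procedure is an assumption rather than the calibration method defined by the disclosure.

    import cv2
    import numpy as np

    # Detect the chessboard's inner corners, solve for the camera pose relative
    # to the board, and invert it: the result maps camera coordinates to
    # chessboard coordinates, i.e. one realization of the first conversion parameter.

    PATTERN = (9, 6)   # inner corners per row and column (example value)
    SQUARE = 0.025     # square side length in metres (example value)

    def first_conversion_parameter(image, camera_matrix, dist_coeffs):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            raise RuntimeError("chessboard corners not found")
        # 3D corner positions in the board coordinate system (z = 0 plane).
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE
        ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)
        board_to_cam = np.eye(4)
        board_to_cam[:3, :3] = R
        board_to_cam[:3, 3] = tvec.ravel()
        return np.linalg.inv(board_to_cam)  # camera -> chessboard coordinates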
In some embodiments, the execution module 54 is further configured to: convert the target position information into third position information in a third coordinate system based on a second conversion parameter; and execute the target operation instruction based on the target operation object and the third position information.
In some embodiments, the apparatus further includes: a third determining module, configured to calibrate the mechanical arm of the robot and determine the second conversion parameter.
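One possible way to determine the second conversion parameter is to jog the arm tip to a few known chessboard grid intersections and fit the rigid transform between the two coordinate systems from the resulting point pairs. The Kabsch-style least-squares fit below is a sketch under that assumption; the sampling procedure is not specified by the disclosure.

    import numpy as np

    # Estimate the second conversion parameter (chessboard coordinate system ->
    # mechanical-arm coordinate system) from point correspondences. Standard
    # Kabsch/Umeyama rigid fit; the correspondence-collection step is assumed.

    def fit_rigid_transform(board_pts, arm_pts):
        """Least-squares rigid transform mapping board_pts (Nx3) onto arm_pts (Nx3)."""
        P = np.asarray(board_pts, float)
        Q = np.asarray(arm_pts, float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                    # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        return T                                     # the "second conversion parameter"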
In some embodiments, in a case where the preset operation rule includes a playing rule, the object includes a chess piece, and the target operation object includes a target chess piece, the first determining module 53 is further configured to: determine the target chess piece, the target operation instruction, and the target position information in the second coordinate system based on the second position information of each chess piece and the playing rule.
In some embodiments, in a case where the target operation instruction includes a chess playing instruction, the third position information includes a third sub-position of the target chess piece, and the executing module 54 is further configured to: control the mechanical arm of the playing robot to place the target chess piece at the third sub-position.
In some embodiments, in a case where the target operation instruction includes a chess-eating instruction, the target chess piece includes a first chess piece and a second chess piece, the third position information includes a first sub-position of the first chess piece and a second sub-position of the second chess piece, and the executing module 54 is further configured to: control the mechanical arm to take the first chess piece away based on the first sub-position; and control the mechanical arm to grab the second chess piece based on the second sub-position and move the second chess piece to the first sub-position.
In some embodiments, in a case where the preset operation rule includes a drawing rule, the object includes a graphic and the target operation object includes a target graphic; the first determining module 53 is further configured to: determine the target graphic, the target operation instruction, and the target position information in the second coordinate system based on the second position information of each graphic and the drawing rule.
In some embodiments, the execution module 54 is further configured to perform at least one of the following: in a case where the target operation instruction includes a drawing instruction, controlling a mechanical arm of the drawing robot to draw the target graphic according to the target position information; in a case where the target operation instruction includes an erasing instruction, controlling the mechanical arm of the drawing robot to erase the target graphic at the target position; and in a case where the target operation instruction includes a moving instruction, acquiring a fourth sub-position and a fifth sub-position in the target position information, and controlling the mechanical arm of the drawing robot to move the target graphic located at the fourth sub-position to the fifth sub-position.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
In the embodiment of the present disclosure, if the above data processing method is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software.
The embodiment of the present disclosure provides a computer device, which includes a memory and a processor, where the memory stores a computer program that can run on the processor, and the processor implements the above method when executing the computer program.
Embodiments of the present disclosure provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method. The computer readable storage medium may be transitory or non-transitory.
The disclosed embodiments provide a computer program product including a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, performs some or all of the steps of the above method. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
Here, it should be noted that: the above description of the storage medium and device embodiments, similar to the description of the method embodiments above, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be noted that Fig. 6 is a schematic diagram of a hardware entity of a computer device in an embodiment of the present disclosure. As shown in Fig. 6, the hardware entity of the computer device 600 includes: a processor 601, a communication interface 602, and a memory 603, wherein:
the processor 601 generally controls the overall operation of the computer device 600.
The communication interface 602 may enable the computer device to communicate with other terminals or servers via a network.
The memory 603 is configured to store instructions and applications executable by the processor 601, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 601 and modules in the computer device 600; it may be implemented by a flash memory (FLASH) or a random access memory (RAM). Data may be transferred between the processor 601, the communication interface 602, and the memory 603 via the bus 604.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not imply an execution order; the execution order of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation of the embodiments of the present disclosure. The above serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element identified by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media that can store program code, such as removable storage devices, ROMs, magnetic or optical disks, etc.
The above description covers only some embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution that a person skilled in the art can easily conceive of within the technical scope of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (13)

1. A method of data processing, the method comprising:
acquiring first position information of at least one object in a first coordinate system;
converting the first position information of each object in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter;
determining target operation information based on the second position information of each object and a preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in the second coordinate system;
and executing the target operation instruction based on the target operation object and the target position information.
2. The method of claim 1, wherein obtaining first position information of at least one object in a first coordinate system comprises:
acquiring image information acquired by a camera module;
obtaining first position information of each object in the first coordinate system based on the image information.
3. The method of claim 2, further comprising:
calibrating the camera module, and determining the first conversion parameter.
4. The method according to any one of claims 1 to 3, wherein the executing the target operation instruction based on the target operation object and the target position information comprises:
converting the target position information into third position information in a third coordinate system based on a second conversion parameter;
and executing the target operation instruction based on the target operation object and the third position information.
5. The method of claim 4, applied to a robot, further comprising:
calibrating the mechanical arm of the robot, and determining the second conversion parameter.
6. The method according to claim 4 or 5, applied to a playing robot, wherein in the case where the preset operation rule includes a playing rule, the object includes a chess piece, and the target operation object includes a target chess piece;
the determining target operation information based on the second position information of each object and the preset operation rule comprises:
determining a target chess piece, a target operation instruction and target position information in the second coordinate system based on the second position information of each chess piece and the playing rule.
7. The method of claim 6, wherein in a case that the target operation instruction comprises a chess-eating instruction, the target chess piece comprises a first chess piece and a second chess piece, and the third position information comprises a first sub-position of the first chess piece and a second sub-position of the second chess piece;
the executing the target operation instruction based on the target operation object and the third position information comprises:
controlling the mechanical arm to take the first chess piece away based on the first sub-position;
and controlling the mechanical arm to grab the second chess piece based on the second sub-position and move the second chess piece to the first sub-position.
8. The method according to claim 6 or 7, wherein in case the target operation instruction comprises a chess playing instruction, the third position information comprises a third sub-position of the target chess piece;
the executing the target operation instruction based on the target operation object and the third position information comprises:
controlling a mechanical arm of the playing robot to place the target chess piece at the third sub-position.
9. The method according to claim 4 or 5, wherein, when the preset operation rule comprises a drawing rule, the object comprises a figure, and the target operation object comprises a target figure;
the determining target operation information based on the second position information of each object and the preset operation rule comprises:
determining a target graphic, a target operation instruction and target position information in the second coordinate system based on the second position information of each graphic and the drawing rule.
10. The method of claim 9, wherein the executing the target operation instruction based on the target operation object and the third position information comprises at least one of:
under the condition that the target operation instruction comprises a drawing instruction, controlling a mechanical arm of the drawing robot to draw the target graph according to the target position information;
controlling a mechanical arm of the drawing robot to erase the target graph at the target position under the condition that the target operation instruction comprises an erasing instruction;
and under the condition that the target operation instruction comprises a moving instruction, acquiring a fourth sub-position and a fifth sub-position in the target position information, and controlling a mechanical arm of the drawing robot to move the target graph located at the fourth sub-position to the fifth sub-position.
11. A data processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring first position information of at least one object in a first coordinate system;
a first conversion module, configured to convert the first position information of each object in the first coordinate system into second position information in a second coordinate system based on a first conversion parameter;
the first determining module is used for determining target operation information based on the second position information of each object and a preset operation rule, wherein the target operation information comprises a target operation object, a target operation instruction and target position information in the second coordinate system;
and the execution module is used for executing the target operation instruction based on the target operation object and the target position information.
12. A computer device comprising a processor and a memory, the memory storing a computer program operable on the processor, wherein the processor implements the method of any one of claims 1 to 10 when executing the computer program.
13. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the method of any one of claims 1 to 10.
CN202210094470.3A 2022-01-26 2022-01-26 Data processing method and device, computer equipment and computer storage medium Withdrawn CN114425155A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210094470.3A CN114425155A (en) 2022-01-26 2022-01-26 Data processing method and device, computer equipment and computer storage medium
PCT/CN2022/127674 WO2023142555A1 (en) 2022-01-26 2022-10-26 Data processing method and apparatus, computer device, storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210094470.3A CN114425155A (en) 2022-01-26 2022-01-26 Data processing method and device, computer equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN114425155A true CN114425155A (en) 2022-05-03

Family

ID=81314042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210094470.3A Withdrawn CN114425155A (en) 2022-01-26 2022-01-26 Data processing method and device, computer equipment and computer storage medium

Country Status (2)

Country Link
CN (1) CN114425155A (en)
WO (1) WO2023142555A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023142555A1 (en) * 2022-01-26 2023-08-03 上海商汤智能科技有限公司 Data processing method and apparatus, computer device, storage medium, and computer program product

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0731533B2 (en) * 1986-09-03 1995-04-10 日産自動車株式会社 Three-dimensional coordinate transformation device
JP6529758B2 (en) * 2014-12-25 2019-06-12 株式会社キーエンス Image processing apparatus, image processing system, image processing method and computer program
JP2017170571A (en) * 2016-03-24 2017-09-28 セイコーエプソン株式会社 Robot, robot control apparatus, and robot system
CN110559077B (en) * 2018-06-05 2021-08-17 武汉联影智融医疗科技有限公司 Coordinate system registration method, robot control method, device, equipment and medium
CN111708366B (en) * 2020-06-29 2023-06-06 山东浪潮科学研究院有限公司 Robot, and method, apparatus and computer-readable storage medium for controlling movement of robot
CN113313755A (en) * 2021-04-16 2021-08-27 中科创达软件股份有限公司 Method, device and equipment for determining pose of target object and storage medium
CN113696178B (en) * 2021-07-29 2023-04-07 大箴(杭州)科技有限公司 Control method and system, medium and equipment for intelligent robot grabbing
CN114425155A (en) * 2022-01-26 2022-05-03 北京市商汤科技开发有限公司 Data processing method and device, computer equipment and computer storage medium


Also Published As

Publication number Publication date
WO2023142555A1 (en) 2023-08-03

Similar Documents

Publication Publication Date Title
CN111880657B (en) Control method and device of virtual object, electronic equipment and storage medium
CN104249371B (en) Information processor and information processing method
CN111158469A (en) Visual angle switching method and device, terminal equipment and storage medium
CN112171666B (en) Pose calibration method and device for visual robot, visual robot and medium
CN106104198A (en) Messaging device, information processing method and program
US11048923B2 (en) Electronic device and gesture recognition method thereof
JP5387668B2 (en) Information processing terminal, information processing method, and program
CN111736709A (en) AR glasses control method, device, storage medium and apparatus
WO2023142555A1 (en) Data processing method and apparatus, computer device, storage medium, and computer program product
CN111652110A (en) Image processing method and device, electronic equipment and storage medium
CN110152288B (en) Game controller, game system and method for identifying game operation direction
Horak et al. Image processing on raspberry pi for mobile robotics
KR102565444B1 (en) Method and apparatus for identifying object
CN114706489A (en) Virtual method, device, equipment and storage medium of input equipment
EP3705982A1 (en) Apparatus and method for adaptively configuring user interface
US11403848B2 (en) Electronic device and method for generating augmented reality object
KR101016805B1 (en) System for configuring Human-Robot Interaction
CN115862074B (en) Human body pointing determination and screen control method and device and related equipment
WO2023202062A1 (en) Target docking method based on image recognition and terminal device and medium thereof
CN111452046A (en) Virtual reality-based explosive-handling robot system, control method and storage medium
CN115294213A (en) Calibration tower, camera calibration method and device, electronic equipment and storage medium
CN114740997A (en) Interaction control device and interaction control method
CN110598605B (en) Positioning method, positioning device, terminal equipment and storage medium
US20200167005A1 (en) Recognition device and recognition method
US20230133168A1 (en) Method for identifying human postures and gestures for interaction purposes and portable hand-held device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20220503