CN114862948A - Target object control method, device, equipment and readable storage medium


Info

Publication number
CN114862948A
Authority
CN
China
Prior art keywords
target object
target
space
determining
information
Prior art date
Legal status
Pending
Application number
CN202210303634.9A
Other languages
Chinese (zh)
Inventor
薛孟辰
樊泽沛
黄舒婷
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date: 2022-03-24
Filing date: 2022-03-24
Publication date: 2022-08-05
Application filed by Lenovo Beijing Ltd
Priority to CN202210303634.9A
Publication of CN114862948A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The embodiments of the present application provide a target object control method, apparatus, device, and readable storage medium, where the method includes: acquiring a space model of the space where the target object is located, operation information of the target object, the network delay of the target object during data transmission, and position information of the target object at a first time; determining a predicted position of the target object in the space model at a target time according to at least the position information, the network delay, and the operation information; and sending an operation instruction to the target object according to the space model and the predicted position.

Description

Target object control method, device, equipment and readable storage medium
Technical Field
The embodiments of the present application relate to the technical field of remote control, and in particular, though not exclusively, to a target object control method, apparatus, device, and readable storage medium.
Background
Remotely controlling robots to perform work in factories and similar scenes is widely used in modern industry: a technician remotely controls a robot to carry out industrial operations in a factory based on the pictures the robot returns. However, because the robot often needs to work while traveling, there is a certain difference between the picture returned by the robot, as seen by the technician at the control end, and the environment in which the robot is actually located; the operation picture used for remote control is therefore out of sync with the robot's actual situation, which may cause accidents.
Disclosure of Invention
Based on the problems in the related art, embodiments of the present application provide a target object control method, apparatus, device, and readable storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
the embodiment of the application provides a target object control method, which comprises the following steps:
acquiring a space model of a space where the target object is located, operation information of the target object, network delay of the target object during data transmission and position information of the target object at a first moment;
determining a predicted position of the target object in the spatial model at the target moment according to at least the position information, the network delay and the operation information;
and sending an operation instruction to the target object according to the space model and the predicted position.
In some embodiments, the spatial model has spatial coordinate axes; the operation information at least comprises a moving speed; and the position information at least comprises a first display image;
determining a predicted position of the target object in the spatial model at the target time based on at least the position information, the network latency, and the operational information, comprising:
determining a first position and a first orientation of the target object in the spatial model at the first moment according to the first display image and the spatial model;
determining a first coordinate and a first angle of the target object in the spatial coordinate axis according to the first position, the first orientation and the spatial coordinate axis;
determining the predicted position of the target object in the spatial model at the target time according to the first coordinate, the first angle, the network delay and the moving speed.
In some embodiments, the operation information includes at least an operation route and a moving speed of the target object; the position information includes at least a first display image;
determining a predicted position of the target object in the spatial model at the target time based on at least the position information, the network latency, and the operational information, comprising:
determining a second position of the target object in the operation route at the first moment according to the first display image and the space model;
determining the predicted position of the target object in the spatial model at the target time based on the second position, the network delay, the travel route, and the movement speed.
In some embodiments, the operation information includes at least an operation route and a moving speed of the target object; the position information comprises at least a first coordinate of the target object in the spatial model at the first time instant;
determining a predicted position of the target object in the spatial model at the target time based on at least the position information, the network latency, and the operational information, comprising:
determining the predicted position of the target object in the spatial model at the target time based on the first coordinate, the network delay, the travel route, and the movement speed.
In some embodiments, said sending an operation instruction to said target object according to said spatial model and said predicted position comprises:
determining a prediction coordinate and a prediction angle of the target object at the target moment according to the prediction position;
determining a second display image of the target object at the target moment in the spatial model according to the predicted coordinates and the predicted angle;
determining the target operation of the target object at the predicted position according to the second display image;
and sending the operation instruction corresponding to the target operation to the target object according to the target operation and the network delay.
In some embodiments, the spatial model is obtained by scanning the space when the target object is at a preset position; or,
when the target object runs in the space, by scanning the space once every preset time period to obtain a corresponding space image, and performing image reconstruction on the plurality of space images obtained by scanning.
In some embodiments, the space has a plurality of delay detection positions; the operation information at least comprises a moving speed; and acquiring the network delay includes:
acquiring a second display image sent by the target object when the target object is positioned at each delay detection position and an actual position of the target object when the second display image is acquired;
determining the delay displacement of the target object according to each delay detection position and the actual position;
carrying out error analysis on the plurality of delay displacements to obtain a target delay displacement of the target object;
and determining the network delay of the target object according to the target delay displacement and the moving speed.
An embodiment of the present application provides a target object control apparatus, including:
the acquisition module is used for acquiring a space model of a space where the target object is located, operation information of the target object, network delay of the target object during data transmission and position information of the target object at a first moment;
a determining module, configured to determine, according to at least the location information, the network delay, and the operation information, a predicted location of the target object in the spatial model at the target time;
and the sending module is used for sending an operation instruction to the target object according to the space model and the predicted position and controlling the target object to operate according to the operation instruction.
The target object control device provided by the embodiment of the application comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor executes the program to realize the target object control method provided by the embodiment of the application.
The computer-readable storage medium provided by the embodiment of the present application stores executable instructions thereon, and is configured to cause a processor to execute the executable instructions to implement the target object control method provided by the embodiment of the present application.
According to the target object control method, apparatus, device, and readable storage medium provided in the embodiments of the present application, the predicted position of the target object at the target time is determined from the position of the target object at the first time, the operation information, and the network delay, and the target object is remotely controlled according to the state of the predicted position in the space model.
Drawings
Fig. 1 is an alternative schematic flowchart of a target object control method provided in an embodiment of the present application;
fig. 2 is a schematic view of an application scenario of a target object control method provided in an embodiment of the present application;
fig. 3 is an alternative flow chart of a target object control method according to an embodiment of the present application;
fig. 4 is an alternative schematic flowchart of a target object control method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a target object control apparatus provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a target object control device according to an embodiment of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. It should be understood that the following description of the embodiments is intended to illustrate and explain the general concepts of the embodiments of the application and should not be taken as limiting them. In the specification and drawings, the same or similar reference numerals refer to the same or similar parts or components. The figures are not necessarily drawn to scale, and certain well-known components and structures may be omitted from the figures for clarity.
In some embodiments, unless defined otherwise, technical or scientific terms used in the embodiments of the present application have the ordinary meaning understood by a person of ordinary skill in the art to which these embodiments belong. The use of "first", "second", and similar terms in the embodiments of the present application does not denote any order, quantity, or importance; such terms are used only to distinguish one element from another. The word "a" or "an" does not exclude a plurality. The word "comprising" or "comprises" means that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Terms such as "connected" or "coupled" are not restricted to physical or mechanical connections but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", "top", "bottom", and the like merely indicate relative positional relationships, which may change when the absolute position of the described object changes. When an element such as a layer, film, region, or substrate is referred to as being "on" or "under" another element, it can be directly on or under the other element, or intervening elements may be present.
To solve the problem of unsynchronized pictures when a technician remotely operates a robot from an operation end, two methods are generally adopted in the related art. One is to buffer the remote instructions so that the robot's operation interval is larger than the network delay, ensuring that the environment in the images the robot returns to the operation end and the environment in which the robot actually operates are as synchronized as possible; however, solving the synchronization problem by buffering instructions reduces the robot's working efficiency. The other is to establish a network disturbance torque model and use it to compensate the dynamic model of the robot; this method, however, depends heavily on the established network disturbance torque model, which is difficult to build and carries certain errors, so it is not widely applied.
Based on the problems in the related art, the embodiments of the present application provide a target object control method: the predicted position of the target object at a target time is determined from the position of the target object at a first time, the operation information, and the network delay, and the target object is remotely controlled according to the state of the predicted position in the space model.
An exemplary application of the target object control device provided in the embodiments of the present application is described below. The device may be implemented as various types of control terminals, such as a notebook computer, a tablet computer, a desktop computer, or a mobile device, and may also be implemented as a server. Next, an exemplary application in which the target object control device is implemented as a server is described.
Referring to fig. 1, fig. 1 is an alternative flowchart of a target object control method provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 1.
Step S101, obtaining a space model of a space where the target object is located, operation information of the target object, network delay of the target object during data transmission, and position information of the target object at a first moment.
Here, the target object may be an industrial inspection robot that inspects equipment of a plant or a plant environment in place of a human being, and is capable of discriminating an equipment failure or performing a work in place of a human being. The space in which the target object is located may be an operation space in which the robot is located, for example, a factory in which the robot is located.
In some embodiments, the space model of the space where the target object is located refers to a three-dimensional model obtained by three-dimensionally modeling that space. In the embodiments of the present application, the target object may carry an image scanning device: the space may be scanned at a fixed position to obtain an image of the space, or scanned in real time while the target object travels to obtain images of different positions in the space, and a three-dimensional model of the space is then obtained from the scanned images through a model construction algorithm.
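As a minimal illustrative sketch of such reconstruction (not the patent's prescribed implementation), each scan can be transformed into a common world frame and the points merged; the names accumulate_scans, scans, and poses are hypothetical:

```python
import numpy as np

def accumulate_scans(scans, poses):
    """Merge per-scan point clouds into one cloud in the world frame.

    scans: list of (N_i, 3) arrays of 3D points in the scanner's frame.
    poses: list of (R, t) scanner-to-world transforms; R is 3x3, t is (3,).
    Returns a single (sum N_i, 3) array -- a crude stand-in for the surface
    reconstruction that would produce the final space model.
    """
    merged = []
    for points, (R, t) in zip(scans, poses):
        merged.append(points @ R.T + t)  # rotate into the world frame, then translate
    return np.vstack(merged)
```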
In some embodiments, the operation information of the target object at least includes its moving speed, its operation route in the space where it is located, and similar information; the position information represents the position of the target object at any time; and the predicted position of the target object at the target time can be predicted from the operation information and the position information.
In the embodiments of the present application, the network delay of the target object during data transmission may refer to the time taken for data to travel between the target object and the server, that is, from the target object to the server or from the server to the target object.
Step S102, determining the predicted position of the target object in the space model at the target moment according to at least the position information, the network delay and the operation information.
In some embodiments, the target time may be the current time, or any time in the future when the target object needs to perform an operation. The position information of the target object includes at least image information or coordinate information of the target object photographed at a certain time.
In the embodiments of the present application, the target object sends the image information or coordinate information to the server, and the server can calculate the predicted position of the target object in the space model at a future time, or at the time when the server sends an operation instruction to the target object, from the network delay, the target object's running speed and route, and similar information.
And S103, sending an operation instruction to the target object according to the space model and the predicted position.
In some embodiments, the server determines the actual position of the target object in the space at the target time from the predicted position in the space model, determines the target operation of the target object at that position, and sends the corresponding operation instruction to the target object, which executes it at the target time.
According to the target object control method provided in the embodiments of the present application, the predicted position of the target object at the target time is determined from the position of the target object at the first time, the operation information, and the network delay, and the target object is remotely controlled according to the state of the predicted position in the space model.
Referring to fig. 2, fig. 2 is a schematic view of an application scenario of the target object control method provided in the embodiment of the present application. The target object control system 20 provided in the embodiment of the present application includes a target object 100, a network 200, and a server 300, where the server 300 obtains operation information of the target object 100, location information of a first time, a spatial model of a located space, and a network delay through the network 200, determines a predicted location of the target object 100 in the spatial model at the target time according to the location information, the network delay, and the operation information, forms an operation instruction according to the spatial model and the predicted location, and sends the operation instruction to the target object 100 through the network 200, so as to implement remote control on the target object. When the target object 100 receives the operation instruction, the operation instruction is executed at the target time, and the job is performed in the space where the target object 100 is located.
In some embodiments, when performing three-dimensional space model reconstruction on one or more scanned images, the obtained three-dimensional space model has a space coordinate axis, and a technician can visually see the position and corresponding coordinates of a target object in the space model at a server or a control end.
In some embodiments, the operation information of the target object at least comprises a moving speed of the target object in the space, the position information may be a first display image acquired by the target object at a first time, and coordinates of the target object in the space model may be determined according to the first display image, so as to calculate the predicted position of the target object. Based on the foregoing embodiment, fig. 3 is an optional flowchart of the target object control method provided in this embodiment, and as shown in fig. 3, in some embodiments, step S102 may be implemented by the following steps:
step S301, determining a first position and a first orientation of the target object in the spatial model at the first time according to the first display image and the spatial model.
In some embodiments, to improve the remote-operation experience, the user can view, in real time at the console or server, the display images transmitted by the target object in order to determine its position. The target object may carry a depth camera (an RGB-D camera), with which the first display image at the first time is acquired. The first display image carries depth information: each pixel represents the distance from the corresponding real-world point to the camera. The position of the target object in its space, that is, its first position in the space model, can therefore be determined from the first display image.
In the embodiments of the present application, after receiving the first display image of the first time transmitted by the target object, the server determines the first position and first orientation of the target object in the space model from the depth information carried by the image, where the first orientation is the direction the front of the target object faces, i.e., its heading during operation.
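As a sketch of the depth-image geometry described above (standard pinhole back-projection; the intrinsics fx, fy, cx, cy are assumed for illustration, not specified by the patent):

```python
def depth_pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Lift pixel (u, v) with measured depth z to a 3D point in the camera
    frame. Matching many such points against the space model is what would
    pin down the first position and first orientation."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```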
Step S302, determining a first coordinate and a first angle of the target object in the space coordinate axis according to the first position, the first orientation and the space coordinate axis.
In the embodiments of the present application, after the first position and first orientation of the target object are determined, the position of the target object is calibrated against the space coordinate axes of the space model to obtain a first coordinate and a first angle of the target object in those axes; for example, the first coordinate is (x₁, y₁, z₁) and the first angle is θ₁.
Step S303, determining the predicted position of the target object in the spatial model at the target time according to the first coordinate, the first angle, the network delay, and the moving speed.
In some embodiments, from the first coordinate, the first angle, the network delay, and the moving speed of the target object, the distance the target object has moved by the time the server receives the first display image can be determined, which gives its position at that moment. Further, from the target time, the moving speed, and the position at the moment of reception, the predicted position of the target object in the space model at the target time, and hence its predicted position in the space, can be obtained through a distance calculation formula.
In some embodiments, the moving direction of the target object may be obtained from its orientation, and the displacement of the target object then follows from its moving speed and moving time.
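A minimal dead-reckoning sketch of this prediction, assuming constant speed along the heading given by the first angle in the x-y plane of the space coordinate axes (all names are illustrative, not from the patent):

```python
import math

def predict_position(first_coord, first_angle, speed, network_delay, horizon):
    """Predict the position at the target time by dead reckoning.

    first_coord: (x1, y1, z1) coordinate at the first time.
    first_angle: heading in radians in the x-y plane.
    speed: moving speed in metres per second.
    network_delay: one-way transmission delay in seconds.
    horizon: time from the server receiving the data to the target time.
    """
    travelled = speed * (network_delay + horizon)  # total distance covered
    x, y, z = first_coord
    return (x + travelled * math.cos(first_angle),
            y + travelled * math.sin(first_angle),
            z)
```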
In some embodiments, the operation information may further include an operation route of the target object, and in some scenarios, the travel route of the target object is known, and the predicted position of the target object may be quickly obtained through the operation route. In some embodiments, as shown in fig. 3, step S102 may also be implemented by:
step S304, determining a second position of the target object in the running route at the first moment according to the first display image and the space model.
Step S305, determining the predicted position of the target object in the spatial model at the target time according to the second position, the network delay, the operation route, and the moving speed.
In some embodiments, after receiving the first display image, the server determines the second position of the target object in the operation route from the image information, determines the position of the target object at the moment the server receives the first display image from the moving speed and the network delay, and then determines the predicted position of the target object in the operation route at the target time.
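For this route-based variant, a sketch that advances the second position along the operation route by speed × elapsed time, under the assumption that the route is represented as a list of waypoints (the patent does not fix a representation):

```python
import math

def advance_along_route(route, seg_index, start_point, distance):
    """Move `distance` metres along a polyline route, starting from
    `start_point` on the segment route[seg_index] -> route[seg_index + 1];
    clamps to the route's end if the distance runs out."""
    point, i = start_point, seg_index
    while i + 1 < len(route):
        seg_end = route[i + 1]
        seg_len = math.dist(point, seg_end)
        if seg_len and distance <= seg_len:
            frac = distance / seg_len
            return tuple(p + frac * (q - p) for p, q in zip(point, seg_end))
        distance -= seg_len
        point, i = seg_end, i + 1
    return tuple(route[-1])
```

The predicted position would then be advance_along_route(route, i, second_position, speed * (network_delay + horizon)), with horizon the time remaining until the target time.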
In some embodiments, the position information may also be a first coordinate of the target object in the spatial model at a first time, and an actual position of the target object at any time may be obtained through the coordinate information and the operation route.
In some embodiments, as shown in fig. 3, step S102 may also be implemented by:
step S306, determining the predicted position of the target object in the space model at the target moment according to the first coordinate, the network delay, the running route and the moving speed.
In the embodiments of the present application, after the server obtains the first coordinate of the target object, it knows the position of the target object in the operation route; it then determines the position of the target object at the moment the server receives the first coordinate from the moving speed and the network delay, and determines the predicted position of the target object in the operation route at the target time.
In the embodiments of the present application, the predicted position of the target object at the target time is determined from information such as the network delay and the operation information of the target object. This avoids the erroneous operations that position desynchronization caused by network delay would otherwise introduce when the server performs remote control on received information, and improves the accuracy of remote control.
In some embodiments, when the predicted position of the target at the target time is determined, the state of the target object in the spatial model may be determined according to the predicted position, and the operation of the target object at the predicted position may be determined according to the state. Based on the foregoing embodiment, fig. 4 is an optional flowchart of the target object control method provided in the embodiment of the present application, and as shown in fig. 4, in some embodiments, step S103 may be implemented by the following steps:
step S401, according to the predicted position, determining the predicted coordinate and the predicted angle of the target object at the target time.
In some embodiments, after the predicted position is obtained, the coordinates of the predicted position of the target object may be obtained according to the spatial coordinate axis of the spatial model, and the direction of the target object at the predicted position may be obtained according to the travel route of the target object, so as to obtain the predicted angle of the target object at the predicted position.
Step S402, determining a second display image of the target object at the target moment in the space model according to the prediction coordinate and the prediction angle.
In some embodiments, the second display image is the image of the space model corresponding to the predicted position, obtained according to the predicted angle of the target object. Since the space model is reconstructed from the space where the target object is located, the second display image can represent the image seen from the predicted position in that space.
And S403, determining the target operation of the target object at the predicted position according to the second display image.
Step S404, according to the target operation and the network delay, the operation instruction corresponding to the target operation is sent to the target object.
In some embodiments, the target operation of the target object at the predicted position may be determined from the second display image. For example, when the second display image shows that, at the predicted position, an object in the factory needs to be moved elsewhere, the target operation at the predicted position is to grab and move that object. After the target operation is determined, the corresponding operation instruction needs to be sent to the target object; because data transmission is subject to network delay, the sending time of the instruction must be set one network delay earlier than the time at which the target object reaches the predicted position.
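A sketch of that send-time rule, assuming the network delay is a known one-way latency in seconds and send_fn stands in for whatever transport the control end actually uses (both are illustrative assumptions):

```python
import threading
import time

def schedule_instruction(send_fn, instruction, arrival_deadline, network_delay):
    """Dispatch `instruction` so it reaches the target object at
    `arrival_deadline` (epoch seconds): send one network delay earlier."""
    send_at = arrival_deadline - network_delay
    wait = max(0.0, send_at - time.time())
    threading.Timer(wait, send_fn, args=(instruction,)).start()
```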
Determining the target operation from an image in this way makes the determination more accurate. Moreover, because the network delay of data transmission is taken into account when the operation instruction is sent, the target object is not away from the predicted position when the instruction arrives, which improves the accuracy of the operation.
In some embodiments, the target object carries a scanning device. Before performing a job in the space, the target object may scan the space at preset positions to obtain one or more images and reconstruct them three-dimensionally into the space model. The preset positions may be the corners or the center of the space, and a complete space model is reconstructed from the scan images obtained at all of these positions.
In some embodiments, the space model may also be obtained while the target object runs in the space: the space is scanned once every preset time period, for example once per minute, to obtain a space image, and image reconstruction is performed on the plurality of space images obtained by scanning.
In some embodiments, a plurality of delay detection positions are set in the space where the target object is located. The server acquires the second display image sent by the target object at each delay detection position, together with the actual position of the target object at the moment the image is acquired. From each delay detection position and the corresponding actual position, the server determines the delay displacement, i.e., how far the target object moved while the second display image was in transit. It then performs error analysis on the plurality of delay displacements to obtain the target delay displacement of the target object, and determines the network delay of the target object in its space from the target delay displacement and the moving speed.
It should be noted that the error analysis of the plurality of delay displacements may consist of taking the average of the delay displacements, removing outliers that differ too much from that average (for example, delay displacements whose deviation exceeds 1 meter), and averaging the remaining delay displacements to obtain the target delay displacement of the target object.
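A sketch of this error analysis as described: average, discard displacements deviating by more than the threshold (1 metre in the example above), and re-average; the network delay then follows by dividing by the moving speed:

```python
def target_delay_displacement(displacements, threshold=1.0):
    """Average the delay displacements after removing outliers that deviate
    from the initial mean by more than `threshold` metres."""
    mean = sum(displacements) / len(displacements)
    kept = [d for d in displacements if abs(d - mean) <= threshold]
    return sum(kept) / len(kept) if kept else mean

# e.g. network_delay = target_delay_displacement([0.42, 0.38, 1.9, 0.40]) / speed
```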
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
A technician remotely operates and controls a robot (the target object) at a control end (i.e., the server). The robot carries a scanning device with a three-dimensional reconstruction function and uses it to perform three-dimensional scanning and reconstruction of the working space it is in (i.e., the space where the target object is located), obtaining overall space information and a three-dimensional space model (i.e., the space model). A motion model of the robot (i.e., the operation information) is established, and the control end calculates the real position of the robot in the working space from the received position, speed, angle, network delay, and similar information. From the real position and the three-dimensional space model, the image information of the robot at the real position is obtained, and remote robot control and operation are performed based on this image information.
With this method of using three-dimensional scanning equipment in remote robot control, the real position of the robot is calculated from the robot's state and the network delay, the robot's operating environment is modeled through three-dimensional space modeling, the real picture at the robot's position is obtained from the calculated real position, and remote control is performed accordingly.
Based on the above target object control method, fig. 5 is a schematic structural diagram of a target object control apparatus provided in an embodiment of the present application. As shown in fig. 5, the target object control apparatus 500 includes an obtaining module 501, a determining module 502, and a sending module 503. The obtaining module 501 is configured to obtain a space model of the space where the target object is located, operation information of the target object, the network delay when the target object performs data transmission, and position information of the target object at a first time; the determining module 502 is configured to determine, according to at least the position information, the network delay, and the operation information, a predicted position of the target object in the space model at the target time; and the sending module 503 is configured to send an operation instruction to the target object according to the space model and the predicted position, and control the target object to operate according to the operation instruction.
In some embodiments, the spatial model has spatial coordinate axes; the operation information at least comprises moving speed; the position information includes at least a first display image; the determining module 502 is further configured to determine, according to the first display image and the spatial model, a first position and a first orientation of the target object in the spatial model at the first time; determining a first coordinate and a first angle of the target object in the spatial coordinate axis according to the first position, the first orientation and the spatial coordinate axis; determining the predicted position of the target object in the spatial model at the target time according to the first coordinate, the first angle, the network delay and the moving speed.
In some embodiments, the operation information includes at least an operation route and a moving speed of the target object; the position information includes at least a first display image; the determining module 502 is further configured to determine a second position of the target object in the travel route at the first time according to the first display image and the spatial model; determining the predicted position of the target object in the spatial model at the target time based on the second position, the network delay, the travel route, and the movement speed.
In some embodiments, the operation information includes at least an operation route and a moving speed of the target object; the position information comprises at least a first coordinate of the target object in the spatial model at the first time instant; the determining module 502 is further configured to determine the predicted position of the target object in the spatial model at the target time according to the first coordinate, the network delay, the operation route, and the moving speed.
In some embodiments, the sending module 503 is further configured to determine a predicted coordinate and a predicted angle of the target object at the target time according to the predicted position; determining a second display image of the target object at the target moment in the spatial model according to the predicted coordinates and the predicted angle; determining the target operation of the target object at the predicted position according to the second display image; and sending the operation instruction corresponding to the target operation to the target object according to the target operation and the network delay.
In some embodiments, the target object control apparatus 500 further includes a scanning module configured to scan the space to obtain the space model when the target object is at a preset position; or, when the target object runs in the space, to scan the space once every preset time period to obtain a corresponding space image and perform image reconstruction on the plurality of space images obtained by scanning.
In some embodiments, the space has a plurality of delayed detection positions; the operation information at least comprises moving speed; the target object control device 500 further includes a first obtaining module, configured to obtain a second display image sent by the target object when the target object is located at each of the delay detection positions, and an actual position of the target object when the second display image is obtained; the first determining module is used for determining the delay displacement of the target object according to each delay detection position and the actual position; the error analysis module is used for carrying out error analysis on the plurality of delay displacements to obtain the target delay displacement of the target object; and the second determining module is used for determining the network delay of the target object according to the target delay displacement and the moving speed.
It should be noted that the description of the apparatus in the embodiment of the present application is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the target object control method is implemented in the form of a software functional module and sold or used as a standalone product, the target object control method may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a terminal to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
An embodiment of the present application provides a target object control device, fig. 6 is a schematic structural diagram of a composition of the target object control device provided in the embodiment of the present application, and as shown in fig. 6, the target object control device 600 at least includes: a processor 601 and a computer-readable storage medium 602 configured to store executable instructions, wherein the processor 601 generally controls the overall operation of the target object controlling device. The computer-readable storage medium 602 is configured to store instructions and applications executable by the processor 601, and may also cache data to be processed or processed by each module in the processor 601 and the target object controlling device 600, and may be implemented by a flash Memory or a Random Access Memory (RAM).
An embodiment of the present application provides a storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the target object control method provided in the embodiments of the present application, for example, the method shown in fig. 1.
In some embodiments, the storage medium may be a computer-readable storage medium, such as a Ferroelectric Random Access Memory (FRAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or it may be any device including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may, but need not, correspond to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). By way of example, executable instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application and is not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and scope of the present application is included in its protection scope.

It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation of the embodiments in any way. The serial numbers of the embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation, such as combining multiple units or components, integrating them into another system, or omitting or not implementing some features.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A target object control method, the method comprising:
acquiring a space model of a space where the target object is located, operation information of the target object, network delay of the target object during data transmission and position information of the target object at a first moment;
determining a predicted position of the target object in the spatial model at the target moment according to at least the position information, the network delay and the operation information;
and sending an operation instruction to the target object according to the space model and the predicted position.
2. The method of claim 1, the spatial model having spatial coordinate axes; the operation information at least comprises moving speed; the position information includes at least a first display image;
determining a predicted position of the target object in the spatial model at the target time based on at least the position information, the network latency, and the operational information, comprising:
determining a first position and a first orientation of the target object in the space model at the first moment according to the first display image and the space model;
determining a first coordinate and a first angle of the target object in the spatial coordinate axis according to the first position, the first orientation and the spatial coordinate axis;
determining the predicted position of the target object in the spatial model at the target time according to the first coordinate, the first angle, the network delay and the moving speed.
3. The method of claim 1, the operational information comprising at least a travel route and a movement speed of the target object; the position information includes at least a first display image;
determining a predicted position of the target object in the spatial model at the target time based on at least the position information, the network latency, and the operational information, comprising:
determining a second position of the target object in the operation route at the first moment according to the first display image and the space model;
determining the predicted position of the target object in the spatial model at the target time based on the second position, the network delay, the travel route, and the movement speed.
4. The method of claim 1, the operational information comprising at least a travel route and a movement speed of the target object; the position information comprises at least a first coordinate of the target object in the spatial model at the first time instant;
determining a predicted position of the target object in the spatial model at the target time based on at least the position information, the network latency, and the operational information, comprising:
determining the predicted position of the target object in the spatial model at the target time based on the first coordinate, the network delay, the travel route, and the movement speed.
5. The method of any of claims 2 to 4, wherein said sending an operating instruction to said target object based on said spatial model and said predicted position comprises:
determining a prediction coordinate and a prediction angle of the target object at the target moment according to the prediction position;
determining a second display image of the target object at the target moment in the spatial model according to the predicted coordinates and the predicted angle;
determining the target operation of the target object at the predicted position according to the second display image;
and sending the operation instruction corresponding to the target operation to the target object according to the target operation and the network delay.
6. The method according to claim 1, wherein the space model is obtained by scanning the space when the target object is at a preset position; or,
when the target object runs in the space, by scanning the space once every preset time period to obtain a corresponding space image, and performing image reconstruction on the plurality of space images obtained by scanning to obtain the space model.
7. The method of claim 1, the space having a plurality of delayed detection positions; the operation information at least comprises moving speed; acquiring the network delay, including:
acquiring a second display image sent by the target object when the target object is positioned at each delay detection position and an actual position of the target object when the second display image is acquired;
determining the delay displacement of the target object according to each delay detection position and the actual position;
carrying out error analysis on the plurality of delay displacements to obtain a target delay displacement of the target object;
and determining the network delay of the target object according to the target delay displacement and the moving speed.
8. A target object control apparatus, the apparatus comprising:
the acquisition module is used for acquiring a space model of a space where the target object is located, operation information of the target object, network delay of the target object during data transmission and position information of the target object at a first moment;
a determining module, configured to determine, according to at least the location information, the network delay, and the operation information, a predicted location of the target object in the spatial model at the target time;
and the sending module is used for sending an operation instruction to the target object according to the space model and the predicted position and controlling the target object to operate according to the operation instruction.
9. A target object control apparatus, the apparatus comprising:
a memory for storing executable instructions; a processor for implementing the target object control method of any one of claims 1 to 7 when executing executable instructions stored in the memory.
10. A computer-readable storage medium storing executable instructions for causing a processor to implement the target object control method of any one of claims 1 to 7 when the executable instructions are executed.
CN202210303634.9A · Priority date: 2022-03-24 · Filing date: 2022-03-24 · Title: Target object control method, device, equipment and readable storage medium · Status: Pending · Publication: CN114862948A

Priority Applications (1)

Application Number: CN202210303634.9A · Priority date: 2022-03-24 · Filing date: 2022-03-24 · Title: Target object control method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number: CN202210303634.9A · Priority date: 2022-03-24 · Filing date: 2022-03-24 · Title: Target object control method, device, equipment and readable storage medium

Publications (1)

Publication Number: CN114862948A · Publication Date: 2022-08-05

Family

ID=82628844

Family Applications (1)

Application Number: CN202210303634.9A · Title: Target object control method, device, equipment and readable storage medium · Status: Pending · Priority date: 2022-03-24 · Filing date: 2022-03-24

Country Status (1)

Country: CN · Publication: CN114862948A


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination