CN113276105A - Robot control method and device - Google Patents
- Publication number: CN113276105A
- Application number: CN202010102598.0A
- Authority: CN (China)
- Prior art keywords: matrix, target, robot, original, position area
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by motion, path, trajectory planning
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Abstract
The application provides a robot control method and device. After a target image is acquired, a target matrix is generated from the target image, an original matrix is generated from the robots' original position information, and an intermediate matrix is then generated from the target matrix and the original matrix. The value of each element in the original matrix indicates whether a robot is present in the geographic position area corresponding to that element, and the value of each element in the target matrix indicates whether a robot is to stay in the corresponding area. The intermediate matrix serves as a transition between the two: the value of each of its elements indicates whether a robot should stay in the corresponding area while the robots move from their original positions to the target position areas indicated by the target matrix. The multiple robots can then be controlled to move in two stages based on the original, intermediate, and target matrices, so that they form the dot matrix corresponding to the target image more efficiently.
Description
Technical Field
The application relates to the technical field of computer application, in particular to a robot control method and device.
Background
In many scenes, multiple robots need to be controlled to gather automatically in a certain area and stay at positions determined by a certain rule, so as to form a certain graphic or character dot matrix.
In practice, however, many robots are usually needed to form a dot matrix. Even when a stopping position has been determined for each robot, the robots' original positions are uncertain, their routes may cross while moving, and controlling them to move synchronously can cause them to block one another. To avoid this, the current approach is to move the robots in batches: movement instructions are issued to different robots at different times, so that different robots travel to different positions at different times until the pattern or character dot matrix is complete. This method is inefficient and in many cases does not meet practical requirements.
Disclosure of Invention
Embodiments of the present application provide at least a robot control method and device to solve the above problems.
In a first aspect, an embodiment of the present application provides a robot control method, including:
acquiring a target image, and generating a target matrix based on the target image, where the value of each element in the target matrix indicates whether a robot is to stay in the geographic position area corresponding to that element;
generating an original matrix based on the original position information of the robots, where the value of each element in the original matrix indicates whether a robot is present in the geographic position area corresponding to that element, and elements at corresponding positions in the original matrix and the target matrix correspond to the same geographic position area;
generating an intermediate matrix based on the original matrix and the target matrix, where the value of each element in the intermediate matrix indicates whether a robot is to stay in the geographic position area corresponding to that element while the robots move from the original position areas indicated by the original matrix to the target position areas indicated by the target matrix; and
controlling the robots, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position areas indicated by the target matrix.
In an optional embodiment, generating the target matrix based on the target image includes: generating the target matrix based on the target image and the number of robots to be deployed.
In an optional embodiment, generating the target matrix based on the target image and the actual number of robots to be deployed includes: converting the target image into a bitmap of a preset size, where pixel points in the bitmap correspond one-to-one to elements in the target matrix; determining the currently required number of robots based on the pixel values of the pixel points in the bitmap; if the required number differs from the actual number of robots to be deployed, adjusting the pixel value of at least one pixel point in the bitmap and generating the target matrix from the adjusted pixel values; and if the required number equals the actual number of robots to be deployed, generating the target matrix directly from the pixel values of the pixel points in the bitmap.
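The adjustment step above can be illustrated with a minimal sketch. The function name and the surplus/deficit policy (flipping cells in row-major order) are our own assumptions for illustration, not details taken from the patent:

```python
def build_target_matrix(bitmap, num_robots):
    """bitmap: M x N list of lists of 0/1 (1 = a robot should stay there).
    Returns a copy whose number of 1s equals num_robots."""
    target = [row[:] for row in bitmap]
    cells = [(r, c) for r, row in enumerate(target) for c, _ in enumerate(row)]
    ones = [(r, c) for r, c in cells if target[r][c] == 1]
    required = len(ones)
    if required > num_robots:
        # Surplus occupied cells: clear the last ones in row-major order
        # (a real system might prefer to drop pattern-edge pixels instead).
        for r, c in ones[num_robots:]:
            target[r][c] = 0
    elif required < num_robots:
        # Deficit: occupy the first vacant cells in row-major order.
        vacant = [(r, c) for r, c in cells if target[r][c] == 0]
        for r, c in vacant[: num_robots - required]:
            target[r][c] = 1
    return target
```

When the counts already match, the bitmap is returned unchanged, which corresponds to the second branch of the embodiment above.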
In an optional embodiment, generating the intermediate matrix based on the original matrix and the target matrix includes: generating the intermediate matrix based on a predetermined constraint condition, the original matrix, and the target matrix.
In an alternative embodiment, the constraints include: the sum of the element values of each element in each row of the intermediate matrix is equal to the sum of the element values of each element in the corresponding row of the original matrix; and the sum of the element values of the elements in each column of the intermediate matrix is equal to the sum of the element values of the elements in the corresponding column of the target matrix.
In an alternative embodiment, the constraints include: the sum of the element values of each element in each row of the intermediate matrix is equal to the sum of the element values of each element in the corresponding row of the target matrix; and the sum of the element values of the elements in each column of the intermediate matrix is equal to the sum of the element values of the elements in the corresponding column of the original matrix.
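The first constraint variant can be satisfied constructively. The sketch below (names and the greedy policy are our assumptions; the patent does not prescribe an algorithm) builds a 0/1 intermediate matrix whose row sums come from the original matrix and whose column sums come from the target matrix, using a Gale-Ryser style greedy fill that succeeds whenever such a matrix exists:

```python
def intermediate_matrix(original, target):
    """Construct a 0/1 matrix whose row sums equal those of `original`
    and whose column sums equal those of `target`."""
    m, n = len(original), len(original[0])
    row_sums = [sum(row) for row in original]  # robots currently in each row
    col_sums = [sum(target[i][j] for i in range(m)) for j in range(n)]  # needed per column
    assert sum(row_sums) == sum(col_sums), "total robot counts must agree"
    remaining = col_sums[:]
    mid = [[0] * n for _ in range(m)]
    for i in range(m):
        # Place this row's robots into the columns with the largest
        # remaining demand, keeping every entry 0 or 1.
        cols = sorted(range(n), key=lambda j: -remaining[j])[: row_sums[i]]
        for j in cols:
            mid[i][j] = 1
            remaining[j] -= 1
    return mid
```

Swapping the roles of `original` and `target` (and transposing) gives the second constraint variant.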
In an optional embodiment, controlling the robots, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position areas indicated by the target matrix includes: controlling the robots, based on the original matrix and the intermediate matrix, to move to the intermediate position areas indicated by the intermediate matrix; and controlling the robots, based on the intermediate matrix and the target matrix, to move from the intermediate position areas to the target position areas indicated by the target matrix.
In an optional embodiment, the constraint condition includes: the sum of the element values in each row of the intermediate matrix equals the sum of the element values in the corresponding row of the original matrix, and the sum of the element values in each column of the intermediate matrix equals the sum of the element values in the corresponding column of the target matrix. In this case:

Controlling the robots, based on the original matrix and the intermediate matrix, to move to the intermediate position areas indicated by the intermediate matrix includes: for each row in the original matrix, generating element mapping information for that row from the element values in the row and the element values in the corresponding row of the intermediate matrix; and, based on that mapping information, controlling each robot in the geographic position areas of the row to move from the original position area indicated by the row's elements in the original matrix to the intermediate position area indicated by the row's elements in the intermediate matrix.

Controlling the robots, based on the intermediate matrix and the target matrix, to move from the intermediate position areas to the target position areas indicated by the target matrix includes: for each column in the intermediate matrix, generating element mapping information for that column from the element values in the column and the element values in the corresponding column of the target matrix; and, based on that mapping information, controlling each robot in the geographic position areas of the column to move from the intermediate position area indicated by the column's elements in the intermediate matrix to the target position area indicated by the column's elements in the target matrix.
In an optional embodiment, the constraint condition includes: the sum of the element values in each row of the intermediate matrix equals the sum of the element values in the corresponding row of the target matrix, and the sum of the element values in each column of the intermediate matrix equals the sum of the element values in the corresponding column of the original matrix. In this case, controlling the robots, based on the original matrix and the intermediate matrix, to move to the intermediate position areas indicated by the intermediate matrix includes: for each column in the original matrix, generating element mapping information for that column from the element values in the column and the element values in the corresponding column of the intermediate matrix; and, based on that mapping information, controlling each robot in the geographic position areas of the column to move from the original position area indicated by the column's elements in the original matrix to the intermediate position area indicated by the column's elements in the intermediate matrix.

Controlling the robots, based on the intermediate matrix and the target matrix, to move from the intermediate position areas to the target position areas indicated by the target matrix includes: for each row in the intermediate matrix, generating element mapping information for that row from the element values in the row and the element values in the corresponding row of the target matrix; and, based on that mapping information, controlling each robot in the geographic position areas of the row to move from the intermediate position area indicated by the row's elements in the intermediate matrix to the target position area indicated by the row's elements in the target matrix.
In a second aspect, an embodiment of the present application further provides a robot control device, including: a first generation module, configured to acquire a target image and generate a target matrix based on the target image, where the value of each element in the target matrix indicates whether a robot is to stay in the geographic position area corresponding to that element; a second generation module, configured to generate an original matrix based on the original position information of the robots, where the value of each element in the original matrix indicates whether a robot is present in the geographic position area corresponding to that element, and elements at corresponding positions in the original matrix and the target matrix correspond to the same geographic position area; a third generation module, configured to generate an intermediate matrix based on the original matrix and the target matrix, where the value of each element in the intermediate matrix indicates whether a robot is to stay in the geographic position area corresponding to that element while the robots move from the original position areas indicated by the original matrix to the target position areas indicated by the target matrix; and a control module, configured to control the robots, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position areas indicated by the target matrix.
In an optional embodiment, the first generation module, when generating the target matrix based on the target image, is configured to: generate the target matrix based on the target image and the number of robots to be deployed.
In an optional embodiment, the first generation module, when generating the target matrix based on the target image and the actual number of robots to be deployed, is configured to: convert the target image into a bitmap of a preset size, where pixel points in the bitmap correspond one-to-one to elements in the target matrix; determine the currently required number of robots based on the pixel values of the pixel points in the bitmap; if the required number differs from the actual number of robots to be deployed, adjust the pixel value of at least one pixel point in the bitmap and generate the target matrix from the adjusted pixel values; and if the required number equals the actual number of robots to be deployed, generate the target matrix directly from the pixel values of the pixel points in the bitmap.
In an optional embodiment, the third generation module, when generating the intermediate matrix based on the original matrix and the target matrix, is configured to: generate the intermediate matrix based on a predetermined constraint condition, the original matrix, and the target matrix.
In an alternative embodiment, the constraints include: the sum of the element values of each element in each row of the intermediate matrix is equal to the sum of the element values of each element in the corresponding row of the original matrix; and the sum of the element values of the elements in each column of the intermediate matrix is equal to the sum of the element values of the elements in the corresponding column of the target matrix.
In an alternative embodiment, the constraints include: the sum of the element values of each element in each row of the intermediate matrix is equal to the sum of the element values of each element in the corresponding row of the target matrix; and the sum of the element values of the elements in each column of the intermediate matrix is equal to the sum of the element values of the elements in the corresponding column of the original matrix.
In an optional embodiment, the control module, when controlling the robots, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position areas indicated by the target matrix, is configured to: control the robots, based on the original matrix and the intermediate matrix, to move to the intermediate position areas indicated by the intermediate matrix; and control the robots, based on the intermediate matrix and the target matrix, to move from the intermediate position areas to the target position areas indicated by the target matrix.
In an optional embodiment, the constraint condition includes: the sum of the element values in each row of the intermediate matrix equals the sum of the element values in the corresponding row of the original matrix, and the sum of the element values in each column of the intermediate matrix equals the sum of the element values in the corresponding column of the target matrix. In this case, the control module, when controlling the robots, based on the original matrix and the intermediate matrix, to move to the intermediate position areas indicated by the intermediate matrix, is configured to: for each row in the original matrix, generate element mapping information for that row from the element values in the row and the element values in the corresponding row of the intermediate matrix; and, based on that mapping information, control each robot in the geographic position areas of the row to move from the original position area indicated by the row's elements in the original matrix to the intermediate position area indicated by the row's elements in the intermediate matrix.

The control module, when controlling the robots, based on the intermediate matrix and the target matrix, to move from the intermediate position areas to the target position areas indicated by the target matrix, is configured to: for each column in the intermediate matrix, generate element mapping information for that column from the element values in the column and the element values in the corresponding column of the target matrix; and, based on that mapping information, control each robot in the geographic position areas of the column to move from the intermediate position area indicated by the column's elements in the intermediate matrix to the target position area indicated by the column's elements in the target matrix.
In an optional embodiment, the constraint condition includes: the sum of the element values in each row of the intermediate matrix equals the sum of the element values in the corresponding row of the target matrix, and the sum of the element values in each column of the intermediate matrix equals the sum of the element values in the corresponding column of the original matrix. In this case, the control module, when controlling the robots, based on the original matrix and the intermediate matrix, to move to the intermediate position areas indicated by the intermediate matrix, is configured to: for each column in the original matrix, generate element mapping information for that column from the element values in the column and the element values in the corresponding column of the intermediate matrix; and, based on that mapping information, control each robot in the geographic position areas of the column to move from the original position area indicated by the column's elements in the original matrix to the intermediate position area indicated by the column's elements in the intermediate matrix.

The control module, when controlling the robots, based on the intermediate matrix and the target matrix, to move from the intermediate position areas to the target position areas indicated by the target matrix, is configured to: for each row in the intermediate matrix, generate element mapping information for that row from the element values in the row and the element values in the corresponding row of the target matrix; and, based on that mapping information, control each robot in the geographic position areas of the row to move from the intermediate position area indicated by the row's elements in the intermediate matrix to the target position area indicated by the row's elements in the target matrix.
In a third aspect, an embodiment of the present application further provides a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or of any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the first aspect or of any possible implementation of the first aspect.
According to the robot control method and device provided by the present application, after a target image is acquired, a target matrix is generated from the target image, an original matrix is generated from the robots' original position information, and an intermediate matrix is then generated from the target matrix and the original matrix. The value of each element in the original matrix indicates whether a robot is present in the geographic position area corresponding to that element, and the value of each element in the target matrix indicates whether a robot is to stay in the corresponding area. The intermediate matrix serves as a transition between the two: the value of each of its elements indicates whether a robot should stay in the corresponding area while the robots move from their original positions to the target position areas indicated by the target matrix. The multiple robots can then be controlled to move in two stages based on the original, intermediate, and target matrices, so that they form the dot matrix corresponding to the target image more efficiently.
In order to make the aforementioned objects, features, and advantages of the present application more comprehensible, preferred embodiments are described in detail below with reference to the accompanying figures.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments, which are incorporated in and constitute a part of the specification, are briefly described below; the drawings illustrate embodiments consistent with the present application and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the application and are therefore not to be considered limiting of its scope; those skilled in the art can derive further related drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a robot control method according to an embodiment of the present disclosure;
fig. 2 shows an example of numbering M × N geographic position areas in the robot control method provided in the embodiment of the present application;
fig. 3 is a schematic diagram, provided in an embodiment of the present application, of establishing a mapping relationship between the elements whose value is 1 in P_s and Q_s;
fig. 4 is a schematic diagram, provided in an embodiment of the present application, of establishing a mapping relationship between the elements whose value is 1 in W_l and U_l;
FIG. 5 is a schematic diagram of a robot control apparatus provided in an embodiment of the present application;
fig. 6 shows a schematic diagram of a computer device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
The present application provides a robot control method: after a target image is acquired, a target matrix is generated from the target image, an original matrix is generated from the robots' original position information, and an intermediate matrix is then generated from the target matrix and the original matrix. The value of each element in the original matrix indicates whether a robot is present in the geographic position area corresponding to that element, and the value of each element in the target matrix indicates whether a robot is to stay in the corresponding area. The intermediate matrix serves as a transition between the two: the value of each of its elements indicates whether a robot should stay in the corresponding area while the robots move from their original positions to the target position areas indicated by the target matrix. The multiple robots can then be controlled to move in two stages based on the original, intermediate, and target matrices, so that they form the dot matrix corresponding to the target image more efficiently.
It should be noted that the above-mentioned drawbacks were identified by the inventors only after practice and careful study; therefore, the discovery of these problems and the solutions proposed below should both be regarded as the inventors' contribution to this application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the robot control method provided in the embodiments of the present application is first described in detail. The execution subject of the method is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the robot control method may be implemented by a processor calling computer-readable instructions stored in a memory.
The following describes a robot control method provided in an embodiment of the present application, taking an execution body as a terminal device as an example.
Referring to fig. 1, a flowchart of a robot control method provided in an embodiment of the present application is shown, where the method includes steps S101 to S104, where:
s101: acquiring a target image, and generating a target matrix based on the target image; the value of each element in the target matrix represents whether the robot is to stay in the geographic position area corresponding to the element;
s102: generating an original matrix based on original position information of the robots; the value of each element in the original matrix indicates whether a robot is present in the geographic position area corresponding to that element; elements at corresponding positions in the original matrix and the target matrix correspond to the same geographic position area;
s103: generating an intermediate matrix based on the original matrix and the target matrix; the value of each element in the intermediate matrix indicates whether a robot is to stay in the geographic position area corresponding to that element while the robots move from the original positions indicated by the original matrix to the target position areas indicated by the target matrix;
s104: and controlling the robot to move to a target position area to be moved into the robot, which is indicated by the target matrix, based on the original matrix, the intermediate matrix and the target matrix.
The following describes each of the above-mentioned steps S101 to S104 in detail.
I: In S101, the target image is, for example, a pattern to be formed by a dot matrix laid out by the controlled robots, such as an image generated according to a character lattice to be formed by the robots to be controlled, or an image generated according to a figure to be formed by the robots to be controlled, for example an enterprise logo, a flower, or an animal. The target image can be generated directly in the terminal device, or generated in another device and transmitted to the terminal device.
The format of the target image is, for example, a bitmap; a bitmap is a dot-matrix image formed by a plurality of pixel points, and the pattern in the bitmap is formed by the different pixel values of those pixel points. The pattern includes, for example, at least one of a character and a figure. In addition, the target image can be an image in another format; in that case, when the target matrix is generated, the format of the target image may first be converted into a bitmap, and the target matrix is then generated based on the converted bitmap.
When the target matrix is generated based on the target image, since a plurality of robots are controlled, based on the target image, to lay out a character or graphic dot matrix corresponding to the target image in a certain area, the area can first be divided into M × N cells, each cell being one geographic position area, with at most one robot staying in each geographic position area. The generated target matrix is then an M × N matrix whose elements correspond one to one to the geographic position areas, and the value of each element in the target matrix represents whether a robot is to stay in the geographic position area corresponding to that element.
In the case that the size of the target image is M × N, the target matrix can be generated directly based on the pixel values of the pixel points in the target image, i.e., one pixel point corresponds to one geographic position area. When the target matrix is generated from the pixel values in this way (the number of robots forming the dot matrix being limited), for a given pixel point in, for example, a grayscale bitmap: if the color corresponding to the pixel value of the pixel point tends toward black, a robot is considered to stop in the geographic position area corresponding to the pixel point, and the element value of the element corresponding to the pixel point is set to a first value; if the color corresponding to the pixel value tends toward white (including pure white), no robot is considered to stop in the geographic position area corresponding to the pixel point, and the element value of the element corresponding to the pixel point is set to a second value. The target matrix is generated based on this principle.
In the target matrix, the first value is, for example, 1, representing that a robot is to stop in the geographic position area corresponding to the element; the second value is, for example, 0, representing that no robot is to stop in the geographic position area corresponding to the element.
In addition, the first value and the second value may take other values, for example, the first value is 2 and the second value is 1; the specific setting can be made according to actual needs.
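The thresholding just described can be sketched as follows. This is a minimal illustration, not taken from the source: the function name `target_matrix_from_bitmap` and the grayscale threshold of 128 are our own assumptions; pixel values below the threshold are treated as "tends toward black" (first value 1), and the rest as "tends toward white" (second value 0).

```python
def target_matrix_from_bitmap(bitmap, threshold=128):
    """Build an M x N target matrix from an M x N grayscale bitmap.

    bitmap: list of M rows, each a list of N grayscale values (0..255).
    A dark pixel (value below `threshold`, an assumed cutoff) marks a cell
    where a robot is to stop (element value 1); a light pixel marks an
    empty cell (element value 0).
    """
    return [[1 if px < threshold else 0 for px in row] for row in bitmap]


bitmap = [
    [10, 200, 20],
    [250, 30, 240],
]
print(target_matrix_from_bitmap(bitmap))  # [[1, 0, 1], [0, 1, 0]]
```

Each 1 in the result corresponds to a geographic position area into which a robot is to be moved.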
In the case that the size of the target image is not M × N, the target image may be divided, according to its pixel points, into M × N sub-images of the same size, with each sub-image corresponding to one geographic position area; the target matrix is then generated according to the pixel values of the pixel points in each sub-image.
In this case, for example, the average of the pixel values of the pixel points in a sub-image may be used as the basis for determining whether a robot is to be parked in the geographic position area corresponding to that sub-image. For example, if the color represented by the average pixel value of a certain sub-image tends toward black, a robot is considered to stop in the geographic position area corresponding to the sub-image, and the element value of the element corresponding to the sub-image is set to the first value; if the color represented by the average pixel value of a certain sub-image tends toward white, no robot is considered to stop in the geographic position area corresponding to the sub-image, and the element value of the element corresponding to the sub-image is set to the second value.
In another embodiment of the present application, in the case that the size of the target image is not M × N, the target image may instead be converted into a bitmap of a preset size, in which the pixel points correspond one to one to the elements of the target matrix. Here, the pixel value of each pixel point in the bitmap may be obtained, for example, by averaging the pixel values of at least one pixel point in the target image. After the target image is converted into the bitmap of the preset size, the pixel value of any pixel point in the bitmap is either the first value or the second value. The principle is similar to that of dividing the target image into a plurality of sub-images, each sub-image corresponding to one pixel point of the generated bitmap, and is not described in detail here.
And after the bitmap is generated, directly generating a target matrix based on the pixel value of each pixel point in the bitmap.
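The conversion to a preset-size bitmap can be sketched by block averaging, as follows. This is a minimal sketch under stated assumptions: the function name `downsample_to_bitmap`, the equal integer division of the image into blocks, and the threshold of 128 applied to the block average are our own choices, not specified by the source.

```python
def downsample_to_bitmap(image, m, n, threshold=128):
    """Convert a grayscale image into an m x n bitmap of first/second values.

    The image is divided into m x n equally sized sub-images; the pixel
    values of each sub-image are averaged, and the average is thresholded
    (an assumed cutoff) to yield 1 (robot stops here) or 0 (no robot).
    """
    rows, cols = len(image), len(image[0])
    bitmap = []
    for bi in range(m):
        row = []
        for bj in range(n):
            # collect the pixels of the (bi, bj)-th sub-image
            block = [image[i][j]
                     for i in range(bi * rows // m, (bi + 1) * rows // m)
                     for j in range(bj * cols // n, (bj + 1) * cols // n)]
            avg = sum(block) / len(block)
            row.append(1 if avg < threshold else 0)
        bitmap.append(row)
    return bitmap


# A 2x2 image reduced to a 2x1 bitmap: dark top row, light bottom row.
print(downsample_to_bitmap([[0, 0], [255, 255]], 2, 1))  # [[1], [0]]
```

The resulting bitmap can then be turned into the target matrix element for element, as described above.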
In another embodiment of the present application, since the number of robots is limited in many cases, in order to enable the limited number of robots to form a lattice corresponding to the target image, the number of robots to be deployed may be considered when generating the target matrix based on the target image.
For example, if, in the process of generating the target matrix based on the target image, it is found that the number of elements whose element values are the first values is greater than the number of robots to be deployed, the element values of some of the elements may be transformed from the first values to the second values, so that the number of elements whose element values are the first values in the finally formed target matrix is equal to the number of robots to be deployed.
Here, for example, the elements whose element values are changed to the second value may be selected from the plurality of elements whose element values are the first value in such a manner that the selected elements are distributed uniformly across the target image.
In addition, an element whose element value is changed to a second value may be selected from a plurality of elements whose element values are the first values in a random selection manner.
If, in the process of generating the target matrix based on the target image, the number of elements whose element values are the first value is found to be smaller than the number of robots to be deployed, the element values of some of the elements may be converted from the second value to the first value, so that the number of elements whose element values are the first value in the finally formed target matrix is equal to the number of robots to be deployed.
Here, for example, the element values of the elements of the target matrix located at the edge may be converted from the second values to the first values, so that the number of elements of the finally formed target matrix whose element values are the first values is equal to the number of robots to be deployed.
Specifically, an embodiment of the present application provides a method for generating a target matrix based on a target image and the number of robots to be deployed, including:
converting the target image into a bitmap with a preset size; wherein pixel points in the bitmap correspond to elements in the target matrix one to one;
determining the required quantity of the current robot based on the pixel value of each pixel point in the bitmap;
under the condition that the number of the current robots required is different from the number of the robots to be deployed, adjusting the pixel value of at least one pixel point in the bitmap;
generating the target matrix based on the pixel value of each pixel point in the bitmap after the pixel value is adjusted;
and under the condition that the required number of the current robots is the same as the number of the robots to be deployed, generating the target matrix based on the pixel values of all the pixel points in the bitmap.
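The count-matching procedure above can be sketched as follows. This is a minimal illustration under stated assumptions: the function name `adjust_to_robot_count` is our own, the surplus case uses the random-selection option described above, and the deficit case uses the edge-element option described above; the source allows other selection policies.

```python
import random


def adjust_to_robot_count(matrix, num_robots, rng=random):
    """Flip element values so exactly num_robots cells hold the first value 1."""
    ones = [(i, j) for i, row in enumerate(matrix)
            for j, v in enumerate(row) if v == 1]
    if len(ones) > num_robots:
        # Too many occupied cells: change surplus elements from 1 to 0
        # (random selection, one of the options described above).
        for i, j in rng.sample(ones, len(ones) - num_robots):
            matrix[i][j] = 0
    elif len(ones) < num_robots:
        # Too few: change edge elements from 0 to 1, as described above.
        m, n = len(matrix), len(matrix[0])
        edge_zeros = [(i, j) for i, row in enumerate(matrix)
                      for j, v in enumerate(row)
                      if v == 0 and (i in (0, m - 1) or j in (0, n - 1))]
        for i, j in edge_zeros[:num_robots - len(ones)]:
            matrix[i][j] = 1
    return matrix
```

After adjustment, the number of 1-valued elements equals the number of robots to be deployed, so the target matrix can be generated directly from the adjusted values.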
II: Regarding S102: S101 and S102 need not be executed in the order shown; either may be performed first.
The original position information of the robots refers to the specific position of each robot within the M × N cells, and can be obtained, for example, by positioning each robot with a positioning device installed on it. The positioning device is, for example, a code scanning device: a two-dimensional code or bar code is arranged in each cell, and the code in each cell is different; when the original position information is acquired, the code scanning device installed on a robot can be started to scan the two-dimensional code or bar code in the cell where the robot is located, and the original position information of each robot is determined based on the scanning result. The original position information of the robots may also be acquired by other methods.
When the original matrix is generated based on the original position information of the robots, if a robot stays in the geographic position area corresponding to a certain cell, the element value of the element corresponding to that geographic position area is determined as the first value; if no robot stays in the geographic position area corresponding to a certain cell, the element value of the element corresponding to that geographic position area is determined as the second value. The original matrix is thereby generated.
It should be noted here that the robot control method provided in the embodiments of the present application is described with respect to a single target image at a time.
If there are a plurality of acquired target images, the original matrix of any target image except the first target image is actually the target matrix formed by the previous target image.
For example, if 4 target images a1, a2, a3 and a4 are acquired: for a1, the corresponding original matrix is the original matrix generated based on the original position information of each robot; for a2, the corresponding original matrix is the target matrix generated based on a1; for a3, the corresponding original matrix is the target matrix generated based on a2; and for a4, the corresponding original matrix is the target matrix generated based on a3.
III: in S103, the intermediate matrix may be generated, for example, as follows: and generating the intermediate matrix based on a predetermined constraint condition and the original matrix and the target matrix.
In a specific implementation, the constraint includes, for example, at least one of the following conditions a and B:
condition a: the sum of the element values of each element in each row of the intermediate matrix is equal to the sum of the element values of each element in the corresponding row of the original matrix; and is
The sum of the element values of the elements in each column of the intermediate matrix is equal to the sum of the element values of the elements in the corresponding column of the target matrix.
Condition B: the sum of the element values of each element in each row of the intermediate matrix is equal to the sum of the element values of each element in the corresponding row of the target matrix; and is
The sum of the element values of the elements in each column of the intermediate matrix is equal to the sum of the element values of the elements in the corresponding column of the original matrix.
In order to form a lattice corresponding to the target matrix by a plurality of robots, an intermediate matrix is first generated based on predetermined constraint conditions.
Specifically, when the intermediate matrix is generated, if the generated intermediate matrix satisfies condition A, the robots are controlled to move first in the east-west direction and then in the north-south direction. In practice, all robots are first controlled to move in the east-west direction, so that after this movement is completed all robots are located in the intermediate position areas indicated by the intermediate matrix; all robots are then controlled to move in the north-south direction, so that after this movement is completed all robots are located in the target position areas indicated by the target matrix.
When the intermediate matrix is generated, if the generated intermediate matrix satisfies condition B, the robots are controlled to move first in the north-south direction and then in the east-west direction. In practice, all robots are first controlled to move in the north-south direction, so that after this movement is completed all robots are located in the intermediate position areas indicated by the intermediate matrix; all robots are then controlled to move in the east-west direction, so that after this movement is completed all robots are located in the target position areas indicated by the target matrix.
When the intermediate matrix is generated, in some cases an intermediate matrix satisfying condition A can be generated while one satisfying condition B cannot; the robot movement is then controlled based on the intermediate matrix satisfying condition A. In other cases, an intermediate matrix satisfying condition B can be generated while one satisfying condition A cannot; the robot movement is then controlled based on the intermediate matrix satisfying condition B. In still other cases, intermediate matrices satisfying either condition can be generated, and the robot motion may be controlled using an intermediate matrix corresponding to either condition A or condition B.
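Condition A amounts to constructing a 0-1 matrix with prescribed row sums (taken from the original matrix) and prescribed column sums (taken from the target matrix). The source does not specify how such a matrix is produced; the sketch below uses one standard option, a Gale-Ryser style greedy construction, with the function name `intermediate_matrix` and the `None`-on-failure convention being our own assumptions. Returning `None` corresponds to the cases discussed above in which no intermediate matrix satisfying condition A exists.

```python
def intermediate_matrix(original, target):
    """Build a binary matrix Q whose row sums match `original` and whose
    column sums match `target` (condition A), via a greedy Gale-Ryser
    construction. Returns None if no such matrix exists."""
    m, n = len(original), len(original[0])
    row_sums = [sum(r) for r in original]
    col_need = [sum(target[i][j] for i in range(m)) for j in range(n)]
    q = [[0] * n for _ in range(m)]
    # Fill rows in decreasing order of demand, giving each row's 1s to the
    # columns with the largest remaining need.
    for i in sorted(range(m), key=lambda i: -row_sums[i]):
        cols = sorted(range(n), key=lambda j: -col_need[j])[:row_sums[i]]
        if any(col_need[j] <= 0 for j in cols):
            return None  # condition A cannot be satisfied
        for j in cols:
            q[i][j] = 1
            col_need[j] -= 1
    if any(col_need):
        return None
    return q
```

A condition B matrix can be obtained the same way with the roles of the original and target matrices swapped, since condition B prescribes row sums from the target matrix and column sums from the original matrix.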
IV: in S104, after generating the intermediate matrix, the robot can be controlled to move to the target position area to be moved into the robot, which is indicated by the target matrix, based on the original matrix, the intermediate matrix, and the target matrix.
Specifically, the method comprises the following steps:
step (1): controlling the robot to move to an intermediate position area indicated by the intermediate matrix to be moved into the robot based on the original matrix and the intermediate matrix;
step (2): and controlling the robot to move from the intermediate position area to the target position area to be moved into the robot, which is indicated by the target matrix, based on the intermediate matrix and the target matrix.
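Steps (1) and (2) under condition A can be sketched as a two-stage move plan. This is a minimal illustration of the pairing scheme detailed below, not the source's implementation: the function name `plan_moves` and the move-tuple format are our own, the k-th occupied cell of each row or column is paired with the k-th required cell, and pairs whose endpoints coincide produce no move instruction.

```python
def plan_moves(original, intermediate, target):
    """Two-stage move plan under condition A.

    Stage 1 moves robots within each row (east-west); stage 2 moves them
    within each column (north-south). Each move is
    ((from_row, from_col), (to_row, to_col)); moves whose endpoints are
    identical are dropped, since that robot is already in place.
    """
    moves1, moves2 = [], []
    # Stage 1: pair the k-th occupied cell of each original-matrix row
    # with the k-th occupied cell of the same intermediate-matrix row.
    for s, (p_row, q_row) in enumerate(zip(original, intermediate)):
        src = [j for j, v in enumerate(p_row) if v]
        dst = [j for j, v in enumerate(q_row) if v]
        moves1 += [((s, a), (s, b)) for a, b in zip(src, dst) if a != b]
    # Stage 2: pair the k-th occupied cell of each intermediate-matrix
    # column with the k-th occupied cell of the same target-matrix column.
    m, n = len(target), len(target[0])
    for l in range(n):
        src = [i for i in range(m) if intermediate[i][l]]
        dst = [i for i in range(m) if target[i][l]]
        moves2 += [((a, l), (b, l)) for a, b in zip(src, dst) if a != b]
    return moves1, moves2
```

Executing all stage-1 moves before any stage-2 move matches the anti-deadlock ordering described later, in which every robot first reaches its intermediate position area.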
For the case where the intermediate matrix is generated under condition A, the step (1) specifically includes:
aiming at each row in the original matrix, generating element mapping information corresponding to the row according to the element value of each element in the row and the element value of each element in the corresponding row in the intermediate matrix;
and controlling, based on the element mapping information corresponding to the row, each robot in the geographic position areas corresponding to the row to move from the original position areas indicated by the row's elements in the original matrix to the intermediate position areas indicated by the row's elements in the intermediate matrix.
Specifically, the element mapping information of the row includes a plurality of element pairs having a mapping relationship: one element of each pair is an element whose element value is the first value among the row's elements in the original matrix; the other is an element whose element value is the first value among the corresponding row's elements in the intermediate matrix.
When each robot in the geographic position areas corresponding to the row is controlled to move based on the element mapping information corresponding to the row, each robot is controlled to change position within the row, so as to move to the intermediate position area, indicated by the row's elements in the intermediate matrix, into which a robot is to be moved.
In specific implementation: under the limitation of condition A, the sum of the element values of each row of the generated intermediate matrix equals that of the corresponding row of the original matrix, and the sum of the element values of each column of the intermediate matrix equals that of the corresponding column of the target matrix. Each robot is therefore first controlled to change position within its row, according to the original geographic position area where it is located and the intermediate position area indicated by the intermediate matrix, so that once all robots reach their corresponding intermediate position areas, the dot matrix pattern they form is the position pattern indicated by the intermediate matrix.
For example, the space where the robots move is an area with M cells in the north-south direction and N cells in the east-west direction, i.e., M × N geographic position areas; each geographic position area is then numbered according to a certain rule. As shown in fig. 2, an example of numbering the M × N geographic position areas is provided, with the origin at (0, 0). In this example, any geographic position area is represented as (i, j), where i denotes the i-th geographic position area in the north-south direction and satisfies 0 ≤ i ≤ M-1, and j denotes the j-th geographic position area in the east-west direction and satisfies 0 ≤ j ≤ N-1.
Any element in the original matrix is represented as P(i,j): if P(i,j) has a value of 1, a robot is currently parked in the geographic position area (i, j) corresponding to the element; if P(i,j) has a value of 0, no robot is currently parked in the geographic position area (i, j) corresponding to the element.
Any element in the target matrix is represented as U(i,j): if U(i,j) has a value of 1, a robot is to stop in the geographic position area (i, j) corresponding to the element; if U(i,j) has a value of 0, no robot is to stop in the geographic position area (i, j) corresponding to the element.
Any element in the intermediate matrix is represented as Q(i,j): if Q(i,j) has a value of 1, a robot is to be held in the geographic position area (i, j) corresponding to the element during the movement of the robots from the original position areas to the target position areas indicated by the target matrix; if Q(i,j) has a value of 0, no robot is to be held in the geographic position area (i, j) corresponding to the element during that movement.
Suppose the s-th row P_s of the original matrix satisfies P_s = (P(s,0), P(s,1), …, P(s,N-1)), and the corresponding s-th row Q_s of the intermediate matrix satisfies Q_s = (Q(s,0), Q(s,1), …, Q(s,N-1)); when a robot is parked in the geographic position area corresponding to an element of the s-th row of the original matrix, that element's value is 1, and otherwise 0.
If there are H elements with element value 1 in the s-th row of the original matrix, then, by condition A, there are also H elements with element value 1 in the s-th row of the intermediate matrix.
A mapping relationship is then established between the elements of P_s and Q_s whose element value is 1: the first 1-valued element of P_s and the first 1-valued element of Q_s form the first element pair; the second 1-valued element of P_s and the second 1-valued element of Q_s form the second element pair; …; the H-th 1-valued element of P_s and the H-th 1-valued element of Q_s form the H-th element pair. The H element pairs thus generated constitute the element mapping information corresponding to P_s and Q_s for the s-th row.
As shown in fig. 3, a schematic diagram of establishing the mapping relationship between the 1-valued elements of P_s and Q_s is provided. In this example N = 8, and there are 4 robots in the row, i.e., H = 4. Each of the 4 robots is located in one of the original position areas indicated by the 1-valued elements of P_s; to move the 4 robots into the intermediate position areas indicated by the 1-valued elements of Q_s, 4 element pairs are formed, and each robot is moved from the original geographic position area of its pair to the corresponding intermediate geographic position area.
It should be noted here that, where the two elements of an element pair correspond to the same geographic position area, no movement instruction need be issued to the robot located in that geographic position area.
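The in-row pairing just described can be sketched as follows. This is a minimal illustration; the function name `row_element_pairs` is our own, and the example row contents are invented at the scale of the fig. 3 example (N = 8, H = 4), since the figure's actual cell indices are not reproduced in the text.

```python
def row_element_pairs(p_row, q_row):
    """Pair the k-th 1-valued element of the original-matrix row with the
    k-th 1-valued element of the intermediate-matrix row.

    Returns (from_column, to_column) pairs; condition A guarantees both
    rows contain the same number of 1-valued elements.
    """
    src = [j for j, v in enumerate(p_row) if v == 1]
    dst = [j for j, v in enumerate(q_row) if v == 1]
    assert len(src) == len(dst)  # guaranteed by condition A
    return list(zip(src, dst))


p = [0, 1, 0, 1, 1, 0, 0, 1]  # original row: robots in columns 1, 3, 4, 7
q = [1, 0, 1, 0, 0, 1, 1, 0]  # intermediate row: robots go to 0, 2, 5, 6
print(row_element_pairs(p, q))  # [(1, 0), (3, 2), (4, 5), (7, 6)]
```

The column-wise mapping of step (2) works identically with columns of the intermediate and target matrices in place of rows.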
In this case, the step (2) specifically includes:
aiming at each column in the intermediate matrix, generating element mapping information corresponding to the column according to the element value of each element in the column and the element value of each element in the corresponding column in the target matrix;
and controlling each robot in the geographic position area corresponding to the column to move to the target position area indicated by the column element in the target matrix from the intermediate position area indicated by the column element in the intermediate matrix based on the element mapping information corresponding to the column.
Here, the element mapping information of the column likewise includes a plurality of element pairs having a mapping relationship: one element of each pair is an element whose element value is the first value among the column's elements in the intermediate matrix; the other is an element whose element value is the first value among the corresponding column's elements in the target matrix.
When each robot in the geographic position areas corresponding to the column is controlled to move based on the element mapping information corresponding to the column, each robot is controlled to change position within the column, so as to move to the target position area, indicated by the column's elements in the target matrix, into which a robot is to be moved.
Suppose the l-th column W_l of the intermediate matrix satisfies W_l = (Q(0,l), Q(1,l), …, Q(M-1,l)), and the corresponding l-th column U_l of the target matrix satisfies U_l = (U(0,l), U(1,l), …, U(M-1,l)); when a robot is held in the geographic position area corresponding to an element of the l-th column of the intermediate matrix, that element's value is 1, and otherwise 0.
If there are K elements with element value 1 in the l-th column of the intermediate matrix, then there are also K elements with element value 1 in the l-th column of the target matrix.
A mapping relationship is then established between the elements of W_l and U_l whose element value is 1: the first 1-valued element of W_l and the first 1-valued element of U_l form the first element pair; the second 1-valued element of W_l and the second 1-valued element of U_l form the second element pair; …; the K-th 1-valued element of W_l and the K-th 1-valued element of U_l form the K-th element pair. The K element pairs thus generated constitute the element mapping information corresponding to W_l and U_l.
As shown in fig. 4, a schematic diagram of establishing the mapping relationship between the 1-valued elements of W_l and U_l is provided. In this example M = 9, and there are 4 robots in the column, i.e., K = 4. Each of the 4 robots is located in one of the intermediate position areas indicated by the 1-valued elements of W_l; to move the 4 robots into the target position areas indicated by the 1-valued elements of U_l, 4 element pairs are formed, and each robot is moved from the intermediate geographic position area of its pair to the corresponding target geographic position area.
Based on step (1) and step (2), the robots are thus moved from the original position areas indicated by the original matrix to the target position areas indicated by the target matrix, forming a robot dot matrix corresponding to the target image.
In addition, when the intermediate matrix is generated under condition B, the step (1) specifically includes:
aiming at each column in the original matrix, generating element mapping information corresponding to the column according to the element value of each element in the column and the element value of each element in the corresponding column in the intermediate matrix;
controlling each robot in the geographic position area corresponding to the column to move from the original position area indicated by the column of elements in the original matrix to the middle position area indicated by the column of elements in the middle matrix based on the element mapping information corresponding to the column;
the step (1) specifically includes:
aiming at each row in the intermediate matrix, generating element mapping information corresponding to the row according to the element value of each element in the row and the element value of each element in the corresponding row in the target matrix;
and controlling, based on the element mapping information corresponding to the row, each robot in the geographic position areas corresponding to the row to move from the intermediate position areas indicated by the row's elements in the intermediate matrix to the target position areas indicated by the row's elements in the target matrix.
The specific implementation manner is similar to the manner when the constraint condition is the condition a to generate the intermediate matrix, and is not described herein again.
In addition, in another embodiment of the present application, in order to avoid congestion and deadlock of the robots during movement, in the process of moving the robots from the original position areas indicated by the original matrix to the target position areas indicated by the target matrix, all robots are first controlled to reach the intermediate position areas indicated by the intermediate matrix, and only then are the robots controlled to move from the intermediate position areas indicated by the intermediate matrix to the target position areas indicated by the target matrix.
According to the method and the device, after the target image is obtained, a target matrix is generated based on the target image, an original matrix is generated based on the original position information of the robots, and an intermediate matrix is then generated from the target matrix and the original matrix. The value of each element in the original matrix represents whether a robot exists in the geographic position area corresponding to the element; the value of each element in the target matrix represents whether a robot is to stay in the geographic position area corresponding to the element; and the intermediate matrix serves as a transition between the original matrix and the target matrix, the value of each of its elements representing whether a robot is to stay in the corresponding geographic position area while the robots move from the original position areas to the target position areas indicated by the target matrix. The plurality of robots can then be controlled to move in two stages based on the original matrix, the intermediate matrix and the target matrix, so as to reach the target position areas indicated by the target matrix, enabling the plurality of robots to form the dot matrix corresponding to the target image with higher efficiency.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present application further provides a robot control device corresponding to the robot control method, and since the principle of solving the problem of the device in the embodiment of the present application is similar to that of the robot control method in the embodiment of the present application, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 5, a schematic diagram of a robot control apparatus provided in an embodiment of the present application is shown, where the apparatus includes: a first generation module 51, a second generation module 52, a third generation module 53, and a control module 54; wherein:
a first generating module 51 is configured to obtain a target image and generate a target matrix based on the target image; the value of each element in the target matrix represents whether a robot is to stay in the geographic position area corresponding to the element;
a second generating module 52 is configured to generate an original matrix based on original position information of the robots; the value of each element in the original matrix represents whether a robot exists in the geographic position area corresponding to the element; elements at corresponding positions in the original matrix and the target matrix correspond to the same geographic position area;
a third generating module 53 is configured to generate an intermediate matrix based on the original matrix and the target matrix; the element value of each element in the intermediate matrix represents whether a robot is to stay in the geographic position area corresponding to the element in the process of moving from the original position area indicated by the original matrix to the target position area indicated by the target matrix;
and a control module 54 is configured to control the robots, based on the original matrix, the intermediate matrix and the target matrix, to move to the target position areas, indicated by the target matrix, into which robots are to be moved.
In one possible embodiment, the first generating module 51, when generating the target matrix based on the target image, is configured to:
generate the target matrix based on the target image and the number of robots to be deployed.
In one possible embodiment, the first generating module 51, when generating the target matrix based on the target image and the actual number of robots to be deployed, is configured to:
convert the target image into a bitmap of a preset size, wherein the pixel points in the bitmap correspond one-to-one to the elements in the target matrix;
determine the currently required number of robots based on the pixel value of each pixel point in the bitmap;
in the case that the currently required number of robots differs from the actual number of robots to be deployed, adjust the pixel value of at least one pixel point in the bitmap, and generate the target matrix based on the pixel value of each pixel point in the adjusted bitmap;
and in the case that the currently required number of robots is the same as the actual number of robots to be deployed, generate the target matrix based on the pixel value of each pixel point in the bitmap.
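The bitmap conversion and robot-count adjustment described above can be sketched in Python as follows. This is only an illustrative sketch, not the patented implementation: the function name, the nearest-neighbour resize, the 0/255 threshold, and the rule for choosing which pixels to flip are all assumptions made here for concreteness.

```python
import numpy as np

def image_to_target_matrix(image, size, robot_count, threshold=128):
    """Convert a grayscale image into a binary target matrix of shape (size, size).

    A 1-element means a robot should stay in the corresponding geographic
    position area; the total number of 1s is adjusted to robot_count.
    """
    h, w = image.shape
    rows = (np.arange(size) * h) // size          # nearest-neighbour resize (assumed)
    cols = (np.arange(size) * w) // size
    bitmap = (image[np.ix_(rows, cols)] < threshold).astype(int)  # dark pixel = robot

    required = int(bitmap.sum())                  # robots the pattern currently needs
    if required > robot_count:                    # too many cells set: clear some
        ones = np.argwhere(bitmap == 1)
        for r, c in ones[: required - robot_count]:
            bitmap[r, c] = 0
    elif required < robot_count:                  # too few cells set: fill some
        zeros = np.argwhere(bitmap == 0)
        for r, c in zeros[: robot_count - required]:
            bitmap[r, c] = 1
    return bitmap
```

Which pixels get flipped when the counts differ is left open by the text; flipping the first cells in scan order is merely the simplest choice.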
In a possible implementation, the third generating module 53, when generating an intermediate matrix based on the original matrix and the target matrix, is configured to:
generate the intermediate matrix based on a predetermined constraint condition, the original matrix, and the target matrix.
In one possible embodiment, the constraint condition includes:
the sum of the element values in each row of the intermediate matrix is equal to the sum of the element values in the corresponding row of the original matrix; and
the sum of the element values in each column of the intermediate matrix is equal to the sum of the element values in the corresponding column of the target matrix.
In one possible embodiment, the constraint condition includes:
the sum of the element values in each row of the intermediate matrix is equal to the sum of the element values in the corresponding row of the target matrix; and
the sum of the element values in each column of the intermediate matrix is equal to the sum of the element values in the corresponding column of the original matrix.
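Either constraint variant amounts to constructing a 0/1 matrix with prescribed row sums and column sums. As an illustration of the first variant (row sums taken from the original matrix, column sums from the target matrix), the sketch below uses a greedy Gale–Ryser-style fill. The patent does not specify a construction, so this is only one feasible choice, and it assumes the instance is feasible (both matrices hold the same total number of robots).

```python
import numpy as np

def intermediate_matrix(original, target):
    """Build a binary intermediate matrix whose row sums equal the original
    matrix's row sums and whose column sums equal the target matrix's
    column sums (first constraint variant; illustrative only)."""
    row_sums = original.sum(axis=1)               # robots available per row
    col_need = target.sum(axis=0).astype(int)     # robots required per column
    mid = np.zeros_like(original)
    # Greedy fill: handle rows with the most robots first, always placing
    # them into the columns with the largest remaining demand.
    for i in np.argsort(-row_sums):
        cols = np.argsort(-col_need)[: row_sums[i]]
        mid[i, cols] = 1
        col_need[cols] -= 1
    return mid
```

The greedy order matters: processing large rows first into the neediest columns is the standard construction for binary matrices with given margins, but other valid intermediate matrices generally exist.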
In a possible embodiment, the control module 54, when controlling the robot, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position area, indicated by the target matrix, into which the robot is to move, is configured to:
control the robot, based on the original matrix and the intermediate matrix, to move to the intermediate position area, indicated by the intermediate matrix, into which the robot is to move;
and control the robot, based on the intermediate matrix and the target matrix, to move from the intermediate position area to the target position area, indicated by the target matrix, into which the robot is to move.
In a possible embodiment, where the constraint condition includes that the sum of the element values in each row of the intermediate matrix is equal to the sum of the element values in the corresponding row of the original matrix, and the sum of the element values in each column of the intermediate matrix is equal to the sum of the element values in the corresponding column of the target matrix:
the control module 54, when controlling the robot, based on the original matrix and the intermediate matrix, to move to the intermediate position area, indicated by the intermediate matrix, into which the robot is to move, is configured to:
for each row in the original matrix, generate element mapping information corresponding to the row according to the element values in that row and the element values in the corresponding row of the intermediate matrix;
control each robot in the geographic position area corresponding to the row, based on the element mapping information corresponding to the row, to move from the original position area indicated by the elements of that row in the original matrix to the intermediate position area indicated by the elements of the corresponding row in the intermediate matrix;
the control module 54, when controlling the robot, based on the intermediate matrix and the target matrix, to move from the intermediate position area to the target position area, indicated by the target matrix, into which the robot is to move, is configured to:
for each column in the intermediate matrix, generate element mapping information corresponding to the column according to the element values in that column and the element values in the corresponding column of the target matrix;
and control each robot in the geographic position area corresponding to the column, based on the element mapping information corresponding to the column, to move from the intermediate position area indicated by the elements of that column in the intermediate matrix to the target position area indicated by the elements of the corresponding column in the target matrix.
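The row-then-column decomposition can be illustrated as follows: each robot first moves within its row to a column assigned by the intermediate matrix, then within that column to a row required by the target matrix. Pairing occupied cells in ascending order is one simple mapping rule chosen here for illustration; the patent leaves the exact element mapping open.

```python
import numpy as np

def plan_moves(original, mid, target):
    """Sketch of the two-phase plan: (1) within-row moves from the original
    matrix to the intermediate matrix, (2) within-column moves from the
    intermediate matrix to the target matrix. Returns two lists of
    ((row, col) -> (row, col)) moves."""
    row_moves, col_moves = [], []
    for i in range(original.shape[0]):
        src = np.flatnonzero(original[i])         # columns occupied now
        dst = np.flatnonzero(mid[i])              # columns occupied after phase 1
        # Equal lengths are guaranteed by the row-sum constraint.
        row_moves += [((i, a), (i, b)) for a, b in zip(src, dst)]
    for j in range(mid.shape[1]):
        src = np.flatnonzero(mid[:, j])           # rows occupied after phase 1
        dst = np.flatnonzero(target[:, j])        # rows required by the target
        # Equal lengths are guaranteed by the column-sum constraint.
        col_moves += [((a, j), (b, j)) for a, b in zip(src, dst)]
    return row_moves, col_moves
```

Because phase 1 never changes a robot's row and phase 2 never changes its column, the two phases cannot conflict across rows or columns, which is the point of routing through the intermediate matrix.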
In a possible embodiment, where the constraint condition includes that the sum of the element values in each row of the intermediate matrix is equal to the sum of the element values in the corresponding row of the target matrix, and the sum of the element values in each column of the intermediate matrix is equal to the sum of the element values in the corresponding column of the original matrix:
the control module 54, when controlling the robot, based on the original matrix and the intermediate matrix, to move to the intermediate position area, indicated by the intermediate matrix, into which the robot is to move, is configured to:
for each column in the original matrix, generate element mapping information corresponding to the column according to the element values in that column and the element values in the corresponding column of the intermediate matrix;
control each robot in the geographic position area corresponding to the column, based on the element mapping information corresponding to the column, to move from the original position area indicated by the elements of that column in the original matrix to the intermediate position area indicated by the elements of the corresponding column in the intermediate matrix;
the control module 54, when controlling the robot, based on the intermediate matrix and the target matrix, to move from the intermediate position area to the target position area, indicated by the target matrix, into which the robot is to move, is configured to:
for each row in the intermediate matrix, generate element mapping information corresponding to the row according to the element values in that row and the element values in the corresponding row of the target matrix;
and control each robot in the geographic position area corresponding to the row, based on the element mapping information corresponding to the row, to move from the intermediate position area indicated by the elements of that row in the intermediate matrix to the target position area indicated by the elements of the corresponding row in the target matrix.
For the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the related description in the above method embodiments; details are not repeated here.
An embodiment of the present application further provides a computer device 60. As shown in fig. 6, a schematic structural diagram of the computer device 60 provided in an embodiment of the present application, the device includes: a processor 61, a memory 62, and a bus 63. The memory 62 stores machine-readable instructions executable by the processor 61. When the computer device 60 runs, the processor 61 and the memory 62 communicate via the bus 63, and the machine-readable instructions, when executed by the processor 61, perform the following steps:
acquiring a target image, and generating a target matrix based on the target image; the value of each element in the target matrix represents whether a robot is to stay in the geographic position area corresponding to the element;
generating an original matrix based on original position information of the robot; the value of each element in the original matrix represents whether a robot is present in the geographic position area corresponding to the element; elements at corresponding positions in the original matrix and the target matrix correspond to the same geographic position area;
generating an intermediate matrix based on the original matrix and the target matrix; the value of each element in the intermediate matrix represents whether a robot is to stay in the geographic position area corresponding to the element while moving from the original position area indicated by the original matrix to the target position area indicated by the target matrix;
and controlling the robot, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position area, indicated by the target matrix, into which the robot is to move.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the robot control method in the foregoing method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present application further provides a computer program product for the robot control method, including a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the steps of the robot control method in the foregoing method embodiments, to which reference may be made for details not repeated here.
An embodiment of the present application further provides a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The corresponding computer program product may be implemented in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only one kind of logical division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of apparatuses or units through communication interfaces, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-described embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the scope of protection of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may, within the technical scope disclosed in the present application, still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and shall be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.
Claims (10)
1. A robot control method, comprising:
acquiring a target image, and generating a target matrix based on the target image; the value of each element in the target matrix represents whether a robot is to stay in the geographic position area corresponding to the element;
generating an original matrix based on original position information of the robot; the value of each element in the original matrix represents whether a robot is present in the geographic position area corresponding to the element; elements at corresponding positions in the original matrix and the target matrix correspond to the same geographic position area;
generating an intermediate matrix based on the original matrix and the target matrix; the value of each element in the intermediate matrix represents whether a robot is to stay in the geographic position area corresponding to the element while moving from the original position area indicated by the original matrix to the target position area indicated by the target matrix;
and controlling the robot, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position area, indicated by the target matrix, into which the robot is to move.
2. The robot control method of claim 1, wherein the generating a target matrix based on the target image comprises:
generating the target matrix based on the target image and the number of robots to be deployed.
3. The robot control method of claim 2, wherein the generating the target matrix based on the target image and the actual number of robots to be deployed comprises:
converting the target image into a bitmap of a preset size, wherein the pixel points in the bitmap correspond one-to-one to the elements in the target matrix;
determining the currently required number of robots based on the pixel value of each pixel point in the bitmap;
in the case that the currently required number of robots differs from the actual number of robots to be deployed, adjusting the pixel value of at least one pixel point in the bitmap, and generating the target matrix based on the pixel value of each pixel point in the adjusted bitmap;
and in the case that the currently required number of robots is the same as the actual number of robots to be deployed, generating the target matrix based on the pixel value of each pixel point in the bitmap.
4. The robot control method according to any one of claims 1 to 3, wherein the generating an intermediate matrix based on the original matrix and the target matrix comprises:
generating the intermediate matrix based on a predetermined constraint condition, the original matrix, and the target matrix.
5. The robot control method according to claim 4, wherein the constraint condition includes:
the sum of the element values in each row of the intermediate matrix is equal to the sum of the element values in the corresponding row of the original matrix; and
the sum of the element values in each column of the intermediate matrix is equal to the sum of the element values in the corresponding column of the target matrix.
6. The robot control method according to claim 4, wherein the constraint condition includes:
the sum of the element values in each row of the intermediate matrix is equal to the sum of the element values in the corresponding row of the target matrix; and
the sum of the element values in each column of the intermediate matrix is equal to the sum of the element values in the corresponding column of the original matrix.
7. The robot control method according to any one of claims 1 to 6, wherein the controlling the robot, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position area, indicated by the target matrix, into which the robot is to move comprises:
controlling the robot, based on the original matrix and the intermediate matrix, to move to the intermediate position area, indicated by the intermediate matrix, into which the robot is to move;
and controlling the robot, based on the intermediate matrix and the target matrix, to move from the intermediate position area to the target position area, indicated by the target matrix, into which the robot is to move.
8. A robot control apparatus, comprising:
a first generation module, configured to acquire a target image and generate a target matrix based on the target image; the value of each element in the target matrix represents whether a robot is to stay in the geographic position area corresponding to the element;
a second generation module, configured to generate an original matrix based on original position information of the robot; the value of each element in the original matrix represents whether a robot is present in the geographic position area corresponding to the element; elements at corresponding positions in the original matrix and the target matrix correspond to the same geographic position area;
a third generation module, configured to generate an intermediate matrix based on the original matrix and the target matrix; the value of each element in the intermediate matrix represents whether a robot is to stay in the geographic position area corresponding to the element while moving from the original position area indicated by the original matrix to the target position area indicated by the target matrix;
and a control module, configured to control the robot, based on the original matrix, the intermediate matrix, and the target matrix, to move to the target position area, indicated by the target matrix, into which the robot is to move.
9. A computer device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the robot control method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the robot control method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010102598.0A CN113276105B (en) | 2020-02-19 | 2020-02-19 | Robot control method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113276105A | 2021-08-20
CN113276105B | 2022-12-30
Family
ID=77275115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010102598.0A Active CN113276105B (en) | 2020-02-19 | 2020-02-19 | Robot control method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113276105B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102331711A (en) * | 2011-08-12 | 2012-01-25 | 江苏合成物联网科技有限公司 | Formation control method for mobile autonomous robots |
FR2999305A1 (en) * | 2012-12-11 | 2014-06-13 | Thales Sa | METHOD FOR CONTROLLING A ROBOT ASSEMBLY AND ROBOT ASSEMBLY |
CN106113043A (en) * | 2016-08-06 | 2016-11-16 | 上海新时达电气股份有限公司 | Robot control system and method |
CN106155057A (en) * | 2016-08-05 | 2016-11-23 | 中南大学 | A kind of clustered machine people's graphical set construction method based on self-organizing behavior |
CN106358256A (en) * | 2016-10-26 | 2017-01-25 | 上海电机学院 | Multi-robot control coordinator generating method |
Also Published As
Publication number | Publication date |
---|---|
CN113276105B (en) | 2022-12-30 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |