CN112975940A - Robot control method, information generation method and robot - Google Patents


Publication number
CN112975940A
Authority
CN
China
Prior art keywords
information
image
area
preset
indicated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911276560.9A
Other languages
Chinese (zh)
Inventor
张锋涛
罗芳杰
邵长东
高倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecovacs Robotics Suzhou Co Ltd
Ecovacs Commercial Robotics Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN201911276560.9A
Publication of CN112975940A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application provide a robot control method, an information generation method and a robot. The robot control method comprises the following steps: acquiring an image containing an image of an object to be indicated; identifying the image to determine the area information of the area where the object image to be indicated is located in the image; acquiring control parameters based on the area information; and controlling the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated. The technical solution provided by the embodiments of the application completes the object-pointing operation without an adjustment process, and is fast, accurate and highly adaptable to the environment.

Description

Robot control method, information generation method and robot
Technical Field
The application belongs to the technical field of intelligent equipment, and particularly relates to a robot control method, an information generation method and a robot.
Background
The shopping guide robot is one of the commonly used self-moving robots and is highly interactive with users. The shopping guide process of an existing shopping guide robot is as follows: 1. the robot learns what goods the customer wants to purchase; 2. it leads the customer to the shelf on which the goods are placed; 3. it projects a laser spot onto the commodity the customer asked for, thereby informing the customer of the position of the item to be purchased.
In general, the robot first projects a laser spot, then captures the position of the spot with its camera, determines the coordinate difference between the spot and the item actually to be pointed at through image analysis, adjusts the XY coordinates of the laser, and projects the spot again, approximating successively until the laser spot lands on the commodity to be pointed at; the laser spot is then kept lit continuously to tell the user which commodity is being pointed at.
Disclosure of Invention
The application provides a simpler and more convenient technical solution that differs from the existing way of pointing at objects.
In one embodiment of the present application, a robot control method is provided. The method comprises the following steps:
acquiring an image containing an image of an object to be indicated;
identifying the image to determine the area information of the area of the object image to be indicated in the image;
acquiring control parameters based on the region information;
and controlling the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated.
Optionally, the image has a plurality of setting regions; and
identifying the image to determine area information of an area where the object image to be indicated is located in the image, including:
identifying the object image to be indicated in the image to obtain the contour information of the object image to be indicated;
and determining the area information of the area of the image of the object to be indicated in the image according to the contour information and the area information of the plurality of set areas.
Optionally, determining the area information of the area where the image of the object to be indicated is located in the image according to the contour information and the area information of the plurality of set areas, including:
when the image of the object to be indicated spans at least two set areas based on the contour information and the area information of the set areas, selecting one area from the set areas;
area information of the selected area is acquired.
Optionally, selecting one region from the at least two setting regions includes:
and selecting one area, of the at least two set areas, of which the overlapping rate with the contour information meets a preset condition according to the contour information and the area information of each set area of the at least two set areas.
Optionally, the area information includes: the shape of the area outline and the coordinates of the characteristic points of the outline.
Optionally, based on the area information, obtaining a control parameter includes:
acquiring the corresponding relation between preset area information and control parameters;
and acquiring the control parameter corresponding to the area information according to the corresponding relation between the preset area information and the control parameter.
Optionally, the correspondence between the preset area information and the control parameters is first preset table information, which includes the area information of a plurality of set areas and the control parameters corresponding to each piece of area information.
Optionally, the preset table information is stored locally on the robot or in a server connected to the robot.
Optionally, the corresponding relationship between the preset region information and the control parameter is obtained through a tabulation process;
the tabulation process comprises the following steps:
acquiring a test image shot by the robot in a set environment; wherein, the test image has at least two setting areas;
acquiring control parameters of the indicating device when the indicating signal sent by the indicating device presents a visible identification position on the test image and respectively falls into the at least two set areas;
and correspondingly associating the area information of each set area and the control parameters when the visible identification position falls into each set area to obtain the first preset list information.
Optionally, obtaining control parameters of the indicating device when the indicating signal sent by the indicating device presents a visible mark position on the test image and respectively falls into the at least two setting areas includes:
and acquiring control parameters of the indicating device when the visible identification positions of the indicating signals sent by the indicating device on the test image respectively fall into the center of each set area.
Optionally, the area of the image of the object to be indicated in the image is: the image outline area of the object to be indicated; the area information is the outline information of the image of the object to be indicated; and
based on the area information, obtaining control parameters, including:
acquiring position information of a plurality of preset mark points and control parameters corresponding to the position information of each preset mark point;
and determining the control parameters based on the contour information, the position information of the plurality of preset mark points, and the control parameters corresponding to the position information of each preset mark point.
Optionally, determining the control parameter based on the contour information, the position information of the plurality of preset mark points, and the control parameter corresponding to the position information of each preset mark point includes:
determining target mark points falling into the image outline range of the object to be indicated based on the outline information and the position information of the plurality of preset mark points;
and acquiring control parameters corresponding to the position information of the target mark point.
Optionally, determining the control parameter based on the contour information, the position information of the plurality of preset mark points, and the control parameter corresponding to the position information of each preset mark point includes:
determining a pointing point location based on the contour information;
searching a target mark point which meets the setting requirement with the position of the indicating point from the plurality of preset mark points;
and determining the control parameter corresponding to the position of the indicating point according to the position relation between the target marking point and the position of the indicating point and the control parameter corresponding to the target marking point.
Optionally, the second preset table includes: the position information of a plurality of preset mark points and the control parameters corresponding to the position information of each preset mark point; and the second preset table information is obtained through the following table making process:
acquiring a test image shot by the robot in a set environment; wherein, a plurality of marking points are arranged on the test image;
acquiring control parameters of the indicating device when visible marks of indicating signals sent by the indicating device on a test image respectively accord with coincidence requirements with all mark points;
and correspondingly associating the position information and the visible identification of each mark point with the control parameter when each mark point meets the coincidence requirement to obtain the second preset list information.
Optionally, the indication device comprises:
the indicating signal generator is used for sending out indicating signals to form a visual mark on the object to be indicated;
the driving device outputs power with at least one degree of freedom to drive the indicating signal generator to act;
the driving device comprises at least one driving motor; the control parameters include drive parameters of the respective drive motors.
In another embodiment of the present application, a robot is provided. The robot includes:
the acquisition device is used for shooting an image containing an image of an object to be indicated;
the indicating device is used for sending an indicating signal;
the processor is used for acquiring an image containing an image of an object to be indicated; identifying the image to determine the area information of the area of the object image to be indicated in the image; acquiring control parameters based on the region information; and controlling the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated.
Optionally, the image has a plurality of setting regions; and
the processor is further configured to identify the object image to be indicated in the image to obtain profile information of the object image to be indicated; and determining the area information of the area of the image of the object to be indicated in the image according to the contour information and the area information of the plurality of set areas.
Optionally, the processor is further configured to:
acquiring the corresponding relation between preset area information and control parameters;
and acquiring the control parameter corresponding to the area information according to the corresponding relation between the preset area information and the control parameter.
Optionally, the correspondence between the preset area information and the control parameters is first preset table information, which includes the area information of a plurality of set areas and the control parameters corresponding to each piece of area information.
Optionally, the corresponding relationship between the preset region information and the control parameter is obtained through a tabulation function of the processor;
the processor is further configured to:
acquiring a test image shot by a robot in a set environment, wherein the test image is provided with at least two set areas;
acquiring control parameters of the indicating device when the indicating signal sent by the indicating device presents a visible identification position on the test image and respectively falls into the at least two set areas;
and correspondingly associating the area information of each set area and the control parameters when the visible identification position falls into each set area to obtain the first preset list information.
Optionally, the area of the image of the object to be indicated in the image is: the area within the image outline range of the object to be indicated; the area information is the outline information of the image of the object to be indicated; and
the processor is further configured to:
acquiring position information of a plurality of preset mark points and control parameters corresponding to the position information of each preset mark point;
and determining the control parameters based on the contour information, the position information of the plurality of preset mark points, and the control parameters corresponding to the position information of each preset mark point.
Optionally, the processor is further configured to:
determining target mark points falling into the image outline range of the object to be indicated based on the outline information and the position information of the plurality of preset mark points; acquiring control parameters corresponding to the position information of the target mark points; or
Determining a pointing point location based on the contour information; searching a target mark point which meets the setting requirement with the position of the indicating point from the plurality of preset mark points; and determining the control parameter corresponding to the position of the indicating point according to the position relation between the target marking point and the position of the indicating point and the control parameter corresponding to the target marking point.
Optionally, the second preset table includes: the position information of a plurality of preset mark points and the control parameters corresponding to the position information of each preset mark point; and
the processor is further configured to:
acquiring a test image shot by the robot in a set environment; wherein, a plurality of marking points are arranged on the test image;
acquiring control parameters of the indicating device when visible marks of indicating signals sent by the indicating device on a test image respectively accord with coincidence requirements with all mark points;
and correspondingly associating the position information and the visible identification of each mark point with the control parameter when each mark point meets the coincidence requirement to obtain the second preset list information.
In yet another embodiment of the present application, an information generating method is provided. The method comprises the following steps:
controlling the indicating device to act according to the control parameters so that an indicating signal sent by the indicating device forms a visible mark on the projection surface;
acquiring a first image containing the visual identification image;
identifying the first image to determine the position information of the visual identification image on the first image;
generating a table entry stored in preset table information based on the position information and the control parameter, so that the robot searches the table entry corresponding to the area information from the preset table information in the execution process of the object pointing action, and controlling an indicating device to act according to the control parameter in the table entry to form a visual identifier on the object to be indicated;
the area information is information of an area where the object image to be indicated is located in the second image.
Optionally, generating an entry stored in preset entry information based on the position information and the control parameter includes:
acquiring area information of a plurality of set areas of the first image;
determining a set area where the position information is located in the plurality of set areas;
and associating the area information of the set area where the position information is located with the control parameter, and storing the area information as a table entry into the preset table information.
Optionally, generating an entry stored in preset entry information based on the position information and the control parameter, further comprising:
judging whether the position relation between the position information and the central point of the set area where the position information is located meets the coincidence judgment requirement or not;
and under the condition that the coincidence judgment requirement is judged to be met, triggering the action of associating the area information of the set area where the position information is located with the control parameter and storing the area information as a table entry into the preset table information.
Optionally, the method further comprises:
under the condition that the coincidence judgment requirement is not met, adjusting the control parameters until the position relation between the position of the visual identifier corresponding to the indication signal sent by the indicating device in the first image and the central point meets the coincidence judgment condition;
and associating the area information of the set area where the position information is located with the adjusted control parameter, and storing the associated area information as a table entry into the preset table information.
Optionally, generating an entry stored in preset entry information based on the position information and the control parameter includes:
and associating the position information with the control parameter to be used as a table entry to be stored in the preset table information.
Optionally, generating an entry stored in preset entry information based on the position information and the control parameter includes:
acquiring a plurality of preset mark points of the first image;
and under the condition that one preset mark point with the distance from the visual identification image meeting the set requirement exists in the plurality of preset mark points, associating the position information with the control parameter to be used as a table entry to be stored in the preset table information.
Optionally, generating an entry stored in preset entry information based on the position information and the control parameter, further comprising:
under the condition that no mark point with the distance between the image of the visual mark and the preset mark point in the plurality of preset mark points meets the set requirement, adjusting the control parameter until the distance between the position of the visual mark corresponding to the indication signal sent by the indicating device in the first image and one preset mark point in the plurality of preset mark points meets the set requirement;
and associating the position information with the adjusted control parameter, and storing the associated position information and the adjusted control parameter as a table entry into the preset table information.
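For illustration only, the generation of a single table entry in this information generation method, including the adjustment of the control parameters until the visible mark meets the set requirement, could be sketched as follows; all helper functions and the tolerance value are hypothetical placeholders and not part of the disclosure:

```python
# Illustrative sketch of generating one entry of the preset table information:
# drive the indicating device with candidate parameters, locate the visible mark
# in the first image, and store (position, parameters) once the mark is close
# enough to the preset mark point.  drive(), capture(), locate_mark() and
# adjust() are hypothetical helpers.
def generate_entry(mark_point, params, drive, capture, locate_mark,
                   adjust, tolerance=3.0, max_steps=20):
    for _ in range(max_steps):
        drive(params)                       # control the indicating device to act
        position = locate_mark(capture())   # position of the visible mark in the first image
        dx = mark_point[0] - position[0]
        dy = mark_point[1] - position[1]
        if (dx * dx + dy * dy) ** 0.5 <= tolerance:    # set requirement met
            return {"position": position, "params": params}   # store as a table entry
        params = adjust(params, dx, dy)     # adjust the control parameters and retry
    return None
```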
In yet another embodiment of the present application, a robot is provided. The robot includes:
the indicating device is used for controlling the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visible mark on the projection surface;
the acquisition device is used for acquiring a first image containing the visual identification image;
the processor is used for identifying the first image so as to determine the position information of the visual identification image on the first image; generating a table entry stored in preset table information based on the position information and the control parameter, so that the robot searches the table entry corresponding to the area information from the preset table information in the execution process of the object pointing action, and controlling an indicating device to act according to the control parameter in the table entry to form a visual identifier on the object to be indicated;
the area information is information of an area where the object image to be indicated is located in the second image.
Optionally, the processor is further configured to:
acquiring area information of a plurality of set areas of the first image;
determining a set area where the position information is located in the plurality of set areas;
and associating the area information of the set area where the position information is located with the control parameter, and storing the area information as a table entry into the preset table information.
Additionally, optionally, the processor is further configured to:
and associating the position information with the control parameter to be used as a table entry to be stored in the preset table information.
In the technical solutions provided by the embodiments of the application, after an image containing the image of the object to be indicated is acquired, the area information of the area where the object image is located in the image is determined through image recognition, and the control parameters are obtained based on that area information. The process in which the robot first projects a light spot and then adjusts the control parameters according to the positional relationship between the captured image containing the spot and the image of the object to be indicated is not needed, and the situation in which the spot cannot be recognised because of image problems, so that the object cannot be indicated, does not arise. Compared with the prior art, the technical solutions provided by the embodiments of the application therefore have the advantages of a fast overall process, a high pointing success rate and strong adaptability to the environment.
The technical solutions of the present application will be described in detail below with reference to the accompanying drawings and specific embodiments.
Drawings
Fig. 1 is a schematic flowchart of a robot control method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a region partition of an image according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a distribution of marks in an image according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a robot control method according to another embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a robot control method according to another embodiment of the present disclosure;
fig. 6 is a schematic flowchart of a robot control method according to another embodiment of the present application;
FIG. 7 is a schematic block diagram of a robot structure provided in an embodiment of the present application;
fig. 8 is a flowchart illustrating an information generating method according to an embodiment of the present application.
Detailed Description
In the prior art, a shopping guide robot points at an object as follows: the robot first projects a laser spot, then captures the position of the spot with its camera, determines the coordinate difference between the spot and the item actually to be pointed at through image analysis, adjusts the XY coordinates of the laser, and projects the spot again, approximating successively until the laser spot lands on the item to be pointed at; the laser spot is then kept lit continuously to tell the user which item is being pointed at.
However, commodities are nowadays usually placed in a display window or on a shelf, and the display window or shelf is fitted with bright decorative lights and backlights. In this case, because of the limited dynamic range of the camera, the image captured by the camera on the robot contains both the laser spot and the spots of the decorative lights and backlights; the laser spot cannot be distinguished during image analysis, so the commodity cannot be indicated.
In view of the above problems, embodiments of the present application provide a robot control method, an information generation method, and a robot. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart of a robot control method according to an embodiment of the present application. As shown in fig. 1, the method comprises the following steps:
101. Acquiring an image containing an image of the object to be indicated.
102. Identifying the image to determine the area information of the area where the object image to be indicated is located in the image.
103. Acquiring control parameters based on the area information.
104. Controlling the action of the indicating device according to the control parameters, so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated.
In the above step 101, the image containing the image of the object to be indicated may be acquired by a visual sensor (e.g. a camera) on the robot. The robot has different application scenarios and, correspondingly, different objects to be indicated. For example, in a supermarket or shopping-mall environment, the robot guides the user to find the desired product, and the object to be indicated is the product. In a hotel or restaurant environment, the robot guides the user to a designated dining position, and the object to be indicated is the dining table.
In 102, the image recognition may be implemented by using the existing image recognition technology. The specific image recognition principle can be referred to corresponding contents in the prior art, and is not described in detail herein. In an achievable technical solution, the image is divided into a plurality of setting areas in advance, and the area information of the area where the object image to be indicated is located in the image may be: and determining the area information of one set area where the object image to be indicated is located in the plurality of set areas. In another implementation solution, the area information of the area where the image of the object to be indicated is located in the image may be: and area information, namely contour information, of the contour area of the image of the object to be indicated.
In specific implementation, the area information may include, but is not limited to: area outline shape, outline feature point coordinates and outline size. For example, if the setting area is a square area, the area information of each setting area includes: coordinates of four vertexes of a square shape and a square; alternatively, the area information of each setting area includes: square shape, square side length. For another example, if the outline region of the image of the object to be indicated is rectangular, the region information of the outline region may include: rectangle, long side size and short side size of rectangle; alternatively, the region information of the outline region may include: coordinates of four vertexes of a rectangle and a rectangle; or, the image contour of the object to be indicated is an irregular shape, and the area information of the image contour region of the object to be indicated may include: coordinates of a plurality of envelope points on the contour envelope.
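For concreteness, such variants of area information could be represented as simple records; the field names and values below are illustrative assumptions rather than part of the disclosure:

```python
# Illustrative sketch: possible representations of "area information".
square_area = {
    "shape": "square",
    "vertices": [(0, 0), (100, 0), (100, 100), (0, 100)],   # four vertex coordinates
}
rect_area = {
    "shape": "rectangle",
    "long_side": 160,
    "short_side": 90,
}
irregular_area = {
    "shape": "contour",
    "envelope_points": [(12, 40), (30, 18), (75, 22), (88, 60), (50, 85)],
}
```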
In one achievable technical solution, first information serving as a query basis is stored locally on the robot or in a server connected to the robot, where the first information includes the correspondence between preset area information and control parameters. In specific implementation, the above step 103 of "obtaining the control parameter based on the area information" may specifically include:
1031. and acquiring the corresponding relation between the preset area information and the control parameters.
Specifically, the corresponding relationship between the preset area information and the control parameter is obtained from the local or from a server connected with the robot.
1032. And acquiring the control parameter corresponding to the area information (namely the area information of the area of the image of the object to be indicated in the image) according to the corresponding relation between the preset area information and the control parameter.
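For ease of understanding, steps 1031 and 1032 can be pictured, purely as an illustrative sketch, as a dictionary lookup in which the first preset table information maps region labels to drive parameters; the labels and parameter values below are example assumptions:

```python
# Illustrative sketch only: the first preset table information is assumed to be
# a mapping from region labels (e.g. the row-column numbers of Fig. 2) to the
# drive parameters of the indicating device.
FIRST_PRESET_TABLE = {
    "00": (10.0, 5.0),   # hypothetical (X, Y) drive parameters for region 00
    "01": (10.0, 7.5),
    "10": (12.5, 5.0),
    "11": (12.5, 7.5),
    # ... one entry per set area
}

def get_control_parameters(region_label):
    """Steps 1031/1032: look up the control parameter for the region where the
    object image is located; the table may also be fetched from a server."""
    return FIRST_PRESET_TABLE[region_label]

# e.g. get_control_parameters("11") -> (12.5, 7.5)
```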
Or, in another achievable technical solution, the area of the image of the object to be indicated in the image is: the image outline area of the object to be indicated; the area information is the outline information of the image of the object to be indicated. The robot stores second information serving as a query basis locally or at a server connected with the robot, and the second information comprises: and the position information of the preset mark points and the control parameters corresponding to the position information of each preset mark point. In specific implementation, the step 103 "obtaining the control parameter based on the area information" may specifically include:
1031' obtaining position information of a plurality of preset mark points and control parameters corresponding to the position information of each preset mark point;
1032'. determining the control parameters based on the contour information, the position information of the plurality of preset mark points, and the control parameters corresponding to the position information of each preset mark point.
Step 1032', determining the control parameters based on the contour information, the position information of the plurality of preset mark points, and the control parameters corresponding to the position information of each preset mark point, may be implemented in the following two ways:
The first way:
Determining target mark points falling into the image outline range of the object to be indicated based on the outline information and the position information of the plurality of preset mark points; and acquiring control parameters corresponding to the position information of the target mark point.
The second way:
Determining a pointing point location based on the contour information; searching a target mark point which meets the setting requirement with the position of the indicating point from the plurality of preset mark points; and determining the control parameter corresponding to the position of the indicating point according to the position relation between the target marking point and the position of the indicating point and the control parameter corresponding to the target marking point.
The first way can be understood simply as follows: the control parameters corresponding to each preset mark point are known, and the control parameter corresponding to a mark point whose position falls within the image contour area of the object to be indicated is taken directly as the control parameter that finally controls the indicating device. Because that mark point lies within the image contour area of the object to be indicated, controlling the action of the indicating device according to its control parameter is certain to form a visual mark on the object to be indicated; the mark may not be at the centre of the object, but this does not affect the final pointing result.
In the second way, the pointing point position may be, but is not limited to: the position of the centre point of the image contour of the object to be indicated. The "setting requirement" mentioned in the above step may be, but is not limited to: closest to the pointing point position. The positional relationship between the target mark point and the pointing point position can be understood as the coordinate difference between the target mark point coordinates and the pointing point coordinates. Based on the coordinate difference and the known control parameter corresponding to the target mark point, the control parameter corresponding to the pointing point position is determined. For example, assume the control parameters corresponding to the known target mark point include: a first parameter Xn for controlling the action of the first motor and a second parameter Yn for controlling the action of the second motor; based on the coordinate difference between the target mark point coordinates and the pointing point coordinates, a first increment Δx is calculated on top of the first parameter and a second increment Δy on top of the second parameter. The control parameters corresponding to the pointing point position therefore include: a first parameter Xn' = Xn + Δx for controlling the action of the first motor, and a second parameter Yn' = Yn + Δy for controlling the action of the second motor.
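Purely as an illustrative sketch of the computation just described; the pixel-to-motor scale factors are assumptions introduced for the example, since the disclosure does not fix how the increments Δx and Δy are derived:

```python
# Illustrative sketch: derive the control parameter for the pointing point from
# the target mark point (the "second way").  The pixel-to-motor scale factors
# below are assumptions for the example, not values from the disclosure.
PIXELS_PER_X_UNIT = 40.0   # hypothetical: image pixels per unit of the first motor parameter
PIXELS_PER_Y_UNIT = 40.0   # hypothetical: image pixels per unit of the second motor parameter

def params_for_pointing_point(pointing_xy, marker_xy, marker_params):
    """Given the pointing point, the target mark point and that mark point's
    known control parameters (Xn, Yn), return (Xn + dx, Yn + dy)."""
    xn, yn = marker_params
    dx = (pointing_xy[0] - marker_xy[0]) / PIXELS_PER_X_UNIT
    dy = (pointing_xy[1] - marker_xy[1]) / PIXELS_PER_Y_UNIT
    return (xn + dx, yn + dy)
```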
What needs to be added here is: the specific expression form of the corresponding relationship between the preset area information and the control parameter may be: the first preset table information includes area information of a plurality of setting areas and control parameters corresponding to the area information. The position information of the preset mark points and the control parameters corresponding to the position information of each preset mark point can be represented as second preset table information. The generation process of the first preset table information and the second preset table information will be described in detail below.
In a specific embodiment, the pointing device of the robot may include, but is not limited to: an indication signal generator and a driving device. The indicating signal generator is used for sending out indicating signals to form a visual mark on the object to be indicated; and the driving device outputs power with at least one degree of freedom to drive the indicating signal generator to act. The driving device comprises at least one driving motor; accordingly, in step 104, the control parameters may include: drive parameters of the respective drive motors.
Take, as an example, a driving device comprising two driving motors, e.g. a first driving motor and a second driving motor, with the second driving motor arranged at the output end of the first driving motor and the indication signal generator arranged at the output end of the second driving motor. The indication signal generator may be a laser pointer or the like; this embodiment is not limited in this respect. The drive parameters of the individual drive motors can be understood simply as the data used to generate the corresponding control signals, so that each driving motor outputs the corresponding power according to the control signal it receives and thereby changes the pose of the indication signal generator; the indication signal emitted by the indication signal generator in its current pose then forms a visual mark, such as a laser spot, on the object to be indicated.
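As an illustrative sketch only, with a hypothetical motor interface, applying the control parameters to such a two-motor indicating device might look like:

```python
# Illustrative sketch with a hypothetical motor interface: the control parameters
# are the drive parameters (e.g. target positions) of the two drive motors.
class PointingDevice:
    def __init__(self, pan_motor, tilt_motor, laser):
        self.pan_motor = pan_motor    # first drive motor
        self.tilt_motor = tilt_motor  # second drive motor, mounted on the first
        self.laser = laser            # indication signal generator, e.g. a laser pointer

    def point(self, control_params):
        x_param, y_param = control_params
        self.pan_motor.move_to(x_param)   # hypothetical motor API
        self.tilt_motor.move_to(y_param)  # hypothetical motor API
        self.laser.on()                   # the indication signal forms a visible spot
```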
By adopting the technical solution provided by this embodiment, the robot does not need to first project a light spot; after acquiring an image containing the image of the object to be indicated, it determines, through image recognition, the area information of the area where the object image is located in the image and obtains the control parameters based on that area information. The situation in which a previously projected spot cannot be recognised because of image problems, so that the object cannot be indicated, does not arise. Compared with the prior art, the technical solution provided by this embodiment therefore has the advantages of a fast overall process, a high pointing success rate and strong adaptability to the environment.
Further, in the method provided by this embodiment, the image has a plurality of setting areas; correspondingly, in this embodiment, the step 102 "identifying the image to determine the area information of the area where the image of the object to be indicated is located in the image" may specifically include:
1021. identifying the object image to be indicated in the image to obtain the contour information of the object image to be indicated;
1022. and determining the area information of the area of the image of the object to be indicated in the image according to the contour information and the area information of the plurality of set areas.
In the implementation, the case that the image of the object to be indicated spans two, three or four areas may occur, and for such cases, the present embodiment also provides the following solution. That is, the step 1022 "determining the area information of the area where the object image to be indicated is located in the image according to the contour information and the area information of the plurality of set areas" includes:
S11. When it is determined, based on the contour information and the area information of the set areas, that the image of the object to be indicated spans at least two set areas, selecting one area from the at least two set areas;
and S12, acquiring the area information of the selected area.
Further, the step S11 of "selecting one area from the at least two setting areas" includes:
and selecting one area, of the at least two set areas, of which the overlapping rate with the contour information meets a preset condition according to the contour information and the area information of each set area of the at least two set areas.
For example, a region having the largest overlapping area with the contour information may be selected from the at least two setting regions; if two or three of the at least two set regions have the same overlapping area with the contour information, one of the at least two set regions may be selected.
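A minimal sketch of this selection rule, under the simplifying assumption that both the object image (approximated by its bounding box) and the set areas are axis-aligned rectangles, is:

```python
# Illustrative sketch: pick, among the set areas spanned by the object image,
# the one whose overlap with the object's bounding box is largest.
# Rectangles are (x_min, y_min, x_max, y_max); axis-aligned boxes are an assumption.
def overlap_area(a, b):
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def select_region(object_box, regions):
    """regions: mapping region_label -> rectangle of that set area."""
    return max(regions, key=lambda label: overlap_area(object_box, regions[label]))
```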
The correspondence between the preset area information and the control parameters mentioned above is the first preset table information, which comprises the area information of a plurality of set areas and the control parameters corresponding to each piece of area information; the first preset table information is obtained through a tabulation process. Specifically, the tabulation process of the first preset table information comprises the following steps:
s21, acquiring a test image shot by the robot in a set environment; wherein, the test image has at least two setting areas;
s22, acquiring control parameters of the indicating device when the indicating signal sent by the indicating device presents a visible mark position on the test image and respectively falls into the at least two set areas;
and S23, correspondingly associating the area information of each set area and the control parameters when the visible identification position falls into each set area to obtain the first preset table information.
In a specific implementation, the step S22 "obtaining the control parameters of the indicating device when the visible mark position of the indicating signal sent by the indicating device on the test image respectively falls into the at least two setting areas" may specifically be:
and acquiring control parameters of the indicating device when the visible identification positions of the indicating signals sent by the indicating device on the test image respectively fall into the center of each set area.
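The tabulation of steps S21 to S23 can be sketched as follows; the aiming helper is a hypothetical placeholder and not part of the disclosure:

```python
# Illustrative sketch of the tabulation process for the first preset table
# information: for each set area of the test image, record the control
# parameters under which the visible mark lies at the centre of that area.
def build_first_preset_table(regions, aim_at):
    """regions: mapping region_label -> (x_min, y_min, x_max, y_max) on the test image.
    aim_at(point) is a hypothetical helper that steers the indicating device until
    its visible mark lies at the given image point and returns the parameters used."""
    table = {}
    for label, rect in regions.items():
        centre = ((rect[0] + rect[2]) / 2.0, (rect[1] + rect[3]) / 2.0)
        table[label] = aim_at(centre)   # associate the area information with the parameters
    return table
```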
The correspondence mentioned above between the position information of the preset mark points and the control parameters is the second preset table information, which contains the position information of each mark point and the control parameters at which the visible mark meets the coincidence requirement with that mark point; the second preset table information is obtained through a tabulation process. Specifically, the tabulation process of the second preset table information comprises the following steps:
s31, acquiring a test image shot by the robot in a set environment; wherein, a plurality of marking points are arranged on the test image;
s32, acquiring control parameters of the indicating device when visible marks of the indicating signal sent by the indicating device on the test image respectively accord with the coincidence requirements with the mark points;
and S33, correspondingly associating the position information and the visible identification of each mark point with the control parameters when each mark point meets the coincidence requirement, and obtaining the information of the second preset list.
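Analogously, the second preset table information can be sketched as a list of entries built during the tabulation process; the aiming helper is again a hypothetical placeholder:

```python
# Illustrative sketch of the tabulation process for the second preset table
# information: one entry per preset mark point, associating its position with
# the control parameters at which the visible mark coincides with it.
def build_second_preset_table(mark_points, aim_at):
    """mark_points: list of (x, y) positions on the test image.
    aim_at(point) is a hypothetical helper that steers the indicating device until
    its visible mark coincides with the given position and returns the parameters."""
    return [
        {"index": i + 1, "position": pt, "params": aim_at(pt)}
        for i, pt in enumerate(mark_points)
    ]
```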
The execution subject of the method can be a processor of a robot, including but not limited to a shopping guide robot and a road guide robot. In one implementation, the robot includes a body, and a processor, and a collection device and an indication device coupled to the processor are disposed on the body. The acquisition device is used for shooting an object to be indicated to obtain an image and sending the image to the processor, or the processor calls the image from the acquisition device. The indication signal emitted by the indicating device includes, but is not limited to, a laser beam.
Assuming that the image is divided into a plurality of sequentially numbered squares, the region information may include the number (or label) of the squares. For example, the image area is divided into 20 squares, and then the 20 squares are sequentially assigned with corresponding numbers (or labels), such as numbers of 1-20; or, as shown in fig. 2, each region constitutes region information according to the row-column number, that is, the region information of the region in the first row and the first column at the top left corner is 00; the area information of the first row and second column area is 01; … … the area information of the fifth row and column area is 44. Of course, the division form of the region is not limited to the square grid, and may be a region of other shapes, for example, a grid shape including but not limited to a triangular grid, a rectangular grid, a pentagonal grid, a hexagonal grid and other shapes, and the like. Taking the circle shown in fig. 2 as an example of a product image, the area where the product image is located is the area 11 of the second row and the second column of squares, and the area information of the area includes: coordinates of four vertices of the quadrilateral.
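As an illustrative sketch, mapping a pixel position to the row-column label of the square it falls in could be done as follows; the 5 x 5 grid of Fig. 2 and the image size are example assumptions:

```python
# Illustrative sketch: label a pixel position with the row-column number of the
# square it falls in, as in Fig. 2 (region "00" top-left ... "44" bottom-right).
# The 5x5 grid and the 500x500 image size are assumptions for the example.
GRID_ROWS, GRID_COLS = 5, 5
IMG_W, IMG_H = 500, 500

def region_label(x, y):
    col = min(int(x * GRID_COLS / IMG_W), GRID_COLS - 1)
    row = min(int(y * GRID_ROWS / IMG_H), GRID_ROWS - 1)
    return f"{row}{col}"

# e.g. the commodity circle of Fig. 2 sits in the second row, second column:
# region_label(150, 150) -> "11"
```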
For example, the robot's object-pointing process is as follows:
firstly, shooting an object to be indicated, namely a commodity by a collecting device to obtain an image containing a commodity image;
then, the processor receives the image sent by the acquisition device, or the processor calls the image from the acquisition device. The processor identifies the image, identifies the area 11 of the commodity image in the image, and determines the area information of the area 11.
And then, the processor can acquire the corresponding control parameter according to the area information, so as to control the indicating device to perform the object pointing action according to the control parameter.
The correspondence between the area information and the control parameter may be characterized as first preset table information shown in table 1 below. Table 1 may be pre-stored in the corresponding memory area, waiting for the processor to call, as shown in table 1 below.
TABLE 1  First preset table information

| Region | Control parameter |
|--------|-------------------|
| 00     | (X0, Y0)          |
| 01     | (X0, Y1)          |
| ……     | ……                |
| 10     | (X1, Y0)          |
| 11     | (X1, Y1)          |
| ……     | ……                |
| NxNy   | (XN, YN)          |
In the above example, the area where the product image is located is area 11 in the second row and second column of squares, and the control parameter corresponding to area information 11 obtained from Table 1 is (X1, Y1). According to this control parameter, the processor can control the action of the indicating device so that it emits a laser beam onto the commodity; a light spot thus appears on the commodity, and the customer can find the position of the desired commodity according to the light spot.
Taking the case that the robot locally stores the second preset table information as an example, as shown in table 2 below,
TABLE 2  Second preset table information

| Mark point number | Mark point position | Control parameter |
|-------------------|---------------------|-------------------|
| 1                 | (x0, y0)            | (X0, Y0)          |
| 2                 | (x0, y1)            | (X0, Y1)          |
| ……                | ……                  | ……                |
| 7                 | (x1, y0)            | (X1, Y0)          |
| 8                 | (x1, y1)            | (X1, Y1)          |
| ……                | ……                  | ……                |
| N                 | (xn, yn)            | (XN, YN)          |
Specifically, the robot's object-pointing process is as follows:
firstly, shooting an object to be indicated, namely a commodity by a collecting device to obtain an image containing a commodity image;
then, the processor receives the image sent by the acquisition device, or the processor calls the image from the acquisition device. The processor identifies the image and identifies the area information of the outline area occupied by the commodity image in the image.
Then, the processor acquires the position information of a plurality of preset mark points; and determining target mark points falling into the commodity image contour area based on the contour information and the position information of the plurality of preset mark points.
And finally, the processor acquires the second preset table information shown in the table 2 so as to acquire the control parameter corresponding to the position information of the target mark point according to the corresponding relation between the position information of the preset mark point and the control parameter.
Referring to fig. 3, assuming that mark point 11 in the second row and second column falls within the image contour area of the commodity, the control parameter corresponding to mark point 11 can be obtained by querying the second preset table information shown in Table 2; the processor controls the indicating device to act according to the obtained control parameter, so that a visible mark (such as a light spot) is formed on the commodity.
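A minimal sketch of this "first way" of obtaining the control parameter, i.e. finding which preset mark points fall within the contour of the commodity image, using a standard ray-casting point-in-polygon test:

```python
# Illustrative sketch: find the preset mark points whose positions fall inside
# the contour of the object image, then read their control parameters from the
# second preset table information (cf. Table 2).
def point_in_polygon(pt, polygon):
    """Ray-casting test; polygon is a list of (x, y) contour points."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def target_mark_points(contour, preset_table):
    """preset_table: entries with 'position' and 'params' keys, as in Table 2."""
    return [e for e in preset_table if point_in_polygon(e["position"], contour)]
```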
Based on the technical solutions provided by the above embodiments, the specific embodiments shown in fig. 4, fig. 5 and fig. 6 are proposed below. Fig. 4 shows a flowchart of a robot control method according to an embodiment of the present application. Specifically, as shown in fig. 4, the method comprises:
201. Acquiring an image containing an image of the object to be indicated.
202. Identifying the image to determine the area information of the set area where the object image to be indicated is located.
203. Acquiring first preset table information, where the first preset table information comprises the area information of a plurality of set areas and the control parameter corresponding to each piece of area information.
204. Acquiring the control parameter corresponding to the area information from the first preset table information.
205. Controlling the action of the indicating device according to the control parameter, so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated.
For the contents of the above 201 and 205, reference may be made to the above embodiments, which are not described herein again.
The process of determining the area information in 202 can be seen in the above embodiments. In addition, there is a case where the object image to be indicated spans multiple setting regions, in this case, one of the multiple setting regions spanned by the object image to be indicated may be selected, or one with the largest overlapping area may be selected, and the like, which is not particularly limited in this embodiment.
In 203, the first preset list information may be obtained locally or from a server connected to the robot.
With reference to fig. 2, taking the oval shown in fig. 2 as the product image, it is determined from the contour information of the product image and the area information of the set areas that the product image spans the two set areas 22 and 32. At this point, the processor selects one of the two areas 22 and 32. For example, one way is to select area 22 as the area where the commodity is located; another way is to select area 32 as the area where the commodity is located; yet another way is to select the area whose overlapping rate with the commodity image meets a preset condition.
What needs to be added here is: when the object image to be indicated is identified, the contour information of the object image to be indicated can be obtained; the ratio of the overlapping area of the image of the object to be indicated and the set area where the image of the object is located to the outline area of the image of the object to be indicated, or the ratio of the overlapping area to the whole set area is the overlapping rate. For example, the image of the object to be indicated is located in a set area, and the ratio of the overlapping portion of the object to be indicated and the set area where the object to be indicated is located to the whole image of the object to be indicated is one hundred percent.
For example, the oval image of the object to be indicated in fig. 2 spans two areas, and both the areas 22 and 32 are partially overlapped with the object to be indicated, and the overlapping rates of the two areas and the image of the object to be indicated may be equal or may not be equal. If the preset condition is that the overlapping rate is maximum, determining which area of the areas 22 and 32 has the maximum overlapping rate with the image of the object to be indicated; assuming that the overlapping rate of the area 22 and the image of the object to be indicated is the maximum, the area 22 is selected as the area where the object to be indicated is located. If the overlapping rates of the areas 22 and 32 and the image of the object to be indicated are equal, one area can be randomly selected as the area where the object to be indicated is located.
Fig. 5 shows a flowchart of a robot control method according to an embodiment of the present application. Specifically, as shown in fig. 5, the method includes:
301. acquiring an image containing an image of an object to be indicated;
302. identifying the image to determine the outline information of the outline area occupied by the image of the object to be indicated on the image;
303. acquiring position information of a plurality of preset mark points;
304. determining target mark points falling into the image contour area of the object to be indicated based on the contour information and the position information of the plurality of preset mark points;
305. and acquiring control parameters corresponding to the position information of the target mark point according to the position information of the preset mark points and the control parameters corresponding to each position information.
For the contents of the above steps 301 and 302, reference may be made to the above embodiments, which are not described herein again.
In 303, the position information of the plurality of preset mark points may be coordinate information.
What needs to be added here is: the coordinates mentioned in the embodiments herein can be understood as: coordinates in the image coordinate system. An acquisition device for acquiring images is assumed to be a camera, and a camera coordinate system is a three-dimensional rectangular coordinate system established by taking a focusing center of the camera as an origin and taking an optical axis as a Z axis. The intersection point of the optical axis and the image plane is the origin of an image coordinate system, and the image coordinate system is a two-dimensional rectangular coordinate system containing an X axis and a Y axis. The area information, contour information, and the like mentioned in the above embodiments are determined based on the image coordinate system.
In the above step 304, a plurality of target mark points that fall within the image outline area of the object to be indicated may be determined. When a plurality of target mark points are provided, one of the target mark points can be randomly selected; or selecting one closest to the center of the image outline area of the object to be indicated from the image outline areas; and so on.
Fig. 6 shows a flowchart of a robot control method according to an embodiment of the present application. Specifically, as shown in fig. 6, the method includes:
401. acquiring an image containing an image of an object to be indicated;
402. identifying the image to determine the outline information of the outline area occupied by the image of the object to be indicated on the image;
403. determining a pointing point location based on the contour information;
wherein the pointing point location may be, but is not limited to: and the position of the central point of the image contour of the object to be indicated.
404. And acquiring the position information of the preset mark points and the control parameters corresponding to the position information of each preset mark point.
In a specific implementation, the position information of the plurality of preset mark points and the control parameter corresponding to the position information of each preset mark point may be included in the second preset table information. The second preset table information may be implemented through a tabulation process, and the related contents may be referred to the description in the above embodiment.
405. And searching a target mark point which meets the setting requirement with the position of the indicating point from the plurality of preset mark points.
For example, the setting is required to be closest. Correspondingly, the step 405 may specifically be: and searching a target mark point which is closest to the position of the indicating point from the plurality of preset mark points.
406. And determining the control parameter corresponding to the position of the indicating point according to the position relation between the target marking point and the position of the indicating point and the control parameter corresponding to the target marking point.
In specific implementation, the position relationship may be characterized as: and the coordinate difference between the target mark point coordinate and the indicating point coordinate. For example, the coordinate difference includes: and under the image coordinate system, the X-axis coordinate difference and the Y-axis coordinate difference. And determining the control parameter corresponding to the position of the indicating point according to the coordinate difference and the control parameter corresponding to the target marking point.
Accordingly, referring to fig. 7, the present application provides a robot, which may perform the methods described in the embodiments above. Specifically, the robot includes: a processor 30, and an acquisition device 10 and a pointing device 20 coupled to the processor 30, respectively.
The acquisition device 10 is used for shooting an image containing an image of an object to be indicated. The indicating device 20 is used for sending out an indication signal. The processor 30 is configured to: acquire an image containing an image of the object to be indicated; identify the image to determine the area information of the area where the image of the object to be indicated is located in the image; acquire control parameters based on the area information; and control the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated.
Further, the image has a plurality of setting regions; the processor 30 is further configured to identify the object image to be indicated in the image, so as to obtain contour information of the object image to be indicated; and determining the area information of the area of the image of the object to be indicated in the image according to the contour information and the area information of the plurality of set areas.
Further, the processor 30 is further configured to:
acquiring the corresponding relation between preset area information and control parameters;
and acquiring the control parameter corresponding to the area information according to the corresponding relation between the preset area information and the control parameter.
Further, the preset area information and the control parameter have a corresponding relationship as follows: the first preset table information includes area information of a plurality of setting areas and control parameters corresponding to the area information.
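For illustration, the first preset table information might be organized as follows, assuming the image is divided into a uniform grid of set areas addressed by (row, column). The grid size, the parameter values and the use of the contour center to pick a set area are assumptions for the example; as noted above, the set area may also be chosen by overlap rate when the contour spans several areas.

```python
# Illustrative sketch: first preset table information, assuming the image is
# split into a 4x4 grid of set areas addressed by (row, col). Cell size,
# grid dimensions and parameter values are assumptions for the example.
IMAGE_W, IMAGE_H = 640, 480
GRID_ROWS, GRID_COLS = 4, 4
CELL_W, CELL_H = IMAGE_W // GRID_COLS, IMAGE_H // GRID_ROWS

# (row, col) -> control parameters (e.g. pan/tilt) recorded during tabulation
FIRST_PRESET_TABLE = {
    (r, c): (c * 3.0 - 4.5, 4.5 - r * 3.0)
    for r in range(GRID_ROWS) for c in range(GRID_COLS)
}

def set_area_of(point):
    """Return the (row, col) set area containing an image point."""
    x, y = point
    return min(int(y // CELL_H), GRID_ROWS - 1), min(int(x // CELL_W), GRID_COLS - 1)

def lookup_control_params(contour):
    """Pick the set area holding the contour's center and read its parameters."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    return FIRST_PRESET_TABLE[set_area_of((cx, cy))]

print(lookup_control_params([(200, 100), (280, 100), (280, 180), (200, 180)]))
```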
Further, the corresponding relationship between the preset area information and the control parameter is obtained through a tabulation function of the processor. Correspondingly, the processor 30 is further configured to:
acquiring a test image shot by a robot in a set environment, wherein the test image is provided with at least two set areas;
acquiring control parameters of the indicating device when the indicating signal sent by the indicating device presents a visible identification position on the test image and respectively falls into the at least two set areas;
and correspondingly associating the area information of each set area and the control parameters when the visible identification position falls into each set area to obtain the first preset list information.
The robot described in embodiment 3 can execute the method described in embodiment 2; the related contents of embodiments 2 and 3 can be cross-referenced and are not described herein again.
In another implementation, the area of the image of the object to be indicated in the image is: the area within the image outline range of the object to be indicated; the area information is the outline information of the image of the object to be indicated. Correspondingly, the processor 30 is further configured to:
acquiring position information of a plurality of preset mark points and control parameters corresponding to the position information of each preset mark point;
and determining the control parameters based on the control parameters corresponding to the contour information, the position information of the preset mark points and the position information of each preset mark point.
Further, the processor 30 may be further configured to: determining target mark points falling into the image outline range of the object to be indicated based on the outline information and the position information of the plurality of preset mark points; acquiring control parameters corresponding to the position information of the target mark points; or
Determining a pointing point location based on the contour information; searching a target mark point which meets the setting requirement with the position of the indicating point from the plurality of preset mark points; and determining the control parameter corresponding to the position of the indicating point according to the position relation between the target marking point and the position of the indicating point and the control parameter corresponding to the target marking point.
Further, the second preset table contains: and the position information of the preset mark points and the control parameters corresponding to the position information of each preset mark point. Correspondingly, the processor 30 is further configured to:
acquiring a test image shot by the robot in a set environment; wherein, a plurality of marking points are arranged on the test image;
acquiring control parameters of the indicating device when visible marks of indicating signals sent by the indicating device on a test image respectively accord with coincidence requirements with all mark points;
and correspondingly associating the position information and the visible identification of each mark point with the control parameter when each mark point meets the coincidence requirement to obtain the second preset list information.
In embodiment 3, the robot includes, but is not limited to, a shopping guide robot, a road guide robot. In one implementation, the robot includes a body on which a processor 30 is disposed, and an acquisition device 10 and a pointing device 20 coupled to the processor 30. The acquisition device 10 is used for shooting an object to be indicated to form an image and sending the image to the processor, or the processor 30 calls the image from the acquisition device 10. The indication signal emitted by the indicating device 20 includes, but is not limited to, a laser beam.
In the embodiment of the present application, the pointing device 20 can be implemented as a laser pointer and two driving motors, where the two driving motors drive the laser pointer to swing or move so as to adjust the position at which the laser pointer points.
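A minimal sketch of such a pointing device follows, assuming the control parameters are simply a pan angle and a tilt angle for the two driving motors; the motor interface shown is hypothetical and would be replaced by the actual servo or stepper driver.

```python
# Minimal sketch of a two-motor laser pointing device. The motor driver API
# (set_angle) is hypothetical; real hardware would use a servo/stepper library.
class DriveMotor:
    def __init__(self, name, min_deg=-90.0, max_deg=90.0):
        self.name, self.min_deg, self.max_deg = name, min_deg, max_deg
        self.angle = 0.0

    def set_angle(self, deg):
        # Clamp to the motor's mechanical range before "moving" it.
        self.angle = max(self.min_deg, min(self.max_deg, deg))

class PointingDevice:
    """Laser pointer swung by a pan motor and a tilt motor."""
    def __init__(self):
        self.pan = DriveMotor("pan")
        self.tilt = DriveMotor("tilt")
        self.laser_on = False

    def apply_control_params(self, params):
        pan_deg, tilt_deg = params       # control parameters from the table
        self.pan.set_angle(pan_deg)
        self.tilt.set_angle(tilt_deg)
        self.laser_on = True             # emit the indication signal

device = PointingDevice()
device.apply_control_params((-1.5, 1.5))
print(device.pan.angle, device.tilt.angle, device.laser_on)
```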
Referring to fig. 8, an embodiment of the present application further provides an information generating method. The first preset table information and the second preset table information mentioned in the above embodiments can be implemented by using the information generating method provided in this embodiment. Specifically, as shown in fig. 8, the method includes:
501. controlling the indicating device to act according to the control parameters so that an indicating signal sent by the indicating device forms a visible mark on the projection surface;
502. acquiring a first image containing the visual identification image;
503. identifying the first image to determine the position information of the visual identification image on the first image;
504. generating a table entry stored in preset table information based on the position information and the control parameter, so that the robot searches the table entry corresponding to the area information from the preset table information in the execution process of the object pointing action, and controlling an indicating device to act according to the control parameter in the table entry to form a visual identifier on the object to be indicated;
the area information is information of an area where the object image to be indicated is located in the second image.
The execution subject of the above method may be a processor of the robot. The robot can perform the above steps under a specific environment. For example, in front of a white wall surface, the distance between the robot and the wall surface is 3 meters, and the ambient light is dark, so that the laser spots can be conveniently found through image processing. Of course, each table entry in the above process may also be generated in the actual working scene of the robot. For example, in an actual working scene, the robot records data generated in the pointing process (such as control parameters for controlling the pointing device and position information of a visual identification image on the first image), and generates an entry stored in preset table information based on the data, so as to facilitate the query in the subsequent work.
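Step 503 relies on locating the visual identification image (the laser spot) in the first image. Under the dark test environment described above, a very simple detector can suffice; the following sketch, which just takes the brightest pixel above a threshold, is an illustrative assumption and not the recognition method claimed. Real code would more likely combine color filtering with connected-component analysis.

```python
# Illustrative sketch of locating the laser spot (visual identification image)
# in a dark test image: take the brightest pixel above a threshold.
import numpy as np

def find_spot(gray_image, min_brightness=200):
    """Return (x, y) of the brightest pixel, or None if nothing bright enough."""
    idx = np.argmax(gray_image)
    y, x = np.unravel_index(idx, gray_image.shape)
    if gray_image[y, x] < min_brightness:
        return None
    return int(x), int(y)

# Synthetic 480x640 dark frame with a bright spot at (300, 220)
frame = np.full((480, 640), 20, dtype=np.uint8)
frame[220, 300] = 255
print(find_spot(frame))   # -> (300, 220)
```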
In the above step 504, the table entry stored in the preset table information is generated based on the position information and the control parameter; this can be embodied in three modes. The first mode: a plurality of set areas are divided on the first image, the set area where the position information is located is associated with the control parameter, and the association is stored as a table entry in the preset table information. The second mode: the point corresponding to the position information is directly used as a mark point, and the position information is associated with the control parameter and stored as a table entry in the preset table information. The third mode: a plurality of preset mark points are marked on the first image, and if it is determined, based on the position information and the position information of each of the plurality of preset mark points, that the visual identification image coincides with one of the preset mark points, the position information is associated with the control parameter and stored as a table entry in the preset table information. Each mode is explained in detail below.
The first method, that is, the step 504 "generating the table entry stored in the preset table information based on the position information and the control parameter", includes:
5041. acquiring area information of a plurality of set areas of the first image.
5042. And determining a set area where the position information is located in the plurality of set areas.
5043. And associating the area information of the set area where the position information is located with the control parameter, and storing the area information as a table entry into the preset table information.
The divided set areas may be square grids or other shapes; for example, the at least two areas may be at least two square grid areas. The test image may be acquired under a set environment. Setting the environment means controlling the lighting and brightness of the surroundings so that the captured image is as clear as possible and the laser spot is as conspicuous and easy to identify as possible; for example, a white wall is selected, the distance between the robot and the wall is set to about 3 m, and the ambient light is kept dark, so that the picture taken by the acquisition device allows the processor to identify the laser spot during image recognition.
Further, step 504 "generating an entry stored in the preset entry information based on the position information and the control parameter" may further include:
5044. judging whether the position relation between the position information and the central point of the set area where the position information is located meets the coincidence judgment requirement or not;
5045. and under the condition that the coincidence judgment requirement is judged to be met, triggering the action of associating the area information of the set area where the position information is located with the control parameter and storing the area information as a table entry into the preset table information.
Still further, step 504 "generating an entry stored in the preset entry information based on the position information and the control parameter" may further include the following steps:
5046. under the condition that the coincidence judgment requirement is not met, adjusting the control parameters until the position relation between the position of the visual identifier corresponding to the indication signal sent by the indicating device in the first image and the central point meets the coincidence judgment condition;
5047. and associating the area information of the set area where the position information is located with the adjusted control parameter, and storing the associated area information as a table entry into the preset table information.
The robot adjusts the visual identification image (i.e., the laser spot) to the center point of each set area by using an existing successive approximation method, and stores the position information of the visual identification image at that moment in association with the finally adjusted control parameter. In the process of executing the object pointing action, the robot does not need to project a light spot first: it determines from the image which previously divided set area the object to be indicated falls into; if the object falls into the first area, the control parameter corresponding to the first area is obtained directly and the indicating device is controlled according to that parameter, without two or more rounds of adjustment. In addition, the pointing action can be completed accurately and quickly without being affected by the ambient light around the object to be indicated.
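The tabulation loop described in this mode might look like the following sketch, in which an assumed linear camera model stands in for the real acquisition device and indicating device, and a simple proportional correction plays the role of the successive approximation; the gains, tolerances and grid size are illustrative, not values from the disclosure.

```python
# Sketch of building the first preset table by successive approximation:
# for each set area, nudge the control parameters until the laser spot lands
# on the cell center, then store (area, parameters). The observe() model and
# the gains below are stand-ins for the real camera/indicating device.
def observe(params, gain=20.0, offset=(320.0, 240.0)):
    """Assumed linear model: where the spot appears for given (pan, tilt)."""
    pan, tilt = params
    return offset[0] + gain * pan, offset[1] - gain * tilt

def calibrate_cell(center, start=(0.0, 0.0), tol=1.0, step=0.02, max_iter=200):
    """Gradually adjust (pan, tilt) until the spot coincides with the center."""
    pan, tilt = start
    for _ in range(max_iter):
        sx, sy = observe((pan, tilt))
        ex, ey = center[0] - sx, center[1] - sy
        if abs(ex) <= tol and abs(ey) <= tol:      # coincidence requirement met
            return pan, tilt
        pan += step * ex                           # proportional correction
        tilt -= step * ey
    raise RuntimeError("did not converge")

def build_first_preset_table(rows=4, cols=4, w=640, h=480):
    table = {}
    for r in range(rows):
        for c in range(cols):
            center = ((c + 0.5) * w / cols, (r + 0.5) * h / rows)
            table[(r, c)] = calibrate_cell(center)
    return table

print(build_first_preset_table()[(1, 1)])
```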
In the second mode, the step 504 "generating the table entry stored in the preset table information based on the position information and the control parameter" includes:
5041', the location information is associated with the control parameter and stored as an entry in the preset table information.
That is, the point where the visual identification image is located is taken as a mark point, and the position information is directly associated with the control parameter and stored as a table entry in the preset table information.
In the third mode, the step 504 "generating an entry stored in preset table information based on the position information and the control parameter" includes:
5042', obtaining a plurality of preset marker points of the first image;
5043'. if, among the plurality of preset mark points, there is a preset mark point whose distance from the visual identification image meets the setting requirement, associating the position information with the control parameter and storing the association as a table entry in the preset table information.
Further, step 504 "generating an entry stored in the preset table information based on the position information and the control parameter" may further include the following steps:
5044' and in the case that there is no mark point with a distance from the image of the visual mark meeting the setting requirement, adjusting the control parameter until the distance between the position of the visual mark in the first image corresponding to the indication signal sent by the indication device and one of the preset mark points meets the setting requirement;
and associating the position information with the adjusted control parameter, and storing the associated position information and the adjusted control parameter as a table entry into the preset table information.
The information generating method provided by this embodiment and the related contents described in the above robot control method embodiments can be cross-referenced.
Correspondingly, the embodiment of the application also provides a robot which can execute the method of the embodiment shown in fig. 8. Specifically, the robot includes: a processor, and an acquisition device and an indicating device respectively coupled with the processor; the structure can be seen in fig. 7.
The indicating device is used for acting according to the control parameters so that the indicating signal sent by the indicating device forms a visible mark on the projection surface;
the acquisition device is used for acquiring a first image containing the visual identification image;
the processor is used for identifying the first image so as to determine the position information of the visual identification image on the first image; generating a table entry stored in preset table information based on the position information and the control parameter, so that the robot searches the table entry corresponding to the area information from the preset table information in the execution process of the object pointing action, and controlling an indicating device to act according to the control parameter in the table entry to form a visual identifier on the object to be indicated;
the area information is information of an area where the object image to be indicated is located in the second image.
Further, the processor is further configured to:
acquiring area information of a plurality of set areas of the first image;
determining a set area where the position information is located in the plurality of set areas;
and associating the area information of the set area where the position information is located with the control parameter, and storing the area information as a table entry into the preset table information.
Further, the processor is further configured to: and associating the position information with the control parameter to be used as a table entry to be stored in the preset table information.
The robot provided in this embodiment can perform the functions corresponding to the information generating method of fig. 8. The related contents of this embodiment, the embodiment shown in fig. 8, and the robot control method embodiments can be cross-referenced.
The robot includes, but is not limited to, a shopping guide robot and a road directing robot. In one implementation, the robot includes a body, and a processor, and a collection device and an indication device coupled to the processor are disposed on the body. The acquisition device is used for shooting a test image and sending the test image to the processor, or the processor calls the test image from the acquisition device. The indication signal emitted by the indicating device includes, but is not limited to, a laser beam.
The technical solution adopted in the present application is described below with reference to specific application scenarios to help understanding. In the following application scenario, a robot is taken as an example of a shopping guide robot.
Application scenario 1
The robot receives the customer at the entrance of the mall. After interacting with the customer, the robot learns that the customer wants to purchase commodity A, and then leads the customer to the shelf on which commodity A is placed.
The robot shoots commodity A through the acquisition device to form an image containing the image of commodity A. The processor receives the image sent by the acquisition device or calls the image from the acquisition device. The processor recognizes the image, identifies that the area where the image of commodity A is located is the grid area 11 in the second row and second column (as shown in fig. 2), and acquires the area information of grid area 11. The control parameter (X1, Y1) corresponding to the area information of grid area 11 is then acquired from the first preset table information.
The robot controls the indicating device to act according to the control parameters, so that the indicating device emits a laser beam onto commodity A and a light spot appears on commodity A; the customer can thus find the position of commodity A according to the light spot.
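Putting the pieces of this scenario together, a sketch of the pointing flow might look as follows, with stub objects standing in for the acquisition device, the recognition step and the indicating device; only the table lookup and the control step mirror the flow described above, and all names and values are illustrative.

```python
# End-to-end sketch of application scenario 1 with stub components standing in
# for the acquisition device, the recognition step and the indicating device.
class StubCamera:
    def capture(self):
        return "image-with-commodity-A"        # placeholder frame

class StubRecognizer:
    def area_of(self, image, item_name):
        return (1, 1)                          # pretend commodity A sits in grid area (1, 1)

class StubPointer:
    def apply_control_params(self, params):
        print("aiming laser with control parameters", params)

FIRST_PRESET_TABLE = {(1, 1): (-1.5, 1.5)}     # single illustrative entry

def guide_to_item(item_name, camera, recognizer, table, pointer):
    image = camera.capture()                   # 1. shoot the image
    area = recognizer.area_of(image, item_name)  # 2. identify the set area
    params = table[area]                       # 3. look up control parameters
    pointer.apply_control_params(params)       # 4. drive the indicating device
    return area, params

print(guide_to_item("commodity A", StubCamera(), StubRecognizer(),
                    FIRST_PRESET_TABLE, StubPointer()))
```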
Application scenario 2
The robot receives the customer at the entrance of the mall. After interacting with the customer, the robot learns that the customer wants to purchase commodity B, and then leads the customer to the shelf on which commodity B is placed.
The robot shoots commodity B through the acquisition device to form an image containing the image of commodity B. The processor receives the image sent by the acquisition device or calls the image from the acquisition device. The processor identifies the image and determines that the area occupied by the image of commodity B spans grid area 22 and grid area 32 (see fig. 2). Grid area 22 is randomly selected from grid areas 22 and 32; the area information of grid area 22 is acquired, and the control parameter (X2, Y2) corresponding to the area information of grid area 22 is acquired from the first preset table information.
The robot controls the indicating device to act according to the control parameters, so that the indicating device emits a laser beam onto commodity B and a light spot appears on commodity B; the customer can thus find the position of commodity B according to the light spot.
Application scenario 3
The robot provides a seating service for the customer at the entrance of the restaurant. After interacting with the customer, or after receiving the table assignment sent by the queuing server, the robot leads the customer to the area where table C is located.
For example, at or near the entrance of the area, the robot takes an image containing the table C through the acquisition device, and the processor receives the image from the acquisition device and recognizes the image to determine the area information of the table C image outline area. The processor acquires the position information of a plurality of preset marking points, and selects a marking point with the position information positioned in the table C image outline area from the plurality of preset marking points as a target marking point; and acquiring control parameters corresponding to the position information of the target mark point from the second preset table information.
The robot controls the indicating device to act according to the control parameters, so that the indicating device emits laser beams to be shot on the table C, and light spots appear on the table C, so that a customer can find a position to sit according to the light spots.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (28)

1. A robot control method, comprising:
acquiring an image containing an image of an object to be indicated;
identifying the image to determine the area information of the area of the object image to be indicated in the image;
acquiring control parameters based on the region information;
and controlling the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated.
2. The method of claim 1, wherein the image has a plurality of set regions; and
identifying the image to determine area information of an area where the object image to be indicated is located in the image, including:
identifying the object image to be indicated in the image to obtain the contour information of the object image to be indicated;
and determining the area information of the area of the image of the object to be indicated in the image according to the contour information and the area information of the plurality of set areas.
3. The method according to claim 2, wherein determining the area information of the area where the object image to be indicated is located in the image according to the contour information and the area information of the plurality of set areas comprises:
when the image of the object to be indicated spans at least two set areas based on the contour information and the area information of the set areas, selecting one area from the set areas;
area information of the selected area is acquired.
4. The method of claim 3, wherein selecting a region from the at least two defined regions comprises:
and selecting one area, of the at least two set areas, of which the overlapping rate with the contour information meets a preset condition according to the contour information and the area information of each set area of the at least two set areas.
5. The method according to any one of claims 1 to 4, wherein the region information comprises: the shape of the area outline and the coordinates of the characteristic points of the outline.
6. The method according to any one of claims 1 to 4, wherein obtaining control parameters based on the region information comprises:
acquiring the corresponding relation between preset area information and control parameters;
and acquiring the control parameter corresponding to the area information according to the corresponding relation between the preset area information and the control parameter.
7. The method of claim 6, wherein the preset region information corresponds to the control parameter in a relationship of: the first preset table information includes area information of a plurality of setting areas and control parameters corresponding to the area information.
8. The method of claim 7, wherein the preset table information is stored locally to the robot or at a server connected to the robot.
9. The method according to claim 7, wherein the correspondence between the preset region information and the control parameter is obtained through a tabulation process;
the tabulation process comprises the following steps:
acquiring a test image shot by the robot in a set environment; wherein, the test image has at least two setting areas;
acquiring control parameters of the indicating device when the indicating signal sent by the indicating device presents a visible identification position on the test image and respectively falls into the at least two set areas;
and correspondingly associating the area information of each set area and the control parameters when the visible identification position falls into each set area to obtain the first preset list information.
10. The method according to claim 9, wherein obtaining the control parameters of the indicating device when the indicating signal from the indicating device shows the visible mark position on the test image and respectively falls into the at least two setting areas comprises:
and acquiring control parameters of the indicating device when the visible identification positions of the indicating signals sent by the indicating device on the test image respectively fall into the center of each set area.
11. The method according to claim 1, wherein the area of the image of the object to be indicated is: the image outline area of the object to be indicated; the area information is the outline information of the image of the object to be indicated; and
based on the area information, obtaining control parameters, including:
acquiring position information of a plurality of preset mark points and control parameters corresponding to the position information of each preset mark point;
and determining the control parameters based on the control parameters corresponding to the contour information, the position information of the preset mark points and the position information of each preset mark point.
12. The method of claim 11, wherein determining the control parameter based on the contour information, the position information of the plurality of preset mark points, and the control parameter corresponding to the position information of each preset mark point comprises:
determining target mark points falling into the image outline range of the object to be indicated based on the outline information and the position information of the plurality of preset mark points;
and acquiring control parameters corresponding to the position information of the target mark point.
13. The method of claim 11, wherein determining the control parameter based on the contour information, the position information of the plurality of preset mark points, and the control parameter corresponding to the position information of each preset mark point comprises:
determining a pointing point location based on the contour information;
searching a target mark point which meets the setting requirement with the position of the indicating point from the plurality of preset mark points;
and determining the control parameter corresponding to the position of the indicating point according to the position relation between the target marking point and the position of the indicating point and the control parameter corresponding to the target marking point.
14. The method according to any one of claims 11 to 13, wherein the second preset table contains: the position information of a plurality of preset mark points and the control parameters corresponding to the position information of each preset mark point; and the second preset table information is obtained through the following table making process:
acquiring a test image shot by the robot in a set environment; wherein, a plurality of marking points are arranged on the test image;
acquiring control parameters of the indicating device when visible marks of indicating signals sent by the indicating device on a test image respectively accord with coincidence requirements with all mark points;
and correspondingly associating the position information and the visible identification of each mark point with the control parameter when each mark point meets the coincidence requirement to obtain the second preset list information.
15. The method of claim 1, wherein the indicating means comprises:
the indicating signal generator is used for sending out indicating signals to form a visual mark on the object to be indicated;
the driving device outputs power with at least one degree of freedom to drive the indicating signal generator to act;
the driving device comprises at least one driving motor; the control parameters include drive parameters of the respective drive motors.
16. A robot, comprising:
the acquisition device is used for shooting an image containing an image of an object to be indicated;
the indicating device is used for sending an indicating signal;
the processor is used for acquiring an image containing an image of an object to be indicated; identifying the image to determine the area information of the area of the object image to be indicated in the image; acquiring control parameters based on the region information; and controlling the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visual mark on the object to be indicated.
17. The robot of claim 16, wherein the processor is further configured to:
acquiring the corresponding relation between preset area information and control parameters;
and acquiring the control parameter corresponding to the area information according to the corresponding relation between the preset area information and the control parameter.
18. The robot according to claim 16, wherein the image of the object to be indicated is located in the area of: the area within the image outline range of the object to be indicated; the area information is the outline information of the image of the object to be indicated; and
the processor is further configured to:
acquiring position information of a plurality of preset mark points and control parameters corresponding to the position information of each preset mark point;
and determining the control parameters based on the control parameters corresponding to the contour information, the position information of the preset mark points and the position information of each preset mark point.
19. An information generating method, comprising:
controlling the indicating device to act according to the control parameters so that an indicating signal sent by the indicating device forms a visible mark on the projection surface;
acquiring a first image containing the visual identification image;
identifying the first image to determine the position information of the visual identification image on the first image;
generating a table entry stored in preset table information based on the position information and the control parameters, so that the robot searches the table entry corresponding to the area information from the preset table information in the execution process of the object pointing action, and controlling an indicating device to act according to the control parameters in the table entry to form a visual identifier on the object to be indicated;
the area information is information of an area where the object image to be indicated is located in the second image.
20. The method of claim 19, wherein generating an entry for storing in preset table information based on the location information and the control parameter comprises:
acquiring area information of a plurality of set areas of the first image;
determining a set area where the position information is located in the plurality of set areas;
and associating the area information of the set area where the position information is located with the control parameter, and storing the area information as a table entry into the preset table information.
21. The method of claim 20, wherein generating an entry for storing in preset table information based on the location information and the control parameter, further comprises:
judging whether the position relation between the position information and the central point of the set area where the position information is located meets the coincidence judgment requirement or not;
and under the condition that the coincidence judgment requirement is judged to be met, triggering the action of associating the area information of the set area where the position information is located with the control parameter and storing the area information as a table entry into the preset table information.
22. The method of claim 21, further comprising:
under the condition that the coincidence judgment requirement is not met, adjusting the control parameters until the position relation between the position of the visual identifier corresponding to the indication signal sent by the indicating device in the first image and the central point meets the coincidence judgment condition;
and associating the area information of the set area where the position information is located with the adjusted control parameter, and storing the associated area information as a table entry into the preset table information.
23. The method of claim 19, wherein generating an entry for storing in preset table information based on the location information and the control parameter comprises:
and associating the position information with the control parameter to be used as a table entry to be stored in the preset table information.
24. The method of claim 19, wherein generating an entry for storing in preset table information based on the location information and the control parameter comprises:
acquiring a plurality of preset mark points of the first image;
and under the condition that one preset mark point with the distance from the visual identification image meeting the set requirement exists in the plurality of preset mark points, associating the position information with the control parameter to be used as a table entry to be stored in the preset table information.
25. The method of claim 24, wherein generating an entry for storing in preset table information based on the location information and the control parameter, further comprises:
under the condition that no mark point with the distance between the image of the visual mark and the preset mark point in the plurality of preset mark points meets the set requirement, adjusting the control parameter until the distance between the position of the visual mark corresponding to the indication signal sent by the indicating device in the first image and one preset mark point in the plurality of preset mark points meets the set requirement;
and associating the position information with the adjusted control parameter, and storing the associated position information and the adjusted control parameter as a table entry into the preset table information.
26. A robot, comprising:
the indicating device is used for controlling the action of the indicating device according to the control parameters so that the indicating signal sent by the indicating device forms a visible mark on the projection surface;
the acquisition device is used for acquiring a first image containing the visual identification image;
the processor is used for identifying the first image so as to determine the position information of the visual identification image on the first image; generating a table entry stored in preset table information based on the position information and the control parameters, so that the robot searches the table entry corresponding to the area information from the preset table information in the execution process of the object pointing action, and controlling an indicating device to act according to the control parameters in the table entry to form a visual identifier on the object to be indicated;
the area information is information of an area where the object image to be indicated is located in the second image.
27. The robot of claim 26, wherein the processor is further configured to:
acquiring area information of a plurality of set areas of the first image;
determining a set area where the position information is located in the plurality of set areas;
and associating the area information of the set area where the position information is located with the control parameter, and storing the area information as a table entry into the preset table information.
28. The robot of claim 26, wherein the processor is further configured to:
and associating the position information with the control parameter to be used as a table entry to be stored in the preset table information.
CN201911276560.9A 2019-12-12 2019-12-12 Robot control method, information generation method and robot Pending CN112975940A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911276560.9A CN112975940A (en) 2019-12-12 2019-12-12 Robot control method, information generation method and robot


Publications (1)

Publication Number Publication Date
CN112975940A true CN112975940A (en) 2021-06-18

Family

ID=76331789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911276560.9A Pending CN112975940A (en) 2019-12-12 2019-12-12 Robot control method, information generation method and robot

Country Status (1)

Country Link
CN (1) CN112975940A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622108A (en) * 2012-01-18 2012-08-01 深圳市中科睿成智能科技有限公司 Interactive projecting system and implementation method for same
CN103425409A (en) * 2013-07-30 2013-12-04 华为终端有限公司 Control method and device for projection display
CN104765380A (en) * 2014-01-03 2015-07-08 科沃斯机器人科技(苏州)有限公司 Light spot indication robot and light spot indication method thereof
CN107923979A (en) * 2016-07-04 2018-04-17 索尼半导体解决方案公司 Information processor and information processing method
CN106363631A (en) * 2016-10-14 2017-02-01 广州励丰文化科技股份有限公司 Mechanical arm control table and method based on ultrasonic distance measuring
CN108010011A (en) * 2017-10-23 2018-05-08 鲁班嫡系机器人(深圳)有限公司 A kind of device for helping to confirm the target area on target object and the equipment including the device
CN110293554A (en) * 2018-03-21 2019-10-01 北京猎户星空科技有限公司 Control method, the device and system of robot
CN109773783A (en) * 2018-12-27 2019-05-21 北京宇琪云联科技发展有限公司 A kind of patrol intelligent robot and its police system based on spatial point cloud identification

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113827152A (en) * 2021-08-30 2021-12-24 北京盈迪曼德科技有限公司 Regional state determination method and device and robot
CN113827152B (en) * 2021-08-30 2023-02-17 北京盈迪曼德科技有限公司 Regional state determination method and device and robot

Similar Documents

Publication Publication Date Title
CN108492356A (en) Augmented reality system and its control method
Pagano et al. A vision guided robotic system for flexible gluing process in the footwear industry
CN101479690A (en) Generating position information using a video camera
CN108875804A (en) A kind of data processing method and relevant apparatus based on laser point cloud data
JP4402458B2 (en) Method for determining corresponding points in 3D measurement
CN110017769A (en) Part detection method and system based on industrial robot
CN111258411A (en) User interaction method and device
TWI526879B (en) Interactive system, remote controller and operating method thereof
US10890430B2 (en) Augmented reality-based system with perimeter definition functionality
CN111340890A (en) Camera external reference calibration method, device, equipment and readable storage medium
CN112975940A (en) Robot control method, information generation method and robot
CN107346013B (en) A kind of method and device for calibrating locating base station coordinate system
WO2019080812A1 (en) Apparatus for assisting in determining target region on target object, and device comprising the apparatus
CN112184793A (en) Depth data processing method and device and readable storage medium
JP2003333590A (en) System for generating image at site
JP2002031513A (en) Three-dimensional measuring device
JP2009175012A (en) Measurement device and measurement method
CN106248058B (en) A kind of localization method, apparatus and system for means of transport of storing in a warehouse
CN113450414A (en) Camera calibration method, device, system and storage medium
US11461987B1 (en) Systems and methods for defining, bonding, and editing point cloud data points with primitives
KR20210023431A (en) Position tracking system using a plurality of cameras and method for position tracking using the same
CN106371058A (en) Positioning apparatus and positioning method
CN114833038A (en) Gluing path planning method and system
JP4902564B2 (en) Marker detection and identification device and program thereof
CN114494468A (en) Three-dimensional color point cloud construction method, device and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210618