CN109934867B - Image explanation method, terminal and computer readable storage medium - Google Patents

Image explanation method, terminal and computer readable storage medium

Info

Publication number
CN109934867B
Authority
CN
China
Prior art keywords
image
explained
distance
robot
determining
Prior art date
Legal status
Active
Application number
CN201910181131.7A
Other languages
Chinese (zh)
Other versions
CN109934867A (en)
Inventor
骆磊
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd
Priority to CN201910181131.7A
Publication of CN109934867A
Application granted
Publication of CN109934867B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Manipulator (AREA)

Abstract

Embodiments of the invention relate to the field of robots and disclose an image explanation method, a terminal and a computer-readable storage medium. The image explanation method is applied to a robot and comprises: acquiring the relative position relationship between the display position of an image to be explained and the position of the robot; determining the body movement to be performed according to the relative position relationship; and performing the body movement while explaining the image to be explained. With these embodiments, the robot accompanies its explanation of the displayed image with body movements, which makes the displayed image more engaging to the user and improves how well the user learns the content the robot presents.

Description

Image explanation method, terminal and computer readable storage medium
Technical Field
Embodiments of the invention relate to the field of robots, and in particular to an image explanation method, a terminal and a computer-readable storage medium.
Background
With the continuous progress of science and technology, robots are being applied in more and more fields: reception robots in the service industry, patrol robots in the security field, and production and processing robots in industry. The trend of robots replacing manual labor is therefore now unstoppable.
The inventor has found at least the following problems in the prior art: robots have developed slowly in the teaching and home fields; current teaching robots are aimed only at children and can do little more than tell stories or teach simple words. Because such a robot merely tells stories or plays back teaching content, what it says holds little attraction for the audience, which makes it harder for the audience to understand and learn the content the robot presents.
Disclosure of Invention
An object of embodiments of the present invention is to provide an image explanation method, a terminal and a computer-readable storage medium that enable a robot to accompany its explanation of a displayed image with body movements, thereby making the displayed image more attractive to the user and improving how well the user learns the content the robot presents.
In order to solve the above technical problem, an embodiment of the present invention provides an image explanation method applied to a robot, comprising: acquiring the relative position relationship between the display position of an image to be explained and the position of the robot; determining the body movement to be performed according to the relative position relationship; and performing the body movement while explaining the image to be explained.
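For orientation, these three steps can be pictured as the short Python sketch below. Every function and attribute name in it is a hypothetical placeholder chosen for illustration, not an interface defined by this application; the detailed computations behind each call are given in the embodiments that follow.

```python
# Hypothetical orchestration of the three steps of the image explanation method.
# The robot object and its methods are illustrative placeholders only.

def explain_image(robot, image_to_explain):
    # Step 1: relative position relationship between the display position of the
    # image to be explained and the robot's own position.
    relation = robot.get_relative_position(image_to_explain)
    # Step 2: body movement (e.g. a pointing or moving action) to be performed,
    # determined from that relationship.
    movement = robot.plan_body_movement(relation)
    # Step 3: perform the body movement while explaining the image.
    robot.perform(movement)
    robot.speak(image_to_explain.explanation_text)
```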
An embodiment of the present invention further provides a terminal, comprising: at least one processor; and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the image explanation method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image explanation method described above.
Compared with the prior art, embodiments of the present invention acquire the relative position relationship between the display position of the image to be explained and the position of the robot, and from it determine the body movement (for example a pointing movement or a moving movement) the robot needs to perform for that image. Because the robot produces the corresponding body movement while explaining the image, the image becomes more engaging to the user and the user is more motivated to absorb the information it contains; the added body movements also improve both the interaction between the robot and the user and the efficiency with which the user acquires knowledge.
In addition, determining the body movement to be performed according to the relative position relationship specifically comprises: determining a path for pointing at the image to be explained according to the relative position relationship; and determining, as the body movement to be performed, pointing at the image to be explained along that path. The relative position relationship yields a pointing path, along which the robot can then point at and accurately explain the image to be explained.
In addition, acquiring the relative position relationship between the display position of the image to be explained and the position of the robot specifically comprises: determining relative position information between the display position of the image to which the image to be explained belongs (referred to below as the parent image) and the position of the robot, as first relative position information; determining the position information of the image to be explained within the displayed parent image, as second relative position information; and determining the relative position relationship between the display position of the image to be explained and the position of the robot from the first and second relative position information. Because the robot views the display at an angle, locating the display position of the image to be explained purely by vision can be inaccurate. By first determining where the parent image is displayed relative to the robot and then using the position of the image to be explained within the parent image, the relative position relationship between the display position of the image to be explained and the robot can be determined accurately, which in turn ensures that the robot points at the image to be explained accurately.
In addition, determining the relative position information between the display position of the parent image and the position of the robot, as the first relative position information, specifically comprises: determining the distances from the robot's position to the upper, lower, left and right boundaries of the display area of the parent image, taking the distance to the upper boundary as a first distance, the distance to the lower boundary as a second distance, the distance to the left boundary as a third distance and the distance to the right boundary as a fourth distance; determining the angle subtended at the robot's position by the upper and lower boundaries of the display area as a first included angle; determining the angle subtended at the robot's position by the left and right boundaries of the display area as a second included angle; and taking the first to fourth distances together with the first and second included angles as the first relative position information. These distances and angles are simple and accurate to measure, so the robot can determine the first relative position information quickly, explain the image to be explained promptly, and give the user a better viewing experience.
In addition, determining the position information of the image to be explained within the displayed parent image, as the second relative position information, specifically comprises: acquiring the size information of the display area of the parent image and the position information of the image to be explained within the parent image; from these, determining the distance of the display position of the image to be explained from the left or right boundary and from the upper or lower boundary, taking the distance from the left or right boundary as the lateral position information of the display position of the image to be explained and the distance from the upper or lower boundary as its longitudinal position information; and taking the lateral and longitudinal position information as the second relative position information. Since the display area and the parent image differ only by a scale factor, the position of the image to be explained within the display area can be determined quickly and accurately from simple arithmetic.
In addition, acquiring the size information of the display area of the parent image specifically comprises: determining the distance between the upper and lower boundaries from the first distance, the second distance and the first included angle using trigonometry, and taking the result as first size information of the display area; determining the distance between the left and right boundaries from the third distance, the fourth distance and the second included angle using trigonometry, and taking the result as second size information of the display area; and taking the first and second size information as the size information of the display area of the parent image. The size of the display area can thus be calculated directly from the first relative position information, which is simple and accurate.
In addition, determining the relative position relationship between the display position of the image to be explained and the position of the robot from the first and second relative position information specifically comprises: determining, from the first distance, the second distance, the first size information, the first included angle and the longitudinal position information, the longitudinal deflection angle formed at the robot's position between the display position of the image to be explained and the upper boundary, using trigonometric relations; determining, from the third distance, the fourth distance, the second size information, the second included angle and the lateral position information, the lateral deflection angle formed at the robot's position between the display position of the image to be explained and the left boundary, using trigonometric relations; and taking the lateral and longitudinal deflection angles as the relative position relationship between the display position of the image to be explained and the position of the robot. Once these two angles are determined, the display position of the image to be explained is uniquely fixed and the pointing path can be determined accurately, making it easy for the robot to point at the image to be explained.
In addition, after determining that the body movement to be performed is pointing at the image to be explained along the path, the image explanation method further comprises: determining third relative position information between the display position of the next image to be explained and the display position of the current image to be explained; determining the relative position relationship between the display position of the next image to be explained and the position of the robot from the third relative position information; determining the next body movement to be performed from that relative position relationship; and performing the next body movement while explaining the next image to be explained. The third relative position information lets the robot determine the next body movement quickly, so its movements stay coherent, which further helps the user learn.
Drawings
One or more embodiments are illustrated by way of example in the corresponding figures of the accompanying drawings; like reference numerals denote similar elements, and the figures are not to scale unless otherwise stated.
Fig. 1 is a flowchart of an image explanation method according to a first embodiment of the present application;
Fig. 2 is a schematic diagram of a parent image and an image to be explained in the image explanation method according to the first embodiment of the present application;
Fig. 3 is a schematic top view of the robot's viewing angle in the image explanation method according to the first embodiment of the present application;
Fig. 4 is a flowchart of acquiring the relative position relationship in the image explanation method according to the first embodiment of the present application;
Fig. 5 is a schematic top view of the robot in the image explanation method according to the first embodiment of the present application;
Fig. 6 is a flowchart of an image explanation method according to a second embodiment of the present application;
Fig. 7 is a schematic structural diagram of a terminal according to a third embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to give the reader a better understanding of the present application; however, the technical solutions claimed in the present application can still be implemented without these technical details, or with various changes and modifications based on the following embodiments.
A first embodiment of the present invention relates to an image explanation method. The method is applied to a robot, which may be, for example, a teaching robot or a housekeeping robot for the home. The specific flow of the image explanation method is shown in Fig. 1 and comprises:
step 101: and acquiring the relative position relation between the release position of the image to be explained and the position of the robot.
Specifically, the image to be explained is part of the content of the image to which it belongs, the parent image; as shown in Fig. 2, image B is the parent image and image A is the image to be explained. The robot may send the parent image to a display screen in real time so that the screen displays it, or the parent image may be projected onto a screen by a display device such as a projector. The robot may be communicatively connected to the display device (a display screen, a projector, and so on), or may send the parent image to the display device directly over a wireless link, for example using Miracast, so that the display device can present it in real time. The parent image may also be stored in the local memory of the display device or obtained from another device (for example a cloud service, a server or a local computer). The parent image may be a PPT slide, a picture, a frame of a video stream, and so on.
In one implementation, there are several ways to obtain the relative position relationship between the display position of the image to be explained and the position of the robot. For example, the robot can identify the image to be explained with its image-recognition function and, combined with its ranging function, determine the display position of the image to be explained and hence its relationship to the robot's own position.
However, when the robot is very close to the display screen, its viewing angle onto the screen can be strongly skewed. In the top view of Fig. 3, MN is the display screen seen from above, A' is the image to be explained, and the robot's field of view is shown by the dotted lines. The display position of the image to be explained is then determined inaccurately, the influence of the surroundings on the robot's vision is unpredictable (strong light, occlusion, and so on), and searching for the image to be explained on the screen by vision alone cannot be 100% reliable. This embodiment therefore also provides another way of acquiring the relative position relationship between the display position of the image to be explained and the position of the robot.
In another implementation, the relative position relationship is acquired as shown in Fig. 4, through the following sub-steps.
Step 1011: determine the relative position information between the display position of the parent image and the position of the robot, as first relative position information.
Determine the distances from the robot's position to the upper, lower, left and right boundaries of the display area of the parent image, taking the distance to the upper boundary as a first distance, the distance to the lower boundary as a second distance, the distance to the left boundary as a third distance and the distance to the right boundary as a fourth distance. Determine the angle subtended at the robot's position by the upper and lower boundaries of the display area as a first included angle, and the angle subtended by the left and right boundaries as a second included angle. Take the first to fourth distances together with the first and second included angles as the first relative position information.
Specifically, the robot does not know in advance where the parent image is displayed; it can determine the first relative position information of that display position relative to itself by ranging and by vision. The first relative position information comprises the first distance, the second distance, the third distance, the fourth distance, the first included angle and the second included angle. The determination of the third distance, the fourth distance and the second included angle is taken as an example below.
For example, suppose the display area of the parent image is rectangular and perpendicular to the plane on which the robot stands, and take the boundaries perpendicular to that plane as the left and right boundaries. Looking down on the robot and the display area, as shown in Fig. 5, point M is the vertex of the left boundary, N is the vertex of the right boundary, O is the position of the robot, and A' is the position of the image to be explained in the top view. Using its ranging sensor, the robot can determine the third distance d3 = |OM| and the fourth distance d4 = |ON|, and it can measure the second included angle α formed between OM and ON.
The first distance, the second distance and the first included angle are measured in a similar way and are not described again here.
Step 1012: determine the position information of the image to be explained within the displayed parent image, as second relative position information.
In one implementation, the size information of the display area of the parent image is acquired, together with the position information of the image to be explained within the parent image. From these, the distance of the display position of the image to be explained from the left or right boundary and from the upper or lower boundary is determined; the distance from the left or right boundary is taken as the lateral position information of the display position of the image to be explained, and the distance from the upper or lower boundary as its longitudinal position information. The lateral and longitudinal position information together form the second relative position information.
Specifically, the robot may obtain the size information of the display area of the parent image from the display device. For example, if the display device is a television, the size of the television screen is the size of the display area, so the robot simply reads the television's size information. The robot can also obtain the position information of the image to be explained within the parent image: if the robot itself sends the parent image to the display device for display, it already knows this position information directly; if the parent image is stored on the display device, the robot can obtain the position information from the display device.
It should be noted that the size information of the display area of the parent image comprises first size information and second size information, where the first size information is the distance between the upper and lower boundaries of the display area and the second size information is the distance between its left and right boundaries.
Because the display area of the parent image and the parent image itself differ only by a scale factor, the distance of the display position of the image to be explained from the left or right boundary and from the upper or lower boundary can be determined from the size information of the display area and the position information of the image to be explained within the parent image; the distance from the left or right boundary is taken as the lateral position information of the display position of the image to be explained, and the distance from the upper or lower boundary as its longitudinal position information.
For example, as shown in Fig. 2, the position of the image A to be explained within the parent image B is: X1% of the width from the left boundary of B, X2% from the right boundary, Y1% of the height from the upper boundary and Y2% from the lower boundary, where X1% + X2% = 100% and Y1% + Y2% = 100%. From these ratios and the size information of the display area, the position of the display area of image A within the display area of image B can be determined.
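As a minimal Python sketch of this scaling step (the function name, parameter names and the concrete numbers are illustrative and not taken from the application), the lateral and longitudinal position information can be computed from the ratios and the display-area size as follows:

```python
def position_in_display_area(area_width, area_height, x1_percent, y1_percent):
    """Map the relative position of the image to be explained inside its parent
    image onto physical offsets inside the display area.

    area_width  -- distance between left and right boundaries (second size information)
    area_height -- distance between upper and lower boundaries (first size information)
    x1_percent  -- X1, offset from the left boundary of the parent image, in percent
    y1_percent  -- Y1, offset from the upper boundary of the parent image, in percent
    """
    lateral = area_width * x1_percent / 100.0        # lateral position information
    longitudinal = area_height * y1_percent / 100.0  # longitudinal position information
    return lateral, longitudinal

# Example: image A sits 30% from the left and 20% from the top of image B,
# and the display area measures 2.0 m by 1.2 m.
print(position_in_display_area(2.0, 1.2, 30, 20))  # -> (0.6, 0.24)
```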
Step 1013: determine the relative position relationship between the display position of the image to be explained and the position of the robot from the first relative position information and the second relative position information.
In one implementation, the longitudinal deflection angle formed at the robot's position between the display position of the image to be explained and the upper boundary is determined from the first distance, the second distance, the first size information, the first included angle and the longitudinal position information using trigonometric relations; the lateral deflection angle formed at the robot's position between the display position of the image to be explained and the left boundary is determined from the third distance, the fourth distance, the second size information, the second included angle and the lateral position information using trigonometric relations; and the lateral and longitudinal deflection angles are taken as the relative position relationship between the display position of the image to be explained and the position of the robot.
The determination of the lateral deflection angle is described in detail below with reference to Fig. 5.
Given the third distance d3, the fourth distance d4, the second size information |MN| and the second included angle α, the angle ∠ONM can be calculated using trigonometric functions. MA' is the lateral position information; since the ratio MA'/A'N is known, the lengths of MA' and A'N can be obtained from this ratio and the length of MN. With ∠ONM determined, the lateral deflection angle β can then be solved in the triangle OA'N; alternatively, the angle α − β can be used as the lateral deflection angle.
The longitudinal deflection angle is solved in essentially the same way as the lateral deflection angle and is not described in detail here.
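The lateral-angle computation just described can be written out concretely as the following sketch of the Fig. 5 geometry. It assumes the law of cosines is the trigonometric relation used; the function name, argument names and example numbers are illustrative only.

```python
import math

def lateral_deflection(d3, d4, mn, alpha, ma):
    """Solve the top-view triangles of Fig. 5 (O = robot, M/N = vertices of the
    left/right boundaries, A' = lateral display position of the image to be explained).

    d3, d4 -- third and fourth distances |OM| and |ON|
    mn     -- second size information |MN| (left-to-right width of the display area)
    alpha  -- second included angle, angle MON, in radians
    ma     -- lateral position information |MA'| (offset of A' from the left boundary)

    Returns (beta, alpha - beta, oa): beta is the angle A'ON, alpha - beta is the
    lateral deflection measured from the left boundary OM, and oa = |OA'| is the
    top-view straight-line distance from the robot to the display position.
    """
    # Angle ONM in triangle OMN, by the law of cosines.
    angle_onm = math.acos((d4 ** 2 + mn ** 2 - d3 ** 2) / (2 * d4 * mn))
    an = mn - ma  # |A'N|
    # |OA'| in triangle OA'N; the included angle at N equals angle ONM.
    oa = math.sqrt(d4 ** 2 + an ** 2 - 2 * d4 * an * math.cos(angle_onm))
    # beta = angle A'ON, again by the law of cosines.
    beta = math.acos((d4 ** 2 + oa ** 2 - an ** 2) / (2 * d4 * oa))
    return beta, alpha - beta, oa

# Illustrative numbers: robot 1.5 m from M and 2.0 m from N, a 2.0 m wide
# display area (alpha chosen consistently), A' 0.6 m from the left boundary.
alpha = math.acos((1.5 ** 2 + 2.0 ** 2 - 2.0 ** 2) / (2 * 1.5 * 2.0))
print(lateral_deflection(1.5, 2.0, 2.0, alpha, 0.6))
```

The longitudinal deflection angle follows the same pattern with the first distance, second distance, first size information, first included angle and longitudinal position information, and the returned |OA'| relates to the straight-line pointing distance discussed under step 102 below.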
Step 102: determine the body movement to be performed according to the relative position relationship.
In one implementation, a path for pointing at the image to be explained is determined from the relative position relationship, and the body movement to be performed is determined to be pointing at the image to be explained along that path.
Specifically, from the lateral and longitudinal deflection angles, the straight-line distance from the display position of the image to be explained to the robot's position can be determined and used as the pointing path. Alternatively, the pointing path may be for the robot to move toward the display position according to the lateral deflection angle; that is, as shown in Fig. 5, the robot turns by the angle β, moves until it reaches the lateral position of A', and then points at the longitudinal position of the image to be explained.
Step 103: perform the body movement and explain the image to be explained.
Specifically, if the pointing path is the straight-line distance from the display position of the image to be explained to the robot's position, the robot can indicate the image to be explained by steering the spot of a laser pointer onto it, accompanied by a spoken explanation.
The robot can also indicate the image to be explained by moving to a new position and controlling the direction its arm points, again accompanying this with a spoken explanation.
Compared with the prior art, therefore, this embodiment determines the body movement the robot needs to perform from the relative position relationship between the display position of the image to be explained and the robot's position; the body movements produced while explaining make the displayed image more engaging, motivate the user to absorb the information it contains, and improve both robot-user interaction and the efficiency with which the user acquires knowledge.
A second embodiment of the invention relates to an image explanation method. The second embodiment is a further refinement of the first; the main improvement is that, after determining that the body movement to be performed is pointing at the image to be explained along the path, third relative position information between the display position of the next image to be explained and the display position of the current image to be explained is determined. The specific flow is shown in Fig. 6.
Step 201: acquire the relative position relationship between the display position of the image to be explained and the position of the robot.
It should be noted that the size information of the display area of the parent image can also be obtained from the first relative position information. Specifically, the distance between the upper and lower boundaries is determined from the first distance, the second distance and the first included angle using trigonometry and taken as the first size information of the display area; the distance between the left and right boundaries is determined from the third distance, the fourth distance and the second included angle using trigonometry and taken as the second size information; and the first and second size information together form the size information of the display area of the parent image.
As shown in Fig. 5, knowing the third distance d3, the fourth distance d4 and the second included angle α, the length of MN, that is, the distance between the left and right boundaries of the display area of the parent image, can be determined using a trigonometric function (the law of cosines), and this length is taken as the second size information. The first size information is obtained in essentially the same way and is not described again here.
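A minimal sketch of this size computation, assuming the trigonometric function in question is the law of cosines (the function name and the numbers are illustrative):

```python
import math

def boundary_length(d_to_one_end, d_to_other_end, included_angle):
    """Length of one side of the display area (e.g. |MN| in Fig. 5), given the
    distances from the robot to its two end points and the included angle
    measured between those two lines of sight (law of cosines)."""
    return math.sqrt(d_to_one_end ** 2 + d_to_other_end ** 2
                     - 2 * d_to_one_end * d_to_other_end * math.cos(included_angle))

# Second size information from the third distance, fourth distance and second
# included angle; the first size information is obtained the same way from the
# first distance, second distance and first included angle.
print(boundary_length(1.5, 2.0, 1.186))  # roughly 2.0 with these illustrative values
```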
Step 202: determine the body movement to be performed according to the relative position relationship.
This step is substantially the same as step 102 of the first embodiment and is not described again here.
Step 203: perform the body movement and explain the image to be explained.
This step is substantially the same as step 103 of the first embodiment and is not described again here.
Step 204: determine third relative position information between the display position of the next image to be explained and the display position of the current image to be explained.
Specifically, the position of the display area of the next image to be explained within the display area of the parent image is determined from the position information of the next image to be explained within its parent image; then, from the relative position relationship between the display position of the current image to be explained and the robot, the third relative position information between the display position of the next image to be explained and the display position of the current image to be explained can be determined.
Step 205: determine the relative position relationship between the display position of the next image to be explained and the position of the robot from the third relative position information.
Specifically, from the relative position relationship between the display position of the current image to be explained and the position of the robot, together with the third relative position information between the display positions of the next and current images to be explained, the relative position relationship between the display position of the next image to be explained and the position of the robot can be determined.
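One possible way to realize steps 204 and 205 in the lateral direction is sketched below. It assumes the lateral component of the third relative position information is simply the offset between the two display positions, obtained from the images' relative positions within the parent image and the display-area width; all names and numbers are illustrative.

```python
def next_lateral_position(mn, current_x_percent, next_x_percent, current_ma):
    """Lateral part of steps 204-205.

    mn                -- width of the display area (second size information)
    current_x_percent -- X-position of the current image to be explained inside
                         the parent image, in percent from the left boundary
    next_x_percent    -- X-position of the next image to be explained, likewise
    current_ma        -- lateral position information |MA'| of the current image

    Returns (offset, next_ma): the lateral component of the third relative
    position information, and the lateral position information of the next image,
    from which its deflection angles are recomputed as in the first embodiment.
    """
    offset = mn * (next_x_percent - current_x_percent) / 100.0
    next_ma = current_ma + offset
    return offset, next_ma

print(next_lateral_position(2.0, 30, 55, 0.6))  # -> (0.5, 1.1)
```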
Step 206: determine the next body movement to be performed from the relative position relationship between the display position of the next image to be explained and the position of the robot.
This step is substantially the same as step 202 and is not described again here.
Step 207: perform the next body movement and explain the next image to be explained.
This step is substantially the same as step 203 and is not described again here.
With the image explanation method of this embodiment, the third relative position information between the display positions of the next and current images to be explained lets the robot quickly determine the relative position relationship between the next display position and its own position, and hence the next body movement, so that its movements stay coherent and its interaction with the user improves.
The steps of the above methods are divided as described for clarity; in implementation they may be combined into a single step, or a step may be split into several, and as long as the same logical relationship is preserved such variants fall within the protection scope of this patent. Adding insignificant modifications to, or introducing insignificant designs into, an algorithm or flow without changing its core design also falls within the protection scope of this patent.
A third embodiment of the present invention relates to a terminal. As shown in Fig. 7, the terminal 30 comprises: at least one processor 301; and a memory 302 communicatively connected to the at least one processor 301. The memory 302 stores instructions executable by the at least one processor 301, and the instructions are executed by the at least one processor 301 so that the at least one processor 301 can perform the image explanation method of the first or second embodiment.
The memory 302 and the processor 301 are connected by a bus. The bus may comprise any number of interconnected buses and bridges linking the circuits of the one or more processors 301 and the memory 302. The bus may also link various other circuits, such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or several elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 301 is transmitted over a wireless medium through an antenna, which also receives data and passes it to the processor 301.
The processor 301 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management and other control functions. The memory 302 may be used to store data the processor 301 uses when performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image explanation method of the first or second embodiment.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments can be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, and so on) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Those of ordinary skill in the art will understand that the above embodiments are specific examples of carrying out the invention, and that various changes in form and detail may be made to them in practice without departing from the spirit and scope of the invention.

Claims (9)

1. An image explanation method, applied to a robot, comprising:
determining relative position information between a display position of an image to which an image to be explained belongs and a position of the robot, as first relative position information;
determining position information of the image to be explained within the displayed image to which it belongs, as second relative position information;
determining a relative position relationship between a display position of the image to be explained and the position of the robot according to the first relative position information and the second relative position information;
determining a body movement to be performed according to the relative position relationship; and
performing the body movement and explaining the image to be explained.
2. The image explanation method according to claim 1, wherein determining the body movement to be performed according to the relative position relationship specifically comprises:
determining a path for pointing at the image to be explained according to the relative position relationship; and
determining, as the body movement to be performed, pointing at the image to be explained along the path.
3. The image explanation method according to claim 1, wherein determining the relative position information between the display position of the image to which the image to be explained belongs and the position of the robot, as the first relative position information, specifically comprises:
determining distances from the position of the robot to an upper boundary, a lower boundary, a left boundary and a right boundary of a display area of the image to which the image to be explained belongs, wherein the distance to the upper boundary is taken as a first distance, the distance to the lower boundary as a second distance, the distance to the left boundary as a third distance and the distance to the right boundary as a fourth distance;
determining an angle subtended at the position of the robot by the upper boundary and the lower boundary of the display area as a first included angle;
determining an angle subtended at the position of the robot by the left boundary and the right boundary of the display area as a second included angle; and
taking the first distance, the second distance, the third distance, the fourth distance, the first included angle and the second included angle as the first relative position information.
4. The image explanation method according to claim 3, wherein determining the position information of the image to be explained within the displayed image to which it belongs, as the second relative position information, specifically comprises:
acquiring size information of the display area of the image to which the image to be explained belongs, and acquiring position information of the image to be explained within the image to which it belongs;
determining, according to the size information of the display area and the position information of the image to be explained within the image to which it belongs, a distance of the display position of the image to be explained from the left boundary or the right boundary and a distance from the upper boundary or the lower boundary, wherein the distance from the left boundary or the right boundary is taken as lateral position information of the display position of the image to be explained, and the distance from the upper boundary or the lower boundary as longitudinal position information of the display position of the image to be explained; and
taking the lateral position information and the longitudinal position information as the second relative position information.
5. The image explanation method according to claim 4, wherein acquiring the size information of the display area of the image to which the image to be explained belongs comprises:
determining a distance between the upper boundary and the lower boundary according to the first distance, the second distance and the first included angle using a trigonometric function, and taking the obtained distance as first size information of the display area;
determining a distance between the left boundary and the right boundary according to the third distance, the fourth distance and the second included angle using a trigonometric function, and taking the obtained distance as second size information of the display area; and
taking the first size information and the second size information as the size information of the display area of the image to which the image to be explained belongs.
6. The image explanation method according to claim 5, wherein determining the relative position relationship between the display position of the image to be explained and the position of the robot according to the first relative position information and the second relative position information comprises:
determining, according to the first distance, the second distance, the first size information, the first included angle and the longitudinal position information, a longitudinal deflection angle formed at the position of the robot between the display position of the image to be explained and the upper boundary, using trigonometric relations;
determining, according to the third distance, the fourth distance, the second size information, the second included angle and the lateral position information, a lateral deflection angle formed at the position of the robot between the display position of the image to be explained and the left boundary, using trigonometric relations; and
taking the lateral deflection angle and the longitudinal deflection angle as the relative position relationship between the display position of the image to be explained and the position of the robot.
7. The image explanation method according to claim 2, wherein, after determining that the body movement to be performed is pointing at the image to be explained along the path, the image explanation method further comprises:
determining third relative position information between a display position of a next image to be explained and the display position of the image to be explained;
determining a relative position relationship between the display position of the next image to be explained and the position of the robot according to the third relative position information;
determining a next body movement to be performed according to the relative position relationship between the display position of the next image to be explained and the position of the robot; and
performing the next body movement and explaining the next image to be explained.
8. A terminal, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the image explanation method according to any one of claims 1 to 7.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the image explanation method according to any one of claims 1 to 7.
CN201910181131.7A 2019-03-11 2019-03-11 Image explanation method, terminal and computer readable storage medium Active CN109934867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910181131.7A CN109934867B (en) 2019-03-11 2019-03-11 Image explanation method, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910181131.7A CN109934867B (en) 2019-03-11 2019-03-11 Image explanation method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109934867A CN109934867A (en) 2019-06-25
CN109934867B true CN109934867B (en) 2021-11-09

Family

ID=66986796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910181131.7A Active CN109934867B (en) 2019-03-11 2019-03-11 Image explanation method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109934867B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1481772A2 (en) * 2003-05-29 2004-12-01 Fanuc Ltd Robot system for controlling the movements of a robot utilizing a visual sensor
CN105666504A (en) * 2016-04-20 2016-06-15 广州蓝海机器人系统有限公司 Robot with professional explaining function
CN105894873A (en) * 2016-06-01 2016-08-24 北京光年无限科技有限公司 Child teaching method and device orienting to intelligent robot
CN107223082A (en) * 2017-04-21 2017-09-29 深圳前海达闼云端智能科技有限公司 A kind of robot control method, robot device and robot device
CN107553505A (en) * 2017-10-13 2018-01-09 刘杜 Autonomous introduction system platform robot and explanation method
CN109129507A (en) * 2018-09-10 2019-01-04 北京联合大学 A kind of medium intelligent introduction robot and explanation method and system


Also Published As

Publication number Publication date
CN109934867A (en) 2019-06-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210207

Address after: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.