CN114274184B - Logistics robot man-machine interaction method and system based on projection guidance - Google Patents


Info

Publication number
CN114274184B
CN114274184B (application CN202111554660.0A)
Authority
CN
China
Prior art keywords
projection area, projection, working, article
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111554660.0A
Other languages
Chinese (zh)
Other versions
CN114274184A
Inventor
苏瑞
衡进
孙贇
姚郁巍
Current Assignee
Chongqing Terminus Technology Co Ltd
Original Assignee
Chongqing Terminus Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Terminus Technology Co Ltd filed Critical Chongqing Terminus Technology Co Ltd
Priority to CN202111554660.0A priority Critical patent/CN114274184B/en
Publication of CN114274184A publication Critical patent/CN114274184A/en
Application granted granted Critical
Publication of CN114274184B publication Critical patent/CN114274184B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Manipulator (AREA)

Abstract

The application provides a logistics robot man-machine interaction method and system based on projection guidance, belonging to the technical field of robots. The method comprises the following steps: projecting the working content into a projection area using the projection mode matched to the material of that area; identifying the article or identifier in the projection area; and sending feedback information to the scheduling server to complete the man-machine interaction process. The system comprises an instruction receiving and transmitting module, a camera, an image processing module and a projection module. By guiding the user through projection to communicate with the robot, the application solves the problem that projected text may be illegible because projection areas differ in material, and achieves a smooth interaction process between the logistics robot and the user.

Description

Logistics robot man-machine interaction method and system based on projection guidance
Technical Field
The application belongs to the technical field of robots, and particularly relates to a logistics robot man-machine interaction method and system based on projection guidance.
Background
Logistics robots in the prior art often help people receive and collect express deliveries, bringing great convenience to daily life and reducing the operating costs of logistics enterprises. However, as robot deployments deepen, the difficulty of the work keeps increasing: for example, when collecting an express delivery the robot may need to identify whether the article is prohibited, disinfect the parcel, or take an elevator, so robots and users must carry out effective man-machine interaction.
In the prior art, a touch screen is usually mounted on the robot for man-machine interaction, but this approach is unsuitable for a logistics robot: the robot must move about on ordinary road surfaces for long periods every day, the touch screen is fragile, and the cost of repeatedly replacing it is high, so a touch-screen interface is not a practical and feasible method. Another conventional man-machine interaction method is voice dialogue, yet it is also impractical for a logistics robot, because the places where robot and user converse, such as shopping malls and roads, often suffer heavy noise interference that prevents the robot from recognizing the user's speech, so man-machine voice dialogue is likewise not a viable method in practical application.
In view of the lack of a feasible man-machine interaction scheme for logistics robots in the prior art, no effective solution has yet been proposed.
Disclosure of Invention
In view of the above technical problems, the application provides a logistics robot man-machine interaction method and system based on projection guidance, which guide the user through projection to carry out dialogue communication with the robot.
In a first aspect, the application provides a logistics robot man-machine interaction method based on projection guidance, which completes the daily work of the logistics robot through one or more of the following man-machine interaction processes, comprising the steps of:
Step S1: receiving a working instruction transmitted by the scheduling server, wherein the working instruction comprises working content and feedback information;
Step S2: collecting data of a projection area with a camera, according to the working content;
Step S3: judging the size of the projection area according to the working content;
Step S4: if the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size, and step S6 is performed;
Step S5: if the projection area does not meet the area size specified by the working content, the feedback information is that the projection area does not meet the specified size, and step S12 is performed;
Step S6: judging the material of the projection area according to the working content, to obtain a predicted projection area material;
Step S7: looking up the projection mode corresponding to the predicted projection area material in a comparison table, and projecting the working content into the projection area in that projection mode;
Step S8: prompting, through the content projected in the projection area, that the article or identifier to be identified be placed into the projection area;
Step S9: identifying the article or identifier in the projection area;
Step S10: if the article or identifier matches the predefined article or identifier, the feedback information is that it matches the predefined article or identifier, a picture of the article or identifier in the projection area is saved, and step S12 is performed;
Step S11: if the article or identifier does not match the predefined article or identifier, the feedback information is that it does not match the predefined article or identifier, a picture of the article in the projection area is saved, and step S12 is performed;
Step S12: sending the feedback information to the scheduling server to complete the man-machine interaction process.
Between step S10 and step S12, or between step S11 and step S12, the method further comprises:
judging the size of a working area according to the working content;
if the robot's working area does not meet the size specified by the working content, projecting into the projection area a prompt that the working area does not meet the requirement, with the camera feeding back images of the projection area in real time, until the working area meets the specified size; the specified work is then completed according to the working content, and the feedback information is that the specified work is completed;
if the robot's working area meets the size specified by the working content, the specified work is completed according to the working content, and the feedback information is that the specified work is completed.
The working content comprises projected text and an area size; the area size comprises the projection area size and/or the working area size.
The comparison table records various materials and the projection mode suited to each material; a projection mode comprises a projection brightness setting and a projected text color setting.
Judging the material of the projection area to obtain the predicted projection area material comprises the following steps:
performing data enhancement on the projection area image to obtain a plurality of data-enhanced projection area images;
inputting the data-enhanced projection area images into a pre-trained material recognition model to obtain, for each image, the probability of each material;
summing the probabilities of the plurality of projection area images per material type to obtain a probability sum for each material;
taking the maximum of the probability sums; the material corresponding to the maximum is the predicted projection area material.
Performing data enhancement on the projection area image to obtain a plurality of data-enhanced projection area images comprises the following steps:
randomly rotating the projection area image;
adding a random brightness offset and a random contrast offset to the rotated image to obtain an offset image;
randomly flipping the offset image to obtain a single data-enhanced image;
repeating the above steps a preset number of times, thereby obtaining a plurality of data-enhanced projection area images.
Obtaining the pre-trained material recognition model comprises the following steps:
collecting training images with material labels;
training a deep neural network on the training images with material labels to obtain the pre-trained material recognition model.
Identifying an article or identifier in the projection area comprises:
acquiring an image of the article in the projection area;
inputting the image of the article into a trained deep convolutional neural network model;
identifying the image with the deep convolutional neural network model to obtain the type, weight and number of the articles to be identified.
The training process of the trained deep convolutional neural network model is as follows:
acquiring article training images labeled with article type, weight and count;
training the deep convolutional neural network model on the labeled article training images to obtain the trained deep convolutional neural network model.
In a second aspect, the application proposes a logistics robot man-machine interaction system based on projection guidance, comprising:
an instruction receiving and transmitting module, a camera, an image processing module and a projection module;
the camera and the projection module are arranged around the logistics robot, and the instruction receiving and transmitting module and the image processing module are arranged inside the logistics robot;
the instruction receiving and transmitting module, the camera, the image processing module and the projection module are connected in sequence.
The instruction receiving and transmitting module is used for receiving the working instruction transmitted by the scheduling server and transmitting feedback information to the scheduling server, the working instruction comprising working content and feedback information.
The camera is used for collecting data of the projection area and sending the collected images to the image processing module.
The image processing module is used for judging the size of the projection area according to the working content (if the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size; otherwise, the feedback information is that it does not); for judging the material of the projection area according to the working content to obtain the predicted projection area material; and for identifying the article or identifier in the projection area (if the article or identifier matches the predefined article or identifier, the feedback information is a match and a picture of the article or identifier in the projection area is saved; otherwise the feedback information is a mismatch and a picture of the article is saved).
The projection module is used for looking up the projection mode corresponding to the predicted projection area material in the comparison table, and projecting the working content into the projection area in that projection mode.
Beneficial technical effects:
The application provides a logistics robot man-machine interaction method and system based on projection guidance that guide the user through projection to communicate with the robot. It overcomes the problem that projected text may be illegible because projection areas differ in material, and enables the logistics robot to complete its daily work through smooth man-machine interaction.
Drawings
FIG. 1 is a flow chart of a logistic robot man-machine interaction method based on projection guidance according to an embodiment of the application;
FIG. 2 is a flow chart of a designated work area according to an embodiment of the present application;
FIG. 3 is a flow chart of obtaining the predicted projection area material according to an embodiment of the present application;
FIG. 4 is a flow chart of obtaining a plurality of data-enhanced projection area images according to an embodiment of the present application;
FIG. 5 is a flowchart of a training process of a material recognition model according to an embodiment of the present application;
FIG. 6 is a flow chart of an identification process for identifying items or markers in a projection area in accordance with an embodiment of the present application;
FIG. 7 is a flow chart of a training process of a deep convolutional neural network model in an embodiment of the present application;
FIG. 8 is a schematic block diagram of a logistic robot-computer interaction system based on projection guidance according to an embodiment of the application;
FIG. 9 is a schematic diagram of an identified item being a contraband item in accordance with an embodiment of the present application;
FIG. 10 is a schematic view of a sterilization process according to an embodiment of the present application;
FIG. 11 is a schematic view of a projected wall of a robot according to an embodiment of the present application;
wherein, 1 denotes the upper portion and 2 denotes the base.
The specific embodiments are as follows:
The application is further described below with reference to the accompanying drawings. The following examples serve only to illustrate the technical solution of the application more clearly and are not intended to limit its scope of protection.
In a first aspect, the application provides a logistics robot man-machine interaction method based on projection guidance, which completes the daily work of the logistics robot through one or more of the following man-machine interaction processes, as shown in fig. 1, comprising the steps of:
Step S1: receiving a working instruction transmitted by the scheduling server, wherein the working instruction comprises working content and feedback information;
Step S2: collecting data of a projection area with a camera, according to the working content;
Step S3: judging the size of the projection area according to the working content;
Step S4: if the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size, and step S6 is performed;
Step S5: if the projection area does not meet the area size specified by the working content, the feedback information is that the projection area does not meet the specified size, and step S12 is performed;
Step S6: judging the material of the projection area according to the working content, to obtain a predicted projection area material;
Step S7: looking up the projection mode corresponding to the predicted projection area material in a comparison table, and projecting the working content into the projection area in that projection mode;
Step S8: prompting, through the content projected in the projection area, that the article or identifier to be identified be placed into the projection area;
Step S9: identifying the article or identifier in the projection area;
Step S10: if the article or identifier matches the predefined article or identifier, the feedback information is that it matches the predefined article or identifier, a picture of the article or identifier in the projection area is saved, and step S12 is performed;
Step S11: if the article or identifier does not match the predefined article or identifier, the feedback information is that it does not match the predefined article or identifier, a picture of the article in the projection area is saved, and step S12 is performed;
Step S12: sending the feedback information to the scheduling server to complete the man-machine interaction process.
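As a minimal illustration, the branching among the steps above can be sketched in Python; the function and the feedback strings are illustrative simplifications, not an implementation taken from the patent:

```python
# Hedged sketch of the S3-S12 control flow; names and messages are assumed.
def interaction_step(area_ok, item_matches):
    """Return the feedback string produced by steps S3 to S12."""
    if not area_ok:                      # steps S3/S5: area too small
        return "projection area does not meet the specified size"
    # steps S6-S9 (material prediction, projection, identification) run here
    if item_matches:                     # step S10
        return "matches the predefined article or identifier"
    return "does not match the predefined article or identifier"   # step S11
```

In every branch the feedback string is what step S12 finally sends to the scheduling server.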
Between step S10 and step S12, or between step S11 and step S12, as shown in fig. 2, the method further comprises:
Step S100: judging the size of a working area according to the working content;
Step S101: if the robot's working area does not meet the size specified by the working content, projecting into the projection area a prompt that the working area does not meet the requirement, with the camera feeding back images of the projection area in real time, until the working area meets the specified size; the specified work is then completed according to the working content, and the feedback information is that the specified work is completed;
Step S102: if the robot's working area meets the size specified by the working content, the specified work is completed according to the working content, and the feedback information is that the specified work is completed.
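The real-time feedback loop of steps S100 to S102 can be sketched as follows; `capture` and `area_ok` are hypothetical callables standing in for the camera and the size check, not APIs from the patent:

```python
import itertools

def wait_for_clear_area(capture, area_ok, max_polls=1000):
    """Re-capture projection-area images until the working area meets the
    specified size (steps S100-S102); returns the number of polls used."""
    for n in itertools.count(1):
        frame = capture()                       # camera feedback, in real time
        if area_ok(frame) or n >= max_polls:    # bail out after max_polls
            return n
```

Once the loop exits, the specified work (e.g. disinfection) proceeds and the completion feedback is sent.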
The working content comprises projected text and an area size; the area size comprises the projection area size and/or the working area size.
The comparison table records various materials and the projection mode suited to each material; a projection mode comprises a projection brightness setting and a projected text color setting.
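As an illustration, such a comparison table can be represented as a simple lookup structure; the materials and settings below are assumptions for the sketch, not values specified by the patent:

```python
# Illustrative comparison table; entries are assumed, not from the patent.
PROJECTION_MODES = {
    "cement":  {"brightness": 800, "text_color": "red"},
    "asphalt": {"brightness": 900, "text_color": "white"},
    "tile":    {"brightness": 600, "text_color": "black"},
}

def lookup_mode(material):
    """Return the projection mode for a predicted material (step S7),
    falling back to the cement setting for an unknown material."""
    return PROJECTION_MODES.get(material, PROJECTION_MODES["cement"])
```

The projection module would then apply the returned brightness and text color when projecting the working content.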
Judging the material of the projection area to obtain the predicted projection area material, as shown in fig. 3, comprises the following steps:
Step S6.1: performing data enhancement on the projection area image to obtain a plurality of data-enhanced projection area images;
Step S6.2: inputting the data-enhanced projection area images into a pre-trained material recognition model to obtain, for each image, the probability of each material;
Step S6.3: summing the probabilities of the plurality of projection area images per material type to obtain a probability sum for each material;
Step S6.4: taking the maximum of the probability sums; the material corresponding to the maximum is the predicted projection area material.
Data enhancement is a key step: because only a few images are acquired, inputting them directly into the model would yield a high misjudgment rate. Any suitable algorithm may be chosen to build the material recognition model.
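The probability-summation vote of steps S6.2 to S6.4 amounts to summing each material's probability over all augmented images and taking the argmax. A minimal sketch, assuming the model's outputs are given as plain probability lists:

```python
def predict_material(prob_vectors, materials):
    """Sum the per-material probabilities over all augmented images
    (steps S6.2-S6.3) and return the material whose sum is largest (S6.4)."""
    sums = [sum(p[i] for p in prob_vectors) for i in range(len(materials))]
    best = max(range(len(materials)), key=lambda i: sums[i])
    return materials[best]
```

For example, with per-image probabilities [0.6, 0.3, 0.1], [0.2, 0.5, 0.3] and [0.3, 0.4, 0.3] over (cement, asphalt, tile), the sums are (1.1, 1.2, 0.7), so asphalt is predicted.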
Performing data enhancement on the projection area image to obtain a plurality of data-enhanced projection area images, as shown in fig. 4, comprises the following steps:
Step S6.1.1: randomly rotating the projection area image;
Step S6.1.2: adding a random brightness offset and a random contrast offset to the rotated image to obtain an offset image;
Step S6.1.3: randomly flipping the offset image to obtain a single data-enhanced image;
Step S6.1.4: repeating the above steps a preset number of times, thereby obtaining a plurality of data-enhanced projection area images.
Obtaining the pre-trained material recognition model, as shown in fig. 5, comprises the following steps:
Step S6.2.1: collecting training images with material labels;
Step S6.2.2: training a deep neural network on the training images with material labels to obtain the pre-trained material recognition model.
Identifying an article or identifier in the projection area, as shown in fig. 6, comprises:
Step S9.1: acquiring an image of the article in the projection area;
Step S9.2: inputting the image of the article into a trained deep convolutional neural network model;
Step S9.3: identifying the image with the deep convolutional neural network model to obtain the type, weight and number of the articles to be identified.
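Step S9.3 implies the network produces an article-type classification alongside weight and count estimates. A hedged sketch of decoding such raw outputs, assuming one classification head and two scalar heads (this head layout is an assumption, not stated in the patent):

```python
import math

def decode_outputs(type_logits, weight_out, count_out, classes):
    """Turn assumed raw network outputs into (type, weight, count):
    softmax over the type logits, weight taken as-is from a regression
    head, and the count head rounded to the nearest integer."""
    m = max(type_logits)                         # stabilize the softmax
    exps = [math.exp(x - m) for x in type_logits]
    probs = [e / sum(exps) for e in exps]
    kind = classes[probs.index(max(probs))]
    return kind, weight_out, round(count_out)
```

The robot would compare the decoded type against the predefined article list to decide between steps S10 and S11.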
The training process of the trained deep convolutional neural network model, as shown in fig. 7, is as follows:
Step S9.2.1: acquiring article training images labeled with article type, weight and count;
Step S9.2.2: training the deep convolutional neural network model on the labeled article training images to obtain the trained deep convolutional neural network model.
Identifying an article's weight from labeled image data is a mature technique in the prior art and is not described further here.
In a second aspect, the application proposes a logistics robot man-machine interaction system based on projection guidance, as shown in fig. 8, comprising:
an instruction receiving and transmitting module, a camera, an image processing module and a projection module;
the camera and the projection module are arranged around the logistics robot, and the instruction receiving and transmitting module and the image processing module are arranged inside the logistics robot;
the instruction receiving and transmitting module, the camera, the image processing module and the projection module are connected in sequence, and the instruction receiving and transmitting module is also connected to the image processing module and to the projection module respectively.
The instruction receiving and transmitting module is used for receiving the working instruction transmitted by the scheduling server and transmitting feedback information to the scheduling server, the working instruction comprising working content and feedback information.
The camera is used for collecting data of the projection area and sending the collected images to the image processing module.
The image processing module is used for judging the size of the projection area according to the working content (if the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size; otherwise, the feedback information is that it does not); for judging the material of the projection area according to the working content to obtain the predicted projection area material; and for identifying the article or identifier in the projection area (if the article or identifier matches the predefined article or identifier, the feedback information is a match and a picture of the article or identifier in the projection area is saved; otherwise the feedback information is a mismatch and a picture of the article is saved).
The projection module is used for looking up the projection mode corresponding to the predicted projection area material in the comparison table, and projecting the working content into the projection area in that projection mode.
In the embodiment of the application, each robot is provided with a movement module so that it can move normally. As shown in figs. 9, 10 and 11, the robot is divided into an upper portion 1 and a base 2, wherein the upper portion 1 accommodates express deliveries, the base 2 performs movement, and projection modules are arranged around the base 2.
Embodiment 1: robot delivering an express parcel.
In this process, what matters to the user is confirming that the parcel received is indeed the one intended for them; taking the wrong parcel would cause confusion. The user therefore needs the verification code or two-dimensional code contained in the pick-up notification, which identifies the parcel the user is to collect.
The robot receives a working instruction transmitted by the scheduling server, the working instruction comprising working content and feedback information. In this embodiment the working content is to deliver the specified parcel to the specified place; meanwhile the user receives a pick-up notification carrying the unique two-dimensional code assigned to the parcel. The feedback information indicating that the delivery work is finished should be that the robot's camera has scanned the specified two-dimensional code.
The robot moves to the designated place according to the working content, activates projection, and projects onto the ground an indication of the place it has reached.
At this point the camera collects data of the projection area, and the size of the projection area is judged.
If the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size, and the next step is performed; if not, the feedback information is that the projection area does not meet the specified size, and after this feedback reaches the scheduling server, the server instructs the robot to move and keep adjusting its position until it finds a less crowded spot, that is, a position satisfying the projection area size; the material of the projection area is then detected.
The material of the projection area is judged to obtain the predicted projection area material; the projection mode corresponding to the predicted material is looked up in the comparison table, and the working content is projected into the projection area in that mode. In this example the robot, following the comparison table, projects red text at an illumination intensity of 800 and projects the delivery address into the projection area; once the robot reaches the designated place, the projected content reads: please put your express delivery identification code into the projection area.
The user places the two-dimensional code into the projection area, and the robot identifies it. If the codes match, the feedback information is that the identifier matches the predefined one, a picture of the identifier in the projection area is saved, and the delivery work is completed.
If the two-dimensional codes do not match, the feedback information is that the identifier does not match the predefined one, a picture of the article in the projection area is saved, and the robot sends the feedback to the scheduling server, which decides the next operation. Typically the scheduling server asks the user to show the two-dimensional code again; if it fails to match three times, the robot is told to end the task and return to the dispatch room.
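The three-attempt policy described above can be sketched as a simple retry loop; `scan` is a hypothetical callable standing in for the camera reading the user's code, and the messages are illustrative:

```python
def verify_pickup_code(scan, expected, max_tries=3):
    """Sketch of the scheduling server's retry policy: ask the user to
    show the two-dimensional code again, up to three mismatches."""
    for attempt in range(1, max_tries + 1):
        if scan() == expected:
            return f"matched on attempt {attempt}"
    return "no match after 3 attempts; robot returns to the dispatch room"
```

On a match the robot releases the parcel and reports completion; otherwise it ends the task as described.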
Embodiment 2: robot collecting an express parcel and disinfecting it.
As shown in figs. 9 and 10, the focus of this process is how projection-guided man-machine interaction lets the user hand the express delivery to the robot with confidence and have it disinfected.
The robot receives a working instruction transmitted by the scheduling server, the working instruction comprising working content and feedback information. The working content is to carry the express delivery to the designated place, identify it as a non-prohibited type, and disinfect it.
The robot collects data of the projection area with the camera and judges the size of the projection area. If the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size, and the next step is performed; if not, the feedback information is that the projection area does not meet the specified size, and the robot moves to a place that satisfies the projection area.
The material of the projection area is judged to obtain the predicted projection area material; the projection mode corresponding to the predicted material is looked up in the comparison table, and the working content is projected into the projection area in that mode. The text projected in this embodiment is: please put the express delivery to be collected into the projection area.
The user puts the express delivery into the projection area.
The robot identifies the article or identifier in the projection area. If the article matches the predefined articles, the feedback information is a match, meaning the express delivery is not a prohibited article, and a picture of the article in the projection area is saved; if it does not match, the feedback information is a mismatch, meaning the express delivery is a prohibited article, and a picture of the article is saved.
If the express delivery is not a prohibited article, the robot disinfects it. Because disinfection needs a large clear space, the user must be reminded: please keep more than 1 meter away from the area where the express delivery is located, as it is about to be disinfected. Since projection is possible all around the upper portion 1, when an express delivery placed on one side of the upper portion 1 blocks the projection on that side, the projection on the other side can be activated.
Meanwhile, the robot judges the size of the working area;
If the robot's working area does not meet the size specified by the working content, a prompt that the working area does not meet the requirement is projected into the projection area, the user is again reminded "please stay more than 1 meter away from the parcel so that it can be disinfected", and the camera feeds back the image of the projection area in real time until the working area meets the specified size. The disinfection work is then completed according to the working content, and the feedback information is that the disinfection work is completed;
and sending feedback information to the scheduling server to complete the man-machine interaction process.
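The working-area check above (remind the user, re-capture the projection area, retry until the area is large enough) can be sketched as a simple polling loop. The sensor and reminder callbacks below are placeholder assumptions, not the patent's actual interfaces:

```python
# Rough sketch of the disinfection working-area loop: keep reminding the
# user and re-measuring the area until it meets the size specified by the
# working content. `measure_area` and `remind` are placeholder callbacks.
def wait_for_working_area(measure_area, required_area, remind):
    """Return the number of reminders issued before the area was clear."""
    reminders = 0
    while measure_area() < required_area:
        remind("please stay more than 1 meter away from the parcel")
        reminders += 1
    return reminders

# Simulated camera measurements (m^2): the area frees up on the third reading.
readings = iter([0.5, 0.8, 1.2])
n = wait_for_working_area(lambda: next(readings), 1.0, lambda msg: None)
print(n)  # 2
```

In a real robot the loop would also rate-limit the reminders and time out to the scheduling server instead of blocking forever.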
Embodiment 3: robot riding an elevator:
As shown in fig. 10, this embodiment focuses on how the robot performs projection-based man-machine interaction while riding an elevator: inside the elevator, it projects the floor it is heading to onto the elevator wall.
The robot receives a working instruction transmitted by the scheduling server, wherein the working instruction comprises working content and feedback information. The working content is to deliver a parcel to room 1001 on the 10th floor. The robot is initially outside the elevator; the elevator must be fitted with an intelligent communication module so that it can communicate with the robot. The robot informs the elevator through this module that it needs to reach the 10th floor. When the elevator arrives in front of the robot, the robot uses the camera to acquire data of the projection area and judges its size. If the projection area is large enough for the robot to board the elevator, the feedback information is that the projection area meets the specified size, and the robot rides the elevator to the 10th floor; if the projection area does not meet the area size specified by the working content, the feedback information sent to the scheduling server is that the projection area does not meet the specified size. The scheduling server can then, depending on the situation, use projected text to ask the people in the elevator to make way so that the robot can enter, or have the robot wait for the next elevator until there is enough space to board.
After boarding the elevator, the robot judges the material of the projection area to obtain the predicted projection area material, looks up the corresponding projection mode in the comparison table, and projects the information that it is going to the 10th floor into the projection area in that mode. After the robot reaches the 10th floor, the remaining working content is similar to embodiment 1: the user places the article or identifier to be identified into the projection area; the robot identifies the article or identifier in the projection area; if it matches a predefined article, the feedback information is that it conforms to the predefined article, a picture of the article or identifier in the projection area is saved, and the feedback information is sent to the scheduling server, completing the man-machine interaction process.
While the applicant has described and illustrated the embodiments of the present invention in detail with reference to the drawings, those skilled in the art should understand that the above embodiments are merely preferred embodiments. The detailed description is intended only to help the reader better understand the spirit of the present invention, not to limit its scope; any improvement or modification based on the spirit of the present invention shall fall within the scope of the present invention.

Claims (5)

1. A logistics robot man-machine interaction method based on projection guidance, characterized in that one or more man-machine interaction processes are used to complete the daily work of the logistics robot, the method comprising the following steps:
Step S1: receiving a working instruction transmitted by a scheduling server, wherein the working instruction comprises working content and feedback information;
Step S2: according to the working content, a camera is used for collecting data of a projection area;
Step S3: judging the size of a projection area according to the working content;
Step S4: if the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size, and the process proceeds to step S6;
Step S5: if the projection area does not meet the area size specified by the working content, the feedback information is that the projection area does not meet the specified size, and the process proceeds to step S12;
Step S6: judging the material quality of the projection area according to the working content to obtain the predicted material quality of the projection area;
Step S7: searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content into the projection area according to the corresponding projection mode;
Step S8: placing the article or identifier to be identified into the projection area according to the working content projected in the projection area;
Step S9: identifying the article or identifier in the projection area;
Step S10: if the article or identifier conforms to the predefined article or article identifier, the feedback information is that it conforms to the predefined article or identifier, a picture of the article or identifier in the projection area is saved, and the process proceeds to step S12;
Step S11: if the article or identifier does not conform to the predefined article or article identifier, the feedback information is that it does not conform to the predefined article or identifier, a picture of the article in the projection area is saved, and the process proceeds to step S12;
Step S12: sending the feedback information to the scheduling server to complete the man-machine interaction process;
between step S10 and step S12, or between step S11 and step S12, the method further comprises:
judging the size of the working area according to the working content;
if the robot's working area does not meet the size specified by the working content, projecting a prompt that the working area does not meet the requirement into the projection area, and feeding back the image of the projection area in real time with the camera until the working area meets the specified size; the specified work is then completed according to the working content, and the feedback information is that the specified work is completed;
if the robot's working area meets the size specified by the working content, completing the specified work according to the working content, wherein the feedback information is that the specified work is completed;
the working content comprises: the projected text and the area size; the area size comprises: the projection area size and/or the working area size;
the comparison table comprises various materials and the projection modes suitable for the corresponding materials; the projection mode comprises the setting of projection brightness and the setting of projected text colour;
the judging the material of the projection area to obtain the predicted projection area material comprises:
carrying out data enhancement on the projection area image to obtain a plurality of data-enhanced projection area images;
inputting the data-enhanced projection area images into a pre-trained material recognition model to obtain, for each image, the probability that it corresponds to each material;
summing, for each material type, the probabilities of the plurality of projection area images corresponding to that material to obtain a probability sum for each material;
taking the maximum of the probability sums, wherein the material corresponding to the maximum value is the predicted projection area material;
the carrying out data enhancement on the projection area image to obtain a plurality of data-enhanced projection area images comprises:
randomly rotating the projection area image;
adding a random brightness offset and a random contrast offset to the rotated image to obtain an offset image;
randomly flipping the offset image to obtain a single data-enhanced image;
repeating the above steps a preset number of times to obtain the plurality of data-enhanced projection area images.
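The data-enhancement and probability-summing scheme in claim 1 can be sketched as below. The image transforms are recorded symbolically and the recognition model is a stub, both assumptions for illustration rather than the patent's actual implementation:

```python
import random

# Minimal sketch of the claim's augment-then-vote scheme: perturb the
# captured image several times, run each copy through the material
# recognition model, sum the per-material probabilities, take the argmax.
MATERIALS = ["white_wall", "wood", "metal", "glass"]

def augment(image, n_augments=8, seed=0):
    """Return n_augments randomly perturbed copies of `image`.

    Each copy records a random rotation, random brightness/contrast
    offsets, and a random flip, mirroring the claim's three steps.
    """
    rng = random.Random(seed)
    copies = []
    for _ in range(n_augments):
        copies.append({
            "pixels": image,
            "rotation": rng.uniform(0.0, 360.0),
            "brightness": rng.uniform(-0.2, 0.2),
            "contrast": rng.uniform(-0.2, 0.2),
            "flipped": rng.random() < 0.5,
        })
    return copies

def predict_material(augmented_images, model):
    """Sum per-material probabilities over all copies; return the argmax."""
    totals = dict.fromkeys(MATERIALS, 0.0)
    for img in augmented_images:
        for material, prob in model(img).items():
            totals[material] += prob
    return max(totals, key=totals.get)

# Stub standing in for the pre-trained network, biased toward "wood".
def stub_model(img):
    return {"white_wall": 0.1, "wood": 0.6, "metal": 0.2, "glass": 0.1}

copies = augment("raw-image")
print(predict_material(copies, stub_model))  # wood
```

Summing probabilities over augmented copies is a standard test-time-augmentation trick: a single glare-filled frame can misclassify the surface, but the vote over several perturbed views is more stable.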
2. The logistics robot man-machine interaction method based on projection guidance according to claim 1, wherein training the pre-trained material recognition model comprises the following steps:
collecting training images with material labels;
training on the training images with material labels using a deep neural network to obtain the pre-trained material recognition model.
3. The logistics robot man-machine interaction method based on projection guidance according to claim 1, wherein said identifying the article or identifier in the projection area comprises:
acquiring an image of the article in the projection area;
inputting the image of the article into a trained deep convolutional neural network model;
identifying the image with the deep convolutional neural network model to obtain the type, weight and number of the articles to be identified.
4. The logistics robot man-machine interaction method based on projection guidance according to claim 3, wherein the trained deep convolutional neural network model is trained as follows:
acquiring article training images with article type, weight and number labels;
training on the article training images with article type, weight and number labels using the deep convolutional neural network model to obtain the trained deep convolutional neural network model.
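As a rough illustration of how the recognizer's output (type, weight, number) from claims 3 and 4 might feed the conformity check of steps S10 and S11 — the whitelist and field names below are assumptions, not from the patent:

```python
# Hypothetical whitelist of predefined (permitted) article types; the
# patent does not enumerate the predefined articles, so these are
# illustrative placeholders.
PREDEFINED_ARTICLES = {"parcel", "document", "small_box"}

def check_article(recognized):
    """Map a recognizer output dict to (conforms, feedback_text),
    matching the two feedback branches of steps S10 and S11."""
    if recognized["type"] in PREDEFINED_ARTICLES:
        return True, "conforms to the predefined article"
    return False, "does not conform to the predefined article"

ok, feedback = check_article({"type": "parcel", "weight_kg": 1.2, "number": 1})
print(ok, feedback)  # True conforms to the predefined article
```

Either branch would also trigger saving the projection-area picture before the feedback is sent to the scheduling server.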
5. A logistics robot man-machine interaction system based on projection guidance for implementing the method of claim 1, comprising: an instruction receiving and transmitting module, a camera, an image processing module and a projection module;
the camera and the projection module are arranged on the periphery of the logistics robot, and the instruction receiving and transmitting module and the image processing module are arranged inside the logistics robot;
the instruction receiving and transmitting module, the camera, the image processing module and the projection module are connected in sequence;
The instruction receiving and transmitting module is used for receiving the working instruction transmitted by the scheduling server and transmitting feedback information to the scheduling server, wherein the working instruction comprises working content and feedback information;
The camera is used for collecting data of the projection area and sending the collected image to the image processing module;
the image processing module is used for judging the size of the projection area according to the working content: if the projection area meets the area size specified by the working content, the feedback information is that the projection area meets the specified size; if not, the feedback information is that the projection area does not meet the specified size. It is also used for judging the material of the projection area according to the working content to obtain the predicted projection area material, and for identifying the article or identifier in the projection area: if the article or identifier conforms to the predefined article or identifier, the feedback information is that it conforms to the predefined article or identifier, and a picture of the article or identifier in the projection area is saved; if not, the feedback information is that it does not conform to the predefined article or identifier, and a picture of the article in the projection area is saved;
The projection module is used for searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content into the projection area according to the corresponding projection mode.
CN202111554660.0A 2021-12-17 2021-12-17 Logistics robot man-machine interaction method and system based on projection guidance Active CN114274184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111554660.0A CN114274184B (en) 2021-12-17 2021-12-17 Logistics robot man-machine interaction method and system based on projection guidance

Publications (2)

Publication Number Publication Date
CN114274184A CN114274184A (en) 2022-04-05
CN114274184B true CN114274184B (en) 2024-05-24

Family

ID=80872936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111554660.0A Active CN114274184B (en) 2021-12-17 2021-12-17 Logistics robot man-machine interaction method and system based on projection guidance

Country Status (1)

Country Link
CN (1) CN114274184B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102012740A (en) * 2010-11-15 2011-04-13 中国科学院深圳先进技术研究院 Man-machine interaction method and system
CN201955771U (en) * 2010-11-15 2011-08-31 中国科学院深圳先进技术研究院 Human-computer interaction system
CN106064383A (en) * 2016-07-19 2016-11-02 东莞市优陌儿智护电子科技有限公司 The white wall localization method of a kind of intelligent robot projection and robot
CN106228982A (en) * 2016-07-27 2016-12-14 华南理工大学 A kind of interactive learning system based on education services robot and exchange method
CN106303476A (en) * 2016-08-03 2017-01-04 纳恩博(北京)科技有限公司 The control method of robot and device
CN106903695A (en) * 2017-01-16 2017-06-30 北京光年无限科技有限公司 It is applied to the projection interactive method and system of intelligent robot
CN107239170A (en) * 2017-06-09 2017-10-10 江苏神工智能科技有限公司 One kind listing guidance machine people
KR20170142820A (en) * 2016-06-20 2017-12-28 연세대학교 산학협력단 Ar system using mobile projection technique and operating method thereof
KR20180038326A (en) * 2016-10-06 2018-04-16 엘지전자 주식회사 Mobile robot
CN108818572A (en) * 2018-08-29 2018-11-16 深圳市高大尚信息技术有限公司 A kind of projection robot and its control method
WO2019085716A1 (en) * 2017-10-31 2019-05-09 腾讯科技(深圳)有限公司 Mobile robot interaction method and apparatus, mobile robot and storage medium
CN110580426A (en) * 2018-06-08 2019-12-17 速感科技(北京)有限公司 human-computer interaction method of robot and robot
KR20200029208A (en) * 2018-09-10 2020-03-18 경북대학교 산학협력단 Projection based mobile robot control system and method using code block
CN111153300A (en) * 2019-12-31 2020-05-15 深圳优地科技有限公司 Ladder taking method and system for robot, robot and storage medium
JP2020113102A (en) * 2019-01-15 2020-07-27 日本電気通信システム株式会社 Projection image specification device, diagram projection system, method for specifying projection image, and program
CN212541106U (en) * 2020-08-10 2021-02-12 陕西红星闪闪网络科技有限公司 Holographic accompanying tour guide robot
WO2021047232A1 (en) * 2019-09-11 2021-03-18 苏宁易购集团股份有限公司 Interaction behavior recognition method, apparatus, computer device, and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8983662B2 (en) * 2012-08-03 2015-03-17 Toyota Motor Engineering & Manufacturing North America, Inc. Robots comprising projectors for projecting images on identified projection surfaces
EP3615281A4 (en) * 2017-04-28 2021-01-13 Southie Autonomy Works, Inc. Automated personalized feedback for interactive learning applications
US20190015992A1 (en) * 2017-07-11 2019-01-17 Formdwell Inc Robotic construction guidance
KR20190106866A (en) * 2019-08-27 2019-09-18 엘지전자 주식회사 Robot and method of providing guidance service by the robot


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Discussion on an Interactive Guide System Based on Holographic Projection; Han Ruobing; Jiang Song; Hao Yifeng; Zhang Shangshu; Liu Jie; Liu Linlin; Printing Today; 2020-07-10 (No. 07); full text *
Research on Key Technologies of Human-Computer Interaction Based on a Projector-Camera System; Lyu Hao; Zhang Chengyuan; Science and Technology Innovation; 2020-04-05 (No. 10); full text *


Similar Documents

Publication Publication Date Title
CN108885459B (en) Navigation method, navigation system, mobile control system and mobile robot
CN108910381B (en) Article conveying equipment and article conveying method
US9875502B2 (en) Shopping facility assistance systems, devices, and methods to identify security and safety anomalies
JP5106544B2 (en) System and method for communicating status information
CN109844784B (en) Adaptive process for guiding human performed inventory tasks
CN104809609B (en) Intelligent Warehouse Management System and management method
WO2017223242A1 (en) Item tracking using a dynamic region of interest
AU2020216392B2 (en) Robot dwell time minimization in warehouse order fulfillment operations
CN106682418A (en) Intelligent access system and access method on basis of robot
US20200182634A1 (en) Providing path directions relating to a shopping cart
CN107598934A (en) A kind of intelligent robot foreground application system and method
US11459176B2 (en) System and method of providing delivery of items from one container to another container
CN111461362A (en) Intelligent barrel replacing system
US11097897B1 (en) System and method of providing delivery of items from one container to another container via robot movement control to indicate recipient container
CN114274184B (en) Logistics robot man-machine interaction method and system based on projection guidance
US20230334414A1 (en) Automated system for management of receptacles
CN115562276A (en) Path planning method, device, equipment and computer readable storage medium
CN109909166A (en) A kind of intelligent repository Automated Sorting System and the method for sorting based on the sorting system
CN109377127A (en) A kind of 3 D stereo unmanned warehouse
CN110610341A (en) Intelligent warehouse, piece sending method of intelligent warehouse and piece taking method of intelligent warehouse
CN113592376B (en) Intelligent access information management system applied to express post house
CN207046202U (en) A kind of automated warehousing sorting station
US11597596B1 (en) System and method of providing an elevator system for mobile robots
CN111489117A (en) Article distribution method and system based on visual computing interaction
GB2549188B (en) Assignment of a motorized personal assistance apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant