CN114274184A - Logistics robot man-machine interaction method and system based on projection guidance - Google Patents
- Publication number
- CN114274184A (application CN202111554660.0A)
- Authority
- CN
- China
- Prior art keywords
- projection
- projection area
- area
- work
- identifier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The application provides a projection-guidance-based logistics robot human-machine interaction method and system, belonging to the technical field of robots. The method comprises the following steps: projecting the work content into the projection area in the projection mode matched to the material of the projection area; recognizing an item or identifier in the projection area; and sending feedback information to the scheduling server to complete the human-machine interaction process. The system comprises an instruction transceiving module, a camera, an image processing module and a projection module. The application uses projection technology to guide the user in communicating with the robot, overcomes the problem that projected characters may be illegible to the user because projection areas differ in material, and achieves a smooth interaction process between the logistics robot and the user.
Description
Technical Field
The application belongs to the technical field of robots, and particularly relates to a logistics robot human-machine interaction method and system based on projection guidance.
Background
In the prior art, logistics robots often help people receive and collect express deliveries, which brings great convenience to daily life and reduces the operating costs of logistics enterprises. However, as the robot's work gradually deepens, its difficulty keeps increasing: for example, when receiving an express parcel the robot may need to identify whether the item is contraband, disinfect the parcel on receipt, or take an elevator to collect it. All of these require effective human-machine interaction between the robot and the user.
In the prior art, a touch screen is generally mounted on the robot and used for the human-machine interaction process, but this approach is unsuitable for a logistics robot: the robot must travel on ordinary road surfaces for long periods every day, the touch screen is fragile, and replacing it is very expensive, so touch-screen interaction is not a practical, feasible method. Another conventional human-machine interaction method is voice dialogue, but in practical application this too is unsuitable for the logistics robot, because the places where it talks with users, such as shopping malls and roads, often have very heavy noise interference that prevents the robot from recognizing the user's voice; so in practice human-machine voice dialogue is still not a feasible method.
No effective solution has yet been proposed for the prior-art problem of lacking a feasible logistics robot human-machine interaction scheme.
Disclosure of Invention
To address these technical problems, the application provides a projection-guidance-based logistics robot human-machine interaction method and system, in which projection technology guides the user to communicate with the robot.
In a first aspect, the present application provides a projection guidance-based logistics robot human-machine interaction method, which completes daily work of a logistics robot by one or more of the following human-machine interaction processes, and includes the following steps:
step S1: receiving a work instruction transmitted by a scheduling server, wherein the work instruction comprises work content and feedback information;
step S2: according to the working content, using a camera to acquire data of a projection area;
step S3: judging the size of a projection area according to the working content;
step S4: if the projection area meets the area size specified by the work content, the feedback information indicates that the projection area meets the specified size, and the step S6 is executed;
step S5: if the projection area does not meet the area size specified by the working content, the feedback information indicates that the projection area does not meet the specified size, and the step S12 is executed;
step S6: judging the material of the projection area according to the work content to obtain the predicted material of the projection area;
step S7: searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content to the projection area according to the corresponding projection mode;
step S8: according to the work content shown in the projection area, placing the item or identifier to be recognized into the projection area;
step S9: recognizing the item or identifier in the projection area;
step S10: if the item or identifier matches the predefined item or identifier, the feedback information is that it matches the predefined item or identifier; the picture of the item or identifier in the projection area is saved, and the process goes to step S12;
step S11: if the item or identifier does not match the predefined item or identifier, the feedback information is that it does not match; the picture of the item in the projection area is saved, and the process goes to step S12;
step S12: and sending the feedback information to the scheduling server to complete the man-machine interaction process.
Between step S10 and step S12 or between step S11 and step S12, further comprising:
judging the size of a working area according to the working content;
if the robot work area does not meet the size specified by the work content, a prompt that the work area does not meet requirements is projected into the projection area, and the camera feeds back images of the projection area in real time until the work area meets the specified size; the specified work is then completed according to the work content, and the feedback information indicates that the specified work is completed;
and if the robot work area meets the specified size of the work content, completing the specified work according to the work content, and feeding back information that the specified work is completed.
The work content comprises: the characters to be projected and the region size; the region size comprises: a projection area size and/or a work area size.
The comparison table comprises various materials and, for each material, the projection mode suited to it; a projection mode comprises a projection brightness setting and a projected character color setting.
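As an illustration only, such a comparison table can be implemented as a simple lookup structure. The asphalt entry below (brightness 800, red characters) follows the embodiment given later in the description; all other materials, brightness values and colors are hypothetical placeholders.

```python
# Hypothetical comparison (lookup) table mapping a predicted surface
# material to a projection mode. Only the asphalt values come from the
# embodiment in the text; the rest are illustrative placeholders.
PROJECTION_MODES = {
    "asphalt": {"brightness": 800, "text_color": "red"},
    "marble":  {"brightness": 500, "text_color": "black"},
    "carpet":  {"brightness": 900, "text_color": "white"},
}

def lookup_projection_mode(material, default_material="asphalt"):
    """Return the projection mode for a material, falling back to a default."""
    return PROJECTION_MODES.get(material, PROJECTION_MODES[default_material])
```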
The method for judging the material of the projection area to obtain the predicted material of the projection area comprises the following steps:
performing data enhancement on the projection area images to obtain a plurality of projection area images subjected to data enhancement;
inputting the projection area images subjected to data enhancement into a pre-trained material identification model to obtain the probability of the projection area images corresponding to each material;
according to the material category, summing the probabilities of the plurality of projection area images corresponding to each material to obtain the corresponding probability sum of each material;
and calculating the maximum value of the probability sum, wherein the material corresponding to the maximum value is the predicted material of the projection area.
The data enhancement of the projection area image is carried out to obtain a plurality of projection area images after data enhancement, and the method comprises the following steps:
randomly rotating the projection area image;
adding a random brightness offset and a random contrast offset to the rotated image, respectively, to obtain an offset image;
randomly flipping the offset image, thereby obtaining a single data-enhanced image;
and repeating the above steps a preset number of times, thereby obtaining a plurality of data-enhanced projection area images.
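A minimal sketch of the enhancement steps above, assuming grayscale images held as NumPy arrays. The rotation granularity (90-degree steps) and the offset ranges are assumptions for illustration; the application does not fix them.

```python
import random
import numpy as np

def augment_once(image, rng):
    """Produce one data-enhanced copy of a grayscale image array."""
    img = np.rot90(image, k=rng.randrange(4)).astype(np.float64)  # random rotation (90-degree steps assumed)
    img = img + rng.uniform(-20.0, 20.0)   # random brightness offset (range assumed)
    img = img * rng.uniform(0.8, 1.2)      # random contrast offset (range assumed)
    if rng.random() < 0.5:                 # random flip
        img = np.fliplr(img)
    return np.clip(img, 0.0, 255.0)       # keep pixel values in range

def augment(image, n_copies=8, seed=0):
    """Repeat the enhancement a preset number of times."""
    rng = random.Random(seed)
    return [augment_once(image, rng) for _ in range(n_copies)]
```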
Obtaining the pre-trained material recognition model comprises the following steps:
collecting training images with material labels;
and training on the material-labeled training images with a deep neural network to obtain the pre-trained material recognition model.
Recognizing the item or identifier in the projection area comprises:
acquiring an image of the item in the projection area;
inputting the image of the item into the trained deep convolutional neural network model;
and recognizing the image with the deep convolutional neural network model to obtain the type, weight and quantity of the item to be recognized.
The training process of the trained deep convolutional neural network model is as follows:
acquiring article training images with article type, weight and quantity labels;
and training a deep convolutional neural network model on the article training images with article type, weight and quantity labels to obtain the trained deep convolutional neural network model.
In a second aspect, the present application provides a logistics robot human-machine interaction system based on projection guidance, including:
the system comprises an instruction transceiving module, a camera, an image processing module and a projection module;
the method comprises the following steps that cameras and projection modules are installed around a logistics robot, and an instruction transceiving module and an image processing module are installed inside the logistics robot;
the instruction transceiving module, the camera, the image processing module and the projection module are sequentially connected;
the instruction receiving and sending module is used for receiving a work instruction transmitted by the dispatching server and sending feedback information to the dispatching server, and the work instruction comprises work content and feedback information;
the camera is used for acquiring data of the projection area and sending the acquired image to the image processing module;
the image processing module is used for judging the size of the projection area according to the work content: if the projection area meets the area size specified by the work content, the feedback information indicates that it meets the specified size; if not, the feedback information indicates that it does not meet the specified size. The module also judges the material of the projection area according to the work content to obtain the predicted material, and recognizes the item or identifier in the projection area: if the item or identifier matches the predefined item or identifier, the feedback information is that it matches, and the picture of the item or identifier in the projection area is saved; if it does not match, the feedback information is that it does not match, and the picture of the item in the projection area is saved;
and the projection module is used for searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content into the projection area according to the corresponding projection mode.
The beneficial technical effects are as follows:
The application provides a projection-guidance-based logistics robot human-machine interaction method and system. Projection technology is used to guide the user in interactive communication with the robot; the problem that projected characters may be illegible to the user because projection areas differ in material is overcome; and the resulting smooth human-machine interaction allows the logistics robot to complete its daily work.
Drawings
Fig. 1 is a flowchart of a logistics robot human-machine interaction method based on projection guidance according to an embodiment of the present application;
FIG. 2 is a flowchart of a designated work area of an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for obtaining predicted projection region material according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of obtaining a plurality of data-enhanced projection region images according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a process of training a texture recognition model according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of an embodiment of a process for identifying an item or logo in a projected area;
FIG. 7 is a flowchart of a deep convolutional neural network model training process according to an embodiment of the present application;
FIG. 8 is a schematic block diagram of a logistics robot interaction system based on projection guidance according to an embodiment of the application;
FIG. 9 is a schematic diagram of an identified item of an embodiment of the present application being a contraband item;
FIG. 10 is a schematic view of a sterilization process according to an embodiment of the present application;
FIG. 11 is a schematic view of a robot projection wall according to an embodiment of the present application;
wherein 1 denotes the upper part and 2 denotes the base.
Detailed Description
The present application is further described below with reference to the accompanying drawings. The following examples are only intended to illustrate the technical solutions of the application more clearly, and the scope of protection of the application is not limited thereby.
In a first aspect, the present application provides a projection guidance-based logistics robot human-machine interaction method, which completes daily work of a logistics robot by using one or more of the following human-machine interaction processes, as shown in fig. 1, including the following steps:
step S1: receiving a work instruction transmitted by a scheduling server, wherein the work instruction comprises work content and feedback information;
step S2: according to the working content, using a camera to acquire data of a projection area;
step S3: judging the size of a projection area according to the working content;
step S4: if the projection area meets the area size specified by the work content, the feedback information indicates that the projection area meets the specified size, and the step S6 is executed;
step S5: if the projection area does not meet the area size specified by the working content, the feedback information indicates that the projection area does not meet the specified size, and the step S12 is executed;
step S6: judging the material of the projection area according to the work content to obtain the predicted material of the projection area;
step S7: searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content to the projection area according to the corresponding projection mode;
step S8: according to the work content shown in the projection area, placing the item or identifier to be recognized into the projection area;
step S9: recognizing the item or identifier in the projection area;
step S10: if the item or identifier matches the predefined item or identifier, the feedback information is that it matches the predefined item or identifier; the picture of the item or identifier in the projection area is saved, and the process goes to step S12;
step S11: if the item or identifier does not match the predefined item or identifier, the feedback information is that it does not match; the picture of the item in the projection area is saved, and the process goes to step S12;
step S12: and sending the feedback information to the scheduling server to complete the man-machine interaction process.
Between step S10 and step S12 or between step S11 and step S12, as shown in fig. 2, further includes:
step S100: judging the size of a working area according to the working content;
step S101: if the robot work area does not meet the size specified by the work content, a prompt that the work area does not meet requirements is projected into the projection area, and the camera feeds back images of the projection area in real time until the work area meets the specified size; the specified work is then completed according to the work content, and the feedback information indicates that the specified work is completed;
step S102: and if the robot work area meets the specified size of the work content, completing the specified work according to the work content, and feeding back information that the specified work is completed.
The work content comprises: the characters to be projected and the region size; the region size comprises: a projection area size and/or a work area size.
The comparison table comprises various materials and, for each material, the projection mode suited to it; a projection mode comprises a projection brightness setting and a projected character color setting.
The method for judging the material of the projection area to obtain the predicted material of the projection area, as shown in fig. 3, includes the following steps:
step S6.1: performing data enhancement on the projection area images to obtain a plurality of projection area images subjected to data enhancement;
step S6.2: inputting the projection area images subjected to data enhancement into a pre-trained material identification model to obtain the probability of the projection area images corresponding to each material;
step S6.3: according to the material category, summing the probabilities of the plurality of projection area images corresponding to each material to obtain the corresponding probability sum of each material;
step S6.4: and calculating the maximum value of the probability sum, wherein the material corresponding to the maximum value is the predicted material of the projection area.
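Steps S6.2 to S6.4 amount to a test-time-augmentation vote: sum the per-material probabilities predicted for each augmented image and take the material with the largest sum. A minimal sketch, assuming the model outputs one probability vector per augmented image; the material class names are illustrative only.

```python
import numpy as np

# Illustrative material classes; the application does not fix this list.
MATERIALS = ["asphalt", "marble", "carpet"]

def predict_material(prob_rows, materials=MATERIALS):
    """prob_rows: one per-material probability vector per augmented image."""
    sums = np.sum(prob_rows, axis=0)        # S6.3: per-material probability sum
    return materials[int(np.argmax(sums))]  # S6.4: material with the maximum sum
```

Summing is equivalent to averaging here, since the number of augmented images is the same for every class.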
Data enhancement is a key step: because few images are acquired, feeding an image directly into the model would yield a high misjudgment rate. The algorithm used to build the material recognition model may be chosen freely.
The data enhancement of the projection area image to obtain a plurality of data-enhanced projection area images, as shown in fig. 4, includes the following steps:
step S6.1.1: randomly rotating the projection area image;
step S6.1.2: adding a random brightness offset and a random contrast offset to the rotated image, respectively, to obtain an offset image;
step S6.1.3: randomly flipping the offset image, thereby obtaining a single data-enhanced image;
step S6.1.4: and repeating the above steps a preset number of times, thereby obtaining a plurality of data-enhanced projection area images.
Obtaining the pre-trained material recognition model, as shown in fig. 5, comprises the following steps:
step S6.2.1: collecting training images with material labels;
step S6.2.2: and training on the material-labeled training images with a deep neural network to obtain the pre-trained material recognition model.
Recognizing the item or identifier in the projection area, as shown in fig. 6, comprises:
step S9.1: acquiring an image of the item in the projection area;
step S9.2: inputting the image of the item into the trained deep convolutional neural network model;
step S9.3: and recognizing the image with the deep convolutional neural network model to obtain the type, weight and quantity of the item to be recognized.
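Steps S9.1 to S9.3 can be sketched as a thin inference wrapper. The `model` callable below stands in for the trained deep convolutional neural network, since the application does not fix an architecture; the contraband type names are hypothetical examples.

```python
# Hypothetical wrapper for steps S9.1-S9.3. `model` is any callable that
# returns (type, weight, count) for an input image; the contraband set is
# an illustrative placeholder, not defined by the application.

def identify_item(image, model, contraband_types=frozenset({"lighter", "knife"})):
    """Return the predicted type, weight and count, plus a contraband flag."""
    item_type, weight, count = model(image)   # S9.2-S9.3: run the trained model
    return {"type": item_type, "weight": weight, "count": count,
            "contraband": item_type in contraband_types}
```

In the embodiments, the contraband flag drives the feedback information (prohibited vs. non-prohibited parcel).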
As shown in fig. 7, the training process of the trained deep convolutional neural network model is as follows:
step S9.2.1: acquiring article training images with article type, weight and quantity labels;
step S9.2.2: and training a deep convolutional neural network model on the article training images with article type, weight and quantity labels to obtain the trained deep convolutional neural network model.
Techniques for labeling and recognizing article type, weight and quantity are mature in the prior art and are not repeated in this application.
In a second aspect, the present application provides a logistics robot human-machine interaction system based on projection guidance, as shown in fig. 8, including:
the system comprises an instruction transceiving module, a camera, an image processing module and a projection module;
a camera and a projection module are installed around the logistics robot, and an instruction transceiving module and an image processing module are installed inside the logistics robot.
The instruction transceiver module, the camera, the image processing module and the projection module are sequentially connected, and the instruction transceiver module is respectively connected with the image processing module and the projection module;
the instruction receiving and sending module is used for receiving a work instruction transmitted by the dispatching server and sending feedback information to the dispatching server, and the work instruction comprises work content and feedback information;
the camera is used for acquiring data of the projection area and sending the acquired image to the image processing module;
the image processing module is used for judging the size of the projection area according to the work content: if the projection area meets the area size specified by the work content, the feedback information indicates that it meets the specified size; if not, the feedback information indicates that it does not meet the specified size. The module also judges the material of the projection area according to the work content to obtain the predicted material, and recognizes the item or identifier in the projection area: if the item or identifier matches the predefined item or identifier, the feedback information is that it matches, and the picture of the item or identifier in the projection area is saved; if it does not match, the feedback information is that it does not match, and the picture of the item in the projection area is saved;
and the projection module is used for searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content into the projection area according to the corresponding projection mode.
In the embodiments of the present application, the robot has a motion module so that it can move normally. As shown in figs. 9, 10 and 11, the robot is divided into an upper part 1 and a base 2, wherein the upper part 1 holds express parcels, the base 2 provides movement, and the projection module is installed around the base 2.
Example 1, a user collecting an express parcel from the logistics robot: the focus of this process is confirming that the parcel received is the one the user is entitled to collect, since taking the wrong parcel would cause confusion. The pickup information the user receives therefore includes a verification code or a two-dimensional code used to identify the parcel the user is to collect.
The robot receives a work instruction transmitted by the scheduling server, the work instruction comprising work content and feedback information. The work content of this embodiment is delivering a designated express parcel to a designated place; the user receives information on the parcel to be collected together with the unique two-dimensional code designated for that parcel. The feedback information indicating that the delivery work is complete is that the designated two-dimensional code has been scanned by the robot's camera.
The robot moves to the designated place according to the work content, turns on its projection, and projects onto the ground at the place it has reached.
At the moment, a camera is used for carrying out data acquisition on the projection area, and the size of the projection area is judged;
If the projection area meets the area size specified by the work content, the feedback information indicates that it meets the specified size, and the process continues to the next step. If the projection area does not meet the specified area size, the feedback information indicates that it does not meet the specified size; after this feedback is transmitted to the scheduling server, the server instructs the robot to move and keep adjusting its position until it finds a spot with few people, that is, a position satisfying the projection area size, and the material of the projection area is then detected.
The material of the projection area is judged to obtain the predicted material; the corresponding projection mode is looked up in the comparison table according to the predicted material, and the work content is projected into the projection area accordingly. The ground here is generally an asphalt road or marble floor tile; per the comparison table, the robot projects the express pickup address into the projection area using an illumination intensity of 800 and red projected characters, and once the robot reaches the designated place the projected content reads: "Please place your express identification code into the projection area."
The user places the two-dimensional code into the projection area, and the robot recognizes the two-dimensional code there. If it matches, the feedback information is that it conforms to the predefined item or identifier; the picture of the item or identifier in the projection area is saved, and the delivery work is complete.
If the two-dimensional code does not match, the feedback information is that it does not conform to the predefined item or identifier; the picture of the item in the projection area is saved, the robot sends the feedback information to the scheduling server, and the server decides the next operation. The scheduling server generally sends information asking the user to show the two-dimensional code again; if matching fails three times, the robot is instructed to end the work and return to the dispatch room.
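The re-prompt policy above (show the code again, abort after three failed matches) can be sketched as follows. Only the three-attempt limit comes from the text; the function names are illustrative placeholders.

```python
# Sketch of the dispatch retry policy described above: re-prompt the user
# up to three times when the presented code does not match, then abort.

def verify_pickup(scan_code, expected_code, max_attempts=3):
    """Return True if a matching code is scanned within max_attempts tries."""
    for attempt in range(max_attempts):
        if scan_code(attempt) == expected_code:
            return True     # parcel handed over; delivery work complete
    return False            # robot ends the job and returns to the dispatch room
```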
Example 2, as shown in figs. 9 and 10: this process focuses on how the projection-guided logistics robot conducts human-computer interaction so that the user can hand an express parcel to the robot with confidence and the parcel can be disinfected.
The robot receives a work instruction transmitted by the scheduling server, the work instruction comprising work content and feedback information. The work content is collecting an express parcel at a specified place, identifying that it is not a prohibited type of item, and disinfecting it.
The robot uses its camera to acquire data on the projection area and judges the projection area's size. If the projection area meets the area size specified by the work content, the feedback information indicates that it meets the specified size and the process continues to the next step; if not, the feedback information indicates that it does not meet the specified size, and the robot moves to a place that satisfies the projection area requirement.
judging the material of the projection area to obtain the predicted material of the projection area; searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content to the projection area according to the corresponding projection mode; the text projected by this embodiment is "please put the to-be-sent express into the projection area".
The user puts the express into a projection area;
The robot recognizes the item or identifier in the projection area. If the item matches the predefined item, the feedback information is that it matches, indicating the parcel is not contraband, and the picture of the item or identifier in the projection area is saved; if it does not match, the feedback information is that it does not match the predefined item, indicating the parcel is contraband, and the picture of the item in the projection area is saved.
if the express is not a prohibited item, the robot disinfects it. Because disinfection requires a large clear space, the robot must remind the user: "Please stay 1 meter away from the area where the express is located; the express is about to be disinfected". Because the device 1 is fitted with projectors on all sides, when the express sits on one side of the device 1 and blocks projection on that side, projection can be started on the opposite side.
Meanwhile, the robot judges the size of the work area;
if the robot work area does not meet the size specified by the work content, a prompt that the work area does not meet the requirement is projected into the projection area, the user is again reminded "Please stay 1 meter away from the area where the express is located; the express is about to be disinfected", and the camera feeds back images of the projection area in real time until the work area meets the specified size. The disinfection is then completed according to the work content, and the feedback information indicates that the disinfection work is completed;
and sending the feedback information to the scheduling server to complete the man-machine interaction process.
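The "remind and re-check" loop described in this embodiment can be sketched as a simple polling routine. The function names, message text, and timeout below are illustrative assumptions, not values disclosed in the patent:

```python
import time

def wait_for_clear_area(area_is_clear, remind, timeout_s=60.0, poll_s=0.5):
    """Hypothetical sketch: keep projecting the warning and re-checking the
    camera image until the work area meets the required size, or give up
    after a timeout. Both callables are stand-ins for robot components."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if area_is_clear():   # camera feeds back the projection-area image
            return True       # work area meets the specified size
        remind("Please stay 1 meter away; the express is about to be disinfected")
        time.sleep(poll_s)
    return False              # area never cleared; report failure to the server
```

A real implementation would tie `area_is_clear` to the camera-based size check and `remind` to the projection module.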
Example 3: robot taking an elevator:
as shown in fig. 10, this embodiment focuses on how the robot uses projection for human-computer interaction when taking an elevator and how, inside the elevator, it projects the floor it is about to reach onto an elevator wall.
The robot receives a work instruction transmitted by the scheduling server; the work instruction comprises work content and feedback information. The work content is to deliver the express to Room 1001 on the 10th floor. At this point the robot is already outside the elevator. The elevator must be equipped with an intelligent communication module so that it can communicate with the robot; through this module the robot informs the elevator that it needs to reach the 10th floor. When the elevator arrives in front of the robot, the robot uses its camera to acquire data of the projection area and judges the size of the projection area. If the projection area allows the robot to board, the feedback information indicates that the projection area meets the specified size, and the robot boards the elevator and rides to the 10th floor. If the projection area does not meet the area size specified by the work content, the feedback information indicates that the projection area does not meet the specified size and is sent to the scheduling server; depending on the circumstances, the scheduling server may then instruct the robot to project text asking the people in the elevator to make room so that the robot can enter, or may have the robot wait for the next elevator, until there is enough space to board.
Once the robot can board the elevator, it judges the material of the projection area to obtain the predicted material, looks up the corresponding projection mode in the comparison table according to the predicted material, and projects the message that the robot is going to the 10th floor into the projection area in that mode. After the robot reaches the 10th floor, the remaining work is similar to embodiment 1: the user puts the article or identifier to be identified into the projection area; the robot identifies the article or identifier in the projection area; if the article or identifier matches the predefined item, the feedback information indicates a match with the predefined item, a picture of the article or identifier in the projection area is saved, and the feedback information is sent to the scheduling server, completing the human-computer interaction process.
The present applicant has described and illustrated embodiments of the present invention in detail with reference to the accompanying drawings, but it should be understood by those skilled in the art that the above embodiments are merely preferred embodiments of the present invention, and the detailed description is only for the purpose of helping the reader to better understand the spirit of the present invention, and not for limiting the scope of the present invention, and on the contrary, any improvement or modification made based on the spirit of the present invention should fall within the scope of the present invention.
Claims (10)
1. A logistics robot human-computer interaction method based on projection guidance, characterized in that the daily work of the logistics robot is completed by adopting one or more of the following human-computer interaction processes, the method comprising the following steps:
step S1: receiving a work instruction transmitted by a scheduling server, wherein the work instruction comprises work content and feedback information;
step S2: according to the working content, using a camera to acquire data of a projection area;
step S3: judging the size of a projection area according to the working content;
step S4: if the projection area meets the area size specified by the work content, the feedback information indicates that the projection area meets the specified size, and the step S6 is executed;
step S5: if the projection area does not meet the area size specified by the working content, the feedback information indicates that the projection area does not meet the specified size, and the step S12 is executed;
step S6: judging the material of the projection area according to the work content to obtain the predicted material of the projection area;
step S7: searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content to the projection area according to the corresponding projection mode;
step S8: in accordance with the work content projected in the projection area, putting the item or identifier to be identified into the projection area;
step S9: identifying the item or identifier in the projection area;
step S10: if the item or identifier matches the predefined item or identifier, the feedback information is that it matches the predefined item or identifier, a picture of the item or identifier in the projection area is saved, and the process goes to step S12;
step S11: if the item or identifier does not match the predefined item or identifier, the feedback information is that it does not match the predefined item or identifier, a picture of the item in the projection area is saved, and the process goes to step S12;
step S12: and sending the feedback information to the scheduling server to complete the man-machine interaction process.
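The branching of steps S1-S12 can be sketched as plain control flow. This is a minimal illustration assuming the size check, material prediction, and item recognition results are supplied by other components; all names are hypothetical:

```python
def interaction_flow(area_ok, material, comparison_table, detected, predefined):
    """Sketch of the claim-1 control flow (steps S3-S11); returns the
    feedback string that step S12 would send to the scheduling server."""
    if not area_ok:                                  # S3-S5: size check fails
        return "projection area does not meet specified size"
    # S6-S7: look up the projection mode for the predicted material
    mode = comparison_table.get(material, {"brightness": "high", "text_color": "white"})
    if detected in predefined:                       # S8-S10: item matches
        return f"item matches predefined item; projected with {mode['text_color']} text"
    return "item does not match predefined item"     # S11
```

In the patented method the inputs come from the camera, the material identification model, and the recognition model, and the return value is sent to the server (S12).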
2. The projection guidance-based logistics robot interaction method of claim 1, wherein between step S10 and step S12 or between step S11 and step S12, further comprising:
judging the size of a working area according to the working content;
if the robot work area does not meet the size specified by the work content, a prompt that the work area does not meet the requirement is projected into the projection area, and the camera feeds back images of the projection area in real time until the work area meets the specified size; the specified work is then completed according to the work content, and the feedback information indicates that the specified work is completed;
and if the robot work area meets the size specified by the work content, completing the specified work according to the work content, the feedback information indicating that the specified work is completed.
3. The projection guidance-based logistics robot human-machine interaction method of claim 1, wherein the work content comprises: projecting characters and area size; the region sizes include: a projection area size or/and a working area size.
4. The projection-guidance-based logistics robot human-computer interaction method as claimed in claim 1, wherein the comparison table comprises various materials and, for each material, the projection mode suited to it; the projection mode comprises a projection brightness setting and a projected text color setting.
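A minimal sketch of such a material-to-projection-mode table might look as follows; the materials and settings are invented examples, since the patent does not disclose concrete table entries:

```python
# Hypothetical material-to-projection lookup (claim 4); entries are
# illustrative guesses, not values disclosed in the patent.
PROJECTION_LOOKUP = {
    "white wall": {"brightness": "low",  "text_color": "black"},
    "asphalt":    {"brightness": "high", "text_color": "white"},
    "grass":      {"brightness": "high", "text_color": "yellow"},
}

def projection_mode(material):
    """Return the projection mode for a predicted material, falling back to a
    conservative high-brightness default for unknown materials."""
    return PROJECTION_LOOKUP.get(material, {"brightness": "high", "text_color": "white"})
```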
5. The method for logistics robot interaction based on projection guidance as claimed in claim 1, wherein the step of judging the material of the projection area to obtain the predicted material of the projection area comprises the following steps:
performing data enhancement on the projection area image to obtain a plurality of data-enhanced projection area images;
inputting the projection area images subjected to data enhancement into a pre-trained material identification model to obtain the probability of the projection area images corresponding to each material;
according to the material category, summing the probabilities of the plurality of projection area images corresponding to each material to obtain the corresponding probability sum of each material;
and calculating the maximum value of the probability sum, wherein the material corresponding to the maximum value is the predicted material of the projection area.
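The sum-then-argmax rule of this claim can be written directly; the material names in the example are hypothetical:

```python
def predict_material(per_image_probs):
    """Claim-5 sketch: sum the per-material probabilities over all enhanced
    images and return the material with the largest sum. `per_image_probs`
    is a list of {material: probability} dicts, one per enhanced image."""
    totals = {}
    for probs in per_image_probs:
        for material, p in probs.items():
            totals[material] = totals.get(material, 0.0) + p
    # The material with the maximum probability sum is the predicted material.
    return max(totals, key=totals.get)
```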
6. The projection guidance-based logistics robot interaction method as claimed in claim 5, wherein the data enhancement is performed on the projection area image to obtain a plurality of data-enhanced projection area images, and the method comprises the following steps:
randomly rotating the projection area image;
respectively adding random brightness deviation and random contrast deviation to the rotated image to obtain an image after deviation;
randomly overturning the image after the offset so as to obtain a single data enhanced image;
and repeating the steps until the preset times, thereby obtaining a plurality of projection area images after data enhancement.
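These augmentation steps can be sketched on a toy grayscale image stored as a list of pixel rows. The fixed 90-degree rotation and the offset ranges are simplified stand-ins for the random transforms the claim describes:

```python
import random

def enhance(image, times, rng=None):
    """Claim-6 sketch: rotate, add random brightness and contrast offsets,
    randomly flip, and repeat a preset number of times."""
    rng = rng or random.Random(0)
    out = []
    for _ in range(times):
        # Random rotation (a single 90-degree rotation stands in here).
        rotated = [list(row) for row in zip(*image[::-1])]
        contrast = rng.uniform(0.9, 1.1)      # random contrast offset
        brightness = rng.uniform(-10.0, 10.0) # random brightness offset
        shifted = [[px * contrast + brightness for px in row] for row in rotated]
        if rng.random() < 0.5:                # random horizontal flip
            shifted = [row[::-1] for row in shifted]
        out.append(shifted)                   # one data-enhanced image
    return out                                # `times` enhanced images in total
```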
7. The projection guidance-based logistics robot human-computer interaction method as claimed in claim 5, wherein the pre-trained material recognition model comprises the following steps:
collecting a training image with a material label;
and training the training images with material labels by adopting a deep neural network to obtain the pre-trained material identification model.
8. The projection guidance-based logistics robot human-machine interaction method of claim 1, wherein identifying the item or identifier in the projection area comprises:
acquiring an image of an article in a projection area;
inputting the image of the article into a trained deep convolutional neural network model;
and identifying the image by using the deep convolutional neural network model to obtain the type, weight and number of the object to be identified.
9. The projection guidance-based logistics robot interaction method of claim 8,
the training process of the trained deep convolutional neural network model is as follows:
acquiring article training images labeled with article type, weight and quantity;
and training the article training images labeled with article type, weight and quantity by adopting a deep convolutional neural network model to obtain the trained deep convolutional neural network model.
10. A logistics robot human-computer interaction system based on projection guidance, characterized by comprising: an instruction transceiving module, a camera, an image processing module and a projection module;
the method comprises the following steps that cameras and projection modules are installed around a logistics robot, and an instruction transceiving module and an image processing module are installed inside the logistics robot;
the instruction transceiving module, the camera, the image processing module and the projection module are sequentially connected;
the instruction receiving and sending module is used for receiving a work instruction transmitted by the dispatching server and sending feedback information to the dispatching server, and the work instruction comprises work content and feedback information;
the camera is used for acquiring data of the projection area and sending the acquired image to the image processing module;
the image processing module is used for judging the size of the projection area according to the work content: if the projection area meets the area size specified by the work content, the feedback information indicates that the projection area meets the specified size, and if not, the feedback information indicates that it does not. The module is also used for judging the material of the projection area according to the work content to obtain the predicted material of the projection area, and for identifying the item or identifier in the projection area: if the item or identifier matches the predefined item or identifier, the feedback information indicates a match and a picture of the item or identifier in the projection area is saved; if it does not match, the feedback information indicates no match and a picture of the item in the projection area is saved;
and the projection module is used for searching a corresponding projection mode in a comparison table according to the predicted projection area material, and projecting the working content into the projection area according to the corresponding projection mode.
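The module chain of this claim can be sketched as a thin coordinating class; the method names and dictionary shapes are hypothetical stand-ins, not the patented interfaces:

```python
class LogisticsRobot:
    """Claim-10 sketch: the four modules connected in sequence (instruction
    transceiving module -> camera -> image processing module -> projection
    module). Each module is any object exposing the methods used below."""
    def __init__(self, transceiver, camera, image_processor, projector):
        self.transceiver = transceiver
        self.camera = camera
        self.image_processor = image_processor
        self.projector = projector

    def handle_instruction(self):
        instruction = self.transceiver.receive()   # work content + feedback info
        image = self.camera.capture()              # acquire projection-area data
        # Size judgment, material prediction, and item identification.
        result = self.image_processor.process(image, instruction)
        if result.get("mode"):                     # a usable projection mode was found
            self.projector.project(instruction["text"], result["mode"])
        self.transceiver.send(result["feedback"])  # report back to the scheduling server
        return result["feedback"]
```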
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111554660.0A CN114274184B (en) | 2021-12-17 | 2021-12-17 | Logistics robot man-machine interaction method and system based on projection guidance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114274184A true CN114274184A (en) | 2022-04-05 |
CN114274184B CN114274184B (en) | 2024-05-24 |
Family
ID=80872936
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111554660.0A Active CN114274184B (en) | 2021-12-17 | 2021-12-17 | Logistics robot man-machine interaction method and system based on projection guidance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114274184B (en) |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102012740A (en) * | 2010-11-15 | 2011-04-13 | 中国科学院深圳先进技术研究院 | Man-machine interaction method and system |
CN201955771U (en) * | 2010-11-15 | 2011-08-31 | 中国科学院深圳先进技术研究院 | Human-computer interaction system |
US20140039677A1 (en) * | 2012-08-03 | 2014-02-06 | Toyota Motor Engineering & Manufacturing North America, Inc. | Robots Comprising Projectors For Projecting Images On Identified Projection Surfaces |
CN106064383A (en) * | 2016-07-19 | 2016-11-02 | 东莞市优陌儿智护电子科技有限公司 | The white wall localization method of a kind of intelligent robot projection and robot |
CN106228982A (en) * | 2016-07-27 | 2016-12-14 | 华南理工大学 | A kind of interactive learning system based on education services robot and exchange method |
CN106303476A (en) * | 2016-08-03 | 2017-01-04 | 纳恩博(北京)科技有限公司 | The control method of robot and device |
CN106903695A (en) * | 2017-01-16 | 2017-06-30 | 北京光年无限科技有限公司 | It is applied to the projection interactive method and system of intelligent robot |
CN107239170A (en) * | 2017-06-09 | 2017-10-10 | 江苏神工智能科技有限公司 | One kind listing guidance machine people |
KR20170142820A (en) * | 2016-06-20 | 2017-12-28 | 연세대학교 산학협력단 | Ar system using mobile projection technique and operating method thereof |
KR20180038326A (en) * | 2016-10-06 | 2018-04-16 | 엘지전자 주식회사 | Mobile robot |
US20180311818A1 (en) * | 2017-04-28 | 2018-11-01 | Rahul D. Chipalkatty | Automated personalized feedback for interactive learning applications |
CN108818572A (en) * | 2018-08-29 | 2018-11-16 | 深圳市高大尚信息技术有限公司 | A kind of projection robot and its control method |
US20190015992A1 (en) * | 2017-07-11 | 2019-01-17 | Formdwell Inc | Robotic construction guidance |
WO2019085716A1 (en) * | 2017-10-31 | 2019-05-09 | 腾讯科技(深圳)有限公司 | Mobile robot interaction method and apparatus, mobile robot and storage medium |
CN110580426A (en) * | 2018-06-08 | 2019-12-17 | 速感科技(北京)有限公司 | human-computer interaction method of robot and robot |
US20200012293A1 (en) * | 2019-08-27 | 2020-01-09 | Lg Electronics Inc. | Robot and method of providing guidance service by the robot |
KR20200029208A (en) * | 2018-09-10 | 2020-03-18 | 경북대학교 산학협력단 | Projection based mobile robot control system and method using code block |
CN111153300A (en) * | 2019-12-31 | 2020-05-15 | 深圳优地科技有限公司 | Ladder taking method and system for robot, robot and storage medium |
JP2020113102A (en) * | 2019-01-15 | 2020-07-27 | 日本電気通信システム株式会社 | Projection image specification device, diagram projection system, method for specifying projection image, and program |
CN212541106U (en) * | 2020-08-10 | 2021-02-12 | 陕西红星闪闪网络科技有限公司 | Holographic accompanying tour guide robot |
WO2021047232A1 (en) * | 2019-09-11 | 2021-03-18 | 苏宁易购集团股份有限公司 | Interaction behavior recognition method, apparatus, computer device, and storage medium |
Non-Patent Citations (2)
Title |
---|
吕昊;张成元;: "基于投影仪摄像机系统的人机交互关键技术研究", 科学技术创新, no. 10, 5 April 2020 (2020-04-05) * |
韩若冰;江松;郝以凤;张尚书;刘杰;刘琳琳;: "基于全息投影的交互向导系统探讨", 今日印刷, no. 07, 10 July 2020 (2020-07-10) * |
Also Published As
Publication number | Publication date |
---|---|
CN114274184B (en) | 2024-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109844784B (en) | Adaptive process for guiding human performed inventory tasks | |
CN109081028B (en) | Robot-based article conveying method and system | |
US10392190B1 (en) | System and method of providing delivery of items from one container to another container in a hybrid environment | |
US20180243776A1 (en) | Intelligent flexible hub paint spraying line and process | |
CN107598934A (en) | A kind of intelligent robot foreground application system and method | |
EP4163231A1 (en) | Warehousing system, goods consolidation method and device, material box moving device, and control terminal | |
CN106682418A (en) | Intelligent access system and access method on basis of robot | |
HK1029780A1 (en) | Method for use of an elevator installation. | |
US12026659B2 (en) | Automated system for management of receptacles | |
US10589932B1 (en) | System and method of providing delivery of items from one container to another container | |
CN113657565A (en) | Robot cross-floor moving method and device, robot and cloud server | |
CN109544081B (en) | Logistics sorting mode matching method and system | |
KR102494841B1 (en) | Building providing delivery service using robots performing delivery task | |
CN114274184A (en) | Logistics robot man-machine interaction method and system based on projection guidance | |
CN111573445A (en) | Non-contact man-machine interaction system for elevator | |
US11905130B2 (en) | System and method of providing delivery of items from one container to another container using a suction attachment system | |
CN118052421A (en) | Unmanned forklift data management method and system based on artificial intelligence | |
CN103771072B (en) | A kind of voice Picking System based on speech recognition shelf and method | |
CN112008727A (en) | Elevator-taking robot key control method based on bionic vision and elevator-taking robot | |
US11597596B1 (en) | System and method of providing an elevator system for mobile robots | |
CN111489117A (en) | Article distribution method and system based on visual computing interaction | |
CN114442608B (en) | Office building logistics robot and control method thereof | |
CN111047246A (en) | Article transportation method and device and storage medium | |
CN214987861U (en) | Storage goods picking system | |
CN113470259A (en) | Shared intelligent household service robot and use method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||