WO2020168896A1 - Gesture control method, apparatus, device and medium for an unmanned delivery robot - Google Patents

Gesture control method, apparatus, device and medium for an unmanned delivery robot

Info

Publication number
WO2020168896A1
Authority
WO
WIPO (PCT)
Prior art keywords
mode
gesture
delivery robot
user
unmanned delivery
Prior art date
Application number
PCT/CN2020/073599
Other languages
English (en)
French (fr)
Inventor
徐志浩
王建伟
李雨倩
刘懿
Original Assignee
北京京东尚科信息技术有限公司
北京京东世纪贸易有限公司
Priority date
Filing date
Publication date
Application filed by 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司
Publication of WO2020168896A1 publication Critical patent/WO2020168896A1/zh

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • This application relates to the field of unmanned delivery technology, and in particular to a gesture control method, device, electronic equipment and computer readable medium for an unmanned delivery robot.
  • Delivery by unmanned delivery robots is expected to become a major trend.
  • The existing pick-up scheme for unmanned delivery robots is mainly based on entering a verification code to open the cargo box and retrieve the goods.
  • Picking up goods by entering a verification code has the following technical defects: the verification code may be lost or forgotten, and the verification code input device, being exposed on the outside of the vehicle body, is easily damaged.
  • To address this, the present application provides a gesture control method, device, electronic device and computer readable medium for an unmanned delivery robot, which can shorten the pick-up time, improve the flexibility of the pick-up process, and avoid the defects of the verification-code pick-up scheme.
  • According to one aspect, a gesture control method for an unmanned delivery robot includes: recognizing a user's gesture to generate a recognition result; determining the operation mode of the unmanned delivery robot, wherein the operation modes include a driving mode and a gesture pickup mode; and, when the recognition result is credible and the operation mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform a predetermined operation.
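The claimed method above can be sketched as a minimal decision step. This is an illustrative sketch only; the mode names, function name, and return labels are assumptions, as the patent specifies no implementation.

```python
from enum import Enum

class OperationMode(Enum):
    # The two operation modes named in the claim.
    DRIVING = "driving"
    GESTURE_PICKUP = "gesture_pickup"

def control_step(recognition_credible: bool, mode: OperationMode) -> str:
    """Move to the user only when BOTH conditions of the claim hold:
    the recognition result is credible AND the mode is gesture pickup."""
    if recognition_credible and mode is OperationMode.GESTURE_PICKUP:
        return "move_to_user_and_perform_predetermined_operation"
    return "continue_current_mode"
```

For example, `control_step(True, OperationMode.DRIVING)` leaves the vehicle in its current mode, since only the gesture pickup mode permits moving to the user.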
  • In some embodiments, moving the unmanned delivery robot to the user to perform a predetermined operation further includes: obtaining the current state of the unmanned delivery robot's cargo box; and, when the recognition result is credible and the current state of the cargo box meets a second predetermined condition, changing the operation mode to the gesture pickup mode.
  • In some embodiments, determining that the recognition result is credible includes: acquiring a predetermined number of first gestures within a predetermined time; recognizing the first gestures to generate a plurality of first recognition results; and, when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determining that the recognition result is credible.
  • In some embodiments, determining that the current state of the cargo box satisfies the second predetermined condition includes: when the cargo box is determined to be in the closed state, confirming that the current state of the cargo box meets the second predetermined condition.
  • In some embodiments, moving the unmanned delivery robot to the user to perform a predetermined operation further includes: determining a gesture recognition flag according to the recognition result; determining a mode flag according to the operation mode; and, when the gesture recognition flag and the mode flag meet a third predetermined condition, moving the unmanned delivery robot to the user to perform the predetermined operation.
  • In some embodiments, the gesture recognition flag and the mode flag satisfying the third predetermined condition includes: when the gesture recognition flag indicates that the recognition result is credible and the mode flag indicates that the operation mode is the gesture pickup mode, determining that the gesture recognition flag and the mode flag meet the third predetermined condition.
  • In some embodiments, moving the unmanned delivery robot to the user to perform a predetermined operation includes: the unmanned delivery robot issuing a voice prompt for the user to pick up the goods and moving toward the user; acquiring the vehicle speed of the unmanned delivery robot; and, when the vehicle speed is zero, sending a box-opening instruction to the cargo box of the unmanned delivery robot.
  • the vehicle speed includes the current rear wheel speed, the current front wheel rotation angle, and the front wheel rotation angle of the previous sampling period.
  • In some embodiments, judging whether the recognition result and the operation mode satisfy the predetermined condition further includes: when the cargo boxes are not all in the closed state, issuing a voice prompt to close the cargo box.
  • Parallel processing is performed on the recognition result, the operating mode, and the vehicle speed to determine whether they meet a predetermined condition.
  • According to another aspect, a gesture control device for an unmanned delivery robot is provided.
  • The device includes: a gesture recognition module for recognizing a user's gesture to generate a recognition result; a mode module for determining the operation mode of the unmanned delivery robot; and a judgment module configured to move the unmanned delivery robot to the user to perform a predetermined operation when the recognition result is credible and the operation mode is the gesture pickup mode.
  • According to another aspect, an electronic device includes: one or more processors; and a storage device for storing one or more programs, which, when executed by the one or more processors, cause the one or more processors to implement any one of the aforementioned gesture control methods for an unmanned delivery robot.
  • According to another aspect, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements any one of the aforementioned gesture control methods for an unmanned delivery robot.
  • According to the embodiments of the present application, the pick-up time can be shortened, the flexibility of the pick-up process can be improved, and the defects of the verification-code pick-up scheme, namely that the verification code may be lost or forgotten and that the verification code input device, exposed on the outside of the vehicle body, is easily damaged, can be avoided.
  • Fig. 1 is a system block diagram showing a gesture control method and device for an unmanned delivery robot according to an exemplary embodiment.
  • Fig. 2 is a flow chart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
  • Fig. 3 is a flowchart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
  • Fig. 4 is a flowchart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
  • Fig. 5 is a flow chart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
  • Fig. 6 is a flow chart showing a gesture control method for an unmanned delivery robot according to another exemplary embodiment.
  • Fig. 7 is a flow chart showing a gesture control method for an unmanned delivery robot according to another exemplary embodiment.
  • Fig. 8 is a block diagram showing a gesture control device for an unmanned delivery robot according to an exemplary embodiment.
  • Fig. 9 is a block diagram showing a gesture control device for an unmanned delivery robot according to another exemplary embodiment.
  • Fig. 10 is a block diagram showing an electronic device used for gesture control of an unmanned delivery robot according to an exemplary embodiment.
  • Fig. 1 is a system block diagram showing a gesture control method and device for an unmanned delivery robot according to an exemplary embodiment.
  • the server 105 may be a server that provides various services, such as a background management server (just an example) that supports a gesture control system for an unmanned delivery robot operated by a user using the terminal devices 101, 102, and 103.
  • the background management server can analyze and process the received gesture control request and other data, and feed back the processing results (such as box opening instructions, commands for sending voice prompts-only examples) to the terminal device.
  • The server 105 can, for example, recognize the user's gesture to generate a recognition result; the server 105 can, for example, determine the operation mode of the unmanned delivery robot, where the operation mode includes a driving mode and a gesture pickup mode; and the server 105 can, for example, determine that, when the recognition result is credible and the operation mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation.
  • The server 105 may be a single physical server or may be composed of multiple servers. For example, part of the server 105 may serve as the gesture control task submission system of this application, for obtaining gesture control commands for the unmanned delivery robot; and part of the server 105 may serve as the gesture control system of this application, for recognizing the user's gestures to generate recognition results, determining the operation mode of the unmanned delivery robot (where the operation mode includes a driving mode and a gesture pickup mode), and, when the recognition result is credible and the operation mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform a predetermined operation.
  • the gesture control method for the unmanned delivery robot provided in the embodiment of the present application can be executed by the server 105. Accordingly, the device for gesture control of the unmanned delivery robot can be set in the server 105.
  • the request end provided to the user for submitting gesture control tasks and obtaining gesture control results is generally located in the terminal devices 101, 102, and 103.
  • Fig. 2 is a flow chart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
  • unmanned delivery robots include but are not limited to unmanned vehicles, unmanned aerial vehicles and other forms.
  • According to the method, the pick-up time can be shortened, the flexibility of the pick-up process can be improved, and the defects of the verification-code pick-up scheme, namely that the verification code may be lost or forgotten and that the verification code input device, exposed on the outside of the vehicle body, is easily damaged, can be avoided.
  • Referring to Fig. 2, a gesture control method for an unmanned vehicle in an exemplary embodiment of the present application will be described.
  • In step S210, the user's gesture is recognized to generate a recognition result.
  • When the user wants to pick up the goods, the user can make a corresponding pick-up gesture toward the unmanned vehicle as it approaches.
  • the gesture is used to verify the identity of the user.
  • When the user is the one waiting for the unmanned vehicle, the recognition result of the gesture is credible; when the user is not, the recognition result of the gesture is not credible.
  • Recognition of gestures can be performed, for example, through image recognition: when the user makes a gesture, multiple photos of the gesture can be captured and image recognition performed on them to determine whether the user's gesture is a pick-up gesture.
  • In step S220, the operation mode of the unmanned vehicle is determined, where the operation mode may include a driving mode and a gesture pickup mode.
  • The unmanned vehicle will be in different operation modes during operation, and in these modes it will adopt different strategies toward the surrounding environment. For example, in the driving mode, when the unmanned vehicle encounters a pedestrian, it will choose to go around and continue driving.
  • In the gesture pickup mode, when the unmanned vehicle encounters an obstacle, it will first determine whether the obstacle is the target pedestrian; if so, it will wait for the pedestrian to make the preset pick-up action and open the box according to the pedestrian's pick-up gesture; otherwise it will continue to follow the target pedestrian.
  • In step S230, when the recognition result is credible and the operation mode is the gesture pickup mode, the unmanned vehicle moves to the user to perform a predetermined operation.
  • the recognition result is credible, it indicates that the user is the target user, that is, the user who needs to unpack the goods.
  • When the operation mode is the gesture pickup mode, factors such as the operating speed of the unmanned vehicle meet the conditions for the user to open the box and pick up the goods, such as, but not limited to, the unmanned vehicle having stopped and being detected as parked in a safe place.
  • In some embodiments, the current state of the unmanned vehicle's cargo box can also be obtained; and, when the recognition result is credible and the current state of the cargo box satisfies the second predetermined condition, the operation mode is changed to the gesture pickup mode.
  • the current state of the unmanned vehicle cargo box can indicate whether the cargo box is closed.
  • For example, a sensor that detects the position of the box door can be installed in the cargo box of the unmanned vehicle, and its signal can be collected in each sampling period. The collected door-position signal indicates whether the cargo box of the unmanned vehicle is closed, that is, the current state of the unmanned vehicle's cargo box.
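The door-position sensing described above might be summarized as a simple predicate. This is a sketch only; the 0/1 signal encoding is an assumption for illustration.

```python
def cargo_box_closed(door_position_signals):
    """The box is considered closed only if every door-position signal
    collected in the sampling period reports 'closed'.
    Encoding assumed here: 0 = closed, 1 = open."""
    return all(signal == 0 for signal in door_position_signals)
```

A vehicle with several box doors would pass the list of all door signals from one sampling period, so a single open door marks the cargo box as not closed.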
  • In some embodiments, the step of judging that the recognition result is credible may include: acquiring a predetermined number of first gestures within a predetermined time; recognizing the first gestures to generate multiple first recognition results; and, when the credibility of the multiple first recognition results is greater than a predetermined ratio, determining that the recognition result is credible.
  • For example, a predetermined number of pictures, that is, the multiple first gestures, can be taken by the camera within a predetermined time. The predetermined time and the predetermined number can be determined according to the camera's parameters or the actual situation.
  • the image recognition method may be used to recognize multiple first gestures to determine whether they are box-opening gestures and generate multiple first recognition results.
  • When a first gesture is recognized as the box-opening gesture, the corresponding first recognition result is considered credible.
  • the predetermined ratio may be a percentage value, and when the percentage of credible results among the plurality of first recognition results is greater than the predetermined ratio, the recognition result is considered credible.
  • In some embodiments, determining that the current state of the cargo box satisfies the second predetermined condition includes: when the cargo box is determined to be in the closed state, confirming that the current state of the cargo box meets the second predetermined condition.
  • In some embodiments, step S230 may further include: determining a gesture recognition flag according to the recognition result; determining a mode flag according to the operation mode; and, when the gesture recognition flag and the mode flag meet a third predetermined condition, moving the unmanned vehicle to the user to perform a predetermined operation.
  • the gesture recognition flag and the mode flag can be Boolean values, but the present invention does not specifically limit the specific data form of the gesture recognition flag and the mode flag. For example, when the recognition result is credible, the gesture recognition flag can take the value 1, and when the recognition result is not credible, the gesture recognition flag can take the value 0.
  • When the operation mode is the gesture pickup mode, the mode flag may take the value 1; when the operation mode is not the gesture pickup mode, the mode flag may take the value 0.
  • When the gesture recognition flag indicates that the recognition result is credible and the mode flag indicates that the operation mode is the gesture pickup mode, the third predetermined condition is met.
  • In some embodiments, moving the unmanned vehicle to the user to perform a predetermined operation may include: the unmanned vehicle issuing a voice prompt for the user to pick up the goods and moving toward the user; obtaining the speed of the unmanned vehicle; and, when the vehicle speed is zero, sending a box-opening instruction to the cargo box of the unmanned vehicle.
  • When the unmanned vehicle detects that the current user is far away, it can move toward the user, obtaining the vehicle speed in real time; when the speed is zero, it sends the box-opening instruction to the cargo box for the user to pick up the goods.
  • The vehicle speed may include the current rear wheel speed, the current front wheel angle, and the front wheel angle of the previous sampling period. When the rear wheel speed is zero and the difference between the current front wheel angle and the front wheel angle of the previous sampling period is zero, the speed of the unmanned vehicle can be judged to be zero, that is, the unmanned vehicle has stopped.
  • the recognition result, operating mode and vehicle speed are processed in parallel to determine whether they meet predetermined conditions.
  • the recognition result, operating mode, and vehicle speed are passed as parallel threads for logical judgment to determine whether they meet predetermined conditions.
  • the current state of the container can also be used as a parallel thread to make the above judgment.
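The parallel-thread judgment described above could be sketched as follows. The thread structure and dictionary keys are illustrative assumptions; the patent only states that the inputs are processed as parallel threads feeding one logical judgment.

```python
import threading
import queue

def post_input(name, value, out_q):
    # Each input (recognition result, operating mode, vehicle speed,
    # cargo box state) runs in its own thread and posts to the logic unit.
    out_q.put((name, value))

def parallel_judgment(recognition_credible, mode, speed, box_state):
    out_q = queue.Queue()
    inputs = {"recognition": recognition_credible, "mode": mode,
              "speed": speed, "box": box_state}
    threads = [threading.Thread(target=post_input, args=(k, v, out_q))
               for k, v in inputs.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The logic unit collects all posted inputs, then checks the
    # predetermined condition over them.
    results = dict(out_q.get() for _ in range(len(inputs)))
    return (results["recognition"]
            and results["mode"] == "gesture_pickup"
            and results["speed"] == 0.0
            and results["box"] == "closed")
```

In a real robot the threads would loop and sample continuously; here each posts once so the combined judgment can be shown in a single call.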
  • In some embodiments, a voice prompt may be issued to prompt the user to close the cargo box.
  • When the current state of the cargo boxes is not all closed, it indicates that some cargo box is in the open state. For example, a voice prompt can be issued to prompt the user to close the box; alternatively, a delay time can be set, and if the boxes are still not all closed after the delay, the voice prompt is issued to remind the user to close the open cargo box.
  • According to the method, the unmanned delivery vehicle is controlled by gesture recognition, and logical judgments on the current state of the cargo box, the operation mode of the unmanned vehicle, and the vehicle speed determine when to open the box for the user to pick up the goods.
  • The gesture control method for the unmanned delivery robot of this application can shorten the pick-up time, improve the flexibility of the pick-up process, and avoid the defects of the verification-code pick-up scheme, namely that the verification code may be lost or forgotten and that the verification code input device, exposed on the outside of the vehicle body, is easily damaged.
  • Fig. 3 is a flowchart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
  • As shown in Fig. 3, the gesture control method for unmanned vehicles may include:
  • Step S310: Acquire a predetermined number of first gestures within a predetermined time.
  • The predetermined time may be a unit time; for example, the user's gesture may be captured by a high-speed camera, and the predetermined number may be the number of photos the camera can take in that unit time. What is acquired in this way is the multiple first gestures. It should be understood that the predetermined time and the predetermined number can be set according to actual conditions, and the present invention is not limited to the above examples.
  • Step S320: Recognize the multiple first gestures to generate multiple first recognition results.
  • The multiple first gestures can be recognized by an image recognition method to determine whether they are box-opening pick-up gestures, generating multiple first recognition results.
  • Each first recognition result may be either a box-opening pick-up gesture or not a box-opening pick-up gesture.
  • Step S330: When the credibility of the multiple first recognition results is greater than a predetermined ratio, determine that the recognition result is credible. Among the multiple first recognition results, the proportion of results recognized as the box-opening pick-up gesture is the credibility. When the credibility is greater than the predetermined ratio, the recognition result is considered credible, that is, the user's gesture is the box-opening pick-up gesture.
  • the specific value of the predetermined ratio can be determined by experiment.
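Steps S310 to S330 can be condensed into a sketch like the following. The `"pickup"` label and the 0.8 default ratio are illustrative assumptions; the patent leaves the predetermined ratio to experiment.

```python
def recognition_credible(first_results, predetermined_ratio=0.8):
    """S330: the fraction of first recognition results recognized as the
    box-opening pick-up gesture is the credibility; the overall result is
    credible when the credibility exceeds the predetermined ratio."""
    if not first_results:
        return False
    credibility = sum(1 for r in first_results if r == "pickup") / len(first_results)
    return credibility > predetermined_ratio
```

With 9 of 10 results recognized as the pick-up gesture, the credibility is 0.9, which exceeds a predetermined ratio of 0.8, so the result is judged credible, matching the worked example given later in the description.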
  • Fig. 4 is a flowchart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment. As shown in Fig. 4, the gesture control method for unmanned vehicles may include:
  • Step S410: Determine a gesture recognition flag according to the recognition result.
  • the gesture recognition flag can be a Boolean value. When the recognition result is credible, the gesture recognition flag can take the value 1, and when the recognition result is not credible, the gesture recognition flag can take the value 0.
  • Step S420: Determine a mode flag according to the operation mode.
  • The mode flag can be a Boolean value: when the operation mode is the gesture pickup mode, the mode flag can take the value 1; otherwise, it can take the value 0.
  • Step S430: When the gesture recognition flag and the mode flag meet a third predetermined condition, the unmanned vehicle moves to the user to perform a predetermined operation.
  • The third predetermined condition may be that the values of the gesture recognition flag and the mode flag satisfy certain conditions. For example, given the flag values of steps S410 and S420, the third predetermined condition may be that both the gesture recognition flag and the mode flag take the value 1.
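The flag logic of steps S410 to S430 might be sketched as follows, using the 1/0 encoding given in the description (function name illustrative):

```python
def third_condition_met(recognition_credible: bool,
                        mode_is_gesture_pickup: bool) -> bool:
    # S410/S420: encode each judgment as a flag taking the value 1 or 0.
    gesture_flag = 1 if recognition_credible else 0
    mode_flag = 1 if mode_is_gesture_pickup else 0
    # S430: the third predetermined condition is that both flags equal 1.
    return gesture_flag == 1 and mode_flag == 1
```

Only when both flags are 1 does the vehicle move to the user; any other combination leaves it in its current behavior.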
  • In some embodiments, a voice prompt may be issued to prompt the user to close the cargo box.
  • For example, a query instruction can be sent to the cargo box to query its current state. If the state returned by the cargo box is not all closed, it indicates that a cargo box is in the open state; at this time, a voice prompt can be issued to ask the user to close the box.
  • Fig. 5 is a flow chart showing a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
  • As shown in Fig. 5, the method for the unmanned vehicle to move to the user and perform a predetermined operation may include:
  • Step S510: The unmanned vehicle issues a voice prompt for the user to pick up the goods and moves toward the user.
  • Step S520: Obtain the speed of the unmanned vehicle.
  • the vehicle speed may include the current rear wheel speed, the current front wheel angle, and the front wheel angle of the previous sampling period.
  • The rear wheel speed can indicate the driving speed of the unmanned vehicle, and the front wheel angle can indicate its driving direction.
  • Step S530: When the vehicle speed is zero, send a box-opening instruction to the cargo box of the unmanned vehicle.
  • The vehicle speed being zero may be the situation where the rear wheel speed is zero and the difference between the current front wheel angle and the front wheel angle of the previous sampling period is zero.
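The zero-speed criterion used as the precondition of step S530 can be written directly from the description (function name illustrative):

```python
def vehicle_stopped(rear_wheel_speed: float,
                    front_wheel_angle: float,
                    prev_front_wheel_angle: float) -> bool:
    """The vehicle speed is taken as zero when the rear wheel speed is
    zero and the front wheel angle has not changed since the previous
    sampling period."""
    return (rear_wheel_speed == 0.0
            and (front_wheel_angle - prev_front_wheel_angle) == 0.0)
```

A vehicle whose rear wheels have stopped but whose front wheels are still turning (steering) is thus not yet considered stopped, so no box-opening instruction is sent.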
  • Fig. 6 is a flow chart showing a gesture control method for an unmanned delivery robot according to another exemplary embodiment.
  • As shown in Fig. 6, the gesture control method for unmanned vehicles may include:
  • Step S610: Judge whether the gesture recognition result is credible.
  • The credibility judgment can be based on the percentage of recognition results per unit time that are the box-opening gesture. For example, suppose 10 gesture recognition results are received per unit time, of which 9 are recognized as box-opening gestures and 1 is not, and a predetermined ratio of, say, 80% is set. The percentage of box-opening gestures is then 90%, which is greater than the predetermined ratio of 80%, so the gesture recognition result is judged credible. If credible, go to step S620; otherwise return to step S610 and execute the next cycle.
  • Step S620: Set the temporary mode to the unboxing mode and start an 8-second timer, after which the temporary mode is reset to the current mode.
  • the temporary mode is an additional variable, which is not a sensor detection value. The change of the temporary mode value will not change the current operating mode of the unmanned vehicle.
  • the temporary mode can be input to the gesture logic unit, and step S610 is returned to execute the next cycle.
  • Step S630: Obtain the current state of the cargo box in real time and compare it with the previous state to determine whether the box state has changed from open to closed; if so, set the box-change flag to 1, otherwise set it to 0.
  • the container change flag can be input into the gesture logic unit, and step S630 is returned to execute the next cycle.
  • Step S640: Obtain the current operation mode of the unmanned vehicle and compare its priority with that of the temporary mode. It is also possible to compare its priority directly with that of the unboxing mode, which is not particularly limited in the present invention. The priorities have been exemplified in the foregoing embodiment and will not be repeated here.
  • the default value of the temporary mode can be the mode with the highest priority. If the current operating mode has a higher priority than the temporary mode, return to step S640 to execute the next cycle. If the priority of the current operating mode is lower than the temporary mode, the priority flag is set to 1, and the priority flag is input to the gesture logic unit, and step S640 is returned to execute the next cycle.
  • Step S650: In the gesture logic unit, if the temporary mode is the unboxing mode, the box-change flag is 1, and the priority flag is 1, then change the operation mode of the unmanned vehicle to the unboxing mode; otherwise return to step S650 and execute the next cycle.
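The gesture-logic-unit decision of step S650 might be sketched as a single function. The mode name and flag encoding follow the description; the function and parameter names are illustrative.

```python
def gesture_logic_unit(temporary_mode: str,
                       box_change_flag: int,
                       priority_flag: int) -> str:
    """S650: switch the operating mode to unboxing only when the temporary
    mode is unboxing, the box just changed from open to closed (flag 1),
    and the unboxing mode outranks the current mode (priority flag 1)."""
    if (temporary_mode == "unboxing"
            and box_change_flag == 1
            and priority_flag == 1):
        return "unboxing"
    return "unchanged"
```

Because all three inputs are fed in from independent cycles (S620, S630, S640), the unit simply re-evaluates this conjunction on every cycle until it holds.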
  • Fig. 7 is a flow chart showing a gesture control method for an unmanned delivery robot according to another exemplary embodiment.
  • As shown in Fig. 7, the gesture control method for unmanned vehicles may include:
  • the gesture recognition result, the current vehicle speed and the current operating mode of the unmanned vehicle are used as parallel threads, and three parallel messages are passed into the cargo box logic unit in real time.
  • There is also a cargo box status thread in the cargo box logic unit, which runs synchronously with the above three parallel threads.
  • Step S705: Determine whether the gesture recognition result is credible. The foregoing embodiment has described the credibility determination method in detail, which will not be repeated here. If the gesture recognition result is credible, go to step S710; otherwise return to step S705 and execute the next cycle.
  • Step S710: Set the gesture recognition flag to 1 and pass it to the cargo box logic unit, then return to step S705 to execute the next cycle.
  • Step S715: Parse the current speed of the unmanned vehicle into the current rear wheel speed and front wheel angle, keeping the front wheel angle of the previous sampling period (if the vehicle speed is acquired for the first time, the previous front wheel angle is 0). Pass the parsed vehicle speed to the cargo box logic unit and return to step S715 to execute the next cycle.
  • Step S720: Obtain the current operation mode of the unmanned vehicle and determine whether it is the unboxing mode; if so, execute step S725, otherwise return to step S720 to execute the next cycle.
  • Step S725: Set the mode flag to 1 and start a 1 s timer; after 1 s, update the operation mode of the unmanned vehicle to the current latest operation mode, return to step S720, and execute the next cycle.
  • Here, 1 s is a delay time set according to the actual situation, and the present invention is not limited to this.
  • Step S730: In the cargo box logic unit, receive the output results of steps S710, S715, and S725.
  • The cargo box status thread in the cargo box logic unit will first issue a query command to the cargo box to query its state, and the cargo box returns the current state to the logic unit. If the cargo boxes are all closed, execute step S745; if they are not all closed, execute step S735.
  • Step S735: After a delay of 5 seconds, call the voice module to issue a voice prompt asking the user to close the box.
  • Step S740: Set the gesture recognition flag to 0, return to step S730, and execute the next cycle.
  • Step S745: If the gesture recognition flag is 1 and the mode flag is 1, execute step S750; otherwise return to step S730 to execute the next cycle.
  • Step S750: A voice prompt tells the user to pick up the goods, and the unmanned vehicle continues to slowly approach the user until it reaches the user and stops.
  • Step S755: By analyzing the vehicle speed obtained in step S715, determine whether the current rear wheel speed, the current front wheel angle, and the previous front wheel angle are all zero. If so, execute step S760; otherwise return to step S730 and execute the next cycle, the unmanned vehicle continuing to move until it approaches the user and stops.
  • Step S760: Once the unmanned vehicle has approached and parked in front of the user, it tries up to 3 box-opening actions, each time sending a box-opening instruction to the cargo box, which feeds back its current state. If the box is opened within the 3 attempts, execute step S765; if the box cannot be opened, return to step S730 to execute the next cycle.
  • Step S765: After the box is opened, clear all previous states, such as the mode flag, the gesture recognition flag, and the current vehicle speed, and return to step S730 to execute the next cycle.
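The cargo-box-logic cycle of steps S730 to S765 could be condensed into a single decision function. This is a sketch under stated assumptions: `try_open` is a hypothetical callable standing in for one box-opening instruction plus the box's status feedback, and the returned strings are illustrative labels, not patent terminology.

```python
def cargo_box_logic(all_closed: bool, gesture_flag: int, mode_flag: int,
                    stopped: bool, try_open) -> str:
    """One cycle of the cargo box logic unit (S730-S765)."""
    if not all_closed:
        return "prompt_close_box"              # S735: ask user to close box
    if not (gesture_flag == 1 and mode_flag == 1):
        return "wait"                          # S745 fails -> next cycle
    if not stopped:
        return "keep_approaching_user"         # S755 fails -> keep moving
    for _ in range(3):                         # S760: up to 3 open attempts
        if try_open():
            return "opened_and_state_cleared"  # S765: clear previous states
    return "open_failed"                       # fall back to next cycle
```

Each return value corresponds to the branch the flowchart takes back into the next cycle; only the successful open path reaches the state-clearing step S765.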
  • According to the method, the unmanned delivery vehicle is controlled by gesture recognition, and logical judgments on the current state of the cargo box, the operation mode of the unmanned vehicle, and the vehicle speed determine when to open the box for the user to pick up the goods.
  • The gesture control method for the unmanned delivery robot of this application can shorten the pick-up time, improve the flexibility of the pick-up process, and avoid the defects of the verification-code pick-up scheme, namely that the verification code may be lost or forgotten and that the verification code input device, exposed on the outside of the vehicle body, is easily damaged.
  • the gesture control method for unmanned delivery robots of this application proposes a scheme in which the vehicle autonomously moves to the vicinity of the user, and the user makes a pickup gesture toward the unmanned vehicle as it approaches in order to pick up the goods.
  • the solution uses gesture logic and cargo box logic to recognize and judge the correctness of the user's gesture and whether the unmanned vehicle's status meets the conditions for parking and opening the box.
  • after the user is verified, the unmanned vehicle can autonomously move in front of the user, park, and open the box for the user to pick up the goods.
  • Fig. 8 is a block diagram showing a gesture control device for an unmanned delivery robot according to an exemplary embodiment.
  • a gesture control device for an unmanned delivery robot may include a gesture recognition module 810, a mode module 820, and a judgment module 830.
  • the gesture recognition module 810 is used to recognize the user's gesture to generate a recognition result.
  • the gesture can be recognized, for example, from images.
  • while the user makes the gesture, multiple gesture photos can be taken and image recognition performed on them to determine whether the user's gesture is a pickup gesture.
  • the mode module 820 is used to determine the operating mode of the unmanned vehicle.
  • the operating modes can include a driving mode, a gesture pickup mode, and an unboxing mode; in each of these modes, the unmanned vehicle adopts a different strategy toward its surrounding environment.
  • the operation mode has been described in the foregoing embodiment, and will not be repeated here.
  • the judging module 830 is configured to move the unmanned vehicle to the user to perform a predetermined operation when the recognition result is credible and the operation mode is the gesture pickup mode.
  • when the recognition result is credible, it indicates that the user is the target user, that is, a user who needs the box opened to pick up goods.
  • when the operating mode is the gesture pickup mode, factors such as the operating speed of the unmanned vehicle meet the conditions for opening the box for the user, for example, but not limited to, the unmanned vehicle having stopped and being detected as parked in a safe place.
  • the judging module 830 may be further configured to determine a gesture recognition flag according to the recognition result; determine a mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag satisfy a third predetermined condition, cause the unmanned vehicle to move to the user to perform a predetermined operation.
  • gesture recognition is applied to unmanned-vehicle delivery, and the current state of the cargo box, the operating mode of the unmanned vehicle, and the vehicle speed are logically evaluated to control the box-opening pickup.
  • the gesture control device for the unmanned delivery robot of this application can shorten the pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged.
  • Fig. 9 is a block diagram showing a gesture control device for an unmanned delivery robot according to another exemplary embodiment.
  • a gesture control device for an unmanned vehicle may include: a cargo box state module 910, a cargo box logic module 920, a gesture logic module 930, and an unmanned vehicle mode module 940.
  • a user makes a gesture to the unmanned vehicle when it is convenient for the user, and the result of the gesture recognition is input to the gesture logic module 930, which is used to determine whether the gesture is a pickup gesture.
  • the gesture logic module 930 is also used to obtain the current operating mode of the unmanned vehicle (such as the driving mode, gesture pickup mode, and unboxing mode) from the unmanned vehicle, and to obtain the current status of the cargo box from the cargo box logic module 920.
  • after receiving the above 3 inputs (the current operating mode of the unmanned vehicle, the current status of the cargo box, and the gesture recognition result), the gesture logic module 930 outputs a mode result, i.e., whether the operating mode of the unmanned vehicle is changed to the unboxing mode.
  • the cargo box logic module 920 also monitors the gesture recognition result, the current mode of the unmanned vehicle, and the current vehicle speed from the unmanned vehicle; after these three signals are processed by the logic unit, it sends query commands or box-opening commands to the cargo box to complete the box-opening operation and the box-closing prompts.
  • gesture recognition is applied to unmanned-vehicle delivery, and the current state of the cargo box, the operating mode of the unmanned vehicle, and the vehicle speed are logically evaluated to control when the unmanned vehicle opens the box for the user to pick up the goods.
  • the gesture control device for the unmanned delivery robot of this application can shorten the pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged.
  • the gesture control device for unmanned delivery robots of this application proposes a scheme in which the vehicle autonomously moves to the vicinity of the user, and the user makes a pickup gesture toward the unmanned vehicle as it approaches in order to pick up the goods.
  • the solution uses gesture logic and cargo box logic to recognize and judge the correctness of the user's gesture and whether the unmanned vehicle's status meets the conditions for parking and opening the box.
  • after the user is verified, the unmanned vehicle can autonomously move in front of the user, park, and open the box for the user to pick up the goods.
  • Fig. 10 is a block diagram showing an electronic device for an unmanned delivery robot according to an exemplary embodiment.
  • the electronic device 1000 according to this embodiment of the present application will be described below with reference to FIG. 10.
  • the electronic device 1000 shown in FIG. 10 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the computer system 1000 includes a central processing unit (CPU) 1001, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage part 1008 into a random access memory (RAM) 1003.
  • the central processing unit 1001 may perform the steps shown in one or more of FIGS. 2, 3, 4, 5, 6, and 7.
  • the central processing unit 1001 may perform the following method: recognize the user's gestures to generate recognition results; determine the operation mode of the unmanned delivery robot, where the operation mode includes a driving mode and a gesture pickup mode; and, when the recognition result is credible and the operation mode is the gesture pickup mode, move the unmanned delivery robot to the user to perform a predetermined operation.
  • when the central processing unit 1001 executes "when the recognition result is credible and the operation mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation", the following method may also be executed: obtain the current state of the cargo box of the unmanned delivery robot; and, when the recognition result is credible, the operation mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, change the operating mode to the gesture pickup mode.
  • when the central processing unit 1001 executes "the recognition result is credible", it may execute the following method: acquire a predetermined number of first gestures within a predetermined time; recognize the first gestures to generate a plurality of first recognition results; and, when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determine that the recognition result is credible.
  • when the central processing unit 1001 executes "the current state of the cargo box satisfies the second predetermined condition", the following method may be executed: when the cargo box is judged to be in the closed state, confirm that the current state of the cargo box satisfies the second predetermined condition.
  • when the central processing unit 1001 executes "when the recognition result is credible and the operation mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation", the following method may also be executed: determine the gesture recognition flag according to the recognition result; determine the mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag meet a third predetermined condition, move the unmanned delivery robot to the user to perform a predetermined operation.
  • when the central processing unit 1001 executes "the gesture recognition flag and the mode flag satisfy the third predetermined condition", the following method may be executed: when the gesture recognition flag indicates that the recognition result is credible, and the mode flag indicates that the operating mode is the gesture pickup mode, determine that the gesture recognition flag and the mode flag satisfy the third predetermined condition.
  • when the central processing unit 1001 executes "the unmanned delivery robot moves to the user to perform a predetermined operation", the following method may be executed: the unmanned delivery robot issues a voice prompt for the user to pick up the goods and moves to the user; the vehicle speed of the unmanned delivery robot is obtained; and, when the vehicle speed is zero, a box-opening command is sent to the cargo box of the unmanned delivery robot.
  • the vehicle speed includes the current rear wheel speed, the current front wheel rotation angle, and the front wheel rotation angle of the previous sampling period.
  • when the central processing unit 1001 executes "the recognition result is credible and the operation mode is the gesture pickup mode", it may also execute the following method: process the recognition result, the operation mode, and the vehicle speed in parallel.
  • the RAM 1003 also stores various programs and data required for system operation, such as gesture recognition results and the current operating status of the unmanned vehicle.
  • the CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other through a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004.
  • the following components are connected to the I/O interface 1005: an input part 1006 including a touch screen, a keyboard, etc.; an output part 1007 including a liquid crystal display (LCD), speakers, etc.; a storage part 1008 including flash memory, etc.; and a communication part 1009 including a wireless network card, a high-speed network card, etc.
  • the communication section 1009 performs communication processing via a network such as the Internet.
  • the driver 1010 is also connected to the I/O interface 1005 as needed.
  • a removable medium 1011 such as a semiconductor memory, a magnetic disk, etc., is installed on the drive 1010 as needed, so that the computer program read from it is installed into the storage part 1008 as needed.
  • the software product can be stored in a non-volatile storage medium (which can be a CD-ROM, USB flash drive, removable hard disk, etc.) and includes several instructions that cause a computing device (which can be a personal computer, server, mobile terminal, smart device, etc.) to execute the method according to the embodiments of the present invention, such as the steps shown in one or more of Figures 2, 3, 4, 5, 6, and 7.
  • several instructions in the non-volatile storage medium, when executed by the processor, can implement the following method: recognize the user's gestures to generate recognition results; determine the operating mode of the unmanned delivery robot, where the operation mode includes a driving mode and a gesture pickup mode; and, when the recognition result is credible and the operation mode is the gesture pickup mode, move the unmanned delivery robot to the user to perform a predetermined operation.
  • when several instructions in the non-volatile storage medium are executed by the processor to perform "the recognition result is credible", the following method can be implemented: acquire a predetermined number of first gestures within a predetermined time; recognize the plurality of first gestures to generate a plurality of first recognition results; and, when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determine that the recognition result is credible.
  • when several instructions in the non-volatile storage medium are executed by the processor to perform "the current state of the cargo box satisfies the second predetermined condition", the following method can be implemented: when the cargo box is judged to be in the closed state, confirm that the current state of the cargo box satisfies the second predetermined condition.
  • when several instructions in the non-volatile storage medium are executed by the processor to perform "when the recognition result is credible and the operation mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation", the following method can be implemented: determine the gesture recognition flag according to the recognition result; determine the mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag meet a third predetermined condition, move the unmanned delivery robot to the user to perform a predetermined operation.
  • when several instructions in the non-volatile storage medium are executed by the processor to perform "the gesture recognition flag and the mode flag meet the third predetermined condition", the following method can be implemented: when the gesture recognition flag indicates that the recognition result is credible, and the mode flag indicates that the operation mode is the gesture pickup mode, determine that the gesture recognition flag and the mode flag satisfy the third predetermined condition.
  • when several instructions in the non-volatile storage medium are executed by the processor to perform "the unmanned delivery robot moves to the user to perform a predetermined operation", the following method can be implemented: the unmanned delivery robot issues a voice prompt for the user to pick up the goods and moves to the user; the vehicle speed of the unmanned delivery robot is obtained; and, when the speed is zero, a box-opening command is sent to the cargo box of the unmanned delivery robot.
  • the vehicle speed includes the current rear wheel speed, the current front wheel rotation angle, and the front wheel rotation angle of the previous sampling period.
  • when several instructions in the non-volatile storage medium are executed by the processor to perform "the recognition result is credible and the operation mode is the gesture pickup mode", the following method can also be implemented: process the recognition result, the operating mode, and the vehicle speed in parallel.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

A gesture control method, apparatus, electronic device, and readable medium for an unmanned delivery robot, capable of shortening pickup time, making the pickup process more flexible, and avoiding the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged. The method includes: recognizing a user's gesture to generate a recognition result (S210); determining an operating mode of the unmanned delivery robot, where the operating mode includes a driving mode and a gesture pickup mode (S220); and, when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moving to the user to perform a predetermined operation (S230).

Description

Gesture control method, apparatus, device, and medium for an unmanned delivery robot
The present disclosure claims priority to the Chinese invention patent application filed on February 22, 2019, with application number 201910135248.1 and entitled "Gesture control method, apparatus, device, and medium for an unmanned delivery robot".
Technical Field
This application relates to the field of unmanned delivery technology, and in particular to a gesture control method, apparatus, electronic device, and computer-readable medium for an unmanned delivery robot.
Background
Autonomous driving technology is currently developing rapidly, its application to logistics delivery has gradually become a research hotspot, and delivery by unmanned delivery robots will be a future trend. However, there is as yet no mature solution for picking up packages from an unmanned delivery robot. Existing pickup schemes mainly have the user enter a verification code to open the box and take the goods. Opening the box with a verification code, however, has the following technical drawbacks:
(1) The user must face the unmanned delivery robot and manually enter the verification code on an interactive interface to open the box, which takes a certain amount of time and makes for a poor user experience;
(2) If the verification code is lost or forgotten, the user cannot open the box and take the goods;
(3) The operating device used for entering the verification code must be exposed outside the vehicle body and is prone to damage, which can leave the user unable to open the box.
Summary
In view of this, the present application provides a gesture control method, apparatus, electronic device, and computer-readable medium for an unmanned delivery robot that can shorten pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged.
Other features and advantages of the present application will become apparent from the following detailed description, or will in part be learned through practice of the present application.
According to a first aspect of the embodiments of the present application, a gesture control method for an unmanned delivery robot is provided, including: recognizing a user's gesture to generate a recognition result; determining an operating mode of the unmanned delivery robot, where the operating mode includes a driving mode and a gesture pickup mode; and, when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moving to the user to perform a predetermined operation.
In an exemplary embodiment of the present application, when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moving to the user to perform a predetermined operation further includes: obtaining the current state of the cargo box of the unmanned delivery robot; and, when the recognition result is credible, the operating mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, changing the operating mode to the gesture pickup mode.
In an exemplary embodiment of the present application, the recognition result being credible includes: acquiring a predetermined number of first gestures within a predetermined time; recognizing the first gestures to generate a plurality of first recognition results; and, when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determining that the recognition result is credible.
In an exemplary embodiment of the present application, the current state of the cargo box satisfying the second predetermined condition includes: when the cargo box is judged to be in the closed state, confirming that the current state of the cargo box satisfies the second predetermined condition.
In an exemplary embodiment of the present application, when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moving to the user to perform a predetermined operation further includes: determining a gesture recognition flag according to the recognition result; determining a mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag satisfy a third predetermined condition, the unmanned delivery robot moving to the user to perform a predetermined operation.
In an exemplary embodiment of the present application, the gesture recognition flag and the mode flag satisfying the third predetermined condition includes: when the gesture recognition flag indicates that the recognition result is credible and the mode flag indicates that the operating mode is the gesture pickup mode, determining that the gesture recognition flag and the mode flag satisfy the third predetermined condition.
In an exemplary embodiment of the present application, the unmanned delivery robot moving to the user to perform a predetermined operation includes: the unmanned delivery robot issuing a voice prompt for the user to pick up the goods and moving to the user; obtaining the vehicle speed of the unmanned delivery robot; and, when the vehicle speed is zero, sending a box-opening command to the cargo box of the unmanned delivery robot.
In an exemplary embodiment of the present application, the vehicle speed includes the current rear wheel speed, the current front-wheel angle, and the front-wheel angle of the previous sampling period.
In an exemplary embodiment of the present application, the recognition result and the operating mode satisfying the predetermined condition further includes: when the current state of the cargo box is not the all-closed state, playing a voice prompt to prompt closing the cargo box. The recognition result, the operating mode, and the vehicle speed are processed in parallel to determine whether they satisfy the predetermined condition.
According to a second aspect of the embodiments of the present application, a gesture control apparatus for an unmanned delivery robot is provided, including: a gesture recognition module, configured to recognize a user's gesture to generate a recognition result; a mode module, configured to determine an operating mode of the unmanned delivery robot; and a judgment module, configured to, when the recognition result is credible and the operating mode is the gesture pickup mode, cause the unmanned delivery robot to move to the user to perform a predetermined operation.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including: one or more processors; and a storage device configured to store one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the gesture control method for an unmanned delivery robot according to any of the above.
According to a fourth aspect of the embodiments of the present application, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the gesture control method for an unmanned delivery robot according to any of the above.
The gesture control method, apparatus, electronic device, and computer-readable medium for an unmanned delivery robot according to the present application can shorten pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the specification, serve to explain the principles of the present application. The drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a system block diagram of a gesture control method and apparatus for an unmanned delivery robot according to an exemplary embodiment.
Fig. 2 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
Fig. 3 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
Fig. 4 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
Fig. 5 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment.
Fig. 6 is a flowchart of a gesture control method for an unmanned delivery robot according to another exemplary embodiment.
Fig. 7 is a flowchart of a gesture control method for an unmanned delivery robot according to another exemplary embodiment.
Fig. 8 is a block diagram of a gesture control apparatus for an unmanned delivery robot according to an exemplary embodiment.
Fig. 9 is a block diagram of a gesture control apparatus for an unmanned delivery robot according to another exemplary embodiment.
Fig. 10 is a block diagram of an electronic device for gesture control of an unmanned delivery robot according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in many forms and should not be construed as limited to those set forth here; rather, these embodiments are provided so that the invention will be thorough and complete, and will fully convey the concepts of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the invention. Those skilled in the art will recognize, however, that the technical solutions of the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The drawings are merely schematic illustrations of the invention; the same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted. Some of the block diagrams shown in the drawings do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are merely illustrative; they need not include all contents and steps, nor must they be executed in the order described. For example, some steps can be further decomposed, while others can be merged or partially merged, so the actual execution order may change according to the actual situation.
Example embodiments of the invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a system block diagram of a gesture control method and apparatus for an unmanned delivery robot according to an exemplary embodiment.
The server 105 may be a server providing various services, for example a backend management server (merely an example) that supports a gesture control system for an unmanned delivery robot operated by users with the terminal devices 101, 102, 103. The backend management server can analyze and otherwise process received data such as gesture control requests, and feed the processing results (for example a box-opening command or a command to play a voice prompt — merely examples) back to the terminal devices.
The server 105 may, for example, recognize a user's gesture to generate a recognition result; the server 105 may, for example, determine the operating mode of the unmanned delivery robot, where the operating mode includes a driving mode and a gesture pickup mode; and the server 105 may, for example, when the recognition result is credible and the operating mode is the gesture pickup mode, cause the unmanned delivery robot to move to the user to perform a predetermined operation.
The server 105 may be a single physical server or may, for example, be composed of multiple servers. Part of the server 105 may, for example, serve as the gesture control task submission system for the unmanned delivery robot of the present application, for obtaining tasks that will execute gesture control commands for the unmanned delivery robot; and part of the server 105 may, for example, serve as the gesture control system for the unmanned delivery robot of the present application, for recognizing a user's gesture to generate a recognition result; determining the operating mode of the unmanned vehicle, where the operating mode includes a driving mode and a gesture pickup mode; and, when the recognition result is credible and the operating mode is the gesture pickup mode, causing the unmanned delivery robot to move to the user to perform a predetermined operation.
It should be noted that the gesture control method for an unmanned delivery robot provided in the embodiments of the present application may be executed by the server 105, and accordingly the gesture control apparatus for an unmanned delivery robot may be provided in the server 105, while the request ends provided to users for submitting gesture control tasks and obtaining gesture control results are generally located in the terminal devices 101, 102, 103.
Fig. 2 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment. The unmanned delivery robot includes, but is not limited to, an unmanned vehicle, an unmanned aerial vehicle, and so on. The gesture control method for an unmanned vehicle shown in Fig. 2 can shorten pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged. The gesture control method for an unmanned vehicle in an exemplary embodiment of the present application is described below with reference to Fig. 2.
In step S210, a user's gesture is recognized to generate a recognition result. When the user wants to pick up goods, they can make the corresponding pickup gesture toward the unmanned vehicle as it approaches. The gesture is used to verify the user's identity: when the user is the vehicle's intended pickup user, the recognition result of the gesture is credible; when the user is not, the recognition result is not credible. The gesture can be recognized, for example, from images: while the user makes the gesture, multiple gesture photos are taken and image recognition is performed on them to determine whether the user's gesture is a pickup gesture.
In step S220, the operating mode of the unmanned vehicle is determined, where the operating mode may include a driving mode and a gesture pickup mode. The vehicle will be in different operating modes during operation, and in each mode it adopts a different strategy toward its surrounding environment. For example, in the driving mode, when the unmanned vehicle encounters a pedestrian, it chooses to detour around them and continue driving. In the gesture pickup mode, when the unmanned vehicle encounters an obstacle, it first determines whether the obstacle is the target pedestrian; if so, it waits for the pedestrian to make the preset pickup action and opens the box for them according to their pickup gesture, otherwise it follows the target pedestrian.
In step S230, when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned vehicle moves to the user to perform a predetermined operation. A credible recognition result indicates that the user is the target user, that is, a user who needs the box opened to pick up goods. When the operating mode is the gesture pickup mode, factors such as the vehicle's running speed satisfy the conditions for opening the box for the user, for example, but not limited to, the unmanned vehicle having stopped and being detected as parked in a safe place.
According to an example embodiment, the current state of the cargo box of the unmanned vehicle may also be obtained; and, when the recognition result is credible, the operating mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, the operating mode is changed to the gesture pickup mode. The current state of the cargo box can indicate whether the box is closed. For example, a sensor that can detect the door position may be installed in the vehicle's cargo box and its signal collected in each sampling period; the collected door-position signal can indicate whether the cargo box is closed, i.e., the current state of the cargo box.
The step of judging that the recognition result is credible may include: acquiring a predetermined number of first gestures within a predetermined time; recognizing the first gestures to generate a plurality of first recognition results; and, when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determining that the recognition result is credible. A camera may take a predetermined number of pictures within a predetermined time, i.e., the multiple first gestures; the predetermined time and number can be determined by the camera's parameters or the actual situation. The first gestures can be recognized by an image recognition method to judge whether each is a box-opening gesture, generating the first recognition results. When a first gesture is recognized as a box-opening gesture, its first recognition result is considered credible. The predetermined ratio can be a percentage: when the proportion of credible results among the first recognition results exceeds that ratio, the recognition result is considered credible. The current state of the cargo box satisfying the second predetermined condition includes: when the cargo box is judged to be in the closed state, confirming that the current state of the cargo box satisfies the second predetermined condition.
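As a minimal illustration of the credibility test described above — a recognition result counts as credible only when the share of per-photo pickup-gesture results exceeds a predetermined ratio — the following sketch assumes a boolean-list representation and an example 0.8 threshold, since the application leaves the predetermined ratio to be fixed experimentally:

```python
def recognition_is_credible(first_results, ratio_threshold=0.8):
    """Decide whether a batch of per-photo recognition results is credible.

    first_results: list of booleans, True if a photo was recognized as the
    box-opening pickup gesture. ratio_threshold is an assumed example value.
    """
    if not first_results:
        return False
    # Credibility = fraction of first recognition results that are credible.
    credibility = sum(first_results) / len(first_results)
    return credibility > ratio_threshold
```

With 9 of 10 photos recognized as the pickup gesture, the credibility is 0.9, which exceeds the example 0.8 threshold; this matches the worked example given later for step S610.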
According to an example embodiment, step S230 may further include: determining a gesture recognition flag according to the recognition result; determining a mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag satisfy a third predetermined condition, the unmanned vehicle moving to the user to perform a predetermined operation. The gesture recognition flag and the mode flag can be Boolean values, but the invention does not specifically limit their data form. For example, the gesture recognition flag can take the value 1 when the recognition result is credible and 0 when it is not; likewise, the mode flag can take the value 1 when the operating mode is the gesture pickup mode and 0 otherwise.
When the gesture recognition flag indicates that the recognition result is credible and the mode flag indicates that the operating mode is the gesture pickup mode, it is determined that the gesture recognition flag and the mode flag satisfy the third predetermined condition.
The unmanned vehicle moving to the user to perform a predetermined operation may include: the unmanned vehicle issuing a voice prompt for the user to pick up the goods and moving to the user; obtaining the vehicle speed of the unmanned vehicle; and, when the vehicle speed is zero, sending a box-opening command to the vehicle's cargo box. When the unmanned vehicle detects that the current user is relatively far away, it can move to a position in front of the current user, obtaining the vehicle speed in real time; when the speed is zero, the box-opening command is sent to the cargo box so the user can pick up the goods.
According to an example embodiment, the vehicle speed may include the current rear wheel speed, the current front-wheel angle, and the front-wheel angle of the previous sampling period. When the rear wheel speed is zero and the difference between the current front-wheel angle and the front-wheel angle of the previous sampling period is zero, the vehicle speed can be judged to be zero, i.e., the unmanned vehicle has stopped.
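The stopping test in this paragraph can be sketched as follows; the function and parameter names are illustrative assumptions, not identifiers from the application:

```python
def vehicle_stopped(rear_wheel_speed, front_wheel_angle, prev_front_wheel_angle):
    """The vehicle counts as stopped when the rear wheel speed is zero and
    the front-wheel angle has not changed since the previous sampling period."""
    return rear_wheel_speed == 0 and (front_wheel_angle - prev_front_wheel_angle) == 0
```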
The recognition result, the operating mode, and the vehicle speed are processed in parallel to determine whether they satisfy the predetermined condition. For example, the recognition result, operating mode, and vehicle speed can be passed in as parallel threads for logical judgment. In addition, the current state of the cargo box can also serve as a parallel thread for the above judgment.
A voice prompt can also be played when the current state of the cargo boxes is not the all-closed state, to prompt closing the boxes. When the current state is not all-closed, some box is open. For example, a voice prompt can be played to prompt the user to close the box; or a delay time can be set, and if the boxes are still not all detected as closed after the delay, the voice prompt is played to prompt the user to close the open box.
According to the gesture control method for an unmanned delivery robot of the present application, gesture recognition is applied to unmanned-vehicle delivery, and the current state of the cargo box, the operating mode of the vehicle, and the vehicle speed are logically evaluated to control the vehicle's box-opening pickup. The method can shorten pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged.
Fig. 3 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment. Referring to Fig. 3, the gesture control method for an unmanned vehicle may include:
Step S310: acquire a predetermined number of first gestures within a predetermined time. The predetermined time can be a unit of time; for example, the user's gesture can be photographed by a high-speed camera, and the predetermined number can be the number of photos the camera can take per unit time. What is acquired is the multiple first gestures. It should be understood that the predetermined time and number can be set according to the actual situation, and the invention is not limited to the above example.
Step S320: recognize the first gestures to generate a plurality of first recognition results. The first gestures can be recognized by an image recognition method to judge whether each is a box-opening pickup gesture, generating the first recognition results, each of which can be either a box-opening pickup gesture or not.
Step S330: when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determine that the recognition result is credible. Among the first recognition results, the proportion that are box-opening pickup gestures is the credibility; when the credibility exceeds the predetermined ratio, the recognition result is considered credible, i.e., the user's gesture is a box-opening pickup gesture. The specific value of the predetermined ratio can be determined experimentally.
Fig. 4 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment. Referring to Fig. 4, the gesture control method for an unmanned vehicle may include:
Step S410: determine the gesture recognition flag according to the recognition result. The gesture recognition flag can be a Boolean value: 1 when the recognition result is credible, 0 when it is not.
Step S420: determine the mode flag according to the operating mode. The mode flag can be a Boolean value: 1 when the operating mode is the gesture pickup mode, 0 otherwise.
Step S430: when the gesture recognition flag and the mode flag satisfy a third predetermined condition, the unmanned vehicle moves to the user to perform a predetermined operation. The third predetermined condition can be that the values of the two flags satisfy certain conditions; following the examples in steps S410 and S420, it can be that the gesture recognition flag is 1 and the mode flag is 1.
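Steps S410 to S430 can be sketched as below; the flag encoding (1/0) follows the example in the text, while the function names and the "gesture_pickup" mode label are assumptions made for illustration:

```python
def set_flags(result_credible, operating_mode):
    """Steps S410-S420: encode the recognition result and the operating mode
    as Boolean-style flags, as in the text's example."""
    gesture_flag = 1 if result_credible else 0
    mode_flag = 1 if operating_mode == "gesture_pickup" else 0
    return gesture_flag, mode_flag

def third_condition_met(gesture_flag, mode_flag):
    """Step S430: the third predetermined condition of the example holds when
    both flags take the value 1."""
    return gesture_flag == 1 and mode_flag == 1
```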
According to an example embodiment, a voice prompt can also be played when the current state of the cargo boxes is not the all-closed state, to prompt closing them. A query command can be sent to the cargo box to query its current state; if the returned state is not all-closed, some box is open, and a voice prompt for the user to close it can be played.
Fig. 5 is a flowchart of a gesture control method for an unmanned delivery robot according to an exemplary embodiment. Referring to Fig. 5, the method by which the unmanned vehicle moves to the user to perform a predetermined operation may include:
Step S510: the unmanned vehicle issues a voice prompt for the user to pick up the goods and moves to the user.
Step S520: obtain the vehicle speed of the unmanned vehicle. The vehicle speed can include the current rear wheel speed, the current front-wheel angle, and the front-wheel angle of the previous sampling period. The rear wheel speed can represent the vehicle's driving speed, and the front-wheel angle its driving direction.
Step S530: when the vehicle speed is zero, send a box-opening command to the vehicle's cargo box. The vehicle speed being zero can be the case where the rear wheel speed is zero and the difference between the current front-wheel angle and the front-wheel angle of the previous sampling period is zero.
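Steps S510 to S530 — voice prompt, approach, poll the speed, then open the box — can be sketched as below; the `robot` and `box` interfaces are hypothetical stand-ins, not APIs defined by the application:

```python
def approach_and_open(robot, box):
    """Sketch of steps S510-S530. Assumed interfaces: robot.say(text),
    robot.move_toward_user(), robot.speed() returning
    (rear_speed, front_angle, prev_front_angle), and box.open()."""
    robot.say("Please pick up your goods")  # S510: voice prompt for pickup
    robot.move_toward_user()                # S510: move toward the user
    while True:                             # S520: poll the vehicle speed
        rear, angle, prev_angle = robot.speed()
        # Zero speed: rear wheels stopped and front-wheel angle unchanged
        if rear == 0 and angle == prev_angle:
            break
    box.open()                              # S530: send the box-opening command
```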
Fig. 6 is a flowchart of a gesture control method for an unmanned delivery robot according to another exemplary embodiment. Referring to Fig. 6, the gesture control method for an unmanned vehicle may include:
First, the gesture recognition result, the current state of the cargo box, and the vehicle's current operating mode are run as parallel threads, and the three parallel messages are passed into the gesture logic unit in real time.
Step S610: judge whether the gesture recognition result is credible. Credibility can be judged by the percentage of recognition results per unit time that are box-opening gestures. For example, suppose 10 gesture recognition results arrive per unit time, of which 9 are recognized as box-opening gestures and 1 is not, and a predetermined ratio is set to, say, 80%. The percentage of box-opening gestures is then 90%, which is greater than the predetermined 80%, so the recognition result is judged credible. If credible, go to step S620; otherwise, return to step S610 for the next cycle.
Step S620: set the temporary mode to the unboxing mode and start an 8-second timer, after which the temporary mode is reset to the current mode. The temporary mode is an additional variable, not a sensor reading; changing its value does not change the vehicle's current operating mode. The temporary mode can be input to the gesture logic unit, and the flow returns to step S610 for the next cycle.
Step S630: obtain the current state of the cargo box and compare it with the previously obtained state in real time to judge whether the box state has changed from open to closed; if so, set the box-change flag to 1, otherwise set it to 0. The box-change flag can be input to the gesture logic unit, and the flow returns to step S630 for the next cycle.
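The open-to-closed transition detection of step S630 can be sketched as follows; the 'open'/'closed' state strings are illustrative assumptions:

```python
def box_change_flag(prev_state, curr_state):
    """Step S630: set the flag to 1 only on an open -> closed transition of
    the cargo box; any other state pair leaves the flag at 0."""
    return 1 if prev_state == "open" and curr_state == "closed" else 0
```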
Step S640: obtain the vehicle's current operating mode and compare its priority with that of the temporary mode (its priority relative to the unboxing mode can also be compared directly; the invention does not specifically limit this). The priorities have been illustrated in the foregoing embodiments and are not repeated here. The default value of the temporary mode can be the mode with the highest priority. If the current operating mode has higher priority than the temporary mode, return to step S640 for the next cycle. If the current operating mode has lower priority than the temporary mode, set the priority flag to 1, input it to the gesture logic unit, and return to step S640 for the next cycle.
Step S650: in the gesture logic unit, if the temporary mode is judged to be the unboxing mode, the box-change flag is 1, and the priority flag is 1, change the vehicle's operating mode to the unboxing mode; otherwise, return to step S650 for the next cycle.
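The decision of step S650 can be sketched as below; the mode names are illustrative assumptions, and only the combination required by the text switches the vehicle into the unboxing mode:

```python
def decide_mode(temp_mode, box_change_flag, priority_flag, current_mode):
    """Step S650: switch to the unboxing mode only when the temporary mode is
    'unbox', the box-change flag is set, and the priority flag is set (the
    current mode has lower priority than the temporary mode)."""
    if temp_mode == "unbox" and box_change_flag == 1 and priority_flag == 1:
        return "unbox"
    return current_mode  # otherwise keep the current operating mode
```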
Fig. 7 is a flowchart of a gesture control method for an unmanned delivery robot according to another exemplary embodiment. Referring to Fig. 7, the gesture control method for an unmanned vehicle may include:
First, the gesture recognition result, the current vehicle speed, and the vehicle's current operating mode are run as parallel threads, and the three parallel messages are passed into the cargo box logic unit in real time. In addition, the cargo box logic unit contains a cargo box state thread that runs in synchrony with the three parallel threads.
Step S705: judge whether the gesture recognition result is credible; the judgment method has been described in detail in the foregoing embodiments and is not repeated here. If credible, go to step S720; otherwise, return to step S705 for the next cycle.
Step S710: set the gesture recognition flag to 1, pass it into the cargo box logic unit, and return to step S705 for the next cycle.
Step S715: parse the vehicle's current speed into the current rear wheel speed and front-wheel angle, retaining the front-wheel angle of the previous sampling period (if this is the first acquisition of the current speed, the previous front-wheel angle is 0). Pass the parsed speed into the cargo box logic unit and return to step S715 for the next cycle.
Step S720: obtain the vehicle's current operating mode and judge whether it is the unboxing mode; if so, execute step S725, otherwise return to step S720 for the next cycle.
Step S725: set the mode flag to 1 and start a 1 s timer; after 1 s, update the vehicle's operating mode to the latest current mode and return to step S720 for the next cycle. Here 1 s is a delay set according to the actual situation, but the invention is not limited to it.
Step S730: the cargo box logic unit receives the outputs of steps S710, S715, and S725. In addition, the cargo box state thread in the unit first sends a query command to the cargo box to query its state, and the box returns its current state to the unit. If all boxes are currently closed, execute step S745; otherwise, execute step S735.
Step S735: after a 5-second delay, call the voice module to speak, prompting the user to close the box.
Step S740: set the gesture recognition flag to 0 and return to step S730 for the next cycle.
Step S745: if the gesture recognition flag is 1 and the mode flag is 1, execute step S750; otherwise, return to step S730 for the next cycle.
Step S750: a voice prompt asks the user to pick up the goods, while the unmanned vehicle continues to approach the user slowly. When the user is detected near the vehicle, the vehicle stops.
Step S755: from the vehicle speed parsed in step S715, judge whether the current rear wheel speed, the current front-wheel angle, and the previous front-wheel angle are all 0. If so, execute step S760; otherwise, return to step S730 for the next cycle, and the vehicle continues moving until it reaches the user and stops.
Step S760: if the vehicle has reached the user and parked in front of them, it starts attempting the box-opening action up to 3 times, during which a box-opening command is sent to the cargo box and the box reports its current state. If the box opens within 3 attempts, execute step S765; if it cannot be opened, return to step S730 for the next cycle.
Step S765: after the box is opened, clear all previous state, such as the mode flag, the gesture recognition flag, and the current vehicle speed, and return to step S730 for the next cycle.
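The retry logic of steps S760 and S765 can be sketched as follows; `box.open_and_report()`, returning True when the box reports an opened state, is an assumed interface:

```python
def try_unbox(box, max_attempts=3):
    """Steps S760-S765: send up to three box-opening commands, reading back
    the reported box state after each one."""
    for _ in range(max_attempts):
        if box.open_and_report():
            return True   # S765 then clears the mode flag, gesture flag, speed
    return False          # fall back to the next control cycle (S730)
```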
According to the gesture control method for an unmanned delivery robot of the present application, gesture recognition is applied to unmanned-vehicle delivery, and the current state of the cargo box, the operating mode of the vehicle, and the vehicle speed are logically evaluated to control the vehicle's box-opening pickup. The method can shorten pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged. In summary, the method proposes a scheme in which the unmanned vehicle autonomously moves to the vicinity of the user and the user makes a pickup gesture toward the vehicle as it approaches. Through gesture logic and cargo box logic, the scheme recognizes and judges the correctness of the user's gesture and whether the vehicle state meets the conditions for parking and opening the box; after the user is verified, the vehicle can autonomously move in front of the user, park, and open the box for the user to pick up the goods.
Fig. 8 is a block diagram of a gesture control apparatus for an unmanned delivery robot according to an exemplary embodiment. Referring to Fig. 8, the apparatus may include a gesture recognition module 810, a mode module 820, and a judgment module 830.
In the gesture control apparatus for an unmanned vehicle, the gesture recognition module 810 is configured to recognize a user's gesture to generate a recognition result. The gesture can be recognized, for example, from images: while the user makes the gesture, multiple gesture photos are taken and image recognition is performed on them to determine whether the user's gesture is a pickup gesture.
The mode module 820 is configured to determine the operating mode of the unmanned vehicle. The operating mode can include a driving mode, a gesture pickup mode, and an unboxing mode; in each mode, the vehicle adopts a different strategy toward its surrounding environment. The operating modes have been described in the foregoing embodiments and are not repeated here.
The judgment module 830 is configured to, when the recognition result is credible and the operating mode is the gesture pickup mode, cause the unmanned vehicle to move to the user to perform a predetermined operation. A credible recognition result indicates that the user is the target user, that is, a user who needs the box opened to pick up goods. When the operating mode is the gesture pickup mode, factors such as the vehicle's running speed satisfy the conditions for opening the box for the user, for example, but not limited to, the vehicle having stopped and being detected as parked in a safe place.
According to an example embodiment, the judgment module 830 may be further configured to determine a gesture recognition flag according to the recognition result; determine a mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag satisfy a third predetermined condition, cause the unmanned vehicle to move to the user to perform a predetermined operation.
According to the gesture control apparatus for an unmanned delivery robot of the present application, gesture recognition is applied to unmanned-vehicle delivery, and the current state of the cargo box, the operating mode of the vehicle, and the vehicle speed are logically evaluated to control the box-opening pickup. The apparatus can shorten pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged.
Fig. 9 is a block diagram of a gesture control apparatus for an unmanned delivery robot according to another exemplary embodiment. Referring to Fig. 9, the apparatus for an unmanned vehicle may include: a cargo box state module 910, a cargo box logic module 920, a gesture logic module 930, and an unmanned vehicle mode module 940.
In the gesture control apparatus for an unmanned vehicle, the user makes a gesture toward the vehicle at a convenient moment, and the gesture recognition result is input to the gesture logic module 930, which judges whether the gesture is a pickup gesture. The gesture logic module 930 is also configured to obtain the vehicle's current operating mode (for example, the driving mode, gesture pickup mode, or unboxing mode) from the vehicle, and to obtain the current state of the cargo box from the cargo box logic module 920. After receiving these three inputs (the vehicle's current operating mode, the current state of the cargo box, and the gesture recognition result), the gesture logic module 930 outputs a mode result, i.e., whether the vehicle's operating mode is changed to the unboxing mode.
Meanwhile, the cargo box logic module 920 also monitors the gesture recognition result, the vehicle's current mode, and the current vehicle speed from the vehicle; after these three signals are processed by the logic unit, it sends query commands or box-opening commands to the cargo box to complete the box-opening operation and the box-closing prompts.
According to the gesture control apparatus for an unmanned delivery robot of the present application, gesture recognition is applied to unmanned-vehicle delivery, and the current state of the cargo box, the operating mode of the vehicle, and the vehicle speed are logically evaluated to control the vehicle's box-opening pickup. The apparatus can shorten pickup time, make the pickup process more flexible, and avoid the defects of verification-code pickup schemes, in which the verification code may be lost or forgotten and the verification-code input device, exposed outside the vehicle body, is easily damaged. In summary, the apparatus proposes a scheme in which the unmanned vehicle autonomously moves to the vicinity of the user and the user makes a pickup gesture toward the vehicle as it approaches. Through gesture logic and cargo box logic, the scheme recognizes and judges the correctness of the user's gesture and whether the vehicle state meets the conditions for parking and opening the box; after the user is verified, the vehicle can autonomously move in front of the user, park, and open the box for the user to pick up the goods.
Fig. 10 is a block diagram of an electronic device for an unmanned delivery robot according to an exemplary embodiment.
The electronic device 1000 according to this embodiment of the present application is described below with reference to Fig. 10. The electronic device 1000 shown in Fig. 10 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 10, the computer system 1000 includes a central processing unit (CPU) 1001, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage part 1008 into a random access memory (RAM) 1003. For example, the central processing unit 1001 can perform the steps shown in one or more of Fig. 2, Fig. 3, Fig. 4, Fig. 5, Fig. 6, and Fig. 7.
In an exemplary embodiment, the central processing unit 1001 can execute the following method: recognizing a user's gesture to generate a recognition result; determining the operating mode of the unmanned delivery robot, where the operating mode includes a driving mode and a gesture pickup mode; and, when the recognition result is credible and the operating mode is the gesture pickup mode, causing the unmanned delivery robot to move to the user to perform a predetermined operation.
In an exemplary embodiment, when executing "when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation", the central processing unit 1001 can also execute the following method: obtaining the current state of the cargo box of the unmanned delivery robot; and, when the recognition result is credible, the operating mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, changing the operating mode to the gesture pickup mode.
In an exemplary embodiment, when executing "the recognition result is credible", the central processing unit 1001 can execute the following method: acquiring a predetermined number of first gestures within a predetermined time; recognizing the first gestures to generate a plurality of first recognition results; and, when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determining that the recognition result is credible.
In an exemplary embodiment, when executing "the current state of the cargo box satisfies the second predetermined condition", the central processing unit 1001 can execute the following method: when the cargo box is judged to be in the closed state, confirming that the current state of the cargo box satisfies the second predetermined condition.
In an exemplary embodiment, when executing "when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation", the central processing unit 1001 can also execute the following method: determining a gesture recognition flag according to the recognition result; determining a mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag satisfy a third predetermined condition, causing the unmanned delivery robot to move to the user to perform a predetermined operation.
In an exemplary embodiment, when executing "the gesture recognition flag and the mode flag satisfy the third predetermined condition", the central processing unit 1001 can execute the following method: when the gesture recognition flag indicates that the recognition result is credible and the mode flag indicates that the operating mode is the gesture pickup mode, determining that the gesture recognition flag and the mode flag satisfy the third predetermined condition.
In an exemplary embodiment, when executing "the unmanned delivery robot moves to the user to perform a predetermined operation", the central processing unit 1001 can execute the following method: the unmanned delivery robot issuing a voice prompt for the user to pick up the goods and moving to the user; obtaining the vehicle speed of the unmanned delivery robot; and, when the vehicle speed is zero, sending a box-opening command to the cargo box of the unmanned delivery robot.
In an exemplary embodiment, the vehicle speed includes the current rear wheel speed, the current front-wheel angle, and the front-wheel angle of the previous sampling period.
In an exemplary embodiment, when executing "the recognition result is credible and the operating mode is the gesture pickup mode", the central processing unit 1001 can also execute the following method: processing the recognition result, the operating mode, and the vehicle speed in parallel.
The RAM 1003 also stores various programs and data required for system operation, such as gesture recognition results and the vehicle's current operating state. The CPU 1001, the ROM 1002, and the RAM 1003 are connected to one another through a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input part 1006 including a touch screen, a keyboard, etc.; an output part 1007 including a liquid crystal display (LCD), speakers, etc.; a storage part 1008 including flash memory, etc.; and a communication part 1009 including a wireless network card, a high-speed network card, etc. The communication part 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a semiconductor memory or a magnetic disk, is mounted on the drive 1010 as needed, so that a computer program read from it can be installed into the storage part 1008 as needed.
Through the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here can be implemented in software, or in software combined with the necessary hardware. The technical solutions of the embodiments of the invention can therefore be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, USB flash drive, removable hard disk, etc.) and includes several instructions that cause a computing device (which can be a personal computer, server, mobile terminal, smart device, etc.) to execute the method according to the embodiments of the invention, such as the steps shown in one or more of Fig. 2, Fig. 3, Fig. 4, Fig. 5, Fig. 6, and Fig. 7.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor, can implement the following method: recognizing a user's gesture to generate a recognition result; determining the operating mode of the unmanned delivery robot, where the operating mode includes a driving mode and a gesture pickup mode; and, when the recognition result is credible and the operating mode is the gesture pickup mode, causing the unmanned delivery robot to move to the user to perform a predetermined operation.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor to perform "when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation", can also implement the following method: obtaining the current state of the cargo box of the unmanned delivery robot; and, when the recognition result is credible, the operating mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, changing the operating mode to the gesture pickup mode.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor to perform "the recognition result is credible", can implement the following method: acquiring a predetermined number of first gestures within a predetermined time; recognizing the first gestures to generate a plurality of first recognition results; and, when the credibility of the plurality of first recognition results is greater than a predetermined ratio, determining that the recognition result is credible.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor to perform "the current state of the cargo box satisfies the second predetermined condition", can implement the following method: when the cargo box is judged to be in the closed state, confirming that the current state of the cargo box satisfies the second predetermined condition.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor to perform "when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moves to the user to perform a predetermined operation", can implement the following method: determining a gesture recognition flag according to the recognition result; determining a mode flag according to the operating mode; and, when the gesture recognition flag and the mode flag satisfy a third predetermined condition, causing the unmanned delivery robot to move to the user to perform a predetermined operation.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor to perform "the gesture recognition flag and the mode flag satisfy the third predetermined condition", can implement the following method: when the gesture recognition flag indicates that the recognition result is credible and the mode flag indicates that the operating mode is the gesture pickup mode, determining that the gesture recognition flag and the mode flag satisfy the third predetermined condition.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor to perform "the unmanned delivery robot moves to the user to perform a predetermined operation", can implement the following method: the unmanned delivery robot issuing a voice prompt for the user to pick up the goods and moving to the user; obtaining the vehicle speed of the unmanned delivery robot; and, when the vehicle speed is zero, sending a box-opening command to the cargo box of the unmanned delivery robot.
In an exemplary embodiment, the vehicle speed includes the current rear wheel speed, the current front-wheel angle, and the front-wheel angle of the previous sampling period.
In an exemplary embodiment, the instructions in the non-volatile storage medium, when executed by a processor to perform "the recognition result is credible and the operating mode is the gesture pickup mode", can also implement the following method: processing the recognition result, the operating mode, and the vehicle speed in parallel.
In addition, the above drawings are merely schematic illustrations of the processing included in the methods according to exemplary embodiments of the invention, and are not intended to be limiting. It is easy to understand that the processing shown in the drawings does not indicate or limit the temporal order of these processes; it is also easy to understand that these processes can be executed, for example, synchronously or asynchronously in multiple modules.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily arrive at other embodiments of the present application. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include common knowledge or customary technical means in the technical field not claimed by the invention. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the invention indicated by the claims.
It should be understood that the invention is not limited to the detailed structures, drawing arrangements, or implementation methods already shown here; rather, the invention is intended to cover various modifications and equivalent arrangements within the spirit and scope of the appended claims.

Claims (28)

  1. A gesture control method for an unmanned delivery robot, comprising:
    recognizing a user's gesture to generate a recognition result;
    determining an operating mode of the unmanned delivery robot, wherein the operating mode comprises a driving mode and a gesture pickup mode; and
    when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moving to the user to perform a predetermined operation.
  2. The method of claim 1, wherein when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moving to the user to perform a predetermined operation further comprises:
    obtaining a current state of a cargo box of the unmanned delivery robot; and
    when the recognition result is credible, the operating mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, changing the operating mode to the gesture pickup mode.
  3. The method of claim 1, wherein the recognition result being credible comprises:
    acquiring a predetermined number of first gestures within a predetermined time;
    recognizing the plurality of first gestures to generate a plurality of first recognition results; and
    when a credibility of the plurality of first recognition results is greater than a predetermined ratio, determining that the recognition result is credible.
  4. The method of claim 2, wherein the current state of the cargo box satisfying the second predetermined condition comprises:
    when the cargo box is judged to be in a closed state, confirming that the current state of the cargo box satisfies the second predetermined condition.
  5. The method of claim 1, wherein when the recognition result is credible and the operating mode is the gesture pickup mode, the unmanned delivery robot moving to the user to perform a predetermined operation further comprises:
    determining a gesture recognition flag according to the recognition result;
    determining a mode flag according to the operating mode; and
    when the gesture recognition flag and the mode flag satisfy a third predetermined condition, the unmanned delivery robot moving to the user to perform a predetermined operation.
  6. The method of claim 5, wherein the gesture recognition flag and the mode flag satisfying the third predetermined condition comprises:
    when the gesture recognition flag indicates that the recognition result is credible and the mode flag indicates that the operating mode is the gesture pickup mode, determining that the gesture recognition flag and the mode flag satisfy the third predetermined condition.
  7. The method of claim 5, wherein the unmanned delivery robot moving to the user to perform a predetermined operation comprises:
    the unmanned delivery robot issuing a voice prompt for the user to pick up the goods and moving to the user;
    obtaining a vehicle speed of the unmanned delivery robot; and
    when the vehicle speed is zero, sending a box-opening command to the cargo box of the unmanned delivery robot.
  8. The method of claim 7, wherein the vehicle speed comprises a current rear wheel speed, a current front-wheel angle, and a front-wheel angle of a previous sampling period.
  9. The method of claim 7, wherein the recognition result being credible and the operating mode being the gesture pickup mode further comprises:
    processing the recognition result, the operating mode, and the vehicle speed in parallel.
  10. A gesture control apparatus for an unmanned delivery robot, characterized by comprising:
    a gesture recognition module, configured to recognize a gesture of a user to generate a recognition result;
    a mode module, configured to determine an operating mode of the unmanned delivery robot; and
    a judgment module, configured to move the unmanned delivery robot to the user to perform a predetermined operation when the recognition result is credible and the operating mode is a gesture pickup mode.
  11. An electronic device, characterized by comprising:
    one or more processors; and
    a storage apparatus, configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the following gesture control method for an unmanned delivery robot:
    recognizing a gesture of a user to generate a recognition result;
    determining an operating mode of the unmanned delivery robot, wherein the operating mode includes a driving mode and a gesture pickup mode; and
    when the recognition result is credible and the operating mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform a predetermined operation.
  12. The electronic device according to claim 11, characterized in that when the recognition result is credible and the operating mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform the predetermined operation further comprises:
    acquiring a current state of a cargo box of the unmanned delivery robot; and
    when the recognition result is credible, the operating mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, changing the operating mode to the gesture pickup mode.
  13. The electronic device according to claim 11, characterized in that the recognition result being credible comprises:
    acquiring a predetermined number of first gestures within a predetermined time;
    recognizing the plurality of first gestures to generate a plurality of first recognition results; and
    determining that the recognition result is credible when the credibility of the plurality of first recognition results is greater than a predetermined proportion.
  14. The electronic device according to claim 12, characterized in that the current state of the cargo box satisfying the second predetermined condition comprises:
    confirming that the current state of the cargo box satisfies the second predetermined condition when the cargo box is determined to be in a closed state.
  15. The electronic device according to claim 11, characterized in that when the recognition result is credible and the operating mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform the predetermined operation further comprises:
    determining a gesture recognition flag bit according to the recognition result;
    determining a mode flag bit according to the operating mode; and
    when the gesture recognition flag bit and the mode flag bit satisfy a third predetermined condition, moving the unmanned delivery robot to the user to perform the predetermined operation.
  16. The electronic device according to claim 15, characterized in that the gesture recognition flag bit and the mode flag bit satisfying the third predetermined condition comprises:
    determining that the gesture recognition flag bit and the mode flag bit satisfy the third predetermined condition when the gesture recognition flag bit indicates that the recognition result is credible and the mode flag bit indicates that the operating mode is the gesture pickup mode.
  17. The electronic device according to claim 15, characterized in that moving the unmanned delivery robot to the user to perform the predetermined operation comprises:
    issuing, by the unmanned delivery robot, a voice prompt for the user to pick up the goods, and moving to the user;
    acquiring a vehicle speed of the unmanned delivery robot; and
    sending a box-opening instruction to the cargo box of the unmanned delivery robot when the vehicle speed is zero.
  18. The electronic device according to claim 17, characterized in that the vehicle speed includes a current rear wheel speed, a current front wheel steering angle, and a front wheel steering angle of a previous sampling period.
  19. The electronic device according to claim 17, characterized in that the recognition result being credible and the operating mode being the gesture pickup mode further comprises:
    processing the recognition result, the operating mode, and the vehicle speed in parallel.
  20. A computer-readable medium having a computer program stored thereon, characterized in that, when executed by a processor, the program implements the following gesture control method for an unmanned delivery robot:
    recognizing a gesture of a user to generate a recognition result;
    determining an operating mode of the unmanned delivery robot, wherein the operating mode includes a driving mode and a gesture pickup mode; and
    when the recognition result is credible and the operating mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform a predetermined operation.
  21. The computer-readable medium according to claim 20, characterized in that when the recognition result is credible and the operating mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform the predetermined operation further comprises:
    acquiring a current state of a cargo box of the unmanned delivery robot; and
    when the recognition result is credible, the operating mode is the gesture pickup mode, and the current state of the cargo box satisfies a second predetermined condition, changing the operating mode to the gesture pickup mode.
  22. The computer-readable medium according to claim 20, characterized in that the recognition result being credible comprises:
    acquiring a predetermined number of first gestures within a predetermined time;
    recognizing the plurality of first gestures to generate a plurality of first recognition results; and
    determining that the recognition result is credible when the credibility of the plurality of first recognition results is greater than a predetermined proportion.
  23. The computer-readable medium according to claim 21, characterized in that the current state of the cargo box satisfying the second predetermined condition comprises:
    confirming that the current state of the cargo box satisfies the second predetermined condition when the cargo box is determined to be in a closed state.
  24. The computer-readable medium according to claim 20, characterized in that when the recognition result is credible and the operating mode is the gesture pickup mode, moving the unmanned delivery robot to the user to perform the predetermined operation further comprises:
    determining a gesture recognition flag bit according to the recognition result;
    determining a mode flag bit according to the operating mode; and
    when the gesture recognition flag bit and the mode flag bit satisfy a third predetermined condition, moving the unmanned delivery robot to the user to perform the predetermined operation.
  25. The computer-readable medium according to claim 24, characterized in that the gesture recognition flag bit and the mode flag bit satisfying the third predetermined condition comprises:
    determining that the gesture recognition flag bit and the mode flag bit satisfy the third predetermined condition when the gesture recognition flag bit indicates that the recognition result is credible and the mode flag bit indicates that the operating mode is the gesture pickup mode.
  26. The computer-readable medium according to claim 24, characterized in that moving the unmanned delivery robot to the user to perform the predetermined operation comprises:
    issuing, by the unmanned delivery robot, a voice prompt for the user to pick up the goods, and moving to the user;
    acquiring a vehicle speed of the unmanned delivery robot; and
    sending a box-opening instruction to the cargo box of the unmanned delivery robot when the vehicle speed is zero.
  27. The computer-readable medium according to claim 26, characterized in that the vehicle speed includes a current rear wheel speed, a current front wheel steering angle, and a front wheel steering angle of a previous sampling period.
  28. The computer-readable medium according to claim 26, characterized in that the recognition result being credible and the operating mode being the gesture pickup mode further comprises:
    processing the recognition result, the operating mode, and the vehicle speed in parallel.
PCT/CN2020/073599 2019-02-22 2020-01-21 Gesture control method, apparatus, device and medium for unmanned delivery robot WO2020168896A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910135248.1A CN109828576B (zh) 2019-02-22 2019-02-22 Gesture control method, apparatus, device and medium for unmanned delivery robot
CN201910135248.1 2019-02-22

Publications (1)

Publication Number Publication Date
WO2020168896A1 true WO2020168896A1 (zh) 2020-08-27

Family

ID=66864230

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073599 WO2020168896A1 (zh) 2019-02-22 2020-01-21 用于无人配送机器人的手势控制方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN109828576B (zh)
WO (1) WO2020168896A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023020192A1 (zh) * 2021-08-19 2023-02-23 北京京东乾石科技有限公司 配送方法、无人配送控制装置、无人配送装置和存储介质

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109828576B (zh) * 2019-02-22 2022-09-06 北京京东乾石科技有限公司 Gesture control method, apparatus, device and medium for unmanned delivery robot
CN112306053B (zh) * 2019-08-09 2024-08-20 北京京东乾石科技有限公司 Unmanned vehicle control method
CN113236070B (zh) * 2021-04-20 2022-05-17 北京三快在线科技有限公司 Cabin door control method and apparatus for unmanned vehicle, storage medium, and unmanned vehicle
CN113393151B (zh) * 2021-06-30 2024-05-10 深圳优地科技有限公司 Consignee identification method, delivery robot, and computer storage medium
CN113505691B (zh) * 2021-07-09 2024-03-15 中国矿业大学(北京) Coal-rock recognition method and recognition credibility indication method
CN113977597B (zh) * 2021-10-08 2023-06-16 深兰机器人产业发展(河南)有限公司 Control method for delivery robot and related apparatus

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004126800A (ja) * 2002-09-30 2004-04-22 Secom Co Ltd Transport robot and transport system using the transport robot
EP1671758A1 (en) * 2004-12-14 2006-06-21 Honda Motor Co., Ltd. Robot system for carrying an item with receiving / passing motion deciding means based on external force detection
CN105252535A (zh) * 2015-10-29 2016-01-20 李伟 Logistics robot delivery system
US9459620B1 (en) * 2014-09-29 2016-10-04 Amazon Technologies, Inc. Human interaction with unmanned aerial vehicles
CN107093040A (zh) * 2017-03-03 2017-08-25 北京小度信息科技有限公司 Information generation method and apparatus
CN108983980A (zh) * 2018-07-27 2018-12-11 河南科技大学 Gesture control method for basic motions of a mobile robot
CN109828576A (zh) * 2019-02-22 2019-05-31 北京京东尚科信息技术有限公司 Gesture control method, apparatus, device and medium for unmanned delivery robot

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106388B (zh) * 2011-11-15 2017-02-08 中国科学院深圳先进技术研究院 Image recognition method and system
CN106295599A (zh) * 2016-08-18 2017-01-04 乐视控股(北京)有限公司 Vehicle control method and apparatus
CN106227219A (zh) * 2016-09-28 2016-12-14 深圳市普渡科技有限公司 Buffet restaurant food delivery method based on a food delivery robot
CN106853640A (zh) * 2017-03-21 2017-06-16 北京京东尚科信息技术有限公司 Delivery robot control method and apparatus, delivery robot, and control system
CN107526436A (zh) * 2017-07-17 2017-12-29 成都安程通科技有限公司 Gesture recognition method for a smart vehicle
CN107464323A (zh) * 2017-08-15 2017-12-12 杭州纳戒科技有限公司 Shared logistics box unlocking method, apparatus, and server
CN207397090U (zh) * 2017-10-12 2018-05-22 北京真机智能科技有限公司 Unmanned delivery cart
CN107765855A (zh) * 2017-10-25 2018-03-06 电子科技大学 Method and system for controlling robot motion based on gesture recognition
CN109109857A (zh) * 2018-09-05 2019-01-01 深圳普思英察科技有限公司 Unmanned vending vehicle and parking method and apparatus therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANGLIAN: "Non-official translation: The Workload Higher than Delivery Man, Upgrade and Debut of Jingdong Unmanned Delivery Vehicle", NON-OFFICIAL TRANSLATION: BAIDU DATABASE ONLINE, 3 November 2017 (2017-11-03) *

Also Published As

Publication number Publication date
CN109828576B (zh) 2022-09-06
CN109828576A (zh) 2019-05-31

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20760349; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20760349; Country of ref document: EP; Kind code of ref document: A1)