CN116229582B - Logistics unmanned aerial vehicle and man-machine logistics interactive system based on human body gesture recognition - Google Patents


Info

Publication number: CN116229582B
Application number: CN202310499993.0A
Authority: CN (China)
Prior art keywords: cargo, module, owner, goods, judging
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN116229582A
Inventors: 林国义, 杨晨, 董瑞雪, 张发明
Current assignee: Guilin University of Electronic Technology (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Guilin University of Electronic Technology
Application filed by Guilin University of Electronic Technology
Priority to CN202310499993.0A
Publication of CN116229582A; application granted; publication of CN116229582B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention belongs to the technical field of logistics unmanned aerial vehicles, and particularly relates to a logistics unmanned aerial vehicle based on human body gesture recognition and a man-machine logistics interaction system. The invention performs human body gesture recognition on the cargo owner to judge whether the cargo owner is in a receiving gesture and thereby determine the cargo falling mode. Combined with the set cargo falling parameters, the restraint on the cargo is safely released only after the unmanned aerial vehicle hovers for hand-over or carries the cargo to the ground, which ensures the cargo remains intact and lets the cargo owner adapt gradually to its weight when receiving, avoiding injury to the cargo owner. During this process the cargo owner's gesture is predicted from the cargo owner's action information, so the cargo falling mode is judged autonomously, and the hovering position of the unmanned aerial vehicle is adjusted in real time according to the predicted gesture, making it more convenient for the cargo owner to take the cargo.

Description

Logistics unmanned aerial vehicle and man-machine logistics interactive system based on human body gesture recognition
Technical Field
The invention belongs to the technical field of logistics unmanned aerial vehicles, and particularly relates to a logistics unmanned aerial vehicle based on human body gesture recognition and a man-machine logistics interactive system.
Background
With the continuous development of logistics technology, delivery is no longer limited to the ground: for some small goods, unmanned aerial vehicles can carry out the delivery operation. Compared with ground delivery, an unmanned aerial vehicle can disregard most terrain during transport, and its delivery speed is higher. Of course, to ensure that the goods can be delivered safely to the cargo owner, a corresponding face recognition system must be provided, so that goods are not delivered to the wrong person.
When an existing logistics unmanned aerial vehicle delivers goods, it places them on the ground after verifying face information. For example, publication CN114220157A discloses a receiver identification method based on face transfer and face recognition in unmanned aerial vehicle delivery: after face recognition verification passes and a confirmation instruction is received, the unmanned aerial vehicle lands near the geographic coordinate position of the receiver, releases the transported goods, and then takes off and returns, completing the delivery task. This delivery mode is single: the ground environment of a delivery point is an uncontrollable area, so fragile goods are easily damaged, and in such cases it is more suitable for the cargo owner to take the goods directly from beneath the unmanned aerial vehicle. Moreover, a conventional unmanned aerial vehicle must stop while dropping the goods and restart afterwards; its power consumption is largest in the take-off stage, which affects the battery's cruising ability to a certain extent.
Disclosure of Invention
The invention aims to provide a logistics unmanned aerial vehicle and a man-machine logistics interaction system based on human body gesture recognition, which perform human body gesture recognition on the cargo owner to determine the hovering position of the unmanned aerial vehicle, or select ground cargo falling according to the cargo owner's gesture. This improves the flexibility of the delivery mode; in neither cargo falling mode does the unmanned aerial vehicle shut down, so the influence on the battery is small.
The technical scheme adopted by the invention is as follows:
a man-machine logistics interaction system based on human body gesture recognition comprises a data acquisition module, a feature extraction module, a first-level judgment module, a discharge judgment module, a motion capture module, a feature analysis module, a prediction module, a second-level judgment module and a central control module;
the data acquisition module is used for acquiring cargo information and a face image of a cargo owner and calibrating the face image of the cargo owner into a standard image;
the feature extraction module is used for extracting a human body image of a cargo owner, wherein the human body image comprises a face image and a joint image;
the first-level judging module is used for identifying a human body image of a cargo owner and determining a cargo falling mode according to an identification result, wherein the cargo falling mode comprises hovering cargo receiving and ground cargo falling;
the unloading judging module is used for setting a cargo falling parameter according to a cargo falling mode, and releasing the limitation of cargoes when the cargo falling parameter is met;
the motion capture module is used for collecting motion information of a cargo owner, decoding the motion information, outputting multi-frame continuous images and calibrating the multi-frame continuous images as images to be analyzed;
the feature analysis module is used for acquiring all images to be analyzed, inputting the images to be analyzed into the trend analysis model and outputting the action change trend of the cargo owner;
the prediction module is used for predicting the arm gesture of the cargo owner according to the motion change trend and outputting the arm gesture as a predicted gesture;
the secondary judging module is used for acquiring a predicted gesture and adjusting the hovering position of the unmanned aerial vehicle according to the predicted gesture;
the central control module is used for receiving and transmitting the circulation information among the data acquisition module, the feature extraction module, the primary judgment module, the motion capture module, the feature analysis module, the prediction module and the secondary judgment module.
In a preferred scheme, the system further comprises a communication module, wherein the communication module is used for reminding a cargo owner of selecting a cargo falling mode;
if the cargo owner selects hovering to receive cargo, the unmanned aerial vehicle hovers in front of the cargo owner;
if the cargo owner selects the ground to drop the cargo, the unmanned aerial vehicle places the cargo on the ground;
if the cargo owner does not respond, the primary judging module is used for identifying the human body image of the cargo owner, and the cargo falling mode is automatically determined according to the identification result.
In a preferred scheme, the feature extraction module comprises a first execution unit and a second execution unit, wherein the execution priority of the first execution unit is higher than that of the second execution unit;
capturing a face image of the cargo owner and uploading the face image to a first-level judging module when the first executing unit executes;
and when the second execution unit executes, capturing a joint image of the cargo owner and uploading the joint image to a first-level judging module, wherein the joint image comprises shoulder joint characteristics, elbow joint characteristics and wrist joint characteristics.
In a preferred scheme, the primary judging module comprises a first judging unit and a second judging unit, wherein the execution priority of the first judging unit is higher than that of the second judging unit;
the first judging unit is used for judging whether the face image is consistent with a standard image or not;
if the information is consistent, judging that the cargo owner information is accurate, and hovering the unmanned aerial vehicle in front of the cargo owner;
if the information is inconsistent, judging that the cargo owner information is inconsistent, and not executing a second judging unit;
the second judging unit is used for judging the position relation of the shoulder joint characteristics, the elbow joint characteristics and the wrist joint characteristics;
if the wrist joint feature is positioned above the elbow joint feature and the elbow joint feature is positioned above the shoulder joint feature, judging that the cargo owner selects hovering cargo receiving;
if the shoulder joint feature is positioned above the elbow joint feature, judging that the cargo owner selects ground cargo falling.
In a preferred scheme, the goods falling parameter is set to be a pressure change parameter between goods and the unmanned aerial vehicle, the pressure change parameter is set according to the weight of the goods, and the value of the pressure change parameter is greater than one half of the weight of the goods.
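To make the constraint concrete, a minimal numeric sketch follows. The patent only requires the pressure change parameter to exceed one half of the cargo's weight; the 0.6 factor, the function names and the newton units are assumptions for illustration:

```python
def drop_parameter(cargo_weight_n: float) -> float:
    # Pressure change parameter: how much the clamp force must fall
    # before release. The patent only requires a value greater than
    # half the cargo weight; 0.6 is an assumed choice.
    return 0.6 * cargo_weight_n

def release_allowed(initial_force_n: float, current_force_n: float,
                    cargo_weight_n: float) -> bool:
    # Release only once the measured force has dropped by more than the
    # parameter, i.e. the ground or the owner's hands already carry most
    # of the cargo's weight, so it cannot tip or fall suddenly.
    return (initial_force_n - current_force_n) > drop_parameter(cargo_weight_n)
```

For a 20 N parcel, under these assumed values, the clamp would open only after the sensed force falls by more than 12 N.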
In a preferred scheme, a standard posture image is preset in the secondary judging module, and after the predicted gesture is acquired it is compared with the standard posture image;
if the predicted gesture is consistent with the standard gesture image, the unmanned aerial vehicle hovers at a designated position;
and if the predicted gesture is inconsistent with the standard gesture image, generating a deviation distance, and adjusting the hovering position of the unmanned aerial vehicle to the position of the predicted gesture.
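A minimal sketch of this hover adjustment, assuming 2-D horizontal positions in metres; the tolerance value and all names are illustrative, not taken from the patent:

```python
def adjust_hover(hover_xy, predicted_wrist_xy, standard_wrist_xy, tol=0.05):
    # If the predicted wrist position matches the standard posture image
    # within `tol`, keep the designated hover point; otherwise shift the
    # hover point by the deviation so the cargo ends up at the wrist.
    dx = predicted_wrist_xy[0] - standard_wrist_xy[0]
    dy = predicted_wrist_xy[1] - standard_wrist_xy[1]
    if (dx * dx + dy * dy) ** 0.5 <= tol:
        return hover_xy
    return (hover_xy[0] + dx, hover_xy[1] + dy)
```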
In a preferred scheme, the system further comprises a voice broadcasting module, wherein the voice broadcasting module is used for broadcasting reminding acceptance statements.
The invention also provides a physical distribution unmanned aerial vehicle based on human body gesture recognition, which is applied to the human-computer physical distribution interaction system based on human body gesture recognition, and comprises the following components:
the device comprises a machine body, wherein a clamping mechanism for bearing goods is arranged below the machine body, a tension sensor is arranged on the clamping mechanism, and the tension sensor is used for detecting a tension value between the clamping mechanism and the loaded goods;
after the first-level judging module judges the goods falling mode, the unloading judging module sets goods falling parameters according to the goods falling mode, the goods falling parameters are pressure change values between the clamping mechanism and the loaded goods, and when the tension sensor detects that the stress of the clamping mechanism meets the goods falling parameters, the clamping mechanism releases the clamping of the goods.
In a preferred scheme, the clamping mechanism comprises two clamping plates and a bidirectional telescopic rod; both clamping plates are arranged below the machine body, the bidirectional telescopic rod is fixedly mounted at the bottom of the machine body, and its two ends are fixedly connected to the tops of the two clamping plates respectively.
In a preferred scheme, the clamping plates are both L-shaped, a load-bearing outer cargo box is arranged between the two clamping plates, and clamping notches corresponding to the clamping plates are formed on both sides of the outer box.
The invention has the technical effects that:
according to the invention, the human body gesture recognition can be carried out on the cargo owner, so that whether the cargo owner is in the receiving gesture is judged, the cargo falling mode is determined, the limitation on the cargo can be safely removed after the cargo is hovered or carried to the ground by combining with the setting of the cargo falling parameters, the cargo can be ensured to be intact, the cargo owner can be gradually adapted to the weight of the cargo when receiving the cargo, the condition that the cargo owner is injured by the cargo is avoided, the gesture of the cargo owner can be predicted according to the action information of the cargo owner in the process, the purpose of autonomously judging the cargo falling mode is realized, the hovering position of the unmanned aerial vehicle can be regulated in real time according to the predicted gesture, the cargo owner can more conveniently take the cargo, and the unmanned aerial vehicle does not need to stop in any cargo falling mode, so that the influence on a battery is small.
Drawings
FIG. 1 is a block diagram of a system provided by the present invention;
FIG. 2 is a system operational diagram provided by the present invention;
fig. 3 is a schematic diagram of the overall structure of the logistics unmanned aerial vehicle provided by the invention.
In the drawings, the list of components represented by the various numbers is as follows:
1. machine body;
2. clamping mechanism; 201. clamping plate; 202. bidirectional telescopic rod.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one preferred embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Further, in describing the embodiments of the present invention in detail, the cross-sectional views of the device structure are partially enlarged rather than drawn to a general scale for convenience of description, and the schematic drawings are only examples, which should not limit the scope of protection of the present invention. In addition, the three dimensions of length, width and depth should be included in actual fabrication.
Referring to fig. 1 and 2, the invention provides a human-computer logistics interaction system based on human gesture recognition, which comprises a data acquisition module, a feature extraction module, a first-level judgment module, a discharge judgment module, a motion capture module, a feature analysis module, a prediction module, a second-level judgment module and a central control module;
the data acquisition module is used for acquiring cargo information and a face image of a cargo owner and calibrating the face image of the cargo owner into a standard image;
the feature extraction module is used for extracting a human body image of a cargo owner, wherein the human body image comprises a face image and a joint image;
the first-level judging module is used for identifying a human body image of a cargo owner and determining a cargo falling mode according to an identification result, wherein the cargo falling mode comprises hovering cargo receiving and ground cargo falling;
the unloading judgment module is used for setting a cargo falling parameter according to a cargo falling mode, and releasing the limitation of cargoes when the cargo falling parameter is met;
the motion capture module is used for collecting motion information of a cargo owner, decoding the motion information, outputting multi-frame continuous images and calibrating the multi-frame continuous images as images to be analyzed;
the feature analysis module is used for acquiring all the images to be analyzed, inputting the images to be analyzed into the trend analysis model and outputting the action change trend of the cargo owner;
the prediction module is used for predicting the arm gesture of the cargo owner according to the action change trend and outputting the arm gesture as a predicted gesture;
the second-level judging module is used for acquiring the predicted gesture and adjusting the hovering position of the unmanned aerial vehicle according to the predicted gesture;
the central control module is used for receiving and transmitting the transfer information among the data acquisition module, the feature extraction module, the primary judgment module, the motion capture module, the feature analysis module, the prediction module and the secondary judgment module.
As described above, conventional delivery in logistics systems mostly relies on vehicles, but complex urban roads increase delivery delay. With the rapid development of unmanned aerial vehicle technology, drones can deliver some small goods, which raises delivery efficiency and reduces labor cost. However, while delivering, an unmanned aerial vehicle has always had to place the goods on the ground and stop operating so that its propellers do not injure the consignee or shipper; this increases its start-stop frequency, raises its operating energy consumption and reduces its cruising ability. Based on this, the present scheme recognizes the cargo owner's human body gesture and controls the unmanned aerial vehicle to hover in front of the cargo owner, so the unmanned aerial vehicle does not need to stop and energy consumption is reduced.
First, the information of the goods to be delivered and the face image of the corresponding cargo owner are obtained through the data acquisition module; the face image is the standard image for the subsequent comparison process. The data acquisition module comprises an input unit, which obtains the face images actively uploaded by the cargo owner and calibrates them as standard images; the upper limit on actively uploaded face images is set to 3-5, and a standard image must clearly show the facial features. Of course, before the goods are delivered, the cargo owner is contacted in advance to ensure that the goods can be delivered accurately and to confirm whether the cargo owner has changed the destination.
On arrival, the human body image of the cargo owner is collected; it comprises a face image and a joint image, the joint image covering the cargo owner's arm and shoulder joints. The first-level judging module then judges whether the face image of the cargo owner is consistent with the standard image; if they are consistent, it continues to judge the positional relation between the arm joints and the shoulder joint, and determines the cargo falling mode, ground cargo falling or hovering cargo receiving, according to the cargo owner's gesture. Since the cargo owner may be slow to make a receiving gesture on seeing the unmanned aerial vehicle, this embodiment also provides a motion capture module to capture the cargo owner's action information and, combined with the feature analysis module, calculate the cargo owner's motion change trend; the prediction module then predicts the cargo owner's gesture, the predicted gesture is compared with the standard gesture image, and the hovering point of the unmanned aerial vehicle is adjusted in real time according to the comparison result, completing the delivery work. When the cargo owner makes no receiving gesture, the goods are placed directly on the ground in front of the cargo owner. When the clamping of the goods is to be released, the unloading judging module determines whether the force on the goods satisfies the cargo falling parameter. The cargo falling parameter is set as a pressure change parameter between the goods and the unmanned aerial vehicle, set according to the weight of the goods, with a value greater than one half of that weight; the purpose is that the goods do not tip over after contacting the ground and that the cargo owner can adapt to the weight of the goods when receiving them, avoiding injury from a sudden drop.
In a preferred embodiment, the system further comprises a communication module, wherein the communication module is used for reminding a cargo owner of selecting a cargo falling mode;
if the cargo owner selects hovering to receive cargo, the unmanned aerial vehicle hovers in front of the cargo owner;
if the cargo owner selects the ground to drop the cargo, the unmanned aerial vehicle places the cargo on the ground;
if the cargo owner does not respond, the primary judging module is used for identifying the human body image of the cargo owner, and the cargo falling mode is automatically determined according to the identification result.
In this embodiment, before the goods are delivered, the communication module may remind the cargo owner to autonomously select the goods-dropping mode, because fragile goods or valuables may exist in the goods delivered by the unmanned aerial vehicle, it is necessary to communicate with the cargo owner in advance and verify the goods-dropping mode, and when the cargo owner does not respond, the goods-dropping mode is autonomously determined according to the determination result of the primary determination module.
Secondly, the feature extraction module comprises a first execution unit and a second execution unit, wherein the execution priority of the first execution unit is higher than that of the second execution unit;
capturing a face image of a cargo owner and uploading the face image to a first-level judging module when the first executing unit executes;
and when the second execution unit executes, capturing a joint image of the cargo owner and uploading the joint image to the first-level judging module, wherein the joint image comprises shoulder joint characteristics, elbow joint characteristics and wrist joint characteristics.
In the above, the front end of the unmanned aerial vehicle is equipped with a binocular camera for acquiring the face image and the joint image of the cargo owner, with face-image acquisition taking priority over joint-image acquisition. After the unmanned aerial vehicle reaches the designated position, the binocular camera collects the cargo owner's human body image and action information and uploads them to the primary and secondary judging modules, providing data support for deciding whether to release the restriction on the cargo.
Secondly, the first-level judging module comprises a first judging unit and a second judging unit, and the execution priority of the first judging unit is higher than that of the second judging unit;
the first judging unit is used for judging whether the face image is consistent with the standard image or not;
if the information is consistent, judging that the cargo owner information is accurate, and hovering the unmanned aerial vehicle in front of the cargo owner;
if the information is inconsistent, judging that the cargo owner information is inconsistent, and not executing the second judging unit;
the second judging unit is used for judging the position relation of the shoulder joint characteristics, the elbow joint characteristics and the wrist joint characteristics;
if the wrist joint feature is positioned above the elbow joint feature and the elbow joint feature is positioned above the shoulder joint feature, judging that the cargo owner selects hovering cargo receiving;
if the shoulder joint feature is located above the elbow joint feature, the owner is determined to select a ground drop.
In this embodiment, after the binocular camera collects the face image of the cargo owner, it immediately uploads the face image to the first judging unit, which compares it with the standard image. If the comparison result is consistent, the second judging unit is executed to judge the positional relation of the shoulder joint feature, the elbow joint feature and the wrist joint feature. When the wrist joint feature, the elbow joint feature and the shoulder joint feature are arranged in order from high to low, the cargo owner is judged to be in a receiving gesture and to have selected hovering cargo receiving, and the unmanned aerial vehicle hovers in front of the cargo owner with the cargo. When the shoulder joint feature of the cargo owner is located above the elbow joint feature, the cargo owner is judged to have selected ground cargo falling. If the cargo owner reacts with a delay, the cargo owner's action information is collected and the motion change trend is output to determine whether the wrist joint feature is rising: if it is, the cargo owner is judged to have selected hovering cargo receiving; otherwise, ground cargo falling;
After the wrist joint, elbow joint and shoulder joint features are determined, a virtual coordinate system is constructed with the shoulder joint feature as the origin, the edge coordinate points of the wrist, elbow and shoulder joint features are determined, and the ordinates of these edge coordinate points are compared.
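The ordinate comparison described above can be sketched as follows, assuming 2-D keypoints (x, y) with y increasing upward; the function name, the string labels and the "undecided" fallback are assumptions, not from the patent:

```python
def cargo_falling_mode(shoulder, elbow, wrist):
    # Re-express the ordinates in the virtual coordinate system whose
    # origin is the shoulder joint feature, then compare them.
    elbow_y = elbow[1] - shoulder[1]
    wrist_y = wrist[1] - shoulder[1]
    if wrist_y > elbow_y > 0:      # wrist above elbow above shoulder
        return "hovering_receive"  # receiving gesture
    if elbow_y < 0:                # shoulder above elbow: arm hangs down
        return "ground_drop"
    return "undecided"             # wait for the motion change trend
```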
Secondly, a standard posture image is preset in the secondary judging module, and after the predicted gesture is acquired it is compared with the standard posture image;
if the predicted gesture is consistent with the standard gesture image, the unmanned aerial vehicle hovers at a designated position;
if the predicted gesture is inconsistent with the standard gesture image, generating a deviation distance, and adjusting the hovering position of the unmanned aerial vehicle to the position of the predicted gesture.
In this embodiment, when predicting the arm gesture of the cargo owner, the motion capture module first obtains the cargo owner's action information as video. The video is input into a decoder for decoding, so that multi-frame continuous joint images are obtained. The edge coordinates of the shoulder joint feature, the elbow joint feature and the wrist joint feature in the joint images are obtained and calibrated as the coordinates to be evaluated; the coordinates to be evaluated are set to the ordinates of the wrist joint feature, the elbow joint feature and the shoulder joint feature. The trend analysis model is then called to obtain the trend analysis function, and the coordinates to be evaluated are input into it, giving their change trend value:

T = Σ_{i=1}^{n-1} (y_{i+1} - y_i)

where T represents the change trend value of the coordinates to be evaluated, n represents the number of coordinates to be evaluated, i is the index of a coordinate to be evaluated and does not participate in the actual operation, and y_i and y_{i+1} represent the coordinates to be evaluated. After the change trend value is obtained, it is judged immediately: if it is negative, the cargo owner is judged not to be in a receiving gesture and to have selected the ground cargo falling mode; if it is positive, the cargo owner is judged to have a receiving trend and to have selected hovering cargo receiving, and the predicted gesture is calculated. The predicted gesture is the critical-point coordinates at which the wrist joint feature, the elbow joint feature and the shoulder joint feature of the cargo owner are arranged from high to low. If the predicted gesture is consistent with the standard posture image, the hovering point of the unmanned aerial vehicle is unchanged; otherwise, the deviation distance between the wrist joint feature in the standard image and the wrist joint feature in the predicted gesture is calculated, and the unmanned aerial vehicle adjusts its hovering point according to this deviation distance.
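Under one reading of the trend-analysis function, the sum of successive differences of the wrist ordinates across frames, the change trend value can be sketched as follows; the function and parameter names are assumptions, and the sign convention matches the text (positive means a rising arm and hovering cargo receiving, negative means ground cargo falling):

```python
def trend_value(ordinates):
    # `ordinates` are the wrist-joint ordinates y_1..y_n taken from the
    # decoded consecutive frames; the index i itself never enters the sum.
    return sum(ordinates[i + 1] - ordinates[i]
               for i in range(len(ordinates) - 1))
```

With illustrative values, trend_value([0.40, 0.55, 0.80]) is positive, so hovering cargo receiving would be selected.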
In a preferred embodiment, the system further comprises a voice broadcasting module, wherein the voice broadcasting module is used for broadcasting a prompt acceptance statement, such as "please confirm the goods" or "please receive the goods".
As shown in fig. 3, the invention further provides a physical distribution unmanned aerial vehicle based on human body gesture recognition, which is applied to the human body gesture recognition-based man-machine physical distribution interaction system, and comprises a machine body 1, wherein a clamping mechanism 2 for bearing goods is arranged below the machine body 1, a tension sensor is arranged on the clamping mechanism 2, and the tension sensor is used for detecting a tension value between the clamping mechanism 2 and the loaded goods;
after the first-level judging module judges the goods falling mode, the unloading judging module sets goods falling parameters according to the goods falling mode, the goods falling parameters are pressure change values between the clamping mechanism and the loaded goods, and when the tension sensor detects that the stress of the clamping mechanism 2 meets the goods falling parameters, the clamping mechanism releases clamping of the goods.
In the above, after the borne goods land on the ground or in the palm of the cargo owner, they receive a supporting force from the ground or a lifting force from the owner's palm. The force on the clamping mechanism 2 is thereby reduced, and correspondingly the tension value read by the tension sensor also decreases. When the reduction meets the cargo-falling parameter, the clamping mechanism 2 releases its clamping of the goods, and the owner can then take the goods away.
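The release condition above can be expressed as a simple check. This is an illustrative Python sketch under stated assumptions: the threshold rule "pressure change parameter greater than one half of the cargo weight" follows the claim language, the factor 0.6 is an arbitrary admissible choice, and all names are hypothetical.

```python
def drop_parameter(cargo_weight):
    """Cargo-falling parameter (required drop in tension).

    The claims require this pressure-change value to exceed one half
    of the cargo weight; 0.6 is one admissible factor (> 0.5).
    """
    return 0.6 * cargo_weight


def should_release(initial_tension, current_tension, cargo_weight):
    """Release the clamp once the tension sensor reading has dropped
    by at least the cargo-falling parameter, i.e. the ground or the
    owner's palm is now carrying most of the load."""
    return (initial_tension - current_tension) >= drop_parameter(cargo_weight)
```

With a 10 N cargo, a tension drop from 10 N to 3 N (a 7 N reduction) exceeds the 6 N threshold and triggers release, while a drop to 6 N does not.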
In addition, the clamping mechanism 2 comprises two clamping plates 201 and a bidirectional telescopic rod 202. Both clamping plates 201 are arranged below the fuselage 1; the bidirectional telescopic rod 202 is fixedly mounted at the bottom of the fuselage 1, and its two ends are fixedly connected to the tops of the two clamping plates 201 respectively.
Further, both clamping plates 201 are L-shaped, a cargo-bearing outer box is arranged between the two clamping plates 201, and both sides of the cargo-bearing outer box are provided with clamping notches corresponding to the clamping plates 201.
Specifically, after the reduction in the tension-sensor reading meets the cargo-falling parameter, the bidirectional telescopic rod 202 actuates and drives the two clamping plates 201 to move apart, away from the cargo-bearing outer box. The outer box is thereby released from its constraint, and the cargo owner can then take the goods.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention. Structures, devices and methods of operation not specifically described and illustrated herein, unless otherwise indicated and limited, are implemented according to conventional means in the art.

Claims (7)

1. A human-computer logistics interaction system based on human body gesture recognition, comprising a data acquisition module, a feature extraction module, a first-level judging module, an unloading judging module, a motion capture module, a feature analysis module, a prediction module, a second-level judging module and a central control module, characterized in that:
the data acquisition module is used for acquiring cargo information and a face image of a cargo owner and calibrating the face image of the cargo owner into a standard image;
the feature extraction module is used for extracting a human body image of a cargo owner, wherein the human body image comprises a face image and a joint image;
the feature extraction module comprises a first execution unit and a second execution unit, wherein the execution priority of the first execution unit is higher than that of the second execution unit;
capturing a face image of the cargo owner and uploading the face image to a first-level judging module when the first executing unit executes;
capturing a joint image of the cargo owner and uploading the joint image to a first-level judging module when the second executing unit executes, wherein the joint image comprises shoulder joint characteristics, elbow joint characteristics and wrist joint characteristics;
the first-level judging module is used for identifying a human body image of a cargo owner and determining a cargo falling mode according to an identification result, wherein the cargo falling mode comprises hovering cargo receiving and ground cargo falling;
the first-stage judging module comprises a first judging unit and a second judging unit, wherein the execution priority of the first judging unit is higher than that of the second judging unit;
the first judging unit is used for judging whether the face image is consistent with a standard image or not;
if the information is consistent, judging that the cargo owner information is accurate, and hovering the unmanned aerial vehicle in front of the cargo owner;
if the information is inconsistent, judging that the cargo owner information is inconsistent, and not executing a second judging unit;
the second judging unit is used for judging the position relation of the shoulder joint characteristics, the elbow joint characteristics and the wrist joint characteristics;
if the wrist joint feature is positioned above the elbow joint feature and the elbow joint feature is positioned above the shoulder joint feature, judging that the cargo owner selects hovering cargo receiving;
if the shoulder joint feature is positioned below the elbow joint feature, judging that the owner selects the ground to drop goods;
the unloading judging module is used for setting a cargo falling parameter according to a cargo falling mode, wherein the cargo falling parameter is set as a pressure change parameter between the cargo and the unmanned aerial vehicle, and the pressure change parameter is set according to the weight of the cargo, wherein the value of the pressure change parameter is greater than one half of the weight of the cargo, and the limitation on the cargo is relieved when the cargo falling parameter is met;
the motion capture module is used for collecting motion information of a cargo owner, decoding the motion information, outputting multi-frame continuous images and calibrating the multi-frame continuous images as images to be analyzed;
the feature analysis module is used for acquiring all images to be analyzed, inputting the images to be analyzed into the trend analysis model and outputting the action change trend of the cargo owner;
the prediction module is used for predicting the arm gesture of the cargo owner according to the motion change trend and outputting the arm gesture as a predicted gesture;
the secondary judging module is used for acquiring a predicted gesture and adjusting the hovering position of the unmanned aerial vehicle according to the predicted gesture;
the central control module is used for receiving and transmitting the circulation information among the data acquisition module, the feature extraction module, the primary judgment module, the motion capture module, the feature analysis module, the prediction module and the secondary judgment module.
2. A human-computer logistic interaction system based on human gesture recognition according to claim 1, wherein: the system also comprises a communication module, wherein the communication module is used for reminding a cargo owner of selecting a cargo falling mode;
if the cargo owner selects hovering to receive cargo, the unmanned aerial vehicle hovers in front of the cargo owner;
if the cargo owner selects the ground to drop the cargo, the unmanned aerial vehicle places the cargo on the ground;
if the cargo owner does not respond, the primary judging module is used for identifying the human body image of the cargo owner, and the cargo falling mode is automatically determined according to the identification result.
3. A human-computer logistic interaction system based on human gesture recognition according to claim 1, wherein: the secondary judging module is internally preset with a standard posture image and compares the standard posture image with the predicted posture image after acquiring the predicted posture;
if the predicted gesture is consistent with the standard gesture image, the unmanned aerial vehicle hovers at a designated position;
and if the predicted gesture is inconsistent with the standard gesture image, generating a deviation distance, and adjusting the hovering position of the unmanned aerial vehicle to the position of the predicted gesture.
4. A human-computer logistic interaction system based on human gesture recognition according to claim 1, wherein: the system further comprises a voice broadcasting module, wherein the voice broadcasting module is used for broadcasting a prompt acceptance statement.
5. A physical distribution unmanned aerial vehicle based on human gesture recognition, applied to the human-computer physical distribution interactive system based on human gesture recognition of any one of claims 1 to 4, characterized by comprising:
the device comprises a machine body (1), wherein a clamping mechanism (2) for bearing goods is arranged below the machine body (1), a tension sensor is arranged on the clamping mechanism (2), and the tension sensor is used for detecting a tension value between the clamping mechanism (2) and the loaded goods;
after the first-level judging module judges the goods falling mode, the unloading judging module sets goods falling parameters according to the goods falling mode, the goods falling parameters are pressure change values between the clamping mechanism and the loaded goods, and when the tension sensor detects that the stress of the clamping mechanism (2) meets the goods falling parameters, the clamping mechanism releases the clamping of the goods.
6. The physical distribution unmanned aerial vehicle based on human gesture recognition according to claim 5, wherein: the clamping mechanism (2) comprises two clamping plates (201) and a bidirectional telescopic rod (202); both clamping plates (201) are arranged below the fuselage (1), the bidirectional telescopic rod (202) is fixedly mounted at the bottom of the fuselage (1), and the two ends of the bidirectional telescopic rod (202) are fixedly connected to the tops of the two clamping plates (201) respectively.
7. The physical distribution unmanned aerial vehicle based on human gesture recognition according to claim 6, wherein: both clamping plates (201) are L-shaped, a cargo-bearing outer box is arranged between the two clamping plates (201), and both sides of the cargo-bearing outer box are provided with clamping notches corresponding to the clamping plates (201).
CN202310499993.0A 2023-05-06 2023-05-06 Logistics unmanned aerial vehicle and man-machine logistics interactive system based on human body gesture recognition Active CN116229582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310499993.0A CN116229582B (en) 2023-05-06 2023-05-06 Logistics unmanned aerial vehicle and man-machine logistics interactive system based on human body gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310499993.0A CN116229582B (en) 2023-05-06 2023-05-06 Logistics unmanned aerial vehicle and man-machine logistics interactive system based on human body gesture recognition

Publications (2)

Publication Number Publication Date
CN116229582A CN116229582A (en) 2023-06-06
CN116229582B true CN116229582B (en) 2023-08-04

Family

ID=86579065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310499993.0A Active CN116229582B (en) 2023-05-06 2023-05-06 Logistics unmanned aerial vehicle and man-machine logistics interactive system based on human body gesture recognition

Country Status (1)

Country Link
CN (1) CN116229582B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9459620B1 (en) * 2014-09-29 2016-10-04 Amazon Technologies, Inc. Human interaction with unmanned aerial vehicles
US9536216B1 (en) * 2014-12-18 2017-01-03 Amazon Technologies, Inc. Delivery of packages by unmanned aerial vehicles
CN106327128A (en) * 2016-08-26 2017-01-11 成都畅达网通讯科技有限公司 Intelligent one-stop freight tracking settlement system and tracking settlement method
CN106809393A (en) * 2016-10-21 2017-06-09 北京京东尚科信息技术有限公司 A kind of freight transportation method based on unmanned plane
CA2997077A1 (en) * 2017-03-06 2018-09-06 Walmart Apollo, Llc Apparatuses and methods for gesture-controlled unmanned aerial vehicles
CN109696923A (en) * 2017-10-20 2019-04-30 成都麦动信息技术有限公司 Intelligent delivery method and device
US20200202284A1 (en) * 2018-12-21 2020-06-25 Ford Global Technologies, Llc Systems, methods, and devices for item delivery using unmanned aerial vehicles
CN110458494A (en) * 2019-07-19 2019-11-15 暨南大学 A kind of unmanned plane logistics delivery method and system
CN114220157A (en) * 2021-12-30 2022-03-22 安徽大学 Method for identifying consignee in unmanned aerial vehicle distribution based on face correction and face identification
CN115063073B (en) * 2022-06-10 2024-04-16 安徽大学 Efficient and secret unmanned aerial vehicle collaborative distribution method
CN115050080A (en) * 2022-07-22 2022-09-13 安徽大学 Target identification method and system based on face fusion in multi-unmanned aerial vehicle cooperative distribution

Also Published As

Publication number Publication date
CN116229582A (en) 2023-06-06

Similar Documents

Publication Publication Date Title
RU2725681C9 (en) Detection of objects inside a vehicle in connection with maintenance
US8744642B2 (en) Driver identification based on face data
CN108388266A (en) A kind of UAV system for logistics delivery
WO2021212496A1 (en) Battery detection method and apparatus
CN108764456B (en) Airborne target identification model construction platform, airborne target identification method and equipment
CN106020227A (en) Control method and device for unmanned aerial vehicle
CN107709162A (en) Charging system based on aircraft from main boot
CN115202376A (en) Unmanned aerial vehicle patrols and examines electric power grid management and control platform based on individual soldier removes
CN109063532B (en) Unmanned aerial vehicle-based method for searching field offline personnel
CN205920057U (en) Detect fissured many rotor unmanned aerial vehicle testing platform system in structure surface
US20210304343A1 (en) Utilization of a fleet of unmanned aerial vehicles for delivery of goods
CN111275923A (en) Man-machine collision early warning method and system for construction site
CN110188935B (en) Time information determining method and processing equipment
CN106687371A (en) Unmanned aerial vehicle control method, device and system
CN113608542B (en) Control method and equipment for automatic landing of unmanned aerial vehicle
CN110929646A (en) Power distribution tower reverse-off information rapid identification method based on unmanned aerial vehicle aerial image
CN108698696A (en) active vehicle control system and method
CN110785721A (en) Control method of unmanned equipment and unmanned vehicle
CN116229582B (en) Logistics unmanned aerial vehicle and man-machine logistics interactive system based on human body gesture recognition
CN113867398A (en) Control method for palm landing of unmanned aerial vehicle and unmanned aerial vehicle
CN113744230B (en) Unmanned aerial vehicle vision-based intelligent detection method for aircraft skin damage
CN112698660B (en) Driving behavior visual perception device and method based on 9-axis sensor
US10589872B1 (en) Augmented weight sensing for aircraft cargo handling systems
WO2022014586A1 (en) Inspection method
CN109577720A (en) The automatically storing and taking vehicles method and system of automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230713

Address after: 541000 No.1 Jinji Road, Qixing District, Guilin City, Guangxi Zhuang Autonomous Region

Applicant after: GUILIN University OF ELECTRONIC TECHNOLOGY

Address before: Room 205B, block B, animation building, No.11 Xinghuo Road, Jiangbei new district, Nanjing, Jiangsu 210000

Applicant before: Nanjing Hongwu Software Technology Co.,Ltd.

Applicant before: GUILIN University OF ELECTRONIC TECHNOLOGY

GR01 Patent grant