CN113715019B - Robot control method, device, robot and storage medium - Google Patents

Robot control method, device, robot and storage medium

Info

Publication number
CN113715019B
CN113715019B (application CN202111014129.4A)
Authority
CN
China
Prior art keywords
elevator
target
robot
target image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111014129.4A
Other languages
Chinese (zh)
Other versions
CN113715019A (en)
Inventor
马帅
唐旋来
杨亚运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Keenlon Intelligent Technology Co Ltd
Original Assignee
Shanghai Keenlon Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Keenlon Intelligent Technology Co Ltd filed Critical Shanghai Keenlon Intelligent Technology Co Ltd
Priority to CN202111014129.4A
Publication of CN113715019A
Application granted
Publication of CN113715019B

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Elevator Control (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a robot control method and device, a robot, and a storage medium, relating to the field of control. The method comprises the following steps: after an elevator door in the area where the robot is located opens, controlling the robot to move and rotate along a preset path so as to capture target images of the car interior from at least two fields of view; sequentially performing person calibration and tracking on each target image, and determining the target passenger count in the car from the calibration and tracking results; and controlling the robot's elevator riding according to the target passenger count. The method improves the accuracy of the passenger-count result, prevents the robot from waiting too long, and reduces the impact of unnecessary elevator trips on elevator throughput, thereby improving the operating efficiency of both the robot and the elevator.

Description

Robot control method, device, robot and storage medium
Technical Field
The embodiments of the present application relate to the field of control, and in particular to a robot control method and device, a robot, and a storage medium.
Background
With the continuous development of technology, robots have spread into many fields such as dining, medical care, hotels, and logistics distribution. To extend a robot's operating range, the robot often needs to take an elevator so that it can provide service across floors.
In the prior art, when a robot takes an elevator, it must first detect whether the elevator car is empty, and it boards to continue operating only when the car is empty. However, this approach leads to excessively long waiting times for the robot and harms the operating efficiency of both the robot and the elevator.
Disclosure of Invention
The present application provides a robot control method and device, a robot, and a storage medium, in order to improve the operating efficiency of both the robot and the elevator when the robot rides the elevator during operation.
In a first aspect, an embodiment of the present application provides a robot control method, which is performed by a robot, including:
after an elevator door in the area where the robot is located opens, controlling the robot to move and rotate along a preset path so as to capture target images of the car interior from at least two fields of view;
sequentially performing person calibration and tracking on each target image, and determining the target passenger count in the car from the calibration and tracking results; and controlling the robot's elevator riding according to the target passenger count.
In a second aspect, an embodiment of the present application further provides a robot control device configured to a robot, including:
a target image acquisition module, configured to control the robot, after the elevator door in the area where the robot is located opens, to move and rotate along a preset path so as to capture target images of the car interior from at least two fields of view;
a target passenger count determination module, configured to sequentially perform person calibration and tracking on each target image and to determine the target passenger count in the car from the calibration and tracking results; and an elevator riding control module, configured to control the robot's elevator riding according to the target passenger count.
In a third aspect, an embodiment of the present application further provides a robot, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a robot control method as provided by embodiments of the first aspect of the present application.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a robot control method as provided by embodiments of the first aspect of the present application.
After an elevator door in the area where the robot is located opens, the robot is controlled to move and rotate along a preset path so as to capture target images of the car interior from at least two fields of view; person calibration and tracking are performed sequentially on each target image, and the target passenger count in the car is determined from the calibration and tracking results; elevator riding control of the robot is then performed according to the target passenger count. In this technical solution, the robot itself captures the target images and determines the target passenger count, which improves the accuracy of the result. Moreover, because elevator riding control is based on the passenger count rather than on whether the car is empty, the robot avoids excessively long waits while the impact of unnecessary elevator trips on elevator throughput is reduced, improving the operating efficiency of both the robot and the elevator.
Drawings
Fig. 1 is a flowchart of a robot control method provided in an embodiment of the present application;
FIG. 2A is a flow chart of another robot control method provided in an embodiment of the present application;
FIG. 2B is a schematic diagram of a human root model and a component model provided in an embodiment of the present application;
FIG. 3 is a flow chart of another robot control method provided by an embodiment of the present application;
fig. 4 is a block diagram of a robot control device according to an embodiment of the present application;
fig. 5 is a structural diagram of a robot according to a fifth embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a robot control method according to an embodiment of the present application, where the method is applicable to a scenario of robot elevator control. The method may be performed by a robot control device, which may be implemented in software and/or hardware and specifically configured in an electronic device. The electronic device may be provided inside the robot or may exist independently of the robot.
Referring to fig. 1, a robot control method includes:
s110, after an elevator door of an area where the robot is located is opened, controlling the robot to move and rotate according to a preset path, and collecting target images of at least two view field angles inside the car.
The car is the compartment of the elevator in which passengers ride. The area where the robot is located includes the floor, so that the elevator door can be accurately located. When one floor contains at least two elevator zones, the area where the robot is located also includes an elevator zone identifier.
An image acquisition device is provided in the robot; after the elevator door in the area where the robot is located opens, it captures images of the car interior in real time or at fixed intervals, and the captured images serve as the target images from which the target passenger count is determined. The image acquisition device may be, for example, a camera that captures target images at a set frequency, e.g., 200 frames per second.
It can be understood that capturing the target images with the image acquisition device mounted on the robot reuses the robot's own hardware, so no additional image acquisition device needs to be installed in the car, reducing hardware cost. Moreover, because the robot determines the subsequent target passenger count from images it captured itself, without relying on third-party data, the result is more reliable, and inaccuracies caused by falsified third-party data sources or insecure data transmission are avoided. In addition, if an image acquisition device were installed in the elevator car, the image data would have to be transmitted to the robot; when the wireless signal is unstable, images may not arrive in time, disrupting the robot's operation and depriving image capture of real-time performance. Furthermore, installing an additional image acquisition device or other sensors in the car would require retrofitting the elevator equipment, making the solution difficult to deploy because of the local modifications involved.
The preset path may be a preset safe travel path for the robot within a set area at the elevator doorway. By controlling the robot to move and rotate along the preset path within this area, images of the car interior can be captured from at least two fields of view, yielding multi-view target images. This improves the richness and comprehensiveness of the target images and prevents person occlusion from degrading the accuracy of the subsequent target passenger count.
For example, the robot may send a call request to the elevator central control terminal, so that the elevator central control terminal, according to the target boarding area in the call request, controls the car to stop at and open the elevator door corresponding to that area.
In an alternative embodiment, the opening of the elevator door in the area where the robot is located may be determined by a detection device installed near that door; after the door opens, the device sends an opening notification to the robot to trigger image capture. The detection device may be implemented by at least one prior-art device, for example an infrared detection device.
In another alternative embodiment, the elevator central control terminal may determine the time at which the car reaches the area where the robot is located and, after a set interval has elapsed from that time, send an opening notification to the robot to trigger image capture.
Because installing an additional detection device raises hardware cost, and because having the elevator central control terminal send the notification depends on that terminal's reliability, occupies bandwidth for data transmission, and is affected by transmission delay and data security, the timing of the robot's image capture can be compromised by either approach. To improve the accuracy of the capture timing and reduce unnecessary resource consumption, in a further alternative embodiment the robot may use its obstacle detection module to check whether an obstacle exists in the direction of the elevator door: if not, the door can be considered open; otherwise it is considered closed.
The robot determines the target passenger count from the target images, so the clarity of the target images directly affects the accuracy of the result, and the illumination intensity of the capture environment directly affects that clarity. Therefore, the working state of the robot's fill-light unit can be controlled according to the clarity of the target image, improving the clarity of subsequently captured images.
For example, the clarity of the target image can be evaluated after the first capture in the area where the robot is located; if the clarity satisfies the condition for supplementary lighting (e.g., it falls within a range indicating insufficient light), the fill-light unit is switched on; otherwise it is switched off. The clarity condition may be defined by a clarity range, which a technician can set or adjust according to requirements or empirical values.
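As an illustrative sketch of such a clarity check (the Laplacian-variance measure, the threshold value, and the polarity, i.e. fill light on when the image is too blurry, are all assumptions rather than details taken from the patent):

```python
def sharpness(gray):
    """Variance of a 4-neighbour Laplacian over a 2-D grayscale image
    (a list of rows of ints). Higher values mean a sharper image."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def fill_light_state(gray, threshold=100.0):
    """Return 'on' when the first captured frame is too blurry (sharpness
    below an assumed tuning threshold), 'off' otherwise."""
    return "on" if sharpness(gray) < threshold else "off"
```

A production system would use a library focus measure (e.g. an OpenCV Laplacian) on full camera frames; the pure-Python version above only illustrates the control flow.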
S120, sequentially performing person calibration and tracking on each target image, and determining the target passenger count in the car from the calibration and tracking results.
In an alternative embodiment, person calibration and tracking are performed sequentially on each target image, and the target passenger count in the car is determined from the calibration and tracking results, as follows: newly appearing persons are calibrated and already-calibrated persons are tracked across the target images in sequence; the current passenger count in the car is determined from the calibration and tracking results in each target image; and the current passenger count is used directly as the target passenger count.
During elevator operation, passengers enter and exit the car, so using the current passenger count directly as the target passenger count may introduce some deviation; the target passenger count can instead be determined from additional passenger-count information together with the current passenger count.
It can be understood that a target calibrated in an earlier target image may shift as the viewing angle changes. By controlling the robot's movement and rotation, target images from different viewing angles are captured during the motion, and new targets can be calibrated or already-calibrated persons tracked according to their features, thereby determining the current passenger count with improved precision.
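The calibrate-new/track-existing logic can be sketched as a minimal nearest-neighbour tracker (the pixel-distance threshold and the greedy one-pass matching are simplifying assumptions; a real implementation would use appearance features and a more robust association step):

```python
class PassengerTracker:
    """Minimal illustrative tracker: a detection within `max_dist` pixels of
    an existing track is treated as the same person (tracking); any other
    detection is calibrated as a new passenger."""

    def __init__(self, max_dist=50.0):
        self.max_dist = max_dist
        self.tracks = {}   # track id -> last known (x, y) centre
        self.next_id = 1

    def update(self, detections):
        """detections: list of (x, y) person centres in the newest frame.
        Returns the current passenger count after this frame."""
        for (x, y) in detections:
            best_id, best_d = None, self.max_dist
            for tid, (tx, ty) in self.tracks.items():
                d = ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5
                if d <= best_d:
                    best_id, best_d = tid, d
            if best_id is None:           # calibrate a new passenger
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = (x, y)  # track (or re-track) the person
        return len(self.tracks)
```

Feeding the per-frame detections from successive viewing angles into `update` accumulates a count that survives partial occlusion in any single view.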
S130, controlling the robot's elevator riding according to the target passenger count.
For example, if the target passenger count is greater than a set threshold, the robot is prohibited from entering the car, i.e., from taking the elevator; if the target passenger count is not greater than the threshold, the robot is controlled to travel into the car, i.e., it is allowed to take the elevator.
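The boarding rule reduces to a one-line decision function (the threshold value of 4 is an assumed example; the patent leaves the threshold unspecified):

```python
def boarding_decision(target_count, capacity_threshold=4):
    """Allow boarding only when the target passenger count does not exceed
    the set threshold (threshold value is an assumed example)."""
    return "enter_car" if target_count <= capacity_threshold else "wait"
```

Note the boundary: a count exactly equal to the threshold is "not greater than" it, so the robot still boards.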
After the elevator door in the area where the robot is located opens, the robot is controlled to move and rotate along a preset path so as to capture target images of the car interior from at least two fields of view; person calibration and tracking are performed sequentially on each target image, and the target passenger count in the car is determined from the calibration and tracking results; elevator riding control of the robot is then performed according to the target passenger count. In this technical solution, the robot itself captures the target images and determines the target passenger count, improving the accuracy of the result. Meanwhile, because elevator riding control is based on the passenger count rather than on whether the car is empty, the robot avoids excessively long waits, the impact of unnecessary elevator trips on elevator throughput is reduced, and the operating efficiency of both the robot and the elevator is improved.
Example two
Fig. 2A is a flowchart of another robot control method provided in an embodiment of the present application. Building on the above technical solutions, this method refines the operation of "determining the target passenger count in the car according to the target image" into "determining the current passenger count in the car according to the target image; and determining the target passenger count according to the entry/exit statistics and the current passenger count", thereby improving the accuracy of the target passenger count. For details not described in this embodiment, refer to the foregoing embodiments.
Referring to fig. 2A, a robot control method includes:
S210, after the elevator door in the area where the robot is located opens, controlling the robot to move and rotate along a preset path so as to capture target images of the car interior from at least two fields of view.
S220, sequentially performing person calibration and tracking on each target image, and determining the current passenger count in the car from the calibration and tracking results.
In an alternative embodiment, person calibration and tracking are performed sequentially on each target image, and the passenger count in the car is determined from the calibration and tracking results, as follows: the number of faces in each target image is identified by a face detection model, and the current passenger count in the car is determined from the number of faces. The face detection model may be implemented based on a machine learning model.
In another alternative embodiment, the number of faces in each target image is identified by means of edge detection, and the current passenger count in the car is determined from the number of faces.
In yet another alternative embodiment, the current passenger count in the car may be determined from the target image as follows: a Histogram of Oriented Gradients (HOG) of the target image is computed to obtain target feature data; the target feature data are processed by a classification model to predict pedestrian versus non-pedestrian categories, and the predicted objects are calibrated and subsequently tracked, so that the current passenger count is determined from the calibration and tracking results. The classification model may be implemented based on a machine learning model, such as a Support Vector Machine (SVM).
Because the target image is captured by the robot from outside the car, passengers may be occluded, and determining the current passenger count in the above manner may therefore be less accurate. To improve accuracy, in a further alternative embodiment a Deformable Parts Model (DPM) can be introduced into the feature extraction of the target image to cope with passenger occlusion, deformation of human posture, and similar conditions.
It should be noted that the DPM algorithm uses improved HOG features, an SVM classifier, and the sliding-window detection idea. It adopts a multi-component strategy for the multi-view problem of the targets to be detected (the passengers) in the target image, and a component-model strategy based on the pictorial structure for deformation of the targets themselves. The model type to which a sample belongs and the positions of the component models are treated as latent variables, and the passenger count is determined automatically by multiple-instance learning.
The DPM model includes a root model, at least two component models, and the deviation losses of the component models relative to the root model.
See the schematic of the human root model, component models, and deviation losses shown in Fig. 2B.
The root model in Fig. 2B(a) is a coarse global template covering the whole target, also called the root filter.
The component model in Fig. 2B(b) is a higher-resolution local template covering a local region of the target (such as the head, an arm, or a leg), also called a component filter. The human target is divided into six parts: the head, two upper limbs, two lower limbs, and the feet. The resolution of the component model is higher than that of the root model; for example, twice as high. To reduce model complexity, both the root model and the component models are axisymmetric.
The deviation loss of a component model relative to the root model in Fig. 2B(c) can be obtained by extracting HOG features from existing training samples such as human bodies and limbs and training on them with an SVM classifier. Brighter regions indicate a higher deviation-loss cost; the deviation loss at a component model's ideal position is 0.
For example, the target image may be processed with the preset root model and each component model to obtain target response data, and person calibration and tracking may be performed on the target image according to the target response data, so that the current passenger count in the car is determined from the calibration and tracking results.
It can be understood that processing the target image with the root model combined with the component models converts target detection into the detection and recognition of different local parts (head, arms, legs, etc.), eliminating the influence on the detection result of target deformation caused by varied postures such as squatting, sitting, and standing.
In a specific implementation, processing the target image with the preset root model and each component model to obtain the target response data includes: extracting features from the target image to obtain initial feature data; processing the initial feature data with the root model and with each component model respectively to obtain initial response data; and generating the target response data from the initial response data.
Specifically: features are extracted from the target image to obtain initial global feature data; the target image is upsampled and features are extracted from the upsampling result to obtain initial local feature data; the initial global feature data are convolved with the root model to obtain target global response data; the initial local feature data are convolved with each component model to obtain the initial local response data; each initial local response map is downsampled so that its resolution matches that of the target global response data, yielding the target local response data; and a weighted average of the target global response data and each target local response data is computed to obtain the target response data.
For example, feature extraction on the target image may be performed by convolving the target image with the root model to obtain the initial global feature data. Correspondingly, feature extraction on the upsampling result may be performed by convolving the upsampling result with the root model to obtain the initial local feature data.
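The downsample-and-average fusion of the root and component responses described above can be sketched with plain nested lists standing in for response maps (2x2 block averaging and equal weighting are assumptions; the patent does not specify the downsampling method or the weights):

```python
def downsample2(resp):
    """Halve a 2-D response map by averaging each 2x2 block, so a
    component response computed at twice the root resolution is brought
    down to the root resolution."""
    h, w = len(resp) // 2, len(resp[0]) // 2
    return [[(resp[2 * y][2 * x] + resp[2 * y][2 * x + 1]
              + resp[2 * y + 1][2 * x] + resp[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w)] for y in range(h)]

def fuse_responses(root_resp, part_resps, part_weight=1.0):
    """Weighted average of the root response and the downsampled component
    responses, producing the fused target response at root resolution."""
    parts = [downsample2(p) for p in part_resps]
    maps = [root_resp] + parts
    weights = [1.0] + [part_weight] * len(parts)
    total_w = sum(weights)
    h, w = len(root_resp), len(root_resp[0])
    return [[sum(wt * m[y][x] for wt, m in zip(weights, maps)) / total_w
             for x in range(w)] for y in range(h)]
```

In a full DPM the component responses are also shifted by their anchor offsets and penalised by the deviation loss before fusion; that step is omitted here for brevity.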
In one specific implementation, person calibration and tracking on the target image according to the target response data includes: processing the target response data with a classification model to obtain a classification result; and calibrating newly appearing persons and tracking already-calibrated persons according to the target position information corresponding to the classification result. Because the classification result identifies each part of the human body together with its position information, the position information of the parts can be used to decide whether they belong to one person, so that the number of persons is determined and miscounting is avoided. The classification model may be implemented based on a machine learning model, such as an SVM model.
Specifically: each local region in the target image is determined from the target response data, and it is determined whether the local regions belong to the same target; the same target is tracked and different targets are calibrated; and the finally determined number of targets is taken as the current passenger count.
Specifically, the DPM-improved HOG removes the blocks of the original HOG and retains only the cells. For normalization, the region composed of the current cell and its four surrounding cells is normalized directly. When computing gradient directions, both signed (0-360°) and unsigned (0-180°) directions may be computed. Taking an 8×8 cell in the target image as an example, after neighborhood normalization and truncation, four cell groups are obtained; the corresponding signed gradient-direction histogram is a 4×18 matrix and the unsigned histogram is a 4×9 matrix. Summing the 4×18 signed histogram by columns gives an 18-dimensional vector; summing the 4×9 unsigned histogram by columns gives a 9-dimensional vector; summing the 4×9 unsigned histogram by rows gives a 4-dimensional vector; concatenating the 18-, 9-, and 4-dimensional vectors gives the feature vector of the cell.
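The column/row summation that produces the 31-dimensional cell feature can be written out directly (the normalization and truncation steps that produce the 4×18 and 4×9 histograms are assumed to have been applied already):

```python
def dpm_cell_feature(signed_hists, unsigned_hists):
    """Build the 31-D DPM-HOG cell feature described above.
    signed_hists:   4 x 18 matrix (4 normalisation groups, 18 signed bins)
    unsigned_hists: 4 x 9  matrix (4 normalisation groups, 9 unsigned bins)
    Returns 18 column sums + 9 column sums + 4 row sums = 31 values."""
    col18 = [sum(row[j] for row in signed_hists) for j in range(18)]
    col9 = [sum(row[j] for row in unsigned_hists) for j in range(9)]
    row4 = [sum(row) for row in unsigned_hists]
    return col18 + col9 + row4
```

The 4 row sums capture per-normalisation-group gradient energy, while the 18 + 9 column sums capture direction structure, which is why the reduced 31-dimensional feature loses little detection accuracy relative to the full matrix.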
For example, local regions comprising at least one of the head, upper limbs, lower limbs, feet, and so on are determined from the target response data, and whether the local regions belong to the same target (human body) is determined from the position information of each region, thereby determining the current passenger count in the car.
S230, determining the target passenger count according to the entry/exit statistics and the current passenger count.
Illustratively, the entry/exit statistics can be collected by a counting device installed in the car, near the elevator door, or on the robot, which counts the number of people entering and exiting the elevator; the current passenger count is then updated according to the entry/exit statistics, and the updated count is taken as the target passenger count.
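The update step is simple arithmetic (flooring the result at zero is an added safety assumption, not stated in the patent):

```python
def target_passenger_count(current_count, entered, exited):
    """Adjust the image-based current count by the door-crossing
    statistics; the result is floored at zero as a safety guard."""
    return max(0, current_count + entered - exited)
```

For example, 3 passengers counted from the images, 2 entries and 1 exit since then, gives a target count of 4.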
S240, controlling the robot's elevator riding according to the target passenger count.
In this embodiment, the determination of the target passenger count is refined into determining the current passenger count in the car from the target images and then determining the target passenger count from the entry/exit statistics and the current passenger count. This avoids the problem that, after the current count is determined, passenger movement in the car makes the target count inaccurate and thereby affects the robot's boarding decision.
Example III
Fig. 3 is a flowchart of a robot control method according to a third embodiment of the present application. On the basis of the above technical solutions, the operation of "determining the target elevator riding number in the car according to the calibration and tracking results" is subdivided into "determining the current elevator riding number in the car according to the calibration and tracking results; and determining the target elevator riding number in the car according to the current elevator riding number and the historical elevator riding number", so as to reduce the hardware cost of determining the target elevator riding number. For details not described in this embodiment, reference may be made to the foregoing embodiments.
Referring to a flowchart of a robot control method shown in fig. 3, it includes:
and S310, after the elevator door of the area where the robot is located is opened, controlling the robot to move and rotate according to a preset path so as to acquire target images of at least two view field angles inside the elevator car.
S320, sequentially performing person calibration and tracking on each target image, and determining the current elevator riding number in the car according to the calibration and tracking results.
The process of determining the current elevator riding number can be seen in the foregoing embodiments and is not repeated here.
S330, determining the target elevator riding number in the car according to the current elevator riding number and the historical elevator riding number.
The historical elevator riding number can be the current elevator riding number corresponding to each acquired target image after the elevator door is opened and before the current elevator riding number is determined.
Illustratively, determining the target elevator riding number in the car according to the current elevator riding number and the historical elevator riding number may be: if the current elevator riding number and the historical elevator riding number tend to be stable, taking the current elevator riding number as the target elevator riding number; otherwise, taking the current elevator riding number as a historical elevator riding number, and re-determining the current elevator riding number according to a re-acquired target image.
Here, the current elevator riding number and the historical elevator riding number tending to be stable may mean that the difference between the current elevator riding number and each adjacent historical elevator riding number is within a set threshold. The threshold may be set by a technician as needed or based on experience, and may be 0, for example. The adjacent historical elevator riding numbers may be a set number of most recent historical values, or the historical values determined within a set historical time period adjacent to the current determination. The set number or the set historical time period may likewise be set by a technician as needed or based on experience, or determined through repeated experimental adjustment; for example, the set number may be 2 and the set historical time period may be 3 seconds.
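The stability check over a set number of adjacent historical values with a difference threshold might be sketched as follows; the function name and defaults are illustrative, using the example values 2 and 0 from the text.

```python
def decide_target_count(current_count, history, window=2, threshold=0):
    """Return the target elevator riding number if the per-image counts
    have stabilized, else None (signalling that another target image
    must be acquired and the count re-determined).

    history: earlier per-image counts since the door opened, oldest
    first; the `window` most recent historical values are compared
    against the current count, each difference being allowed at most
    `threshold`.
    """
    recent = history[-window:]
    if len(recent) == window and all(
        abs(current_count - h) <= threshold for h in recent
    ):
        return current_count
    return None
```

With `threshold=0` this requires the last `window + 1` counts to agree exactly before the robot commits to a boarding decision.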
S340, carrying out elevator riding control on the robot according to the number of target elevator riding people.
In this embodiment, the operation of determining the target elevator riding number is refined into determining the current elevator riding number in the car according to the calibration and tracking results, and determining the target elevator riding number in the car according to the current elevator riding number and the historical elevator riding number. Replacing an additionally arranged counting device with software processing on the robot allows the target elevator riding number to be determined accurately while improving the accuracy of the determination result and keeping down the hardware cost.
Example IV
Fig. 4 is a block diagram of a robot control device according to a fourth embodiment of the present application; the device is suitable for scenarios of robot boarding control. The device may be implemented in software and/or hardware and is configured in an electronic device, which may be provided inside the robot or may exist independently of the robot.
Referring to fig. 4, a robot control device includes: a target image acquisition module 410, a target boarding population determination module 420, and a boarding control module 430. Wherein,
the target image acquisition module 410 is used for controlling the robot to move and rotate according to a preset path after an elevator door of an area where the robot is located is opened so as to acquire target images of at least two view field angles inside the car;
the target elevator people number determining module 420 is configured to perform person calibration and tracking on each target image in sequence, and determine a target elevator people number in the car according to the calibration and tracking results;
and the boarding control module 430 is configured to perform boarding control on the robot according to the target boarding number of people.
In the embodiment of the application, after the elevator door of the area where the robot is located opens, the target image acquisition module controls the robot to move and rotate along a preset path to acquire target images of at least two view field angles inside the car; the target elevator people number determining module performs person calibration and tracking on each target image in sequence and determines the target elevator riding number in the car according to the calibration and tracking results; and the elevator riding control module performs elevator riding control on the robot according to the target elevator riding number. By having the robot itself acquire the target images and determine the target elevator riding number, the accuracy of the determination result is improved. Meanwhile, basing the elevator riding control on the number of people in the car, rather than merely on whether the car is empty, prevents the robot from waiting too long, reduces the impact of invalid boarding attempts on the operation of the elevator, and improves the operating efficiency of both the robot and the elevator.
In an alternative embodiment, the target boarding number determination module 420 includes:
the target response data determining unit is used for processing the target image according to a preset root model and each part model to obtain target response data;
and the person calibrating and tracking unit is used for carrying out person calibrating and tracking on the target image according to the target response data.
In an alternative embodiment, the target response data determining unit includes:
the initial characteristic data obtaining subunit is used for extracting the characteristics of the target image to obtain initial characteristic data;
an initial response data obtaining subunit, configured to process the initial feature data according to the root model and each component model respectively, to obtain initial response data;
and the target response data generation subunit is used for generating the target response data according to each initial response data.
In an alternative embodiment, the person calibration and tracking unit includes:
the classifying result obtaining subunit is used for processing the target response data based on the classifying model to obtain a classifying result;
and the character calibration and tracking subunit is used for calibrating the newly added character and tracking the calibrated character according to the target position information corresponding to the classification result.
In an alternative embodiment, the target image acquisition module 410 includes:
the current elevator riding number determining unit is used for determining the current elevator riding number in the car according to the calibration and tracking results;
the target elevator riding number determining unit is used for determining the target elevator riding number in the car according to the current elevator riding number and the historical elevator riding number.
In an alternative embodiment, the target boarding number determination unit includes:
a target boarding number determination subunit, configured to take the current boarding number as the target boarding number if the current boarding number and the historical boarding number tend to be stable; otherwise, taking the current elevator riding number as the historical elevator riding number, and re-determining the current elevator riding number according to the re-acquired target image.
In an alternative embodiment, the apparatus further comprises:
and the call request sending module is used for sending a call request to the elevator central control terminal so that the elevator central control terminal controls the lift car to stop and open the lift door corresponding to the target elevator taking area according to the target elevator taking area in the call request.
In an alternative embodiment, the target boarding number determination unit includes:
a current elevator riding number determining subunit, configured to determine a current elevator riding number in the car according to the target image;
and the target elevator taking number determining subunit is used for determining the target elevator taking number according to the elevator entering and exiting statistical result and the current elevator taking number.
In an alternative embodiment, the apparatus further comprises:
and the light supplementing unit control module is used for controlling the working state of the light supplementing unit in the robot according to the clarity of the first target image acquired in the area where the robot is located.
The robot control device can execute the robot control method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of executing the robot control method.
Example five
Fig. 5 is a block diagram of an exemplary robot 512 according to a fifth embodiment of the present application, suitable for implementing embodiments of the present application. The robot 512 shown in fig. 5 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments herein.
As shown in fig. 5, the robot 512 takes the form of a general purpose computing device. Components of the robot 512 may include, but are not limited to: one or more processors or processing units 516, a system memory 528, and a bus 518 connecting the various system components (including the system memory 528 and the processing unit 516).
Bus 518 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Robot 512 typically includes a variety of computer system readable media. Such media can be any available media that can be accessed by robot 512 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 528 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 530 and/or cache memory 532. The robot 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 518 through one or more data media interfaces. Memory 528 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the present application.
A program/utility 540 having a set (at least one) of program modules 542 may be stored in, for example, memory 528, such program modules 542 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 542 generally perform the functions and/or methods in the embodiments described herein.
The robot 512 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, display 524, etc.), one or more devices that enable a user to interact with the robot 512, and/or any devices (e.g., network card, modem, etc.) that enable the robot 512 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 522. Also, the robot 512 may communicate with one or more networks, such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet, through a network adapter 520. As shown, the network adapter 520 communicates with other modules of the robot 512 via the bus 518. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with robot 512, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 516 executes various functional applications and data processing by running programs stored in the system memory 528, for example implementing the robot control method provided in the embodiments of the present application.
Example six
An embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the robot control method provided by any embodiment of the present application and executed by a robot, the method including: after an elevator door of the area where the robot is located opens, controlling the robot to move and rotate along a preset path to acquire target images of at least two view field angles inside the car; performing person calibration and tracking on each target image in sequence, and determining the target elevator riding number in the car according to the calibration and tracking results; and performing elevator riding control on the robot according to the target elevator riding number.
Note that the above is only a preferred embodiment of the present application and the technical principle applied. Those skilled in the art will appreciate that the present application is not limited to the particular embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Therefore, while the present application has been described in connection with the above embodiments, the present application is not limited to the above embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, the scope of which is defined by the scope of the appended claims.

Claims (8)

1. A robot control method, performed by a robot, comprising:
after an elevator door of an area where the robot is located is opened, controlling the robot to move and rotate according to a preset path so as to acquire target images of at least two view angles inside a car, wherein the preset path is a preset safe running path of the robot in the elevator door setting area; the acquisition of the target image is carried out by an image acquisition device arranged on the robot;
carrying out person calibration and tracking on each target image in sequence, and determining the number of target passengers in the lift car according to the calibration and tracking results;
carrying out elevator riding control on the robot according to the target elevator riding number;
the method for determining the target elevator riding number in the car according to the calibration and tracking results comprises the following steps:
determining the current elevator riding number in the elevator car according to the calibration and tracking results;
determining a target elevator riding number in the lift car according to the current elevator riding number and the historical elevator riding number;
wherein, according to current elevator number and historical elevator number, confirm the target elevator number in the car, include:
if the current boarding number and the historical boarding number tend to be stable, taking the current boarding number as the target boarding number; otherwise, taking the current elevator riding number as the historical elevator riding number, and re-determining the current elevator riding number according to the re-acquired target image; the historical elevator riding number is the current elevator riding number corresponding to each acquired target image after the elevator door is opened and before the current elevator riding number is determined.
2. The method of claim 1, wherein said sequentially performing person calibration and tracking on each of said target images comprises:
processing the target image according to a preset root model and each part model to obtain target response data;
and calibrating and tracking the person of the target image according to the target response data.
3. The method according to claim 2, wherein the processing the target image according to the preset root model and each component model to obtain target response data includes:
extracting features of the target image to obtain initial feature data;
processing the initial characteristic data according to the root model and each component model respectively to obtain initial response data;
and generating the target response data according to each initial response data.
4. The method of claim 2, wherein said performing person calibration and tracking on said target image based on said target response data comprises:
processing the target response data based on a classification model to obtain a classification result;
and calibrating the newly added person according to the target position information corresponding to the classification result, and tracking the calibrated person.
5. The method of claim 1, wherein the determining a target boarding number in the car from the target image comprises:
determining the current elevator riding number in the elevator car according to the target image;
and determining the target elevator riding number according to the elevator entering and exiting statistical result and the current elevator riding number.
6. A robot control device, which is disposed in a robot, comprising:
the system comprises a target image acquisition module, a control module and a control module, wherein the target image acquisition module is used for controlling the robot to move and rotate according to a preset path after an elevator door of an area where the robot is located is opened so as to acquire target images of at least two view field angles inside a car, and the preset path is a preset safe driving path of the robot in the elevator door setting area; the acquisition of the target image is carried out by an image acquisition device arranged on the robot;
the target elevator riding number determining module is used for sequentially carrying out person calibration and tracking on each target image and determining the target elevator riding number in the elevator car according to the calibration and tracking results;
the elevator taking control module is used for carrying out elevator taking control on the robot according to the target elevator taking number;
wherein, the target image acquisition module includes:
the current elevator riding number determining unit is used for determining the current elevator riding number in the car according to the calibration and tracking results;
the target elevator riding number determining unit is used for determining the target elevator riding number in the car according to the current elevator riding number and the historical elevator riding number;
wherein, the target boarding number of people determining unit includes:
a target boarding number determination subunit, configured to take the current boarding number as the target boarding number if the current boarding number and the historical boarding number tend to be stable; otherwise, taking the current elevator riding number as the historical elevator riding number, and re-determining the current elevator riding number according to the re-acquired target image; the historical elevator riding number is the current elevator riding number corresponding to each acquired target image after the elevator door is opened and before the current elevator riding number is determined.
7. A robot, comprising:
one or more processors;
a memory for storing one or more programs;
when executed by the one or more processors, causes the one or more processors to implement a robot control method as claimed in any one of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a robot control method according to any one of claims 1-5.
CN202111014129.4A 2021-08-31 2021-08-31 Robot control method, device, robot and storage medium Active CN113715019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111014129.4A CN113715019B (en) 2021-08-31 2021-08-31 Robot control method, device, robot and storage medium


Publications (2)

Publication Number Publication Date
CN113715019A CN113715019A (en) 2021-11-30
CN113715019B true CN113715019B (en) 2023-12-29

Family

ID=78679881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111014129.4A Active CN113715019B (en) 2021-08-31 2021-08-31 Robot control method, device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN113715019B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103287939A (en) * 2012-02-24 2013-09-11 东芝电梯株式会社 Apparatus for measuring number of people in elevator, elevator having the apparatus, and elevator system including a plurality of elevators with the apparatus
CN108154130A (en) * 2017-12-29 2018-06-12 深圳市神州云海智能科技有限公司 A kind of detection method of target image, device and storage medium, robot
CN108932496A (en) * 2018-07-03 2018-12-04 北京佳格天地科技有限公司 The quantity statistics method and device of object in region
CN113283328A (en) * 2021-05-19 2021-08-20 上海擎朗智能科技有限公司 Control method and device for moving equipment to elevator and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112693980A (en) * 2019-10-23 2021-04-23 奥的斯电梯公司 Robot elevator taking control method, system, elevator, robot system and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant