CN112731919B - Crowd density estimation-based robot guiding method and system - Google Patents
- Publication number
- CN112731919B CN112731919B CN202011381811.2A CN202011381811A CN112731919B CN 112731919 B CN112731919 B CN 112731919B CN 202011381811 A CN202011381811 A CN 202011381811A CN 112731919 B CN112731919 B CN 112731919B
- Authority
- CN
- China
- Prior art keywords
- information
- robot
- thermodynamic diagram
- central server
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0225—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Abstract
The invention discloses a crowd density estimation-based robot guiding method and system. The method comprises the following steps: a plurality of cameras respectively acquire video stream information and send it to a central server; the robot acquires input destination information, calculates robot position information and sends both to the central server; the central server inputs the video stream information into a crowd density estimation network model and outputs a two-dimensional graph group comprising a plurality of thermodynamic diagrams based on crowd density estimation, processes each thermodynamic diagram in the group with PCA dimension reduction to obtain the density information of the area where each thermodynamic diagram is located, obtains all feasible paths from this information, traverses all feasible paths with a greedy algorithm according to the density information to obtain an optimal guiding path, and sends the guiding path to the robot. The method ensures that pedestrians receive a good guiding experience in densely crowded indoor public scenes.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a crowd density estimation-based robot guiding method and system.
Background
The demand for service robots in the service and medical industries is rising year by year; such robots carry enormous application value in liberating and developing productivity and in meeting people's upgraded consumption demands. With the continuous development of artificial intelligence, service robots have made great strides in applying AI technology and have become more intelligent through deep learning, machine vision, semantic analysis and the like.
However, for intelligent guiding robots, research remains focused on human-machine interaction and intelligent obstacle avoidance. A guiding robot typically operates in a public indoor place, where pedestrian flow is uncertain; current research concentrates on interaction with the guided person and neglects the influence of pedestrian flow on the guiding experience.
Disclosure of Invention
The invention provides a crowd density estimation-based robot guiding method and system, which are used for solving one or more technical problems in the prior art and at least providing a beneficial selection or creation condition.
In a first aspect, an embodiment of the present invention provides a method for guiding a robot based on crowd density estimation, the method including:
a plurality of cameras in different areas respectively acquire video stream information and send the acquired video stream information to a central server;
the robot acquires input destination information, calculates robot position information and sends the robot position information and the destination information to a central server;
the central server inputs a plurality of video stream information into a crowd density estimation network model respectively, outputs a two-dimensional graph group, wherein the two-dimensional graph group comprises a plurality of thermodynamic diagrams based on crowd density estimation, each thermodynamic diagram corresponds to each video stream information, and each thermodynamic diagram of the two-dimensional graph group is processed by utilizing PCA dimensionality reduction to obtain density information of an area where each thermodynamic diagram is located;
the central server acquires all feasible paths according to the robot position information, the destination information, the pre-stored map information and the two-dimensional map group;
the central server traverses and solves all feasible paths by using a greedy algorithm according to the density information to obtain an optimal guide path, and sends the guide path to the robot;
the robot controls the movement of the robot according to the received guide path.
Further, the method further comprises: during the movement of the robot, the robot acquires its current pose in real time, and controls its movement and obstacle avoidance by utilizing a SLAM navigation algorithm.
Further, robot position information is calculated using a particle filter algorithm.
Further, the step of reducing the dimension of each thermodynamic diagram of the two-dimensional diagram set by using PCA to obtain the density information of the region where each thermodynamic diagram is located includes:
converting each thermodynamic diagram into one-dimensional data by adopting PCA dimension reduction, wherein the one-dimensional data comprises thermodynamic diagram information;
multiplying a preset one-dimensional array with the one-dimensional data to obtain multiplied one-dimensional data, wherein the data in the preset one-dimensional array represents the weight of the data in the one-dimensional data;
and calculating the average density of the area as the density information by summing the squares of the multiplied one-dimensional data and taking the square root.
Further, the central server performing traversal solving on all feasible paths by using a greedy algorithm according to the density information, and obtaining an optimal guiding path comprises:
according to the density information corresponding to each thermodynamic diagram, calculating the cost time passing through the area where each thermodynamic diagram is located:
c_i = log(h_i^2 + σ_i^2) * p_i;
wherein c_i is the cost time of passing through the area where the i-th thermodynamic diagram is located, h_i represents the density information corresponding to the i-th thermodynamic diagram, σ_i is a correction variable adjusted in advance, p_i is a preset fixed value, and 1 ≤ i ≤ N, where N is the number of thermodynamic diagrams;
calculating the cost time from the region where the ith thermodynamic diagram is located to the region where the jth thermodynamic diagram is located:
D[i,j] = (c_i + c_j)/2, i ≠ j;
D[i,i] = 0;
wherein D[i,j] represents the cost time from the area where the i-th thermodynamic diagram is located to the area where the j-th thermodynamic diagram is located, c_j is the cost time of passing through the area where the j-th thermodynamic diagram is located, and 1 ≤ j ≤ N;
and obtaining all the feasible paths, traversing all the feasible paths according to D[i,j] with a greedy algorithm, and taking the feasible path with the minimum cost time as the optimal guiding path.
In a second aspect, an embodiment of the present invention further provides a guidance robot system based on crowd density estimation, including: robot, multiple cameras, central server;
the cameras are positioned in different areas and are respectively used for acquiring video stream information and sending the acquired video stream information to the central server;
the robot is used for acquiring input destination information, calculating robot position information and sending the robot position information and the destination information to the central server; receiving a guide path sent by a central server, and controlling the movement of the robot according to the guide path;
the central server is used for inputting a plurality of video stream information into the crowd density estimation network model respectively, outputting a two-dimensional graph group, wherein the two-dimensional graph group comprises a plurality of thermodynamic diagrams based on crowd density estimation, each thermodynamic diagram corresponds to each video stream information, processing each thermodynamic diagram in the two-dimensional graph group by using PCA dimension reduction to obtain density information of an area where each thermodynamic diagram is located, acquiring all feasible paths according to the robot position information, the destination information, pre-stored map information and the two-dimensional graph group, performing traversal solving on all the feasible paths according to the density information by using a greedy algorithm, obtaining an optimal guiding path, and sending the guiding path to the robot.
Further, the robot includes: the device comprises a ZED camera, a voice input module, a touch screen, a processor, an industrial personal computer, an MCU controller and a laser radar sensor, wherein the processor is respectively connected with the ZED camera, the voice input module, the touch screen and the industrial personal computer, and the industrial personal computer is respectively connected with the processor, the MCU controller and the laser radar sensor;
the ZED camera is positioned at the uppermost part of the robot and is used for acquiring limb action images of a user;
the voice input module is used for acquiring voice instructions;
the touch screen is used for acquiring input of a user;
the processor is used for identifying a limb action image or a voice instruction of a user or input of the user to obtain a control instruction, wherein the voice instruction or the input of the user comprises destination information;
the industrial personal computer is used for processing the acquired information of the laser radar sensor and controlling the operation of the SLAM map building navigation frame according to the control instruction and the processed information of the laser radar sensor;
and the MCU controller is used for acquiring the processed information of the laser radar sensor, matching the processed information with pre-stored map information, and calculating the position of the robot by using a particle filtering algorithm.
Further, the robot further comprises a wheel type odometer, the wheel type odometer is connected with the MCU controller, and the MCU controller is further used for acquiring the current gesture of the robot in real time according to the processed information of the laser radar sensor and the processed information of the wheel type odometer, and controlling the movement and obstacle avoidance of the robot by utilizing an SLAM navigation algorithm.
Further, the robot is an omni-directional wheeled robot, comprising a three-wheeled omni-directional chassis.
Further, the model of the laser radar sensor is RPLIDAR A2.
The robot guiding method and system based on crowd density estimation provided by the embodiments of the invention have the following beneficial effects: the crowd density estimation network is combined with the guiding robot to form a guiding system that observes from a global view angle; a plurality of thermodynamic diagrams are obtained from the video streams of a plurality of cameras; processing each thermodynamic diagram yields the density information of a plurality of areas; an optimal guiding path is obtained from this density information, guaranteeing a good guiding experience for pedestrians in densely crowded indoor public scenes; and the robot avoids obstacles automatically by utilizing a SLAM navigation algorithm.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate and do not limit the invention.
Fig. 1 is a flowchart of a method for guiding a robot based on crowd density estimation according to an embodiment of the invention.
Fig. 2 is a schematic structural diagram of a guidance robot system based on crowd density estimation according to an embodiment of the invention.
Fig. 3 is a schematic structural diagram of a control system of a robot according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a robot according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functional block diagrams are depicted as block diagrams, and logical sequences are shown in the flowchart, in some cases, the steps shown or described may be performed in a different order than the block diagrams in the system. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Fig. 1 is a method for guiding a robot based on crowd density estimation, provided by an embodiment of the invention, including the following steps:
s11, a plurality of cameras in different areas respectively acquire video stream information, and the acquired video stream information is sent to a central server;
s12, the robot acquires input destination information, calculates the position information of the robot, and sends the position information of the robot and the destination information to a central server;
s13, the central server inputs a plurality of video stream information into a crowd density estimation network model respectively, a two-dimensional graph group is output, the two-dimensional graph group comprises a plurality of thermodynamic diagrams based on crowd density estimation, each thermodynamic diagram corresponds to each video stream information, and each thermodynamic diagram of the two-dimensional graph group is processed by utilizing PCA dimension reduction to obtain density information of an area where each thermodynamic diagram is located;
s14, the central server acquires all feasible paths according to the robot position information, the destination information, the pre-stored map information and the two-dimensional map group;
s15, the central server uses a greedy algorithm to carry out traversal solving on all feasible paths according to the density information to obtain an optimal guiding path, and the guiding path is sent to the robot;
s16, the robot controls the movement of the robot according to the received guiding path.
Specifically, cameras are respectively arranged in each indoor area, each camera is used for capturing video stream information of the area, and the obtained video stream information is sent to a central server.
The central server processes the received video stream information and sends the resulting guide path to the robot, which avoids heavy data processing on the robot and reduces its computational load.
The crowd density estimation network model adopts the model structure of the paper "Residual Regression with Semantic Prior for Crowd Counting", Jia Wan et al., IEEE, 2019, which utilizes a residual regression algorithm.
The crowd density estimation-based robot guiding method further comprises the following steps: during the movement of the robot, the robot acquires its current pose in real time and controls its movement and obstacle avoidance by utilizing a SLAM navigation algorithm. The current pose of the robot comprises its current position and orientation; with these and the guiding path, the robot can avoid obstacles while moving. Controlling movement and obstacle avoidance with a SLAM navigation algorithm is a common technical means in the field and is not described in detail here.
Further, in step S12, the calculation of the robot position information is: and calculating the position information of the robot by using a particle filtering algorithm. The method comprises the following steps:
s21, randomly generating a total number of particle groups K on a map, and storing particle information by using a matrix;
s22, controlling the robot to move according to a motion equation, and measuring the position of the robot after reaching the next position to obtain the position of the robot;
s23, randomly generated particles move along with the movement of the robot, all the particles are brought into a motion equation to obtain the next position of the particles, meanwhile, the geometric distance between the position of each particle and the position of the robot obtained through measurement is calculated, a weight is added to each particle according to the difference of the distances, wherein the closer the distance is, the greater the weight is, and after the weights of all the particles are obtained, the normalization is carried out;
s24, resampling the particle swarm, resampling M particles to form a new particle swarm, wherein M is smaller than K, the new particle swarm retains particles with large weight, the particles with large weight are repeatedly pumped, and the particles with small weight are discarded;
s25, repeating the steps S23-S24 until the particles are converged, and calculating the position information of the robot through weighting average value of the converged particles.
Further, in step S13, processing each thermodynamic diagram of the two-dimensional diagram set by using PCA dimension reduction to obtain density information of an area where each thermodynamic diagram is located includes:
s31, adopting PCA to reduce dimension, and converting each thermodynamic diagram into one-dimensional data, wherein the one-dimensional data comprises thermodynamic diagram information;
s32, multiplying a preset one-dimensional array with the one-dimensional data to obtain multiplied one-dimensional data, wherein the data in the preset one-dimensional array represents the weight of the data in the one-dimensional data;
and S33, summing the squares of the multiplied one-dimensional data and taking the square root to obtain the average density of the area as the density information.
Each thermodynamic diagram comprises a length, a width and an RGB color gamut, the thermodynamic diagram is converted into one-dimensional data, the thermodynamic diagram represents crowd density information of an area where the thermodynamic diagram is located, and therefore the one-dimensional data comprises the thermodynamic diagram information. Each thermodynamic diagram comprises crowd density information of a plurality of sub-areas, the plurality of sub-areas form an area where the thermodynamic diagram is located, and density information weights of different sub-areas are different, so that a preset one-dimensional array is multiplied with the one-dimensional data to obtain multiplied one-dimensional data, wherein the data in the preset one-dimensional array represents the weight of the data in the one-dimensional data.
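Steps S31-S33 can be sketched as follows, under the assumption that the PCA reduction projects each thermodynamic diagram onto its first principal component; the patent does not fix the exact reduction, so this projection choice is illustrative:

```python
import numpy as np

def region_density(heatmap, weights):
    """Steps S31-S33 for one thermodynamic diagram.

    `heatmap` is an (H, W) crowd-density map and `weights` a preset 1-D
    array of H per-component weights.
    """
    # S31: PCA dimension reduction of the 2-D heatmap to 1-D data;
    # project onto the first principal axis (first right-singular vector)
    centered = heatmap - heatmap.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    one_d = heatmap @ vt[0]  # one value per row of the heatmap
    # S32: multiply by the preset one-dimensional weight array
    weighted = weights * one_d
    # S33: sum of squares, then square root, as the area's average density
    return float(np.sqrt(np.sum(weighted**2)))
```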
Further, according to the density information, the central server in step S15 performs traversal solution on all feasible paths by using a greedy algorithm, and the obtaining of the optimal guiding path includes:
s41, calculating the cost time passing through the area where each thermodynamic diagram is located according to the density information corresponding to each thermodynamic diagram:
c_i = log(h_i^2 + σ_i^2) * p_i;
wherein c_i is the cost time of passing through the area where the i-th thermodynamic diagram is located, h_i represents the density information corresponding to the i-th thermodynamic diagram, σ_i is a correction variable adjusted in advance, p_i is a preset fixed value, and 1 ≤ i ≤ N, where N is the number of thermodynamic diagrams;
s42, calculating the cost time from the region where the ith thermodynamic diagram is located to the region where the jth thermodynamic diagram is located:
D[i,j] = (c_i + c_j)/2, i ≠ j;
D[i,i] = 0;
wherein D[i,j] represents the cost time from the area where the i-th thermodynamic diagram is located to the area where the j-th thermodynamic diagram is located, c_j is the cost time of passing through the area where the j-th thermodynamic diagram is located, and 1 ≤ j ≤ N; the path from the area of the i-th thermodynamic diagram to the area of the j-th thermodynamic diagram starts from the center point of the i-th area and ends at the center point of the j-th area;
S43, obtaining all feasible paths, traversing all of them according to D[i,j] with a greedy algorithm, and taking the feasible path with the minimum cost time as the optimal guiding path.
Because the area covered by each camera in the indoor scene is limited, all feasible paths can be determined. The greedy algorithm traverses all feasible paths; each feasible path may comprise several areas, the cost time of every path can be obtained from D[i,j], and the feasible path with the minimum cost time is taken as the optimal guiding path.
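Steps S41-S43 can be sketched as follows. The greedy traversal is simplified here to an exhaustive comparison of the pre-enumerated feasible paths, which the text permits since all feasible paths are known in advance:

```python
import numpy as np

def cost_times(h, sigma, p):
    """S41: per-area cost time c_i = log(h_i^2 + sigma_i^2) * p_i."""
    return np.log(h**2 + sigma**2) * p

def transition_matrix(c):
    """S42: D[i, j] = (c_i + c_j) / 2 for i != j, and D[i, i] = 0."""
    D = (c[:, None] + c[None, :]) / 2.0
    np.fill_diagonal(D, 0.0)
    return D

def best_path(feasible_paths, D):
    """S43: traverse every feasible path (a list of area indices) and keep
    the one with minimum total cost time."""
    def total(path):
        return sum(D[a, b] for a, b in zip(path, path[1:]))
    return min(feasible_paths, key=total)
```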
Fig. 2 is a schematic structural diagram of a guidance robot system based on crowd density estimation according to an embodiment of the present invention. The system comprises: a robot, a plurality of cameras and a central server. The 3 cameras shown in fig. 2 are only illustrative; the number of cameras may be set according to the size of the actual scene.
The cameras are positioned in different areas and are respectively used for acquiring video stream information and sending the acquired video stream information to the central server;
the robot is used for acquiring input destination information, calculating robot position information and sending the robot position information and the destination information to the central server; receiving a guide path sent by a central server, and controlling the movement of the robot according to the guide path;
the central server is used for inputting a plurality of video stream information into the crowd density estimation network model respectively, outputting a two-dimensional graph group, wherein the two-dimensional graph group comprises a plurality of thermodynamic diagrams based on crowd density estimation, each thermodynamic diagram corresponds to each video stream information, processing each thermodynamic diagram in the two-dimensional graph group by using PCA dimension reduction to obtain density information of an area where each thermodynamic diagram is located, acquiring all feasible paths according to the robot position information, the destination information, pre-stored map information and the two-dimensional graph group, performing traversal solving on all the feasible paths according to the density information by using a greedy algorithm, obtaining an optimal guiding path, and sending the guiding path to the robot.
Fig. 3 is a control system of a robot according to an embodiment of the present invention, and fig. 4 is a block diagram of a robot according to an embodiment of the present invention, where, as shown in fig. 3 and fig. 4, the control system of a robot includes: the intelligent electronic device comprises a voice input module 11, a ZED camera 12, a touch screen 13, a processor 14, an industrial personal computer 15, an MCU controller 16 and a laser radar sensor 17, wherein the processor 14 is respectively connected with the ZED camera 12, the voice input module 11, the touch screen 13 and the industrial personal computer 15, and the industrial personal computer 15 is respectively connected with the processor 14, the MCU controller 16 and the laser radar sensor 17;
the ZED camera 12 is located at the uppermost part of the robot and is used for acquiring limb action images of a user;
a voice input module 11 for acquiring a voice instruction;
a touch screen 13 for acquiring an input of a user;
a processor 14 for identifying a limb movement image or a voice command of a user or an input of the user to obtain a control command, wherein the voice command or the input of the user comprises destination information;
the industrial personal computer 15 is used for processing the acquired information of the laser radar sensor 17 and controlling the operation of the SLAM map building navigation frame according to the control instruction and the processed information of the laser radar sensor;
the MCU controller 16 is configured to obtain the processed information of the lidar sensor, match the processed information with pre-stored map information, and calculate the position of the robot using a particle filtering algorithm.
Further, the control system of the robot further includes a wheel type odometer 18, the wheel type odometer 18 is connected with the MCU controller 16, and the MCU controller 16 is further configured to obtain the current pose of the robot in real time according to the processed information of the lidar sensor and the processed information of the wheel type odometer, and control the movement and obstacle avoidance of the robot by using a SLAM navigation algorithm.
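The particle-filter localization and real-time pose tracking described above can be illustrated with a minimal Monte Carlo localization sketch. The motion-noise scale and the `toy_score` likelihood, which stands in for the actual lidar-to-map matching, are assumptions for illustration only:

```python
import numpy as np

# Minimal sketch of particle-filter (Monte Carlo) localization:
# predict particles with odometry, weight them by a map-matching
# likelihood, resample, and report the mean pose. The likelihood
# below is a stand-in assumption, not the patent's matcher.

rng = np.random.default_rng(1)

def mcl_step(particles, weights, odom, scan_match_score):
    """One predict-weight-resample cycle over (x, y, theta) particles."""
    # Predict: apply the odometry increment (dx, dy, dtheta) plus noise.
    particles = particles + odom + rng.normal(scale=0.02, size=particles.shape)
    # Weight: score each particle against the pre-stored map.
    weights = np.array([scan_match_score(p) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # Pose estimate: mean of the resampled particle set.
    return particles, weights, particles.mean(axis=0)

# Toy run: likelihood peaked at the origin stands in for lidar matching.
def toy_score(p):
    return np.exp(-np.sum(p[:2] ** 2))

n = 500
parts = rng.normal(scale=0.5, size=(n, 3))
w = np.full(n, 1.0 / n)
for _ in range(5):
    parts, w, pose = mcl_step(parts, w, np.zeros(3), toy_score)
```

In the robot described here, the odometry increment would come from the wheel odometer 18 and the likelihood from matching processed lidar scans against the pre-stored map.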
Further, the robot is an omni-directional wheeled robot, comprising a three-wheeled omni-directional chassis.
Further, the laser radar sensor 17 is of the type RPLIDAR A2.
According to the crowd density estimation-based robot guiding method and system, a crowd density estimation network is combined with a guiding robot: a plurality of thermodynamic diagrams are obtained from the video stream information acquired by a plurality of cameras, density information of a plurality of areas is obtained by processing each thermodynamic diagram, and an optimal guiding path is obtained according to the density information, which guarantees a good guiding experience for pedestrians in crowded indoor public scenes. The SLAM navigation algorithm is used for obstacle avoidance; by combining the robot's SLAM navigation algorithm with the crowd density estimation algorithm to optimize the indoor guiding route, the guiding robot can cope with high-traffic public indoor scenes while automatically avoiding pedestrians and obstacles.
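Under the cost model defined in the claims (c_i = log(h_i^2 + σ_i^2) * p_i and D[i,j] = (c_i + c_j)/2), route selection can be sketched as below. The density, σ, and p values are illustrative, and the claims' greedy traversal is approximated here by an exhaustive minimum over the enumerated feasible paths:

```python
import math

# Sketch of the path-cost model: per-region cost
# c_i = log(h_i^2 + sigma_i^2) * p_i, pairwise cost
# D[i][j] = (c_i + c_j) / 2, and selection of the feasible path with
# minimum total cost. All numeric values below are assumptions.

def region_costs(densities, sigmas, ps):
    return [math.log(h * h + s * s) * p
            for h, s, p in zip(densities, sigmas, ps)]

def cost_matrix(c):
    n = len(c)
    return [[0.0 if i == j else (c[i] + c[j]) / 2.0
             for j in range(n)] for i in range(n)]

def best_path(feasible_paths, D):
    """Keep the feasible path with the minimum summed cost time."""
    def path_cost(path):
        return sum(D[a][b] for a, b in zip(path, path[1:]))
    return min(feasible_paths, key=path_cost)

# Regions 0..3; region 2 is densely crowded.
densities = [1.0, 1.2, 9.0, 1.1]
sigmas = [0.1] * 4    # pre-adjusted correction variables (illustrative)
ps = [1.0] * 4        # preset fixed values (illustrative)
c = region_costs(densities, sigmas, ps)
D = cost_matrix(c)

# Two feasible routes from region 0 to region 3.
paths = [[0, 2, 3], [0, 1, 3]]
route = best_path(paths, D)   # expected to avoid the crowded region 2
```

Because the crowded region 2 has a much larger c value, the detour through region 1 has the lower total cost time and is returned as the guiding path.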
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.
Claims (8)
1. A crowd density estimation-based robot guiding method, the method comprising:
a plurality of cameras in different areas respectively acquire video stream information and send the acquired video stream information to a central server;
the robot acquires input destination information, calculates robot position information and sends the robot position information and the destination information to a central server;
the central server inputs a plurality of video stream information into a crowd density estimation network model respectively, outputs a two-dimensional graph group, wherein the two-dimensional graph group comprises a plurality of thermodynamic diagrams based on crowd density estimation, each thermodynamic diagram corresponds to each video stream information, and each thermodynamic diagram of the two-dimensional graph group is processed by utilizing PCA dimensionality reduction to obtain density information of an area where each thermodynamic diagram is located;
the central server acquires all feasible paths according to the robot position information, the destination information, the pre-stored map information and the two-dimensional map group;
the central server traverses and solves all feasible paths by using a greedy algorithm according to the density information to obtain an optimal guide path, and sends the guide path to the robot;
the robot controls the movement of the robot according to the received guiding path;
processing each thermodynamic diagram of the two-dimensional diagram set by using PCA dimension reduction to obtain density information of an area where each thermodynamic diagram is located comprises:
converting each thermodynamic diagram into one-dimensional data by adopting PCA dimension reduction, wherein the one-dimensional data comprises thermodynamic diagram information;
multiplying a preset one-dimensional array with the one-dimensional data to obtain multiplied one-dimensional data, wherein the data in the preset one-dimensional array represent the weights of the data in the one-dimensional data;
and calculating the average density of the area as the density information by taking the square root of the sum of the squares of the multiplied one-dimensional data;
the central server traverses and solves all feasible paths by using a greedy algorithm according to the density information, and the obtaining of the optimal guide path comprises the following steps:
according to the density information corresponding to each thermodynamic diagram, calculating the cost time passing through the area where each thermodynamic diagram is located:
c_i = log(h_i^2 + σ_i^2) * p_i;
wherein c_i is the cost time of passing through the area where the ith thermodynamic diagram is located, h_i represents the density information corresponding to the ith thermodynamic diagram, σ_i is a correction variable adjusted in advance, p_i is a preset fixed value, and 1 ≤ i ≤ N, where N is the number of thermodynamic diagrams;
calculating the cost time from the region where the ith thermodynamic diagram is located to the region where the jth thermodynamic diagram is located:
D[i,j] = (c_i + c_j)/2, i ≠ j;
D[i,i] = 0;
wherein D[i,j] represents the cost time from the area where the ith thermodynamic diagram is located to the area where the jth thermodynamic diagram is located, c_j is the cost time of passing through the area where the jth thermodynamic diagram is located, and 1 ≤ j ≤ N;
and obtaining all the feasible paths, carrying out traversal solving on all the feasible paths according to D[i,j] and a greedy algorithm, and taking the feasible path with the minimum cost time as the optimal guiding path.
2. The crowd density estimation-based robot guiding method of claim 1, further comprising: in the moving process of the robot, the robot acquires its current pose in real time, and the movement and obstacle avoidance of the robot are controlled by using the SLAM navigation algorithm.
3. The crowd density estimation-based robot guiding method of claim 1, wherein the robot position information is calculated using a particle filter algorithm.
4. A crowd density estimation-based robot guiding system, comprising: a robot, a plurality of cameras, and a central server;
the cameras are positioned in different areas and are respectively used for acquiring video stream information and sending the acquired video stream information to the central server;
the robot is used for acquiring input destination information, calculating robot position information and sending the robot position information and the destination information to the central server; receiving a guide path sent by a central server, and controlling the movement of the robot according to the guide path;
the central server is used for inputting the plurality of video stream information into the crowd density estimation network model respectively and outputting a two-dimensional graph group, wherein the two-dimensional graph group comprises a plurality of thermodynamic diagrams based on crowd density estimation and each thermodynamic diagram corresponds to one piece of video stream information; processing each thermodynamic diagram in the two-dimensional graph group by using PCA dimension reduction to obtain density information of the area where each thermodynamic diagram is located; acquiring all feasible paths according to the robot position information, the destination information, the pre-stored map information and the two-dimensional graph group; performing traversal solving on all the feasible paths according to the density information by using a greedy algorithm to obtain an optimal guiding path; and transmitting the guiding path to the robot;
wherein, processing each thermodynamic diagram in the two-dimensional diagram group by using PCA dimension reduction to obtain density information of a region where each thermodynamic diagram is located comprises:
converting each thermodynamic diagram into one-dimensional data by adopting PCA dimension reduction, wherein the one-dimensional data comprises thermodynamic diagram information;
multiplying a preset one-dimensional array with the one-dimensional data to obtain multiplied one-dimensional data, wherein the data in the preset one-dimensional array represent the weights of the data in the one-dimensional data;
and calculating the average density of the area as the density information by taking the square root of the sum of the squares of the multiplied one-dimensional data;
according to the density information, traversing and solving all feasible paths by using a greedy algorithm, wherein obtaining the optimal guide path comprises the following steps:
according to the density information corresponding to each thermodynamic diagram, calculating the cost time passing through the area where each thermodynamic diagram is located:
c_i = log(h_i^2 + σ_i^2) * p_i;
wherein c_i is the cost time of passing through the area where the ith thermodynamic diagram is located, h_i represents the density information corresponding to the ith thermodynamic diagram, σ_i is a correction variable adjusted in advance, p_i is a preset fixed value, and 1 ≤ i ≤ N, where N is the number of thermodynamic diagrams;
calculating the cost time from the region where the ith thermodynamic diagram is located to the region where the jth thermodynamic diagram is located:
D[i,j] = (c_i + c_j)/2, i ≠ j;
D[i,i] = 0;
wherein D[i,j] represents the cost time from the area where the ith thermodynamic diagram is located to the area where the jth thermodynamic diagram is located, c_j is the cost time of passing through the area where the jth thermodynamic diagram is located, and 1 ≤ j ≤ N;
and obtaining all the feasible paths, carrying out traversal solving on all the feasible paths according to D[i,j] and a greedy algorithm, and taking the feasible path with the minimum cost time as the optimal guiding path.
5. The crowd density estimation-based robot guiding system of claim 4, wherein the robot comprises: a ZED camera, a voice input module, a touch screen, a processor, an industrial personal computer, an MCU controller and a laser radar sensor; the processor is connected with the ZED camera, the voice input module, the touch screen and the industrial personal computer respectively, and the industrial personal computer is connected with the processor, the MCU controller and the laser radar sensor respectively;
the ZED camera is positioned at the uppermost part of the robot and is used for acquiring limb action images of a user;
the voice input module is used for acquiring voice instructions;
the touch screen is used for acquiring input of a user;
the processor is used for identifying a limb action image or a voice instruction of a user or input of the user to obtain a control instruction, wherein the voice instruction or the input of the user comprises destination information;
the industrial personal computer is used for processing the acquired information of the laser radar sensor and controlling the operation of the SLAM map building navigation frame according to the control instruction and the processed information of the laser radar sensor;
and the MCU controller is used for acquiring the processed information of the laser radar sensor, matching the processed information with pre-stored map information, and calculating the position of the robot by using a particle filtering algorithm.
6. The crowd density estimation-based robot guiding system of claim 5, wherein the robot further comprises a wheel odometer connected to the MCU controller, and the MCU controller is further configured to acquire the current pose of the robot in real time based on the processed information of the lidar sensor and the information of the wheel odometer, and to control the movement and obstacle avoidance of the robot using a SLAM navigation algorithm.
7. The crowd density estimation-based robot guiding system of claim 4, wherein the robot is an omni-directional wheeled robot comprising a three-wheeled omni-directional chassis.
8. The crowd density estimation-based robot guiding system of claim 5, wherein the laser radar sensor is of the type RPLIDAR A2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011381811.2A CN112731919B (en) | 2020-12-01 | 2020-12-01 | Crowd density estimation-based robot guiding method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112731919A (en) | 2021-04-30 |
CN112731919B (en) | 2023-09-01 |
Family
ID=75597120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011381811.2A Active CN112731919B (en) | 2020-12-01 | 2020-12-01 | Crowd density estimation-based robot guiding method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112731919B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114662760A (en) * | 2022-03-23 | 2022-06-24 | 青岛海信网络科技股份有限公司 | Robot-based distribution method and robot |
CN118192613B (en) * | 2024-05-13 | 2024-08-06 | 天津市品茗科技有限公司 | AI robot dynamic obstacle avoidance method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014190725A (en) * | 2013-03-26 | 2014-10-06 | Zenrin Datacom Co Ltd | Route guidance system, route guidance method and computer program |
CN104298239A (en) * | 2014-09-29 | 2015-01-21 | 湖南大学 | Enhanced map learning path planning method for indoor mobile robot |
CN106679646A (en) * | 2016-11-10 | 2017-05-17 | 北斗羲和科技发展(北京)有限公司 | Indoor emergency navigation method, device and system based on TC-OFDM positioning technology |
CN107403539A (en) * | 2017-08-07 | 2017-11-28 | 深圳市盛路物联通讯技术有限公司 | A kind of public place environment method for early warning and service end |
CN111351489A (en) * | 2020-04-29 | 2020-06-30 | 金陵科技学院 | Nursing robot path planning method based on CNN and improved A-Star algorithm |
GB202009762D0 (en) * | 2020-06-26 | 2020-08-12 | Univ Oxford Innovation Ltd | Method and system for generating a visual representation |
CN111563632A (en) * | 2020-05-14 | 2020-08-21 | 南京市斯塔万格智能科技有限公司 | Design method of efficient shopping guide robot based on SVM (support vector machine) and A-Star algorithm |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9863778B2 (en) * | 2015-12-18 | 2018-01-09 | Intel Corporation | Systems and methods to direct foot traffic |
KR102453858B1 (en) * | 2015-12-23 | 2022-10-14 | 한화테크윈 주식회사 | Apparatus and method for image processing |
Non-Patent Citations (1)
Title |
---|
Research on Human-Robot Integrated Navigation Technology for Mobile Robots Based on Multi-Layer Cost Maps; Wang Yu; China Masters' Theses Full-text Database, Information Science and Technology, Issue 01 (2019); pp. I140-1548 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363058B (en) | Three-dimensional object localization for obstacle avoidance using one-shot convolutional neural networks | |
Shah et al. | ViNT: A foundation model for visual navigation | |
Asadi et al. | Vision-based integrated mobile robotic system for real-time applications in construction | |
US12055940B2 (en) | Path planning for autonomous moving devices | |
Boudjit et al. | Human detection based on deep learning YOLO-v2 for real-time UAV applications | |
US10497145B2 (en) | System and method for real-time large image homography processing | |
US11397871B2 (en) | Artificial intelligence moving agent | |
CN112731919B (en) | Crowd density estimation-based robot guiding method and system | |
CN111708042B (en) | Robot method and system for predicting and following pedestrian track | |
CN110850877A (en) | Automatic driving trolley training method based on virtual environment and deep double Q network | |
Bhandari et al. | Object detection and recognition: using deep learning to assist the visually impaired | |
Asadi et al. | Building an integrated mobile robotic system for real-time applications in construction | |
Loukkal et al. | Driving among flatmobiles: Bird-eye-view occupancy grids from a monocular camera for holistic trajectory planning | |
CN115439507A (en) | Three-dimensional video target tracking method based on multi-level mutual enhancement and relevant pyramid | |
Singh et al. | Multi-scale graph-transformer network for trajectory prediction of the autonomous vehicles | |
Zhang et al. | Research on the Application of Computer Vision Based on Deep Learning in Autonomous Driving Technology | |
Chen et al. | Design and Implementation of AMR Robot Based on RGBD, VSLAM and SLAM | |
US20230401748A1 (en) | Apparatus and methods to calibrate a stereo camera pair | |
CN111784842B (en) | Three-dimensional reconstruction method, device, equipment and readable storage medium | |
Nandkumar et al. | Simulation of Indoor Localization and Navigation of Turtlebot 3 using Real Time Object Detection | |
WO2020026798A1 (en) | Control device, control method, and program | |
Hong et al. | Real-time visual-based localization for mobile robot using structured-view deep learning | |
Pandey et al. | Light-weight object detection and decision making via approximate computing in resource-constrained mobile robots | |
Syntakas et al. | Object Detection and Navigation of a Mobile Robot by Fusing Laser and Camera Information | |
Hoshino et al. | Mobile Robot Motion Planning through Obstacle State Classifier |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||