CN116629106A - Quasi-digital twin method, system, equipment and medium for mobile robot operation scene - Google Patents


Publication number
CN116629106A
Authority
CN
China
Prior art date
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN202310517881.3A
Other languages
Chinese (zh)
Inventor
张建政
李洪涛
李方保
李亮华
韦鲲
Current Assignee (the listed assignee may be inaccurate)
Shanghai Sazhi Intelligent Technology Co ltd
Original Assignee
Shanghai Sazhi Intelligent Technology Co ltd
Application filed by Shanghai Sazhi Intelligent Technology Co ltd filed Critical Shanghai Sazhi Intelligent Technology Co ltd
Priority claimed from CN202310517881.3A
Publication of CN116629106A


Classifications

    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20: Instruments for performing navigational calculations
    • G06F16/29: Geographical information databases
    • G06F30/27: Design optimisation, verification or simulation using machine learning
    • G06T17/05: Geographic models (three-dimensional modelling)
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/12: Details of image acquisition arrangements; constructional details thereof
    • G06F2119/18: Manufacturability analysis or optimisation for manufacturability
    • Y02D30/70: Reducing energy consumption in wireless communication networks


Abstract

The invention provides a quasi-digital twin method, system, equipment and medium for a mobile robot operation scene. The method comprises the following steps: acquiring and resolving the onboard positioning data of each mobile robot; acquiring and resolving the onboard ranging data of each mobile robot; acquiring the strength of the wireless signals received by each mobile robot and computing auxiliary positioning data from it; acquiring the image signals of cameras in the scene and computing image positioning data from them; fusing the resolved onboard positioning and ranging data with the auxiliary positioning data and the image positioning data to obtain, in a unified coordinate system, unique target positioning data and target ranging data for each robot; and establishing a corresponding quasi-digital twin model based on the target positioning and ranging data and the scene map. The invention can therefore provide consistent positioning and ranging data to an upper-level robot management system, and the twin model effectively gives each robot a ranging system in addition to its onboard ranging module, enhancing the safety of robot movement.

Description

Quasi-digital twin method, system, equipment and medium for mobile robot operation scene
Technical Field
The invention relates to the technical field of mobile robots, and in particular to a quasi-digital twin method, system, equipment and medium for a mobile robot operation scene.
Background
In intelligent manufacturing scenes, more and more mobile robots operate together in the same scene, such as mobile composite robots, patrol robots, and logistics transportation and distribution robots. These robots coexist with people, materials and other equipment in the production scene, and may come from multiple suppliers. How to allow the robots to operate in this complex scene in an orderly and safe manner, without potential safety risks such as collisions between robots and other equipment or moving objects, is a critical issue for an overall robot management system or platform, and bears directly on an enterprise's production safety.
At the core of this problem is accurate positioning and ranging: locating each robot in the scene and accurately measuring the distances between the robot and surrounding objects. How to obtain, in a consistent way and in unified scene coordinates, the position information of robots and obstacles (including moving obstacles) and the distance data between them, so that this information can be supplied to an upper-level robot management platform as a data source and decision basis for scheduling and monitoring, is an important technical problem.
In addition, as intelligent manufacturing scenes grow more complex and the variety of on-site production equipment increases, temporary moving obstacles entering the operation site (such as transport equipment or people, i.e. objects foreign to the planned scene) bring potential risks to mobile robot operation; when a robot encounters such an object it can only take evasive action. How to use the detection and sensing devices or facilities already distributed in the scene, such as security monitoring cameras and WIFI wireless devices, to achieve full perception of the scene factors, identify position coordinates, and provide them as a data source for the decisions of an upper-level robot management system (group-control scheduling platform), is also a technical problem.
Disclosure of Invention
To solve the problem of obtaining consistent basic positioning and ranging data for an upper-level robot management system when multiple robot groups and multi-source ranging and positioning devices operate in an intelligent manufacturing scene, and the problem of real-time positioning and ranging of mobile robots through the sensing devices already deployed on site, the invention provides a quasi-digital twin method, system, equipment and medium for a mobile robot operation scene.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in a first aspect, the present invention provides a quasi-digital twin method for a mobile robot operation scene, including:
for each of a plurality of mobile robots in a target scene, acquiring the onboard positioning data obtained by the robot's onboard positioning module (such as a GPRS positioning module), and resolving the onboard positioning data;
acquiring the onboard ranging data obtained by each mobile robot's onboard ranging module (such as a laser radar or an ultrasonic sensor), and resolving the onboard ranging data;
acquiring the wireless signals received by each mobile robot from preset wireless communication modules, and resolving the strength of the wireless signals to obtain auxiliary positioning data;
acquiring the image signals output by image acquisition modules (such as security monitoring cameras) deployed in advance in the target scene, and obtaining image positioning data of the moving objects in the target scene based on the image signals;
performing fusion calculation on all the resolved onboard positioning data and onboard ranging data together with the auxiliary positioning data and the image positioning data, to obtain the target positioning data and target ranging data of each mobile robot in a unified coordinate system;
and establishing a quasi-digital twin model of the target scene based on the target positioning data and target ranging data of each mobile robot and a pre-generated scene map of the target scene.
The model is called a "quasi" digital twin model because it is not a conventional digital twin model in the strict sense: robots, obstacles and other entities are each represented as a single point in the model rather than as a mapped model of their physical structure, while the remaining features are the same as those of a digital twin model. The quasi-digital twin model exchanges data with the robot operation site in real time, so that the motion state of each robot in the model stays synchronized with its motion state on site.
Optionally, the wireless communication module includes a WIFI router, and the wireless signal is a WIFI signal.
Resolving the strength of the wireless signal then includes:
determining the distance between the mobile robot and the WIFI router according to the strength of the acquired WIFI signal and a pre-established correspondence between WIFI signal strength and distance, and obtaining the auxiliary positioning data of the mobile robot from the determined distance and the position information of the WIFI router.
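The pre-established strength-to-distance correspondence is commonly modelled with a log-distance path-loss curve, and the distances to three or more routers can then be combined into a position by trilateration. The sketch below illustrates one minimal way this could work; the reference RSSI at 1 m, the path-loss exponent and the router coordinates are illustrative assumptions, not values given by the patent.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10*n*log10(d)."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(routers, distances):
    """Position from three known router positions and the distances to them.

    Subtracting the first circle equation from the other two leaves a
    2x2 linear system in (x, y), solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = routers
    d1, d2, d3 = distances
    a11, a12 = 2.0 * (x1 - x2), 2.0 * (y1 - y2)
    b1 = d2**2 - d1**2 + x1**2 - x2**2 + y1**2 - y2**2
    a21, a22 = 2.0 * (x1 - x3), 2.0 * (y1 - y3)
    b2 = d3**2 - d1**2 + x1**2 - x3**2 + y1**2 - y3**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three routers at known scene coordinates; the robot's true position is
# recovered from the three (noise-free, for illustration) distances.
routers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.hypot(true_pos[0] - rx, true_pos[1] - ry) for rx, ry in routers]
est = trilaterate(routers, dists)
```

In practice the measured RSSI is noisy, so the distances fed into `trilaterate` would typically be filtered first; this is why the patent treats WIFI positioning as auxiliary data to be fused with onboard sources rather than used alone.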
Optionally, obtaining the image positioning data of a moving object in the target scene based on the image signals includes:
identifying the moving object in the image signals;
and determining the image positioning data of the moving object based on the image signals together with the actual position information and parameter information of the image acquisition modules.
Optionally, when a moving object is identified in several image signals, determining its image positioning data includes:
determining the distance between the moving object and the corresponding image acquisition modules from the two image signals in which the object's contour encloses the largest number of pixels, and determining the image positioning data of the moving object from that distance information together with the actual position information and parameter information of the corresponding image acquisition modules.
Optionally, the moving objects include moving obstacles, and the fusion calculation fuses the image positioning data of a moving obstacle with the onboard ranging data by the following steps:
judging whether any mobile robot has onboard ranging data corresponding to the image positioning data of the moving obstacle;
if not, taking the image positioning data of the moving obstacle as the target positioning data of the moving obstacle;
if so, judging whether the overlap ratio between the obstacle positioning data derived from the existing onboard ranging data and the image positioning data of the moving obstacle is higher than a preset overlap-ratio threshold;
if it is higher, performing fusion calculation on the existing onboard-ranging-derived data and the image positioning data of the moving obstacle, and taking the fusion result as the target positioning data of the moving obstacle;
if it is not higher, taking the obstacle positioning data derived from the existing onboard ranging data as the target positioning data of the moving obstacle;
and determining the target ranging data of the mobile robot from the target positioning data of the moving obstacle.
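The decision chain above can be expressed compactly. In the sketch below, the overlap threshold, the fusion weight and the plain weighted-average fusion formula are illustrative assumptions; the patent does not fix a particular fusion formula.

```python
def fuse_obstacle_position(image_pos, ranging_pos, overlap_ratio,
                           overlap_threshold=0.6, image_weight=0.4):
    """Target positioning data of a moving obstacle.

    image_pos:   (x, y) from the camera pipeline
    ranging_pos: (x, y) derived from onboard ranging data, or None if no
                 robot currently has ranging data for this obstacle
    """
    if ranging_pos is None:
        # No corresponding onboard ranging data: use the image data directly.
        return image_pos
    if overlap_ratio > overlap_threshold:
        # Consistent enough: fuse the two (here a simple weighted average).
        w = image_weight
        return (w * image_pos[0] + (1.0 - w) * ranging_pos[0],
                w * image_pos[1] + (1.0 - w) * ranging_pos[1])
    # Inconsistent: trust the onboard-ranging-derived position.
    return ranging_pos
```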
Optionally, the method further includes predicting whether a mobile robot may collide with a moving obstacle by the following steps:
obtaining the estimated motion trajectory and estimated motion speed of the moving obstacle from its historical target positioning data and the corresponding timestamps;
obtaining the intersection point of the planned motion trajectory of the mobile robot and the estimated motion trajectory of the moving obstacle;
calculating the time at which the moving obstacle reaches the intersection point, denoted the first time, from the obstacle's current target positioning data, estimated trajectory and estimated speed;
calculating the time at which the mobile robot reaches the intersection point, denoted the second time, from the robot's current target positioning data, planned trajectory and preset motion speed;
and predicting, from the first time and the second time, whether the mobile robot may collide with the moving obstacle.
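A minimal sketch of this prediction, assuming straight-line motion for both the obstacle (extrapolated from its last two historical positions) and the robot (moving along a planned segment at a preset speed), and treating arrival times closer than a safety margin as a possible collision. The margin value and the linear-motion assumption are illustrative, not prescribed by the patent.

```python
import math

def predict_collision(obstacle_history, robot_pos, robot_goal,
                      robot_speed, time_margin=2.0):
    """obstacle_history: [(t, x, y), ...]; the last two samples give the
    estimated velocity. Returns True if the obstacle (first time) and the
    robot (second time) reach the trajectory intersection within
    `time_margin` seconds of each other."""
    (ta, xa, ya), (tb, xb, yb) = obstacle_history[-2:]
    dt = tb - ta
    vx, vy = (xb - xa) / dt, (yb - ya) / dt        # estimated obstacle velocity
    rx, ry = robot_goal[0] - robot_pos[0], robot_goal[1] - robot_pos[1]
    # Solve robot_pos + a*(rx, ry) == (xb, yb) + t1*(vx, vy) for (a, t1).
    det = vx * ry - rx * vy
    if abs(det) < 1e-9:
        return False                                # parallel: no intersection point
    dx, dy = xb - robot_pos[0], yb - robot_pos[1]
    a = (vx * dy - vy * dx) / det                   # fraction along robot's segment
    t1 = (rx * dy - ry * dx) / det                  # first time: obstacle -> intersection
    t2 = a * math.hypot(rx, ry) / robot_speed       # second time: robot -> intersection
    if a < 0.0 or a > 1.0 or t1 < 0.0:
        return False                                # intersection is never actually reached
    return abs(t1 - t2) < time_margin
```

For example, an obstacle at (5, 4) moving down at 1 m/s crosses the path of a robot driving from (0, 0) to (10, 0) at 1 m/s; the obstacle reaches the crossing point after 4 s and the robot after 5 s, so with a 2 s margin a possible collision is flagged.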
Optionally, when a possible collision between a mobile robot and a moving obstacle is predicted, corresponding early-warning information is sent to the upper-level robot management system.
Optionally, when possible collisions between a mobile robot and several moving obstacles are predicted, the motion trajectory of that robot is re-planned according to the quasi-digital twin model.
In a second aspect, the present invention provides a quasi-digital twin system for a mobile robot operation scene, the system comprising:
an onboard positioning data resolving module, configured to acquire the onboard positioning data obtained by each mobile robot's onboard positioning module and resolve the onboard positioning data;
an onboard ranging data resolving module, configured to acquire the onboard ranging data obtained by each mobile robot's onboard ranging module and resolve the onboard ranging data;
an auxiliary positioning data resolving module, configured to acquire the wireless signals received by each mobile robot from preset wireless communication modules and resolve the strength of the wireless signals to obtain auxiliary positioning data;
an image positioning data resolving module, configured to acquire the image signals output by the image acquisition modules deployed in the target scene and obtain image positioning data of the moving objects in the target scene based on the image signals;
a fusion calculation module, configured to perform fusion calculation on all the resolved onboard positioning data and onboard ranging data together with the auxiliary positioning data and the image positioning data, to obtain the target positioning data and target ranging data of each mobile robot in a unified coordinate system;
and a twin model building module, configured to establish a quasi-digital twin model of the target scene based on the target positioning data and target ranging data of each mobile robot and a pre-generated scene map of the target scene.
In a third aspect, the invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the quasi-digital twin method described above.
In a fourth aspect, the invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the quasi-digital twin method described above.
By adopting the technical scheme, the invention has the following beneficial effects:
The method fuses auxiliary positioning data such as WIFI-based data with the onboard positioning data, the onboard ranging data and the image positioning data to obtain the target positioning data and target ranging data of each mobile robot in a unified coordinate system, and then establishes a quasi-digital twin model of the target scene from these data and a pre-generated scene map. On the one hand, the method thus provides the upper-level robot management system with consistent data on the real-time position of every mobile robot in the scene and its real-time distance to surrounding obstacles, giving the management system a decision basis. On the other hand, because the quasi-digital twin model and the mobile robots exchange data bidirectionally in real time, the data in the twin model can be pushed down in real time to every robot operating on site; each robot can then navigate based on both its own onboard ranging data and the ranging data received from the quasi-digital twin model. This is equivalent to adding a second, independent ranging system to each mobile robot, so that movement safety is preserved even if the robot's own onboard ranging module fails, enhancing reliability.
Drawings
FIG. 1 is a flow chart of a quasi-digital twinning method of a mobile robot operating scenario of the present invention;
FIG. 2 is a block diagram of a quasi-digital twin system of the mobile robot operating scenario of the present invention;
FIG. 3 is a schematic diagram of positioning and ranging in the target scene;
FIG. 4 is a flow chart of locating moving obstacles based on image signals according to the present invention;
FIG. 5 is an interface schematic diagram of a quasi-digital twin model generated in accordance with the present invention;
FIG. 6 is a schematic illustration of the prediction of a potential collision according to the present invention;
FIG. 7 is a flow chart of the present invention for potential collision prediction;
fig. 8 is a hardware architecture diagram of the electronic device of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
Example 1
The embodiment provides a quasi-digital twin method of a mobile robot operation scene, as shown in fig. 1, the method specifically comprises the following steps:
s1, aiming at a plurality of mobile robots (hereinafter, simply referred to as robots) in a target scene, acquiring airborne positioning data obtained by an airborne positioning module (such as a GPRS positioning module) of each mobile robot, and resolving the airborne positioning data;
s2, acquiring airborne ranging data obtained by an airborne ranging module (such as a laser radar or an ultrasonic sensor) of each mobile robot, and resolving the airborne ranging data;
s3, acquiring wireless signals received by each mobile robot from a preset wireless communication module (such as a WIFI router and the like), and resolving the intensity (embodied as the amplitude) of the wireless signals to obtain auxiliary positioning data;
s4, acquiring an image signal output by an image acquisition module (such as a security monitoring camera) arranged in the target scene, and acquiring image positioning data of a moving object (which can comprise a moving robot and a moving obstacle) in the target scene based on the image signal;
s5, performing fusion calculation on all the resolved airborne positioning data and airborne ranging data, the auxiliary positioning data and the image positioning data to obtain target positioning data and target ranging data corresponding to each mobile robot under a unified coordinate system;
And S6, establishing a quasi-digital twin model corresponding to the target scene based on the target positioning data and the target ranging data corresponding to each mobile robot and a scene map of the target scene, which is generated in advance.
The present embodiment is called a "quasi" digital twin model because it is not a true conventional digital twin model, and is distinguished from a digital twin model in that a robot, a moving obstacle, etc. are all identified as one point in the model, not a mapping model corresponding to its physical architecture, and the remaining features are the same as the digital twin model, so it is called a "quasi" digital twin model. The quasi-digital twin model is interacted with the data of the robot movement field in real time, and the movement state of the robot in the model is synchronous with the movement state of the robot on the field in real time.
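As a point of reference, the point-based representation described above might look like the following minimal sketch, in which every robot and obstacle is a timestamped point keyed by an identifier. The class, method and field names here are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class TwinEntity:
    x: float          # target positioning data in the unified coordinate system
    y: float
    kind: str         # "robot" or "obstacle"
    timestamp: float  # time of the last fused update

class QuasiDigitalTwin:
    """Quasi-digital twin: entities are single points, not geometric models."""

    def __init__(self, scene_map):
        self.scene_map = scene_map   # pre-generated map of the target scene
        self.entities = {}

    def update(self, entity_id, x, y, kind, timestamp):
        # Called whenever the fusion step produces new target positioning
        # data, keeping the model synchronized with the site in real time.
        self.entities[entity_id] = TwinEntity(x, y, kind, timestamp)

    def snapshot(self):
        # State that can be pushed back down to every robot operating on site.
        return {eid: (e.x, e.y, e.kind) for eid, e in self.entities.items()}

# Hypothetical usage: one robot and one obstacle tracked as points.
twin = QuasiDigitalTwin(scene_map="factory_floor")
twin.update("agv-1", 2.0, 3.0, "robot", 0.0)
twin.update("obs-7", 5.0, 5.0, "obstacle", 0.0)
```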
It should be understood that steps S1 to S4 in this embodiment are executed in parallel.
In one implementation, the wireless communication module is a WIFI router and the wireless signal is a WIFI signal. Step S3 resolves the wireless signal as follows: the distance between the robot and the WIFI router is determined from the strength of the acquired WIFI signal and a pre-established correspondence between WIFI signal strength and distance, and the auxiliary positioning data of the robot is obtained from the determined distance and the position information of the WIFI router.
In one implementation, step S4 obtains the image positioning data of a moving object in the target scene as follows: first, the moving object is identified in the image signals; then its image positioning data is determined from the image signals together with the actual position information and parameter information of the image acquisition modules. Preferably, when a moving object can be identified in several image signals, its image positioning data is determined as follows: the two image signals in which the object's contour encloses the largest number of pixels are selected (the more pixels the contour encloses, the closer the object is to the camera and the larger it appears in the image, which allows the distance information to be calculated more accurately); the distance between the object and the corresponding image acquisition modules is determined from these two signals; and the image positioning data of the object is then determined from that distance together with the actual position information and parameter information of the corresponding image acquisition modules.
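A rough sketch of this camera-based localization under a pinhole-camera model: the object's apparent pixel height gives a distance estimate, and the two views with the most contour pixels are projected from the camera positions along the detected bearings and averaged. The focal length, object height, bearing values and the averaging step are illustrative assumptions; the patent does not prescribe a specific formula.

```python
import math

def estimate_distance(pixel_height, real_height_m, focal_length_px):
    """Pinhole model: pixel_height = focal_length_px * real_height_m / distance."""
    return focal_length_px * real_height_m / pixel_height

def locate_from_detections(detections):
    """detections: dicts holding the camera position, the bearing from the
    camera to the object (rad, in scene coordinates), the estimated distance,
    and the pixel count enclosed by the object's contour. The two views with
    the largest pixel counts are used, and their projections are averaged."""
    best = sorted(detections, key=lambda d: d["pixel_count"], reverse=True)[:2]
    pts = [(d["cam_pos"][0] + d["distance"] * math.cos(d["bearing"]),
            d["cam_pos"][1] + d["distance"] * math.sin(d["bearing"]))
           for d in best]
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

# Two cameras facing each other along the x-axis both see the object clearly;
# a third, distant camera barely sees it and is ignored by the selection rule.
detections = [
    {"cam_pos": (0.0, 0.0),  "bearing": 0.0,     "distance": 5.0, "pixel_count": 900},
    {"cam_pos": (10.0, 0.0), "bearing": math.pi, "distance": 5.0, "pixel_count": 800},
    {"cam_pos": (0.0, 20.0), "bearing": 0.0,     "distance": 9.0, "pixel_count": 10},
]
object_pos = locate_from_detections(detections)
```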
In one implementation, the moving objects include moving obstacles, and step S5 fuses the image positioning data of a moving obstacle with the onboard ranging data as follows. First, it is judged whether any robot has onboard ranging data corresponding to the image positioning data of the moving obstacle. If not, the image positioning data of the moving obstacle is taken as its target positioning data. If so, it is judged whether the overlap ratio between the obstacle positioning data derived from the existing onboard ranging data and the image positioning data of the moving obstacle is higher than a preset overlap-ratio threshold: if it is higher, the two are fused and the fusion result is taken as the target positioning data of the moving obstacle; if it is not higher, the obstacle positioning data derived from the existing onboard ranging data is taken as the target positioning data of the moving obstacle. The target ranging data of the robot can then be determined from the target positioning data of the moving obstacle.
In one implementation, the method of this embodiment further predicts whether a robot may collide with a moving obstacle as follows: the estimated motion trajectory and estimated motion speed of the moving obstacle are obtained from its historical target positioning data and the corresponding timestamps; the intersection point of the robot's planned motion trajectory and the obstacle's estimated motion trajectory is obtained; the time at which the obstacle reaches the intersection point, denoted the first time, is calculated from the obstacle's current target positioning data, estimated trajectory and estimated speed; the time at which the robot reaches the intersection point, denoted the second time, is calculated from the robot's current target positioning data, planned trajectory and preset motion speed; and whether the robot may collide with the obstacle is predicted from the first time and the second time.
In one implementation, when a possible collision between a robot and a moving obstacle is predicted, corresponding early-warning information is sent to the upper-level robot management system.
In one implementation, when possible collisions between a robot and several moving obstacles are predicted, the motion trajectory of that robot is re-planned according to the quasi-digital twin model.
In this embodiment, auxiliary positioning data such as WIFI-based data is fused with the onboard positioning data, the onboard ranging data and the image positioning data to obtain the target positioning data and target ranging data of each robot in a unified coordinate system, and a quasi-digital twin model of the target scene is then established from these data and a pre-generated scene map. On the one hand, this provides the upper-level robot management system with consistent data on each robot's real-time position and its real-time distance to surrounding obstacles, as a decision basis. On the other hand, because the quasi-digital twin model and the robots exchange data bidirectionally in real time, the data in the twin model can be pushed down in real time to every robot on site, and each robot can navigate based on both its own onboard ranging data and the ranging data received from the twin model. This is equivalent to adding a second ranging system to each robot, so that movement safety is preserved even if the robot's own onboard ranging module fails, enhancing reliability.
Specifically, the method of the present embodiment may be implemented by the quasimdigital twin system 10 shown in fig. 2, and the following details of the technical solution of the present embodiment are further described with reference to fig. 2 to 7:
as shown in fig. 2, the mobile robots, WIFI routers and security monitoring cameras operating in the target scene communicate wirelessly with the communication module 19 in the quasi-digital twin system 10: the airborne positioning data and ranging data detected by each robot are transmitted to the quasi-digital twin system 10, each robot also transmits information such as the received WIFI signal amplitude, and the security monitoring cameras transmit their collected image signals to the quasi-digital twin system 10.
The quasi-digital twin system 10 is compatible with 3G/4G/5G or WIFI wireless communication modules. After the quasi-digital twin system 10 receives the airborne positioning and ranging data, according to the preset number of the robots in the system, the airborne positioning data and the airborne ranging data of the corresponding robots are respectively resolved by the corresponding airborne positioning data resolving module 11 and the airborne ranging data resolving module 12. The positioning data represent position coordinate data of the corresponding robot in the scene, and the ranging data represent distance data between the corresponding robot and surrounding objects or obstacles in front, which are obtained by an onboard ranging module of the corresponding robot. Both types of data are available to each robot. For a certain robot, the ranging data between the robot and a certain obstacle obtained by detection of the robot may also be the ranging data between the robot and other adjacent robots; while the neighboring robot may also obtain ranging data with the robot, a detailed schematic view can be seen in fig. 3. After the airborne positioning data and the airborne ranging data of each robot are resolved, the data are output to the fusion calculation module 15 together; in the fusion calculation module 15, the calculated positioning data corresponding to each robot and the ranging data between each robot and the obstacle are fused and calculated in a unified coordinate system (global coordinate system established based on the scene map of the target scene), so as to finally obtain unique current positioning data (i.e., target positioning data) representing each robot, and real-time ranging data (i.e., target ranging data) between each robot and the surrounding environment object and the adjacent obstacle.
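The patent does not fix the fusion algorithm used by module 15; as one illustrative possibility, a minimal inverse-variance weighted fusion of two position estimates for the same robot (the function name and variance parameters are hypothetical) could look like:

```python
def fuse_estimates(p_onboard, var_onboard, p_aux, var_aux):
    # Inverse-variance weighted average of two independent position
    # estimates for the same robot, both already expressed in the
    # global coordinate system of the scene map.
    w = var_aux / (var_onboard + var_aux)  # weight given to the onboard fix
    return tuple(w * a + (1 - w) * b for a, b in zip(p_onboard, p_aux))
```

With equal variances the result is the midpoint of the two fixes; a noisier auxiliary fix pulls the fused result toward the onboard estimate.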
In addition, the WIFI positioning network is arranged in advance in the intelligent production scene, and the robot is communicated with the WIFI router to perform data interaction with the upper robot management system or the twin system through the WIFI router, and meanwhile, the strength (namely the signal amplitude) of the WIFI signal received by the robot also reflects the distance information between the robot and the WIFI router. Therefore, the quasi-digital twin system 10 can also acquire the WIFI signals received by the robots in the scene, and calculate the WIFI positioning data (i.e. the auxiliary positioning data) of each robot in the scene through the auxiliary positioning data calculating module 13, and input the auxiliary positioning data to the fusion calculating module 15. The fusion calculation module 15 performs fusion calculation on the auxiliary positioning data and the onboard positioning data acquired by the robot, so that unique real-time positioning data of the robot, namely target positioning data, can be finally obtained.
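The patent does not specify how signal amplitude is converted to a position; a common sketch, under the assumption of a log-distance path-loss model (the reference value `rssi_at_1m` and exponent `n` are hypothetical calibration constants) plus trilateration over three routers with known positions, is:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, n=2.5):
    # Log-distance path-loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d);
    # the farther the router, the weaker the received signal.
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))

def trilaterate(routers, distances):
    # Linearize the three circle equations by subtracting the first one,
    # then solve the resulting 2x2 linear system with Cramer's rule.
    (x1, y1), (x2, y2), (x3, y3) = routers
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

This is only a geometric sketch; in practice multipath effects in an indoor scene make raw RSSI ranging noisy, which is why the embodiment treats WIFI positioning as auxiliary rather than primary data.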
Preferably, the quasi-digital twin system 10 may also calculate positioning data of objects in the images from the received image signals and combine it with the scene map, thereby locating those objects in the coordinate system corresponding to the scene map.
In fig. 2, the fusion calculation module 15 outputs the target positioning data and the target ranging data to the twin model building module 16, which maps the ranging data, the positioning data, etc. of the robot and the environment object, the obstacle, etc. onto the scene map in real time according to the scene map data and the updated target positioning data, the ranging data, and builds a corresponding quasi-digital twin model. This embodiment may display the alignment digital twin model via display 20. Meanwhile, the target positioning data and the target ranging data can be output to an upper robot management system or a group control scheduling platform through a data transmission interface 19, so that a decision basis is provided for the upper robot management system or the group control scheduling platform.
Fig. 3 is a schematic diagram of positioning and ranging of mobile robots, a moving obstacle (e.g., a person), wireless routers, etc. in a scene. Robots A and B are taken as an example.
Robot A and robot B each have an onboard ranging module, such as a lidar or an ultrasonic sensor, for measuring the distance to surrounding objects or obstacles. While robot A walks, ranging data between it and peripheral objects and obstacles are obtained by means of its airborne lidar or other ranging module.
In the quasi-digital twin system 10, how to dynamically detect moving obstacles and accurately locate them is a key problem, and it is also critical for the reliable and safe operation of the robot population.
Fig. 3 shows robots A and B detecting a moving obstacle ahead, such as a walking person, and measuring the distance between themselves and the person. As shown in the figure, robot A detects a distance LA1 to the person, and robot B detects a distance LB1 to the person. After the two ranging data are processed by the fusion calculation module, dynamic positioning coordinate data of the moving obstacle and distance data between the moving obstacle and the adjacent robots (such as robot A or robot B) can be obtained based on the scene map and its coordinate system, and displayed in real time.
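The patent does not state how two range readings such as LA1 and LB1 are combined into obstacle coordinates; a standard circle-circle intersection is one plausible sketch (the ambiguity between the two candidate points would be resolved by map context, which is left to the caller here):

```python
import math

def circle_intersections(pa, ra, pb, rb):
    # Intersection points of two circles centred at robots A and B,
    # with radii equal to their measured ranges to the obstacle.
    (xa, ya), (xb, yb) = pa, pb
    d = math.hypot(xb - xa, yb - ya)
    if d == 0 or d > ra + rb or d < abs(ra - rb):
        return []  # circles do not intersect usefully
    a = (ra**2 - rb**2 + d**2) / (2 * d)        # distance from A along AB
    h = math.sqrt(max(ra**2 - a**2, 0.0))       # offset perpendicular to AB
    mx = xa + a * (xb - xa) / d
    my = ya + a * (yb - ya) / d
    off = (-(yb - ya) * h / d, (xb - xa) * h / d)
    return [(mx + off[0], my + off[1]), (mx - off[0], my - off[1])]
```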
In addition, in an actual scene, a plurality of wireless routers are arranged, mainly so that the wireless network covers the robot's operating scene and communication reliability is ensured. Fig. 3 schematically shows three such wireless routers.
In this embodiment, another purpose of the plurality of wireless routers is to be used for positioning the WIFI terminal device (such as a robot) in the scene, so as to improve the positioning redundancy degree of the system.
In addition, if the robot carries the 3D vision sensor in the scene to perform ranging navigation, the ranging data of the sensor can also be accessed to the fusion calculation module 15 of the quasi-digital twin system 10 for fusion processing through a communication mode.
If UWB positioning is adopted in the scene, the relevant positioning data can also be input into the fusion calculation module 15 of the quasi-digital twin system 10 for fusion processing.
In the quasi-digital twin system 10 of the present embodiment, the two objects involved in any measured ranging data, such as the robot and the pedestrian in fig. 2, can be embodied in the aforementioned quasi-digital twin model, and the motion state of each robot and the distances between the robot, other surrounding robots and obstacles are dynamically displayed on the display interface of the display 20.
It should be further noted that, in this embodiment, the positioning of a robot in the scene may be obtained through fusion calculation from the airborne positioning data, the airborne ranging data and the strength of the WIFI signals received by the robot, combined with the scene map and its global coordinate system. On this basis, according to the data fusion algorithm, positioning data and ranging data of moving obstacles in the scene can also be obtained. Therefore, the quasi-digital twin model contains not only the dynamic track data of the robots but also the movement track data of moving obstacles such as walking people: even though a human body carries no sensor capable of providing positioning data, its position information can still be displayed synchronously in real time and serve as input data for the upper system. The main reason is that the system can access the image signals of the security monitoring cameras and obtain positioning data or ranging data of moving obstacles in the scene based on an image processing algorithm.
It should be appreciated that real-time variation data of the fixed obstacle, and the distance of the robot from the fixed obstacle, may also be embodied in a digital twin model.
Further, robot A may detect a distance LAB to robot B, and robot B may detect a distance value LBA to robot A. After data fusion processing, these two values yield consistent ranging data representing the distance between robot A and robot B.
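As a simple illustration of obtaining one consistent value from LAB and LBA (the 20% relative tolerance is an assumed parameter, not taken from the patent), the fusion could be a consistency-checked average:

```python
def fuse_mutual_range(lab, lba, tolerance=0.2):
    # LAB and LBA are two independent measurements of the same A-B
    # distance; reject them if they disagree by more than the relative
    # tolerance, otherwise report their average as the consistent value.
    if abs(lab - lba) > tolerance * max(lab, lba):
        raise ValueError("inconsistent mutual range measurements")
    return (lab + lba) / 2.0
```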
If the WIFI-based positioning mode is considered absolute positioning within the scene, the positioning data obtained by the robot with its on-board positioning module may be considered incremental, or relative, positioning.
In this embodiment, WIFI-based positioning may be implemented in two ways.
One is the traditional way: data are first collected offline to establish a database, and live data are then compared against it online. Specifically, in the offline stage, a number of sampling points are selected in the scene along certain walking routes; preferably, the scene map is gridded. When the robot under test is at a grid node, the wireless routers sample the signals received from it, and the robot's position and the sampled data are calibrated offline. The signal strength is related to the distance between a wireless router and the robot: the farther the distance, the weaker the signal. After all test points are completed, this is equivalent to a database relating the strength of the wireless signals received by the robot to its distance from the wireless routers. While the robot runs and walks, the received signal strength is compared with the grid-point data in the database to find a match and thus judge the robot's position. This approach requires a large amount of offline testing and data acquisition, and is labor-intensive and inefficient.
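The online comparison step of this traditional method can be sketched as a nearest-neighbour search in signal space (the database layout, a grid position mapped to one RSSI value per router, is an assumed illustration):

```python
def match_fingerprint(rssi_vector, database):
    # database: {(x, y): [rssi_from_router_1, rssi_from_router_2, ...]}
    # Return the grid position whose stored RSSI vector is closest, in
    # squared Euclidean signal-space distance, to the live measurement.
    def dist(v, w):
        return sum((a - b) ** 2 for a, b in zip(v, w))
    return min(database, key=lambda pos: dist(rssi_vector, database[pos]))
```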
Another way is an improvement scheme provided by the embodiment:
generally, when a robot is deployed on site, a scene map is required to be established first; and then, in the running process of the robot, determining the position of the robot according to the scene map and the sensing point cloud data obtained by the airborne positioning module, thereby completing the tasks of track planning and navigation walking.
Based on the above, in this technical scheme, in the process of generating the scene map for a predetermined scene, a plurality of position points are planned and designed for the robot; when the robot is at such a position point, the WIFI routing signals are sampled, and after processing, a relation database between wireless signal strength and position points is established. During signal acquisition, when the robot walks to each planned sampling point in the scene, the quasi-digital twin system 10 stores the signal amplitude values acquired from each wireless router communicating with the corresponding robot, stores the wireless signal amplitude data sent by the robot together with the corresponding point position of the robot, and establishes a correspondence database between the amplitude values and the current position coordinates (a known quantity: the current position coordinate point determined by the robot from its own airborne positioning data and the established scene map). When the full scene map and the planned sampling points are completed, a complete correspondence database between wireless signal amplitudes and robot position coordinates has been established. During real operation, the robot's received wireless signal amplitude can be compared and matched against the data in the database, and the scene position of the matching point can be judged to be the robot's current position. In this technical scheme, the scene map can be built first, after which the robot walks through the scene along the planned sampling points to build the database. It should be noted that the more numerous and dense the planned sampling points, the more accurate the positioning.
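During the mapping walk described above, the correspondence database might be accumulated roughly as follows (a sketch; the `(position, router, rssi)` sample format is an assumption, and repeated readings at the same point are simply averaged):

```python
from collections import defaultdict

def build_wifi_database(samples):
    # samples: iterable of (grid_position, router_id, rssi_dbm) triples
    # recorded while the robot visits each planned sampling point.
    acc = defaultdict(lambda: defaultdict(list))
    for position, router, rssi in samples:
        acc[position][router].append(rssi)
    # Average repeated readings so each position stores one value per router.
    return {pos: {r: sum(v) / len(v) for r, v in routers.items()}
            for pos, routers in acc.items()}
```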
In contrast, in this technical scheme, a number of possible walking routes or walking areas of the robot are established in the scene in advance, and the robot only needs to walk each of these routes or areas once to establish the relational database between WIFI signal amplitude and position points.
It should be noted that, for each robot, a corresponding WIFI positioning relationship database needs to be established.
This technical scheme avoids the complex offline testing and sampling work, with its huge workload, of the traditional WIFI positioning method. Its working mechanism is still based on the theory of WIFI signal positioning, but it also effectively improves the positioning accuracy of this positioning mode.
The wireless WIFI positioning mode has the advantages that when the robot on-board positioning module fails or self positioning is problematic, the WIFI positioning function still exists, and the robot can still normally operate. But in general, its positioning accuracy is lower than that based on the on-board positioning module of the robot.
In a robot operation scene, it is common practice to arrange a certain number of security monitoring cameras. These monitoring cameras are typically used for security purposes, typically by a combination of several cameras, to accomplish dead-angle-free monitoring within a scene. How to acquire positioning information of moving obstacles in a scene by using image signals acquired by the cameras is also a problem to be solved by the invention. The following is a brief description:
there are two cases. In the first, the moving obstacle is within the field of view of a certain monitoring camera and within the range of a certain robot: the camera image signal is processed, the distance between the camera and the moving obstacle is estimated by a suitable algorithm, and the coordinate position of the moving obstacle in the scene coordinate system is obtained from this ranging data and the absolute position of the camera in the scene coordinate system; this position data is then fused with the airborne ranging data measured by the robot to obtain final, consistent positioning data of the moving obstacle. In the second case, the moving obstacle is outside the ranging range of every robot's airborne ranging module but still within the sight of some camera; the movement dynamics and coordinate values of the moving obstacle can then still be calculated from its distance to the camera.
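The projection step common to both cases can be sketched as converting an estimated camera-to-obstacle range into scene coordinates using the camera's known mounting pose (all parameter names here are illustrative assumptions; a real implementation would use the full camera intrinsics):

```python
import math

def obstacle_from_camera(cam_xy, cam_yaw_deg, bearing_deg, range_m):
    # Project an obstacle seen by a camera into the global scene frame,
    # given the camera's known mounting position and orientation, the
    # bearing of the obstacle within the image, and an estimated range.
    theta = math.radians(cam_yaw_deg + bearing_deg)
    return (cam_xy[0] + range_m * math.cos(theta),
            cam_xy[1] + range_m * math.sin(theta))
```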
Therefore, through the combination of the airborne positioning module, the airborne ranging module, the WIFI router and the camera, the embodiment can determine the position coordinates of any moving and non-moving obstacle in the scene, so that a solid and reliable positioning data basic service is provided for the upper robot management system.
In the present embodiment, the positioning and distance determination process of the moving obstacle is shown in fig. 4.
In general, the moving obstacle can be photographed by at least two cameras simultaneously while it moves, so that the distance between camera and moving obstacle can be calculated.
In an actual scene, it may happen that a moving obstacle at a certain position is photographed by only one camera. The quasi-digital twin system 10 can still calculate the position of the moving obstacle from the known position and parameter information of that camera, but the accuracy is lower than a calculation from images taken by two cameras simultaneously. Therefore, to improve calculation and positioning accuracy, the cameras should be deployed such that any position in the scene is covered by the fields of view of at least two cameras.
In this embodiment, as shown in fig. 5, at the beginning of deployment, a scene map is established for an operation scene of the robot, and the map is rasterized. The scene map establishes a global planar coordinate system (in particular an xy coordinate system). The locations of cameras, fixed location obstructions, wireless router mounting points, etc. within the scene are marked in a coordinate system. And mapping the robot, moving obstacles (such as people) and the like on the scene map in real time to obtain a quasi-digital twin model of the robot operation scene, and updating dynamic coordinate position data in real time.
As shown in fig. 5, in the quasi-digital twin model interface of the robot running scene, the planned walking track of the robot may also be displayed.
In addition, the real-time distance value and the corresponding coordinate value between the two selected objects can be displayed or output through a menu or by clicking the selected objects through a mouse. The shortest distance threshold value between the robot and an obstacle can be set, and when the distance measurement data between the robot and the obstacle is smaller than the set threshold value, the system gives a warning and can set specific countermeasures.
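The shortest-distance threshold check described above can be sketched as follows (the data layout and names are assumptions for illustration):

```python
def check_clearance(distances, threshold=0.5):
    # distances: mapping of obstacle id -> current range from the robot (m).
    # Return the obstacles closer than the configured shortest-distance
    # threshold, for which the system should raise a warning.
    return [obstacle for obstacle, d in distances.items() if d < threshold]
```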
In this embodiment, after the unique positioning data and ranging data corresponding to each robot are generated through the fusion process, the relevant data are transmitted to the corresponding mobile robot in real time in addition to being displayed or output to the upper robot management system as the decision basis.
Taking robot A as an example: in addition to the positioning and ranging data obtained by its own airborne positioning module and airborne ranging module, it also receives the corresponding target positioning and ranging data sent by the quasi-digital twin system 10, such as ranging data between mobile robot A and the moving and fixed obstacles ahead, and the positioning coordinate data of robot A itself; as well as position information of possible obstacles not yet detected by robot A, such as the current positions, moving speeds and directions of moving obstacles B and C located near robot A's forward trajectory.
The obstacle ranging data detected by robot A itself is not specially processed inside the robot; it is handled according to the robot's own preset processing algorithm. For positioning data of moving obstacles not yet detected by robot A, the quasi-digital twin system performs further processing and decision-making, as described in detail below:
in this embodiment, the quasi-digital twin system determines the potential collision possibility between mobile robot A and the moving obstacles according to the planned route of the mobile robot and data such as the moving speed and moving direction of the moving obstacles around the planned route of robot A in the scene.
As shown in fig. 6, the present embodiment determines, from the moving obstacle's historical data, its moving speed and moving direction relative to the motion track of robot A, and then calculates distances and movement times to judge the collision possibility.
As shown in fig. 6, the time required for robot A to travel from its current position to the planned-trajectory points B(X_B, Y_B) and C(X_C, Y_C) is calculated; likewise, from the current moment, the time for moving obstacle B to reach the intersection point B(X_B, Y_B) with the robot's trajectory path along its original moving direction (assuming the direction is unchanged), and the time for the other moving obstacle to reach the intersection point C(X_C, Y_C). If robot A arrives at the same point at a time similar to one of the obstacles, there is a possibility of meeting.
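The timing comparison in fig. 6 can be sketched as follows (constant speeds and straight-line motion are assumed, as in the figure; the coincidence window `window_s` is an assumed parameter):

```python
import math

def time_to_point(pos, speed, point):
    # Time to cover the straight-line distance at constant speed.
    return math.hypot(point[0] - pos[0], point[1] - pos[1]) / speed

def may_collide(robot_pos, robot_speed, obs_pos, obs_speed,
                crossing_point, window_s=2.0):
    # A possible encounter exists if robot and obstacle reach the
    # trajectory crossing point within window_s seconds of each other.
    t_robot = time_to_point(robot_pos, robot_speed, crossing_point)
    t_obs = time_to_point(obs_pos, obs_speed, crossing_point)
    return abs(t_robot - t_obs) <= window_s
```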
If there are several moving obstacles on the planned trajectory that may meet robot A, robot A would have to make obstacle avoidance movements several times, which affects its operating efficiency to some extent. The algorithm therefore re-plans the subsequent walking route according to certain rules and the map, so as to avoid excessive moving obstacles.
Alternatively, another implementation exists: based on the collision prediction calculated by the twin platform, when a potential collision risk for robot A (robot A meeting a moving obstacle) is found, early warning information can be generated for the upper robot management system, prompting it to adjust the robot's running speed and the like to prevent a possible encounter.
In this embodiment, as shown in fig. 7, early warning of potential meeting or collision risks is performed mainly according to acquired data such as the number, distance, speed and direction of the moving obstacles and the planned trajectory of the robot, so as to provide more reasonable data support or decision support to the upper robot management system. There may be various concrete implementations: a decision mechanism based on an artificial neural network algorithm may be used, or fuzzy theory may be used to obtain the corresponding judgment result. The specific implementation and content of the collision prediction are not particularly limited in this embodiment.
Example 2
The present embodiment provides a quasi-digital twin system 10 of a mobile robot operation scene, as shown in fig. 2, the system 10 mainly includes:
the airborne positioning data resolving module 11 is configured to acquire airborne positioning data obtained by the airborne positioning module of each mobile robot, and resolve the airborne positioning data;
An airborne ranging data calculation module 12, configured to acquire airborne ranging data obtained by the airborne ranging module of each mobile robot, and calculate the airborne ranging data;
the auxiliary positioning data resolving module 13 is configured to obtain wireless signals received by each mobile robot from a preset wireless communication module, and resolve the intensity of the wireless signals to obtain auxiliary positioning data;
an image positioning data resolving module 14, configured to acquire an image signal output by an image acquisition module disposed in the target scene, and acquire image positioning data of a moving object in the target scene based on the image signal;
the fusion calculation module 15 is configured to perform fusion calculation on all the resolved airborne positioning data and airborne ranging data, together with the resolved auxiliary positioning data and the image positioning data, so as to obtain target positioning data and target ranging data corresponding to each mobile robot in a unified coordinate system;
the twin model building module 16 is configured to build a quasi-digital twin model corresponding to the target scene based on target positioning data and target ranging data corresponding to each mobile robot and a scene map of the target scene, which is generated in advance.
In an implementation manner, the wireless communication module is a WIFI router, and the wireless signal is a WIFI signal; the auxiliary positioning data resolving module 13 resolves the strength of the wireless signal by: determining the distance between the corresponding mobile robot and the WIFI router according to the acquired WIFI signal strength and the pre-established correspondence between WIFI signal strength and the distance between mobile robot and WIFI router, and obtaining the auxiliary positioning data of the mobile robot from the determined distance and the position information of the WIFI router.
In one embodiment, the image location data resolution module 14 includes:
an image recognition unit configured to recognize a moving object in the image signal;
and the image positioning unit is used for determining the image positioning data of the moving object based on the image signals, the actual position information and the parameter information of the image acquisition module.
In an embodiment, when the image recognition unit recognizes a certain moving object from the plurality of image signals, the image positioning unit determines the image positioning data of the moving object by:
and determining distance information between the moving object and the corresponding image acquisition modules according to the two image signals in which the outline of the moving object encloses the largest number of pixels, and determining the image positioning data of the moving object from this distance information together with the actual position information and parameter information of the corresponding image acquisition modules.
In an embodiment, the moving object includes a moving obstacle, and the fusion calculation module 15 fuses the image positioning data of the moving obstacle and the on-board ranging data by:
judging whether the mobile robot has airborne ranging data corresponding to the image positioning data of the mobile obstacle or not;
if no such airborne ranging data exists, the image positioning data of the moving obstacle is used as the target positioning data of the moving obstacle;
if so, judging whether the coincidence ratio of the mobile obstacle positioning data corresponding to the existing airborne ranging data and the image positioning data of the mobile obstacle is higher than a preset coincidence ratio threshold;
if the coincidence ratio is higher than the threshold, fusion calculation is performed on the existing airborne ranging data and the image positioning data of the moving obstacle, and the fusion result is used as the target positioning data of the moving obstacle;
if the coincidence ratio is not higher than the threshold, the moving obstacle positioning data corresponding to the existing airborne ranging data is used as the target positioning data of the moving obstacle;
and determining target ranging data of the mobile robot according to the target positioning data of the mobile obstacle.
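The decision steps above can be sketched as follows (the simple averaging used as the "fusion calculation" is an assumption; the patent leaves the exact fusion operation open):

```python
def fuse_obstacle_position(image_fix, onboard_fix, overlap_ratio,
                           overlap_threshold=0.8):
    # image_fix / onboard_fix: (x, y) obstacle positions from the image
    # pipeline and from airborne ranging; onboard_fix is None when no
    # airborne ranging data corresponds to this obstacle.
    if onboard_fix is None:
        return image_fix
    if overlap_ratio > overlap_threshold:
        # Coincidence high enough: fuse the two fixes (here: average them).
        return tuple((a + b) / 2.0 for a, b in zip(image_fix, onboard_fix))
    # Otherwise trust the positioning derived from airborne ranging data.
    return onboard_fix
```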
In an embodiment, the method further comprises: a collision prediction module 17 for predicting whether the mobile robot and the moving obstacle may collide according to the following steps:
acquiring a predicted motion trail and a predicted motion speed of the moving obstacle based on the historical target positioning data of the moving obstacle and the corresponding running time;
acquiring an intersection point of a planned motion trail of the mobile robot and an estimated motion trail of the mobile obstacle;
calculating the time of the moving obstacle reaching the intersection point based on the current target positioning data, the estimated motion trail and the estimated motion speed of the moving obstacle, and recording the time as first time;
calculating the time of the mobile robot reaching the intersection point based on the current target positioning data, the planned motion trail and the preset motion speed of the mobile robot, and recording the time as second time;
Based on the first time and the second time, predicting whether the mobile robot is likely to collide with the moving obstacle.
In an embodiment, when the collision prediction module 17 predicts that the mobile robot may collide with the moving obstacle, the corresponding early warning information is sent to the upper robot management system.
In an embodiment, when the collision prediction module 17 predicts that a collision may occur between a certain mobile robot and a plurality of moving obstacles, the motion track of the certain mobile robot is re-planned according to the quasi-digital twin model.
According to the embodiment, target positioning data and target ranging data corresponding to each mobile robot in a unified coordinate system are obtained by performing fusion calculation on the airborne positioning data, the airborne ranging data, auxiliary positioning data such as WIFI positioning data, and the image positioning data; a quasi-digital twin model corresponding to the target scene is then established based on the target positioning data and target ranging data corresponding to each mobile robot and a pre-generated scene map of the target scene. Therefore, on the one hand, the system can provide the upper robot management system with consistent data on the real-time positions of the mobile robots running in the scene and the real-time distances between the mobile robots and surrounding obstacles, giving the upper robot management system a decision basis. On the other hand, because the quasi-digital twin model and the mobile robots interact bidirectionally in real time, that is, data in the twin model can be downloaded in real time to each robot running on site, each robot can perform mobile navigation based on both its own airborne ranging data and the ranging data received from the quasi-digital twin model. This is equivalent to equipping each mobile robot with an additional ranging system, so that even if its own airborne ranging module fails, safe movement can still be ensured and reliability is enhanced.
Since the present embodiment corresponds to the foregoing method, its description is relatively simple; for relevant points, reference may be made to the description of the method embodiment.
Example 3
The present embodiment provides an electronic device, which may be expressed in the form of a computing device (for example, may be a server device), including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the steps of the quasi-digital twin method provided in embodiment 1 may be implemented when the processor executes the computer program.
Fig. 8 shows a schematic diagram of the hardware structure of the present embodiment, and as shown in fig. 8, the electronic device 30 specifically includes:
at least one processor 31, at least one memory 32, and a bus 33 for connecting the different system components (including the processor 31 and the memory 32), wherein:
the bus 33 includes a data bus, an address bus, and a control bus.
Memory 32 includes volatile memory such as Random Access Memory (RAM) 321 and/or cache memory 322, and may further include Read Only Memory (ROM) 323.
Memory 32 also includes a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The processor 31 executes various functional applications and data processing, such as the steps of the quasimdigital twin method provided in embodiment 1 of the present application, by running a computer program stored in the memory 32.
The electronic device 30 may further be in communication with one or more external devices 34 (e.g., keyboard, pointing device, etc.). Such communication may be through an input/output (I/O) interface 35. Also, electronic device 30 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 36. Network adapter 36 communicates with other modules of electronic device 30 over bus 33. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 30, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in one unit/module in accordance with embodiments of the present application. Conversely, the features and functions of one unit/module described above may be further divided into and embodied by a plurality of units/modules.
Example 4
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the quasi-digital twinning method provided by embodiment 1.
More specifically, the readable storage medium may include, but is not limited to: a portable disk, a hard disk, random access memory, read-only memory, erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the present invention may also take the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to carry out the steps of the quasi-digital twin method provided in embodiment 1.
The program code for carrying out the invention may be written in any combination of one or more programming languages, and may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on a remote device.
While specific embodiments of the invention have been described above, those skilled in the art will appreciate that these are examples only, and that the scope of the invention is defined by the appended claims. Those skilled in the art may make various changes and modifications to these embodiments without departing from the principles and spirit of the invention, and such changes and modifications fall within the scope of the invention.

Claims (11)

1. A quasi-digital twinning method of a mobile robot operation scene, the method comprising:
for a plurality of mobile robots in a target scene, acquiring airborne positioning data obtained by an airborne positioning module of each mobile robot, and resolving the airborne positioning data;
acquiring airborne ranging data obtained by an airborne ranging module of each mobile robot, and resolving the airborne ranging data;
acquiring wireless signals received by each mobile robot from a preset wireless communication module, and resolving the intensity of the wireless signals to obtain auxiliary positioning data;
acquiring an image signal output by an image acquisition module arranged in the target scene, and acquiring image positioning data of a moving object in the target scene based on the image signal;
performing fusion calculation on all the resolved airborne positioning data and airborne ranging data, the auxiliary positioning data and the image positioning data to obtain target positioning data and target ranging data corresponding to each mobile robot under a unified coordinate system;
and establishing a quasi-digital twin model corresponding to the target scene based on the target positioning data and the target ranging data corresponding to each mobile robot and a scene map of the target scene, which is generated in advance.
2. The quasi-digital twin method of claim 1, wherein the wireless communication module is a WiFi router and the wireless signal is a WiFi signal;
the calculating the strength of the wireless signal includes:
and determining the distance between the corresponding mobile robot and the WiFi router according to the acquired WiFi signal strength and a pre-established correspondence between WiFi signal strength and the distance between the mobile robot and the WiFi router, and obtaining auxiliary positioning data of the mobile robot according to the determined distance and the position information of the WiFi router.
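The pre-established signal-strength-to-distance correspondence can be illustrated with a standard log-distance path-loss model. The reference RSSI at 1 m and the path-loss exponent below are illustrative placeholders for site-specific calibration, not values from the claim:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Estimate the robot-to-router distance in metres from a received
    WiFi signal strength, using the log-distance path-loss model:

        RSSI(d) = RSSI(1 m) - 10 * n * log10(d)

    `rssi_at_1m` and `path_loss_exp` (n) would be calibrated per site;
    the defaults here are typical indoor values, chosen for illustration.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))
```

Given such distances to several WiFi routers at known positions, the robot's auxiliary positioning data can then be obtained by intersecting the distance circles (trilateration).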
3. The quasi-digital twin method of claim 1, wherein the acquiring image positioning data of a moving object in the target scene based on the image signal comprises:
identifying a moving object in the image signal;
and determining image positioning data of the moving object based on the image signals and the actual position information and parameter information of the image acquisition module.
4. The quasi-digital twin method of claim 3, wherein, when the moving object is identified from a plurality of image signals, said determining image positioning data of the moving object comprises:
and determining distance information between the moving object and the corresponding image acquisition module according to two image signals with the maximum number of pixels surrounded by the outline of the moving object, and determining image positioning data of the moving object according to the distance information, the actual position information and the parameter information of the corresponding image acquisition module.
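The claim does not prescribe how the distance is computed from the two selected image signals. One common approach, given two calibrated cameras with overlapping views, is rectified-stereo triangulation; the focal length and baseline below are assumed camera parameters, used only to sketch the idea:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Distance from a rectified camera pair to an object via the classic
    triangulation relation Z = f * B / disparity, where f is the focal
    length in pixels, B the camera baseline in metres, and the disparity
    is the horizontal shift of the object between the two images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("object must appear shifted between the two views")
    return focal_px * baseline_m / disparity
```

The resulting distance, combined with the actual position and parameter information of the image acquisition modules, yields the object's image positioning data in the scene coordinate system.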
5. The quasi-digital twin method of claim 1, wherein the moving object comprises a moving obstacle, and the fusion calculation on all the resolved airborne positioning data and airborne ranging data, the auxiliary positioning data and the image positioning data comprises fusing the image positioning data of the moving obstacle with the airborne ranging data by:
judging whether the mobile robot has airborne ranging data corresponding to the image positioning data of the moving obstacle;
if no such airborne ranging data exists, taking the image positioning data of the moving obstacle as the target positioning data of the moving obstacle;
if such data exists, judging whether the overlap ratio between the moving-obstacle positioning data corresponding to the existing airborne ranging data and the image positioning data of the moving obstacle is higher than a preset overlap threshold;
if higher than the threshold, performing fusion calculation on the existing airborne ranging data and the image positioning data of the moving obstacle, and taking the fusion result as the target positioning data of the moving obstacle;
if not higher than the threshold, taking the moving-obstacle positioning data corresponding to the existing airborne ranging data as the target positioning data of the moving obstacle;
and determining target ranging data of the mobile robot according to the target positioning data of the moving obstacle.
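The branch logic of this claim can be sketched as follows. The overlap threshold and the fusion weighting are illustrative assumptions, since the claim leaves the specific values and the fusion formula to the implementation:

```python
def fuse_obstacle(image_pos, onboard_pos, overlap,
                  threshold=0.6, w_onboard=0.7):
    """Decide the target positioning data of a moving obstacle (sketch):
      - no matching airborne ranging data -> use image positioning alone
      - overlap above threshold           -> weighted fusion of both
      - otherwise                         -> trust the airborne ranging data
    `threshold` and `w_onboard` are hypothetical parameters."""
    if onboard_pos is None:
        return image_pos
    if overlap > threshold:
        return tuple(w_onboard * o + (1 - w_onboard) * i
                     for o, i in zip(onboard_pos, image_pos))
    return onboard_pos
```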
6. The quasi-digital twinning method of claim 5, further comprising predicting whether the mobile robot is likely to collide with the moving obstacle according to the steps of:
acquiring an estimated motion trail and an estimated motion speed of the moving obstacle based on the historical target positioning data of the moving obstacle and the corresponding running times;
acquiring an intersection point of a planned motion trail of the mobile robot and an estimated motion trail of the mobile obstacle;
calculating the time of the moving obstacle reaching the intersection point based on the current target positioning data, the estimated motion trail and the estimated motion speed of the moving obstacle, and recording the time as first time;
calculating the time of the mobile robot reaching the intersection point based on the current target positioning data, the planned motion trail and the preset motion speed of the mobile robot, and recording the time as second time;
based on the first time and the second time, predicting whether the mobile robot is likely to collide with the moving obstacle.
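The final comparison of the first and second times can be sketched as follows; the safety margin is an illustrative parameter, since the claim does not state how the two times are compared:

```python
def may_collide(obstacle_dist_m, obstacle_speed, robot_dist_m, robot_speed,
                margin_s=2.0):
    """Predict a possible collision at the trajectory intersection point.

    The obstacle's arrival time (first time) and the robot's arrival time
    (second time) are each distance-to-intersection divided by speed; a
    collision is flagged when the two times fall within `margin_s`
    seconds of each other (a hypothetical safety margin)."""
    t_obstacle = obstacle_dist_m / obstacle_speed   # first time
    t_robot = robot_dist_m / robot_speed            # second time
    return abs(t_obstacle - t_robot) < margin_s
```

A positive result would then trigger the early warning of claim 7 or the re-planning of claim 8.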
7. The quasi-digital twin method of claim 6, wherein, when it is predicted that the mobile robot is likely to collide with the moving obstacle, corresponding early-warning information is sent to an upper-level robot management system.
8. The quasi-digital twin method of claim 6, wherein, when it is predicted that a certain mobile robot is likely to collide with a plurality of the moving obstacles, the motion trajectory of that mobile robot is re-planned according to the quasi-digital twin model.
9. A quasi-digital twinning system of a mobile robot operating scene, the system comprising:
the airborne positioning data resolving module is used for acquiring the airborne positioning data obtained by the airborne positioning module of each mobile robot and resolving the airborne positioning data;
the airborne ranging data resolving module is used for acquiring the airborne ranging data obtained by the airborne ranging module of each mobile robot and resolving the airborne ranging data;
the auxiliary positioning data resolving module is used for acquiring wireless signals received by each mobile robot from the preset wireless communication module and resolving the intensity of the wireless signals to obtain auxiliary positioning data;
the image positioning data resolving module is used for acquiring image signals output by the image acquisition module arranged in the target scene and acquiring image positioning data of a moving object in the target scene based on the image signals;
the fusion calculation module is used for carrying out fusion calculation on all the resolved airborne positioning data and airborne ranging data, the auxiliary positioning data and the image positioning data to obtain target positioning data and target ranging data corresponding to each mobile robot under a unified coordinate system;
and the twin model building module is used for building a quasi-digital twin model corresponding to the target scene based on the target positioning data and the target ranging data corresponding to each mobile robot and a pre-generated scene map of the target scene.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the quasi-digital twin method according to any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the quasi-digital twin method according to any one of claims 1-8.
CN202310517881.3A 2023-05-09 2023-05-09 Quasi-digital twin method, system, equipment and medium for mobile robot operation scene Pending CN116629106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310517881.3A CN116629106A (en) 2023-05-09 2023-05-09 Quasi-digital twin method, system, equipment and medium for mobile robot operation scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310517881.3A CN116629106A (en) 2023-05-09 2023-05-09 Quasi-digital twin method, system, equipment and medium for mobile robot operation scene

Publications (1)

Publication Number Publication Date
CN116629106A true CN116629106A (en) 2023-08-22

Family

ID=87596531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310517881.3A Pending CN116629106A (en) 2023-05-09 2023-05-09 Quasi-digital twin method, system, equipment and medium for mobile robot operation scene

Country Status (1)

Country Link
CN (1) CN116629106A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117786147A * 2024-02-26 2024-03-29 北京飞渡科技股份有限公司 Method and device for displaying data in digital twin model visual field range
CN117786147B * 2024-02-26 2024-05-28 北京飞渡科技股份有限公司 Method and device for displaying data in digital twin model visual field range

Similar Documents

Publication Publication Date Title
US11885910B2 (en) Hybrid-view LIDAR-based object detection
EP3759562B1 (en) Camera based localization for autonomous vehicles
US10310087B2 (en) Range-view LIDAR-based object detection
CN111429574B (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN107015559B (en) Probabilistic inference of target tracking using hash weighted integration and summation
US20190310651A1 (en) Object Detection and Determination of Motion Information Using Curve-Fitting in Autonomous Vehicle Applications
CN111958592B (en) Image semantic analysis system and method for transformer substation inspection robot
US20180349746A1 (en) Top-View Lidar-Based Object Detection
US11092444B2 (en) Method and system for recording landmarks in a traffic environment of a mobile unit
WO2021003453A1 (en) Annotating high definition map data with semantic labels
US20200233061A1 (en) Method and system for creating an inverse sensor model and method for detecting obstacles
CN112506222A (en) Unmanned aerial vehicle intelligent obstacle avoidance method and device
CN111895989A (en) Robot positioning method and device and electronic equipment
CN112518739A (en) Intelligent self-navigation method for reconnaissance of tracked chassis robot
CN110936959B (en) On-line diagnosis and prediction of vehicle perception system
CN111947644B (en) Outdoor mobile robot positioning method and system and electronic equipment thereof
CN113189977A (en) Intelligent navigation path planning system and method for robot
Wang et al. Realtime wide-area vehicle trajectory tracking using millimeter-wave radar sensors and the open TJRD TS dataset
CN116629106A (en) Quasi-digital twin method, system, equipment and medium for mobile robot operation scene
KR20180087519A (en) Method for estimating reliability of distance type witch is estimated corresponding to measurement distance of laser range finder and localization of mobile robot using the same
CN111856499A (en) Map construction method and device based on laser radar
Hebel et al. Change detection in urban areas by direct comparison of multi-view and multi-temporal ALS data
Siddiqui UWB RTLS for construction equipment localization: experimental performance analysis and fusion with video data
CN111951552A (en) Method and related device for risk management in automatic driving
US20220221585A1 (en) Systems and methods for monitoring lidar sensor health

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination