CN118010009A - Multi-mode navigation system of educational robot in complex environment


Info

Publication number
CN118010009A
Authority
CN
China
Prior art keywords
data
fusion
navigation
robot
path
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410424755.8A
Other languages
Chinese (zh)
Other versions
CN118010009B (en)
Inventor
吕远 (Lü Yuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aibingo Technology Co., Ltd.
Original Assignee
Beijing Aibingo Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aibingo Technology Co ltd filed Critical Beijing Aibingo Technology Co ltd
Priority to CN202410424755.8A
Publication of CN118010009A
Application granted
Publication of CN118010009B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/02 Knowledge representation; Symbolic representation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/3815 Road data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3807 Creation or updating of map data characterised by the type of data
    • G01C21/383 Indoor data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-modal navigation system for an educational robot in a complex environment, relating to the technical field of educational robots. An image quality coefficient is constructed from the quality analysis results, and low-quality images are screened out according to the image quality coefficient and then optimized. The fusion state of the fused navigation data is identified and a fusion degree is constructed from the identification data; if the obtained fusion degree is lower than expected, a corresponding multi-source data fusion optimization scheme is given by a data fusion optimization knowledge graph according to the fusion characteristics acquired through recognition. A navigation map is constructed from the fused navigation data, moving paths along which the robot moves to the target position are planned, path priorities are constructed according to the condition of each moving path, and the target path is screened out for the robot according to the path priorities. After path planning is completed, a target path and a standby path are screened out, so that the robot can switch between them when road conditions deteriorate, improving navigation stability.

Description

Multi-mode navigation system of educational robot in complex environment
Technical Field
The invention relates to the technical field of educational robots, in particular to a multi-mode navigation system of an educational robot in a complex environment.
Background
Educational robots are a typical application of artificial intelligence in the field of education, aimed at fostering students' analytical, creative and practical abilities. They combine artificial intelligence, speech recognition and bionic technology, and fall mainly into two categories: robot education and educational service robots. The former mainly stimulates students' interest in and enthusiasm for intelligent technology, for example through robot competitions for primary and secondary school students; the latter focuses on applying robots in educational settings, such as classroom teaching assistance.
Educational robots serve a variety of functions and roles. First, they can interact with students and provide personalized learning guidance and feedback, stimulating students' interest and engagement. Second, an educational robot can serve as an auxiliary teaching tool, presenting learning content in multimedia forms such as images, sound and video to improve learning outcomes. In addition, it can help students practice language skills such as spoken communication, listening comprehension and pronunciation correction, and can provide a programming learning platform that cultivates students' creativity and problem-solving ability. Meanwhile, an educational robot can assist teachers in teaching, reducing their workload so that they can pay more attention to the individual needs of students.
Multi-modal navigation refers to navigating and positioning a robot in a complex environment using multiple perception and interaction means, such as vision, sound and touch. Its hallmark is multi-modal sensor fusion: data from different kinds of sensors are fused to provide more accurate and comprehensive positioning and navigation information. This overcomes the insufficient information of a single sensor, reduces positioning errors in complex environments, and enables accurate positioning and navigation.
Chinese patent application publication No. CN106908055A discloses a multi-mode navigation method and a mobile robot, where the mobile robot comprises a first, a second and a third navigation module. The method acquires the first, second and third navigation data and the current values of the first, second and third navigation signals corresponding to the respective modules; calculates a first error between the current value of the first navigation signal and its target value, a second error between the current value of the second navigation signal and its target value, and a third error between the current value of the third navigation signal and its target value; and selects the navigation module corresponding to the smallest of the three errors for navigation.
Combining the contents of the above applications and prior art:
In classrooms or other areas with simple road conditions, an educational robot usually collects data through a single means such as machine vision or lidar to navigate and guide itself to the target position. When road conditions are more complex, for example when there are many obstacles or impassable areas in the robot's direction of travel, navigating by machine vision alone can make a recessed area ahead difficult to avoid; in such complex environments, multi-modal navigation must therefore be introduced during robot navigation. In conventional multi-modal navigation, however, the multi-source data collected by the robot's sensor group are usually fused directly, so the fusion effect is often poor, and when such data are used for robot navigation it may be difficult to plan an effective moving path.
For this reason, the present invention provides a multi-modal navigation system of an educational robot in a complex environment.
Disclosure of Invention
(One) solving the technical problems
Aiming at the deficiencies of the prior art, the invention provides a multi-modal navigation system of an educational robot in a complex environment. Low-quality images are screened out according to an image quality coefficient and then optimized. The fusion state of the fused navigation data is identified and a fusion degree is constructed from the identification data; if the obtained fusion degree is lower than expected, a corresponding multi-source data fusion optimization scheme is given by a data fusion optimization knowledge graph according to the fusion characteristics acquired through recognition. A navigation map is constructed from the fused navigation data, moving paths along which the robot moves to the target position are planned, path priorities are constructed according to the condition of each moving path, and the target path is screened out for the robot according to the path priorities. After path planning is completed, a target path and a standby path are screened out, so that the robot can switch between them when road conditions deteriorate, improving navigation stability and solving the problems noted in the background art.
(II) technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme:
A multi-modal navigation system of an educational robot in a complex environment comprises a complexity analysis unit for analyzing and acquiring the road complexity F of the road conditions in the moving field of view according to the distribution state of the obstacle regions; if the complexity F exceeds expectation, a multi-modal navigation instruction is issued. After the positions of all obstacles and obstacle regions in the moving field of view are acquired, the road complexity F of the road surface in the moving field of view is constructed, and the road conditions in the field of view are evaluated with F, for example in the following form:
F = a1·(n/d̄) + a2·(m/D̄) + a3·(n + m)
wherein the weight coefficients a1, a2 and a3 satisfy 0 < a1, a2, a3 < 1 and a1 + a2 + a3 = 1; n is the number of obstacles, d_ij is the distance from the ith obstacle to the jth obstacle, and d̄ is the mean value of the obstacle distances d_ij; m is the number of obstacle regions, D_ij is the distance between the ith and jth obstacle regions, and D̄ is the mean value of the distances D_ij between obstacle regions;
an image quality analysis unit, by which the robot acquires navigation data using the sensor group, preprocesses the navigation data and performs quality analysis on the image data; an image quality coefficient X is constructed from the quality analysis results, low-quality images are screened out according to X, and the low-quality images are optimized;
a fusion degree analysis unit for performing data fusion on the multi-source navigation data, identifying the fusion state of the fused navigation data and constructing a fusion degree R from the identification data; if the obtained fusion degree R is lower than expected, a corresponding multi-source data fusion optimization scheme is given by a data fusion optimization knowledge graph according to the fusion characteristics acquired through recognition;
a path planning unit for constructing a navigation map from the fused navigation data, marking the feasible and infeasible regions on the navigation map, planning moving paths along which the robot moves to the target position, constructing a path priority Y according to the condition of each moving path, and screening out a target path for the robot according to Y.
Further, the robot moving area and the target position are determined; the robot's imaging device images along the advancing direction, and a moving field of view is determined for the robot according to the imaging distance of the imaging device; the acquired field-of-view image is identified to acquire the corresponding obstacles; a radar module measures the road surface within the moving field of view, protruding or recessed areas that are difficult for the robot to traverse are determined according to the robot's moving capability, such areas are designated obstacle regions, and a corresponding position is determined for each obstacle region.
Further, after receiving the multi-modal navigation instruction, the robot acquires detection data within the moving field of view using the sensor group to obtain navigation data; the acquired navigation data are summarized separately and preprocessed, and the preprocessed navigation data set is constructed.
Further, the acquired image data within the moving field of view are arranged by acquisition time, and each image is divided in turn into a number of blocks for image quality analysis to acquire the corresponding image quality data; the image quality coefficient X of each image is constructed from the image quality data; if the obtained image quality coefficient X is below the quality threshold, the corresponding image is treated as a low-quality image and optimized.
Further, the image quality coefficient X is acquired as follows: after linear normalization of the contrast C, the sharpness (definition) P and the distortion K, the corresponding data values are mapped to the interval [0, 1], for example in the following form:
X = (1/m)·Σ_{i=1..m} [ β1·(C_i/C0) + β2·(P_i/P0) + β3·(K0/K_i) ]
wherein C0 is the qualified reference value of the contrast, C̄ is the mean value of the contrast and C_i is the contrast value in the ith block; P0 is the qualified reference value of the sharpness, P̄ is the mean value of the sharpness and P_i is the sharpness value in the ith block; K0 is the qualified reference value of the distortion, K̄ is the mean value of the distortion and K_i is the distortion value in the ith block; the weight coefficients β1, β2 and β3 satisfy 0 < β1, β2, β3 < 1 and β1 + β2 + β3 = 1; m is the number of blocks.
Further, the multi-source navigation data are fused to obtain the fused navigation data; the fusion state of the fused navigation data is judged, the corresponding fusion state data are acquired, and the fusion degree R of the navigation data is constructed as follows: after linear normalization, the position accuracy W and the data delay T are mapped to the interval [0, 1], for example according to the formula
R = b1·W + b2·(1 − T)
wherein the weight coefficients b1 and b2 satisfy 0 < b1, b2 < 1 and b1 + b2 = 1; if the obtained fusion degree R is lower than the fusion degree threshold, an optimization instruction is sent.
Further, after receiving the optimization instruction, constructing a data fusion optimization knowledge graph by taking data fusion optimization as a target word; identifying the fusion state of the fusion navigation data to obtain corresponding fusion characteristics; and according to the correspondence between the data fusion optimization scheme and the fusion characteristics, giving a corresponding multi-source data fusion optimization scheme by a data fusion optimization knowledge graph, executing the data fusion optimization scheme, and optimizing the fusion of the current navigation data.
Further, a navigation map of the environment where the robot is located is constructed by utilizing the fused navigation data, and a feasible region and an infeasible region are identified in the moving visual field, wherein the infeasible region comprises barriers and barrier regions; and marking the identified and acquired obstacle and obstacle area on a navigation map, and updating navigation map information in real time along the moving direction of the robot.
Further, an adaptive Monte Carlo localization algorithm is used to obtain the current position of the robot, and the current position is marked on the navigation map; a path planning algorithm plans, according to the navigation map, the fused navigation data and the current position of the robot, the moving paths along which the robot moves to the target position, and each moving path is marked on the navigation map; the path priority Y of each path is calculated, the moving path with the highest priority Y is selected as the target path and the next as the standby path, and the robot moves along the target path.
Further, the path priority Y is acquired as follows: after linear normalization of the ground flatness G and the path drop H, for example,
Y = (1/n)·Σ_{i=1..n} [ c1·G_i + c2·(1 − H_i) ]
wherein n is the number of sub-paths, H_i is the path drop of the ith sub-path and G_i is the ground flatness of the ith sub-path; the weight coefficients c1 and c2 satisfy 0 < c1, c2 < 1 and c1 + c2 = 1.
(III) beneficial effects
The invention provides a multi-mode navigation system of an educational robot in a complex environment, which has the following beneficial effects:
1. The difficulty of the robot moving through the area ahead is judged and estimated according to the complexity. If moving through the area ahead is difficult, a single navigation mode can hardly achieve the expected effect; judging the navigation difficulty ahead of the robot by constructing the complexity therefore improves the efficiency and reliability of subsequent navigation.
2. The image quality coefficient X comprehensively judges the quality of each image. If the quality is low, the image is treated as a low-quality image and re-optimized in a targeted manner according to its quality parameters and image characteristics. This improves optimization efficiency, raises the quality of some low-quality images, and gives higher accuracy and reliability in subsequent feature extraction and target recognition.
3. The fusion state of the multi-source navigation data is identified, the corresponding state parameters are acquired, and the fusion degree R of the navigation data is constructed. With the constructed fusion degree R, the multi-source data fusion state can be evaluated and it can be confirmed whether the current fusion effect meets expectations, guaranteeing the fusion effect once multi-source data fusion is completed.
4. By constructing the knowledge graph, an optimization scheme is output quickly on the basis of the identified multi-source fusion data. After the scheme is executed, the current fusion state of the multi-source data is optimized and improved, improving the efficiency and accuracy of data fusion.
5. Planning several moving paths for the robot increases its options when moving to the target position. On the basis of the acquired moving paths, the corresponding path priority Y is analyzed for each moving path from its state data, and the target path and a standby path are screened out according to Y after path planning is completed, so that the two can be switched when road conditions deteriorate, improving navigation stability.
Drawings
FIG. 1 is a schematic diagram of a multi-modal navigation system of an educational robot of the present invention;
FIG. 2 is a schematic diagram of the navigation method of the multi-modal robot in a complex environment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1 and 2, the present invention provides a multi-modal navigation system of an educational robot in a complex environment, comprising:
the complexity analysis unit, which is used for analyzing and acquiring the road complexity F of the road conditions in the moving field of view according to the distribution state of the obstacle regions; if the complexity F exceeds expectation, a multi-modal navigation instruction is issued;
when applied, the method comprises the following steps:
Step 101: when the robot needs navigation, the robot moving area and the target position are determined; the robot's imaging device images along the advancing direction, and a moving field of view is determined for the robot according to the imaging distance of the imaging device; if the robot is in a moving state, the judgment standard for obstacles is set for the robot with reference to its moving capability;
Step 102: the imaging device captures the moving field of view ahead, and the acquired field-of-view image is identified with a trained obstacle recognition model according to the obstacle standard to acquire the corresponding obstacles; after a reference object is selected in the moving field of view, a corresponding position is determined for each obstacle using a trilateration algorithm;
Step 103: the radar module measures the road surface within the moving field of view; after a reference surface is set, protruding or recessed areas that are difficult for the robot to traverse are determined according to the robot's moving capability and designated obstacle regions, and after a reference object is selected in the moving field of view, a corresponding position is determined for each obstacle region using the trilateration algorithm;
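By way of illustration, the trilateration step can be sketched in a few lines of Python; the patent does not give its formulation, so the least-squares linearization below, together with the anchor coordinates and function names, is an assumption for the example only.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Estimate a 2D position from >= 3 reference points and measured ranges.

    Subtracting the first circle equation from the others linearizes the
    system, which is then solved in the least-squares sense.
    """
    x0, y0 = anchors[0]
    r0 = ranges[0]
    # Each remaining anchor i contributes one linear equation in (x, y):
    # 2(xi - x0)x + 2(yi - y0)y = r0^2 - ri^2 + xi^2 + yi^2 - x0^2 - y0^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three reference objects and ranges to one obstacle (made-up data)
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([2.0, 1.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, ranges))  # ~ [2.0, 1.0]
```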
In use, after the preliminary image recognition and the trilateration algorithm, the areas that cannot be traversed are screened out in combination with the robot's moving capability and performance; when the robot needs navigation, these areas are avoided, improving the robot's moving efficiency.
Step 104: after the positions of the obstacles and obstacle regions in the moving field of view are obtained, the road complexity F in the moving field of view is constructed, and the road conditions in the field of view are evaluated with F, for example in the following form:
F = a1·(n/d̄) + a2·(m/D̄) + a3·(n + m)
wherein the weight coefficients a1, a2 and a3 satisfy 0 < a1, a2, a3 < 1 and a1 + a2 + a3 = 1, and are determined with reference to the analytic hierarchy process; n is the number of obstacles, d_ij is the distance from the ith obstacle to the jth obstacle, and d̄ is the mean value of the obstacle distances d_ij; m is the number of obstacle regions, D_ij is the distance between the ith and jth obstacle regions, and D̄ is the mean value of the distances D_ij between obstacle regions;
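Under the form given above (reconstructed from the patent's variable definitions), a minimal Python sketch of the complexity computation could look as follows; the weight values and coordinates are hypothetical, and in practice the weights would come from the analytic hierarchy process as described.

```python
import numpy as np

def mean_pairwise_distance(points: np.ndarray) -> float:
    """Mean of all pairwise Euclidean distances between points."""
    n = len(points)
    if n < 2:
        return np.inf  # no pair: treat spacing as unbounded
    dists = [np.linalg.norm(points[i] - points[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def road_complexity(obstacles: np.ndarray, regions: np.ndarray,
                    alpha=(0.4, 0.4, 0.2)) -> float:
    """Complexity F: more obstacles/regions with smaller mean spacing
    (a denser infeasible area) give a larger F. alpha are AHP weights."""
    a1, a2, a3 = alpha
    n, m = len(obstacles), len(regions)
    d_bar = mean_pairwise_distance(obstacles)
    D_bar = mean_pairwise_distance(regions)
    return a1 * (n / d_bar) + a2 * (m / D_bar) + a3 * (n + m)

# Obstacle centers and obstacle-region centers in metres (made-up data)
obstacles = np.array([[1.0, 2.0], [1.5, 2.2], [3.0, 0.5]])
regions = np.array([[2.0, 1.0], [2.5, 1.2]])
F = road_complexity(obstacles, regions)
print(f"road complexity F = {F:.2f}")  # compare against a preset threshold
```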
A complexity threshold is preset according to the navigation management expectation and the robot's historical data; if the complexity F exceeds the complexity threshold, the road conditions in the area ahead of the robot are poor, the robot may not easily reach its destination if it still navigates in a single mode, multi-modal navigation needs to be started, and a multi-modal navigation instruction is sent;
in use, the contents of steps 101 to 103 are combined:
When the robot needs navigation, after the obstacle regions and obstacles are screened from the moving field of view, the distribution density of the infeasible areas in the moving field of view is analyzed from the positions of the obstacle regions and obstacles, and the corresponding complexity is obtained. The difficulty of moving through the area ahead is then judged and estimated from the complexity. If moving through the area ahead is difficult, a single navigation mode can hardly achieve the expected effect; for example, if a large recessed area lies ahead, machine vision or image recognition alone cannot achieve the required navigation effect. Judging the navigation difficulty ahead of the robot by constructing the complexity therefore improves the efficiency and reliability of subsequent navigation.
The image quality analysis unit, by which the robot acquires navigation data using the sensor group, preprocesses the navigation data and performs quality analysis on the image data; an image quality coefficient X is constructed from the quality analysis results, low-quality images are screened out according to X, and the low-quality images are optimized;
when applied, the method comprises the following steps:
Step 201: after receiving the multi-modal navigation instruction, the robot acquires detection data within the moving field of view through a sensor group comprising a lidar, an imaging device, infrared sensors and the like to obtain navigation data; the lidar is used for map construction and obstacle detection, the imaging device for visual recognition and navigation, and the infrared sensors for detecting distance, human posture and the like; the acquired navigation data are collected, summarized separately and preprocessed, and the preprocessed navigation data set is constructed;
When navigation data are acquired in multiple modes, preprocessing the data improves their accuracy and reliability;
Step 202, arranging the acquired image data in the moving field of view along the acquisition time, and sequentially dividing the image into a plurality of blocks for image quality analysis to acquire corresponding image quality data;
the quality parameters of the image mainly include the following aspects:
Resolution: resolution is an important parameter of image quality and determines how finely the image is rendered. The higher the resolution, the more detail the image carries and the better the visual effect. Common representations are pixel dimensions (px) and dots per inch (dpi). Color depth: the color depth determines how many colors an image can represent. It is usually expressed in bits; the more bits, the richer the colors the image can represent and the smoother the color transitions.
Contrast: contrast is the difference in brightness between the brightest and darkest portions of an image. The higher the contrast, the more distinct the light and dark levels and the more vivid the visual effect. Sharpness (definition): sharpness is the clarity of fine detail and boundaries in the image, and it affects the visibility of image details. Distortion: the distortion describes how much information the image loses during transmission or processing. The lower the distortion, the closer the image is to its original state and the higher its quality.
These parameters together determine the quality of the image, but different application scenarios and requirements may place different emphasis on the parameters. In practical application, proper parameter combinations are required to be selected according to specific situations so as to achieve the best visual effect and meet application requirements.
The image quality coefficient X of each image is constructed from the image quality data as follows: after linear normalization of the contrast C, the sharpness P and the distortion K, the corresponding data values are mapped to the interval [0, 1], for example in the following form:
X = (1/m)·Σ_{i=1..m} [ β1·(C_i/C0) + β2·(P_i/P0) + β3·(K0/K_i) ]
wherein C0 is the qualified reference value of the contrast, C̄ is the mean value of the contrast and C_i is the contrast value in the ith block; P0 is the qualified reference value of the sharpness, P̄ is the mean value of the sharpness and P_i is the sharpness value in the ith block; K0 is the qualified reference value of the distortion, K̄ is the mean value of the distortion and K_i is the distortion value in the ith block; the weight coefficients β1, β2 and β3 satisfy 0 < β1, β2, β3 < 1 and β1 + β2 + β3 = 1; m is the number of blocks;
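A short Python sketch of the per-block quality coefficient in the form above; the weights, reference values, threshold and block measurements are illustrative assumptions.

```python
import numpy as np

def block_quality_coefficient(C, P, K, C0, P0, K0, beta=(0.4, 0.4, 0.2)):
    """Image quality coefficient X over m blocks.

    C, P, K: per-block contrast, sharpness and distortion values
    C0, P0, K0: qualified reference values for each quantity
    beta: hypothetical weights (summing to 1); higher contrast/sharpness
    and lower distortion push X upward.
    """
    C, P, K = np.asarray(C), np.asarray(P), np.asarray(K)
    b1, b2, b3 = beta
    per_block = b1 * (C / C0) + b2 * (P / P0) + b3 * (K0 / K)
    return float(per_block.mean())

# Per-block measurements for one image (made-up values)
X = block_quality_coefficient(
    C=[0.52, 0.48, 0.55], P=[0.61, 0.58, 0.60], K=[0.10, 0.12, 0.09],
    C0=0.50, P0=0.60, K0=0.10)
print(f"X = {X:.3f}")
QUALITY_THRESHOLD = 0.9  # hypothetical preset threshold
if X < QUALITY_THRESHOLD:
    print("low-quality image: send for optimization")
```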
A quality threshold for the image is preset according to the requirements of image feature recognition; if the acquired image quality coefficient X is lower than the quality threshold, the corresponding image quality is low and needs optimization, so the corresponding image is treated as a low-quality image and optimized in various ways; the following methods are common and effective:
color adjustment: by adjusting the contrast, brightness, tone, saturation and other parameters of the image, the whole color of the image can be plump, and the definition and visual effect of the image can be enhanced. This is typically accomplished by specialized image processing software, such as Photoshop, using curves, tone scale, saturation, etc. tools.
Sharpening: the sharpening of the image may enhance the details and sharpness of the image, improving the performance of the low contrast image. In the image processing software, a sharpening filter or Unsharp Mask or other tool may be used to sharpen the image. It should be noted that oversharpening may cause jagged edges to appear in the image, thus requiring moderate adjustment.
Noise reduction using a filter: filters, such as median filters and gaussian filters, can reduce noise in the image and improve the quality and detail of the image.
Super resolution technology: this is an important method of processing low quality images. The low resolution image can be converted into a high resolution image by an image interpolation or synthesis method, so that the quality and detail of the image are improved. Common super resolution methods include interpolation, convolutional neural networks, and generation of countermeasure networks.
Image restoration techniques: the technology aims at recovering an original image from a distorted image, and restores details and definition of the image by establishing an image distortion model and an optimization algorithm.
In addition, some online tools and software, such as BigJPG and other picture-upscaling or photo-restoration services, also provide functions for optimizing low-quality images, such as increasing picture resolution or repairing old photographs.
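As a concrete illustration of the filter-based denoising and sharpening methods described above, the following OpenCV sketch applies median filtering followed by an unsharp mask; the file name and parameter values are hypothetical.

```python
import cv2

def optimize_low_quality(img):
    """Denoise then unsharp-mask a low-quality image (BGR uint8).

    Median filtering suppresses salt-and-pepper noise; the unsharp
    mask (original + amount * (original - blurred)) restores edge
    sharpness lost to blurring. Parameters are illustrative only.
    """
    denoised = cv2.medianBlur(img, 3)
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
    # addWeighted(a, 1 + amount, b, -amount, 0) implements the unsharp mask
    sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)
    return sharpened

img = cv2.imread("frame_0421.png")  # hypothetical captured frame
cv2.imwrite("frame_0421_opt.png", optimize_low_quality(img))
```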
The images whose quality meets expectations are identified, feature extraction and target recognition are completed, and the corresponding feature data are obtained;
in use, the contents of steps 201 and 202 are combined:
After several image data are acquired, they are initially optimized, and the image quality coefficient X is constructed on this basis; the quality of each image is comprehensively judged with X. If the image quality is low, the image is treated as a low-quality image and re-optimized in a targeted manner according to its quality parameters and image characteristics. This improves optimization efficiency, raises the quality of some low-quality images, and gives higher accuracy and reliability in subsequent feature extraction and target recognition.
The fusion degree analysis unit, which is used for performing data fusion on the multi-source navigation data, identifying the fusion state of the fused navigation data and constructing a fusion degree R from the identification data; if the obtained fusion degree R is lower than expected, a corresponding multi-source data fusion optimization scheme is given by a data fusion optimization knowledge graph according to the fusion characteristics acquired through recognition;
when applied, the method comprises the following steps:
Step 301: after the navigation data acquired by the sensor group are preprocessed and optimized, the acquired multi-source navigation data from machine vision, lidar and ultrasonic sensors are fused using methods such as Kalman filtering or particle filtering, and the fused navigation data are acquired;
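For intuition, a scalar Kalman measurement update, the core operation behind the Kalman-filter fusion mentioned above, can be sketched as follows; the variances and measurements are made-up values, and a real system would fuse full state vectors rather than one coordinate.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update.

    x, P: current state estimate and its variance
    z, R: new measurement and its variance
    """
    K = P / (P + R)            # Kalman gain
    x = x + K * (z - x)        # corrected estimate
    P = (1 - K) * P            # reduced uncertainty
    return x, P

# Fuse one robot-position coordinate from several sources (made-up numbers):
x, P = 2.00, 0.5 ** 2          # prior from wheel odometry
x, P = kalman_update(x, P, z=2.30, R=0.2 ** 2)  # lidar fix, more precise
x, P = kalman_update(x, P, z=2.10, R=0.4 ** 2)  # visual fix, less precise
print(f"fused position = {x:.3f} m, std = {P ** 0.5:.3f} m")
```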
The fusion state of the fused navigation data is judged: after the fused data are identified, the corresponding fusion state data, such as speed accuracy, position accuracy, data consistency and data delay, are acquired, and the fusion degree R of the navigation data is constructed as follows: after linear normalization, the position accuracy W and the data delay T are mapped to the interval [0, 1], for example according to the formula
R = b1·W + b2·(1 − T)
wherein the weight coefficients b1 and b2 satisfy 0 < b1, b2 < 1 and b1 + b2 = 1, and can be obtained with reference to the analytic hierarchy process;
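A direct transcription of the fusion degree in the form above, with hypothetical weights, threshold and inputs:

```python
def fusion_degree(pos_accuracy: float, delay: float,
                  b1: float = 0.6, b2: float = 0.4) -> float:
    """Fusion degree R from normalized position accuracy W and delay T.

    Both inputs are assumed already mapped to [0, 1]; a large delay
    lowers R, so it enters as (1 - T). b1, b2 are hypothetical
    AHP-style weights with b1 + b2 = 1.
    """
    return b1 * pos_accuracy + b2 * (1.0 - delay)

R = fusion_degree(pos_accuracy=0.82, delay=0.15)
print(f"R = {R:.3f}")
if R < 0.7:                    # hypothetical fusion degree threshold
    print("send optimization instruction")
```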
A fusion degree threshold is preset according to the fusion expectation and historical data of the navigation data; if the obtained fusion degree R is lower than the fusion degree threshold, the current navigation data fusion state is poor and may fail to achieve the intended effect when used for navigation, so optimization is needed and an optimization instruction is sent; otherwise the current navigation data fusion is complete and the fused navigation data can be applied to navigation;
In use, after the navigation data are preliminarily fused, the fusion state of the multi-source navigation data is identified, the corresponding state parameters are acquired, and the fusion degree R of the navigation data is constructed. With the constructed fusion degree R, the multi-source data fusion state can be evaluated and it can be confirmed whether the current fusion effect meets expectations, guaranteeing the fusion effect once multi-source data fusion is completed.
Step 302: after the optimization instruction is received, data fusion optimization is taken as the target term, and the data fusion optimization knowledge graph is constructed after deep retrieval and establishment of entity relations; the fusion state of the fused navigation data, for example the distribution state and quality of each item of data, is identified, and the corresponding fusion characteristics are acquired;
According to the correspondence between data fusion optimization schemes and fusion characteristics, a corresponding multi-source data fusion optimization scheme is given by the data fusion optimization knowledge graph and executed to optimize the fusion of the current navigation data, improving the efficiency and accuracy of data fusion;
in use, the contents of steps 301 and 302 are combined:
If the fusion state of the multi-source navigation data is poor, the data must be fused again or the current fusion state must be optimized. By constructing the knowledge graph, an optimization scheme is output quickly on the basis of the identified multi-source fusion data; after the scheme is executed, the current fusion state of the multi-source data is optimized and improved, improving the efficiency and accuracy of data fusion once the fusion degree has been judged.
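The patent does not fix a concrete graph structure; the toy sketch below uses networkx and a hand-written mapping from fusion characteristics to candidate optimization schemes, purely to illustrate the scheme-lookup idea.

```python
import networkx as nx

# Toy data-fusion-optimization knowledge graph: nodes are fusion
# characteristics and optimization schemes; an edge means "addressed by".
kg = nx.DiGraph()
kg.add_edge("high data delay", "reduce sensor sampling interval mismatch")
kg.add_edge("high data delay", "timestamp alignment / interpolation")
kg.add_edge("inconsistent position estimates", "re-tune Kalman noise covariances")
kg.add_edge("sparse lidar returns", "raise particle count in particle filter")

def optimization_schemes(fusion_features):
    """Return candidate schemes for the identified fusion features."""
    schemes = []
    for feature in fusion_features:
        if feature in kg:
            schemes.extend(kg.successors(feature))
    return schemes

print(optimization_schemes(["high data delay"]))
```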
The path planning unit, which is used for constructing a navigation map from the fused navigation data, marking the feasible and infeasible regions on the navigation map, planning moving paths along which the robot moves to the target position, constructing a path priority Y according to the condition of each moving path, and screening out a target path for the robot according to Y;
when applied, the method comprises the following steps:
Step 401: a navigation map of the robot's environment is constructed from the fused navigation data; a SLAM algorithm combined with the lidar data realizes the robot's localization and map construction, a deep learning algorithm identifies and classifies the images, and the feasible and infeasible regions in the moving field of view are identified, the infeasible regions comprising the obstacles and obstacle regions; the identified obstacles and obstacle regions are marked on the navigation map, and the navigation map information is updated in real time along the robot's moving direction;
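The patent does not prescribe a map representation; an occupancy grid is a common choice, and the sketch below marks hypothetical obstacle and obstacle-region cells and derives the feasible region from them.

```python
import numpy as np

FREE, OBSTACLE, OBSTACLE_REGION = 0, 1, 2

# 20 x 20 occupancy grid at an assumed 0.25 m resolution
grid = np.zeros((20, 20), dtype=np.uint8)

def mark(grid, cells, label):
    """Mark identified obstacle / obstacle-region cells on the map."""
    for r, c in cells:
        grid[r, c] = label

mark(grid, [(5, 5), (5, 6)], OBSTACLE)                       # e.g. a desk leg
mark(grid, [(10, c) for c in range(4, 9)], OBSTACLE_REGION)  # recessed strip

feasible = grid == FREE      # boolean mask of the feasible region
print(f"{feasible.sum()} of {grid.size} cells are traversable")
```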
In use, the corresponding obstacle regions and obstacles are acquired by identifying and analyzing the fused navigation data, and the feasible and infeasible regions are then screened out, so that the robot moves faster toward the target position with fewer obstructions, ensuring navigation efficiency.
Step 402: the current position of the robot is acquired using an adaptive Monte Carlo localization algorithm and marked on the navigation map; according to the navigation map, the fused navigation data and the robot's current position, a path planning algorithm plans the moving paths along which the robot moves to the target position, and each moving path is marked on the navigation map;
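The path planning algorithm is likewise not named in the patent; grid-based A* is a standard option and is sketched below over an occupancy grid like the one in the previous example (0 = traversable).

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """4-connected A* over an occupancy grid (0 = traversable cell).

    Returns the list of cells from start to goal, or None if no path.
    """
    def h(a, b):  # Manhattan distance, admissible on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    tie = count()                       # tie-breaker for the heap
    open_set = [(h(start, goal), next(tie), 0, start, None)]
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:            # already expanded with a better g
            continue
        came_from[cur] = parent
        if cur == goal:                 # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and grid[nb[0]][nb[1]] == 0
                    and g + 1 < g_cost.get(nb, float("inf"))):
                g_cost[nb] = g + 1
                heapq.heappush(open_set,
                               (g + 1 + h(nb, goal), next(tie), g + 1, nb, cur))
    return None

free_grid = [[0] * 20 for _ in range(20)]
path = astar(free_grid, (0, 0), (19, 19))
print(len(path), "cells")               # 39 cells on an empty 20 x 20 grid
```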
Step 403: the path priority Y of each path is calculated: the moving path is divided into several sub-paths, each sub-path is detected, and the ground flatness G_i of each sub-path and the drop between the highest and lowest positions of the sub-path, the path drop H_i, are obtained; after linear normalization of the ground flatness G and the path drop H, for example,
Y = (1/n)·Σ_{i=1..n} [ c1·G_i + c2·(1 − H_i) ]
wherein n is the number of sub-paths, H_i is the path drop of the ith sub-path and G_i is the ground flatness of the ith sub-path; the weight coefficients c1 and c2 satisfy 0 < c1, c2 < 1 and c1 + c2 = 1.
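Using the form above, the priority computation and the target/standby selection can be sketched as follows; the weights and sub-path measurements are illustrative assumptions.

```python
import numpy as np

def path_priority(flatness, drop, c1=0.6, c2=0.4):
    """Priority Y of one moving path from its sub-path measurements.

    flatness, drop: per-sub-path values already normalized to [0, 1];
    flat, level sub-paths raise Y. c1, c2 are hypothetical weights
    with c1 + c2 = 1.
    """
    flatness, drop = np.asarray(flatness), np.asarray(drop)
    return float(np.mean(c1 * flatness + c2 * (1.0 - drop)))

candidates = {
    "path A": path_priority([0.9, 0.8, 0.85], [0.1, 0.2, 0.1]),
    "path B": path_priority([0.7, 0.6, 0.9], [0.3, 0.4, 0.2]),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
print("target path:", ranked[0], "| standby path:", ranked[1])
```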
The moving path with the highest priority Y is selected as the target path and the next as the standby path, and the robot moves along the target path;
In use, the contents of steps 401 to 403 are combined:
After the robot's position in the navigation map is determined, several moving paths are planned for it, increasing its options when moving to the target position. On the basis of the acquired moving paths, the corresponding path priority Y is analyzed for each moving path from its state data, and the target path and the standby path are screened out according to Y after path planning is completed, so that the two can be switched when road conditions deteriorate, improving navigation stability.
The analytic hierarchy process (AHP) is a decision-making method whose core idea is to decompose the elements relevant to a decision problem into levels such as targets, criteria and schemes, and to perform qualitative and quantitative analysis on this basis. The method is suitable for target systems with hierarchically interleaved evaluation indicators, especially when target values are difficult to describe quantitatively.
The analytic hierarchy process is widely applied to the fields of decision analysis, syntactic structure analysis and the like. In decision analysis, the problems are decomposed into different composition factors, and the factors are aggregated and combined according to different levels according to the mutual correlation influence among the factors and the membership to form a multi-level analysis structure model, so that the problems are finally classified into the determination of the relative importance weight of the lowest level (scheme for decision, measure and the like) relative to the highest level (total target) or the arrangement of the relative priority order.
The construction method of the data fusion optimization knowledge graph mainly comprises the following steps:
data preprocessing: this is the first step in building the knowledge-graph, involving the cleaning, sorting and normalization of the raw data. Missing values need to be identified and processed, the missing values are filled up using a suitable filling method, and data normalization processing is performed so that data between different data sources can be reliably compared and fused.
Knowledge extraction: and extracting entity, relation and entity attribute information from the preprocessed data. This may be accomplished by natural language processing, entity recognition, etc. techniques to obtain the desired information from unstructured and semi-structured data.
Entity alignment: the entities obtained by extraction are matched and filled into the constructed schema-layer ontology. This involves entity linking, data fusion, and conflict detection and resolution, with the aim of finding differently identified entities that represent the same real-world object and merging them into one entity object with a globally unique identifier.
Knowledge fusion: and realizing the fusion of a conceptual layer and a data layer of the data by methods of body alignment, entity matching and the like. Ontology alignment emphasizes fusion of concept layers, and main works include concept merging, concept context merging, and attribute definition merging of concepts. Entity matching emphasizes the fusion of data layers, and the main work includes entity linking, data fusion and conflict detection and resolution.
And (3) building a relation model: in the knowledge graph, the relationship between entities is an important component. A complex relationship model needs to be constructed to process complex relationship types such as 1-to-N, N-to-1, N-to-N and the like. This may be accomplished by using various models, such as the KG2E model, etc., to characterize the location and uncertainty of entities and relationships in semantic space.
Knowledge reasoning: new knowledge or conclusions are obtained in various ways, which need to satisfy semantics. This is mainly divided into ontology reasoning and rule reasoning, and can be realized by defining reasoning relation rules and the like.
A combination of top-down and bottom-up construction is often used throughout the process. First, the schema layer is constructed top-down on the basis of knowledge extraction; new knowledge and data are then summarized bottom-up so that the schema layer is iteratively updated, and a new round of entity filling is performed on the updated schema layer.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more sets of available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state drive).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the elements is merely a division of some logic functions, and there may be additional divisions in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-modal navigation system of an educational robot in a complex environment, characterized in that it comprises:
a complexity analysis unit, which is used for analyzing and acquiring the road complexity F of the road conditions in the moving field of view according to the distribution state of the obstacle regions, a multi-modal navigation instruction being issued if the complexity F exceeds expectation; after the positions of all obstacles and obstacle regions in the moving field of view are acquired, the road complexity F of the road surface in the moving field of view is constructed, and the road conditions in the field of view are evaluated with F in the following manner:
F = a1·(n/d̄) + a2·(m/D̄) + a3·(n + m)
wherein the weight coefficients a1, a2 and a3 satisfy 0 < a1, a2, a3 < 1 and a1 + a2 + a3 = 1; n is the number of obstacles, d_ij is the distance from the ith obstacle to the jth obstacle, and d̄ is the mean value of the obstacle distances d_ij; m is the number of obstacle regions, D_ij is the distance between the ith and jth obstacle regions, and D̄ is the mean value of the distances D_ij between obstacle regions;
an image quality analysis unit, by which the robot acquires navigation data using the sensor group, preprocesses the navigation data and performs quality analysis on the image data; an image quality coefficient X is constructed from the quality analysis results, low-quality images are screened out according to X, and the low-quality images are optimized;
a fusion degree analysis unit for performing data fusion on the multi-source navigation data, identifying the fusion state of the fused navigation data and constructing a fusion degree R from the identification data; if the obtained fusion degree R is lower than expected, a corresponding multi-source data fusion optimization scheme is given by a data fusion optimization knowledge graph according to the fusion characteristics acquired through recognition;
a path planning unit for constructing a navigation map from the fused navigation data, marking the feasible and infeasible regions on the navigation map, planning moving paths along which the robot moves to the target position, constructing a path priority Y according to the condition of each moving path, and screening out a target path for the robot according to Y.
2. The multi-modal navigation system of an educational robot in a complex environment of claim 1, wherein:
Determining a robot moving area and a target position; imaging along the forward direction by the imaging device of the robot, and determining a moving field of view for the robot according to the imaging distance of the imaging device; identifying the acquired field-of-view image to acquire the corresponding obstacles; measuring the road surface in the moving field of view with the radar module, determining, according to the moving capability of the robot, the protruding or recessed areas of the road surface that are difficult for the robot to traverse, determining these areas as obstacle regions, and determining the corresponding position of each obstacle region.
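A minimal sketch of the obstacle-region determination of claim 2, assuming the radar measurements have been resampled onto a grid of surface heights relative to the road plane; the climb and drop limits standing in for the robot's moving capability are illustrative.

```python
import numpy as np

def obstacle_region_mask(height_map, max_climb=0.03, max_drop=0.03):
    """Boolean mask of cells whose protrusion or recess exceeds the
    robot's moving capability (heights in metres, relative to the road plane)."""
    h = np.asarray(height_map, dtype=float)
    return (h > max_climb) | (h < -max_drop)

mask = obstacle_region_mask([[0.00, 0.01, 0.08],
                             [0.00, -0.05, 0.00]])
print(mask)  # True where the surface is impassable for the robot
```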
3. The multi-modal navigation system of an educational robot in a complex environment of claim 1, wherein:
After receiving the multi-modal navigation instruction, the robot acquires detection data in the moving field of view through the sensor group to obtain navigation data; the acquired navigation data are then summarized, preprocessed, and assembled into a preprocessed navigation data set.
4. The multi-modal navigation system of an educational robot in a complex environment of claim 3, wherein:
Arranging the acquired image data in the moving field of view by acquisition time, and sequentially dividing each image into several blocks for image quality analysis to acquire the corresponding image quality data; constructing the image quality coefficient $Z$ of each image from the image quality data; if the obtained image quality coefficient $Z$ is below the quality threshold, the corresponding image is treated as a low-quality image and optimized.
5. The multi-modal navigation system of an educational robot in a complex environment of claim 4, wherein:
The image quality coefficient $Z$ is acquired as follows: after linear normalization of the contrast $C$, the sharpness $P$ and the distortion $K$, the corresponding data values are mapped to the interval $[0,1]$, in the following manner:

$$Z=\mu_{1}\cdot\frac{\bar{C}}{C_{0}}+\mu_{2}\cdot\frac{\bar{P}}{P_{0}}-\mu_{3}\cdot\frac{\bar{K}}{K_{0}},\qquad \bar{C}=\frac{1}{m}\sum_{i=1}^{m}C_{i},\quad \bar{P}=\frac{1}{m}\sum_{i=1}^{m}P_{i},\quad \bar{K}=\frac{1}{m}\sum_{i=1}^{m}K_{i}$$

where $C_{0}$ is the qualified reference value of the contrast, $\bar{C}$ is the mean value of the contrast, and $C_{i}$ is the value of the contrast in the $i$-th block; $P_{0}$ is the qualified reference value of the sharpness, $\bar{P}$ is the mean value of the sharpness, and $P_{i}$ is the value of the sharpness in the $i$-th block; $K_{0}$ is the qualified reference value of the distortion, $\bar{K}$ is the mean value of the distortion, and $K_{i}$ is the value of the distortion in the $i$-th block; the weight coefficients satisfy $\mu_{1}+\mu_{2}+\mu_{3}=1$, $0<\mu_{1}<1$, $0<\mu_{2}<1$ and $0<\mu_{3}<1$; $m$ is the number of blocks.
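Under the same assumptions as the reconstructed formula above, a sketch of the per-image screening step of claims 4 and 5; the reference values, weights and quality threshold are illustrative.

```python
import numpy as np

def image_quality(C, P, K, C0=1.0, P0=1.0, K0=0.5, mu=(0.4, 0.4, 0.2)):
    """Quality coefficient Z of one image from its per-block contrast C,
    sharpness P and distortion K (length-m sequences, already linearly
    normalized to [0, 1]); contrast and sharpness raise Z, distortion lowers it."""
    m1, m2, m3 = mu  # weights, assumed to sum to 1
    return (m1 * np.mean(C) / C0
            + m2 * np.mean(P) / P0
            - m3 * np.mean(K) / K0)

Z = image_quality(C=[0.70, 0.80, 0.75], P=[0.60, 0.65, 0.70], K=[0.10, 0.20, 0.15])
if Z < 0.5:  # quality threshold, illustrative
    print("low-quality image -> optimize before use")
```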
6. The multi-modal navigation system of an educational robot in a complex environment of claim 1, wherein:
fusing the multi-source navigation data to obtain fused navigation data; judging the fusion state of the fused navigation data, acquiring the corresponding fusion state data, and constructing the fusion degree $R$ of the navigation data in the following manner: after linear normalization of the position accuracy $D$ and the data delay $T$, the corresponding data are mapped to the interval $[0,1]$, according to the following formula:

$$R=\omega_{1}\cdot D+\omega_{2}\cdot\left(1-T\right)$$

where the weight coefficients satisfy $\omega_{1}+\omega_{2}=1$, $0<\omega_{1}<1$ and $0<\omega_{2}<1$; if the obtained fusion degree $R$ is lower than the fusion degree threshold, an optimization instruction is sent to the outside.
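A sketch of the fusion-degree check of claim 6 under the assumed functional form above; the weights and threshold are illustrative.

```python
def fusion_degree(position_accuracy, data_delay, w1=0.6, w2=0.4):
    """Fusion degree R from the normalized position accuracy D and data
    delay T of the fused navigation data (both already mapped to [0, 1])."""
    return w1 * position_accuracy + w2 * (1.0 - data_delay)

R = fusion_degree(position_accuracy=0.9, data_delay=0.3)
if R < 0.7:  # fusion-degree threshold, illustrative
    print("send optimization instruction")
```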
7. The multi-modal navigation system of an educational robot in a complex environment of claim 6, wherein:
After receiving the optimization instruction, a data fusion optimization knowledge graph is constructed with data fusion optimization as the target word; the fusion state of the fused navigation data is identified to obtain the corresponding fusion characteristics; according to the correspondence between data fusion optimization schemes and fusion characteristics, a corresponding multi-source data fusion optimization scheme is given by the data fusion optimization knowledge graph and executed, optimizing the fusion of the current navigation data.
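Claim 7 does not fix a graph representation, so the sketch below stands in a plain mapping for the knowledge graph; every fusion-feature name and optimization scheme in it is an invented example of the feature-to-scheme correspondence.

```python
# Toy stand-in for the data fusion optimization knowledge graph: nodes are
# fusion characteristics and optimization schemes, edges encode their
# correspondence. All entries here are illustrative assumptions.
FUSION_OPTIMIZATION_KG = {
    "timestamp_misalignment": "re-synchronize sensor clocks and re-interpolate samples",
    "extrinsic_drift": "re-estimate the camera-radar extrinsic calibration",
    "outlier_heavy_ranges": "tighten outlier rejection before fusing range data",
}

def optimization_schemes(fusion_features):
    """Return the schemes the graph associates with the recognized features."""
    return [FUSION_OPTIMIZATION_KG[f] for f in fusion_features
            if f in FUSION_OPTIMIZATION_KG]

print(optimization_schemes(["timestamp_misalignment"]))
```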
8. The multi-modal navigation system of an educational robot in a complex environment of claim 1, wherein:
Constructing a navigation map of the environment where the robot is located by utilizing the fused navigation data, and identifying the feasible region and the infeasible region in the moving field of view, wherein the infeasible region comprises obstacles and obstacle regions; marking the identified obstacles and obstacle regions on the navigation map, and updating the navigation map information in real time along the moving direction of the robot.
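A minimal sketch of the map-marking step of claim 8, assuming a grid (occupancy-style) encoding of the navigation map; the cell labels are an illustrative convention.

```python
import numpy as np

FREE, OBSTACLE, OBSTACLE_REGION = 0, 1, 2  # cell labels, illustrative encoding

def build_navigation_map(shape, obstacles, regions):
    """Grid navigation map with the infeasible cells marked; `obstacles`
    and `regions` are lists of (row, col) cells already identified."""
    grid = np.full(shape, FREE, dtype=np.uint8)
    for r, c in obstacles:
        grid[r, c] = OBSTACLE
    for r, c in regions:
        grid[r, c] = OBSTACLE_REGION
    return grid

nav_map = build_navigation_map((5, 5), obstacles=[(1, 2)], regions=[(3, 3), (3, 4)])
# In the claimed system the map would be updated in real time as the robot
# moves; here it is built once for illustration.
```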
9. The multi-modal navigation system of an educational robot in a complex environment of claim 1, wherein:
The current position of the robot is obtained with an adaptive Monte Carlo localization algorithm and marked on the navigation map; moving paths are planned for the robot with a path planning algorithm according to the navigation map, the fused navigation data and the current position of the robot, and each moving path is marked on the navigation map; the path priority $Q$ of each path is calculated, the moving path with the highest path priority $Q$ is selected as the target path and the next one as a backup path, and the robot moves along the target path.
10. The multi-modal navigation system of an educational robot in a complex environment of claim 9, wherein:
The path priority $Q$ is acquired as follows: after linear normalization of the ground flatness $G$ and the path drop $H$, in the following manner:

$$Q=\frac{1}{n}\sum_{i=1}^{n}\left(\varepsilon_{1}\cdot G_{i}+\varepsilon_{2}\cdot\left(1-H_{i}\right)\right)$$

where $n$ is the number of sub-paths, $H_{i}$ is the path drop of the $i$-th sub-path, and $G_{i}$ is the ground flatness of the $i$-th sub-path; the weight coefficients satisfy $\varepsilon_{1}+\varepsilon_{2}=1$, $0<\varepsilon_{1}<1$ and $0<\varepsilon_{2}<1$.
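Combining the reconstructed priority formula of claim 10 with the target/backup selection of claim 9, under the same assumptions; the weights and candidate values are invented for the example.

```python
def path_priority(flatness, drop, e1=0.7, e2=0.3):
    """Priority Q of one candidate path from the normalized ground
    flatness G_i and path drop H_i of its n sub-paths."""
    n = len(flatness)
    return sum(e1 * g + e2 * (1.0 - h) for g, h in zip(flatness, drop)) / n

candidates = {
    "path_A": path_priority(flatness=[0.9, 0.8], drop=[0.1, 0.2]),
    "path_B": path_priority(flatness=[0.7, 0.6], drop=[0.3, 0.4]),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
target_path, backup_path = ranked[0], ranked[1]  # highest Q is the target
print(target_path, backup_path)
```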
CN202410424755.8A 2024-04-10 2024-04-10 Multi-mode navigation system of educational robot in complex environment Active CN118010009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410424755.8A CN118010009B (en) 2024-04-10 2024-04-10 Multi-mode navigation system of educational robot in complex environment


Publications (2)

Publication Number Publication Date
CN118010009A true CN118010009A (en) 2024-05-10
CN118010009B CN118010009B (en) 2024-06-11

Family

ID=90943587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410424755.8A Active CN118010009B (en) 2024-04-10 2024-04-10 Multi-mode navigation system of educational robot in complex environment

Country Status (1)

Country Link
CN (1) CN118010009B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118293927A (en) * 2024-06-06 2024-07-05 青岛理工大学 Visual-voice navigation method and system with enhanced knowledge graph

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383090A (en) * 2008-10-24 2009-03-11 北京航空航天大学 Floating vehicle information processing method under parallel road network structure
US20170314930A1 (en) * 2015-04-06 2017-11-02 Hrl Laboratories, Llc System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
CN112000103A (en) * 2020-08-27 2020-11-27 西安达升科技股份有限公司 AGV robot positioning, mapping and navigation method and system
CN112965081A (en) * 2021-02-05 2021-06-15 浙江大学 Simulated learning social navigation method based on feature map fused with pedestrian information
CN114812581A (en) * 2022-06-23 2022-07-29 中国科学院合肥物质科学研究院 Cross-country environment navigation method based on multi-sensor fusion
CN116263335A (en) * 2023-02-07 2023-06-16 浙江大学 Indoor navigation method based on vision and radar information fusion and reinforcement learning
CN116678394A (en) * 2023-05-10 2023-09-01 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Real-time dynamic intelligent path planning method and system based on multi-sensor information fusion
CN116839570A (en) * 2023-07-13 2023-10-03 安徽农业大学 Crop interline operation navigation method based on sensor fusion target detection


Also Published As

Publication number Publication date
CN118010009B (en) 2024-06-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant