CN109000655B - Bionic indoor positioning and navigation method for robot - Google Patents

Bionic indoor positioning and navigation method for robot

Info

Publication number
CN109000655B
Authority
CN
China
Prior art keywords
information
color
bionic
navigation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810595271.4A
Other languages
Chinese (zh)
Other versions
CN109000655A (en)
Inventor
王连明
李伟
汪云云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Normal University
Original Assignee
Northeast Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Normal University filed Critical Northeast Normal University
Priority to CN201810595271.4A priority Critical patent/CN109000655B/en
Publication of CN109000655A publication Critical patent/CN109000655A/en
Application granted granted Critical
Publication of CN109000655B publication Critical patent/CN109000655B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention provides a bionic indoor positioning and navigation method for a robot. When people identify objects, they rely mainly on the color and shape features of the objects together with their own prior knowledge. Human positioning and navigation chiefly acquire and process information through vision and the brain: a person first recognizes and memorizes the landmark objects in an environment, and then completes navigation using the remembered position of each object. The method provided by the invention does not depend on accurate measurement or complex mathematical calculation; it fully imitates human positioning and navigation behavior and provides a new line of thought for research on indoor robot navigation technology.

Description

Bionic indoor positioning and navigation method for robot
Technical Field
The invention relates to a bionic indoor positioning and navigation method for a robot, and belongs to the technical field of autonomous positioning and navigation of indoor robots.
Background
Positioning and navigation technology is an essential prerequisite for an indoor mobile robot to complete its work autonomously, and many experts and scholars have studied it in depth; differences in positioning and navigation capability directly affect the user experience of a service robot. At present, indoor mobile robots mainly complete positioning and navigation by accurately calculating mileage with inertial sensors, Wi-Fi, laser radar or image sensors.
Bionics is the discipline that takes biology as its model in a particular field, using biological structures and functional principles to develop machines or new technologies. It is an interdisciplinary subject integrating life science, physics, information science, brain and cognitive science, engineering, mechanics, modeling art, systems science and others. Its present research scope includes mechanical bionics, molecular bionics, energy bionics, and information and control bionics. Biological functions still surpass any artificially manufactured machine, and the aim of bionics is to realize and effectively apply these functions in engineering. The structures and functions of biological information reception (sensory function), information transmission (neural function) and automatic control systems, for example, have greatly inspired the field of mechanical design. Bionic research methods are now widely applied in electronics, computing, control, machinery and other fields, and have contributed greatly to research in those fields.
Human beings, the most intelligent creatures, have great advantages over robots when faced with positioning and navigation problems. Besides the powerful processing capability of the human brain, the human positioning and navigation mechanism is an excellent result of long natural selection, and is of great reference value for robot positioning and navigation technology. Humans acquire environmental information through vision, hearing, touch and smell, with vision accounting for 80 to 90 percent of the information acquired, and most information related to human positioning and navigation is derived from vision. Therefore, realizing positioning and navigation by imitating human vision is the research approach closest to human behavior.
In the 1960s, research on mobile robots became a hot topic as countries around the world began lunar exploration programs. At that time mobile robots had no capability of autonomous movement, and their motion had to be controlled remotely. It was not until 1972 that the Stanford Research Institute in the United States developed Shakey, the first autonomous robot with intelligence. Shakey obtained its own motion information and environmental information through sensors such as a range finder, a vision camera, and motor encoders. The goal was autonomous recognition, inference, planning and control decision-making in a complex environment; however, because the computer was huge and slow, Shakey usually needed several hours to complete environment perception and behavior planning, so the system had no real-time working capability. Since then, positioning and navigation has never ceased to be researched as a key technology of mobile robots.
At the end of the 1970s, with the development of computer applications and sensing technology, mobile robot research reached a new climax. In the mid-1980s in particular, a worldwide wave of robot design and manufacture arose. Many world-famous companies began to develop mobile robot platforms, mainly used as experimental platforms in university laboratories and research institutions, which promoted the emergence and development of mobile robot positioning and navigation technology. Since the 1990s, higher-level research on mobile robots has been marked by the development of advanced environmental information sensors and information processing technology, and of highly adaptive mobile robot control and planning technology in real environments. According to the working environment, mobile robots can be divided into indoor mobile robots and outdoor mobile robots. The positioning and navigation methods for different working environments also differ, and indoor robot navigation methods can generally be divided into electromagnetic navigation, track navigation, sensor navigation, visual navigation and the like. Specifically:
(1) Electromagnetic navigation requires a pre-buried wire, and the robot uses the electromagnetic field of the wire as the navigation path. This method has high positioning precision and is insensitive to environmental change. However, laying cost is high and flexibility is poor; once laying is finished, the navigation route is not easy to change. Electromagnetic navigation was researched early abroad and the technology is mature.
(2) Track navigation is a positioning and navigation method based on calculating the motion track of the robot. At present the most common approach is to estimate the travelled distance and the pose of the robot with encoders and inertial sensors, which is also the most basic navigation method. The method is simple to implement and fast, but it is not suitable for long navigation runs: as the movement time increases, the calculation error of the track accumulates continuously, and navigation is likely to fail once the accumulated error becomes too large. Therefore, reducing the mechanical error and the algorithm error of the track is the key task in researching this method.
(3) Sensor navigation is currently one of the navigation methods with the widest application range and the best effect. Devices mounted on the mobile robot, such as ultrasonic, infrared, Wi-Fi or laser radar sensors, measure distances in the environment and calculate mileage to realize navigation. During navigation, the sensors obtain information about environmental targets such as obstacles and walls, position the robot and construct an environment map in real time, and then plan a path. Existing indoor floor-sweeping robots mainly complete navigation with laser radar. The measurement precision of environmental distance information judged by infrared, laser, Wi-Fi, ultrasonic and other signals is easily influenced by the environment, but thanks to the advantages of non-contact sensing and fast response, the method enjoys a wide application market. There is also much research on navigation methods fusing multiple sensors.
(4) Visual navigation has developed along with machine vision theory. It offers advantages in positioning and navigation that traditional sensors can hardly match, and its rich environmental information, intuitiveness and high precision have made it the most popular direction in navigation research. Visual positioning involves not only the robot's vision but also camera calibration, stereo matching, map construction, behavior planning and other technologies, making it a comprehensive research direction. Today many research institutes, universities and companies devote great research effort to visual navigation.
At present, a topic that attracts researchers at home and abroad is the problem of Simultaneous Localization and Mapping (SLAM), which is generally addressed with sensor navigation and visual navigation methods. Laser radar SLAM (Lidar-SLAM) adopts a non-visual navigation method; it can accurately measure distance and azimuth and is little influenced by the environment, but it has the disadvantages of high cost and a single kind of information. Compared with non-visual methods, vision-based positioning and navigation has developed rapidly thanks to its convenient implementation, high reliability and large amount of information. Visual information can improve the flexibility of the system and reduce application cost, but vision-based map construction relies on complex image processing, must solve a map optimization problem to obtain a globally optimal estimate, and has shortcomings in the continuity of feature tracking and positioning. Many scholars therefore study the Visual Simultaneous Localization and Mapping (VSLAM) problem, and methods that simulate lidar SLAM with depth data have been proposed. Such methods mainly estimate the robot pose from feature points in image frames and then use the obtained poses to build the map incrementally until composition of the environment is detected to be finished. Although visual information describes the environment more richly and comprehensively, it brings larger data volumes and computational loads, and current applications face many problems not considered in theory. The biggest problem in visual navigation today is the effect of "noise". The method of continuously estimating the robot pose from image frames is called an asymptotic matching mode.
This matching method has the same defect as the track navigation method above: as the robot's motion time increases, the influence of noise grows and the resulting errors accumulate until the positioning error becomes intolerable. Optimization for this problem is addressed by the optimization theory of VSLAM and can be realized with algorithms such as ICP, PnP or g2o; however, these algorithms bring a large amount of calculation and cannot meet real-time requirements, which is the biggest problem restricting the practical application of this navigation method. Aiming at problems such as slow VSLAM feature extraction and poor robustness in extreme scenes, a GPU-accelerated SURF feature extraction method has been proposed, but it increases the cost of the robot and raises the requirements on the hardware environment. DVO SLAM (Dense Visual Odometry SLAM) proposes selecting key frames based on a strategy of effective pixel points; although effect and efficiency are much improved compared with other methods based on sparse visual features, the dense algorithm also introduces a larger amount of calculation. At present SLAM methods remain limited: only the map construction part is completed, and the constructed map alone cannot realize robot navigation.
In summary, current indoor robot positioning and navigation methods all obtain an environment map by accurate measurement, which entails a large amount of calculation, whereas humans are able to memorize the environment and complete navigation tasks without relying on complicated mathematical calculations.
Disclosure of Invention
Aiming at the above technical problems, the invention provides a bionic indoor positioning and navigation method for a robot, which reduces the complexity of the navigation algorithm, facilitates human-machine interaction, and enables the robot to better understand the human way of life and thus provide better service.
The technical solution adopted by the invention is as follows:
a bionic indoor positioning and navigation method for a robot comprises the following steps:
(1) modeling a human positioning and navigation method based on a human visual system and positioning and navigation cells in human brain to obtain a positioning and navigation model;
firstly, the horizontal cells of the visual system are simulated to remove the influence of illumination from the acquired image; the photoreceptor cells then obtain color information and edge information of the environment; the retina transmits the acquired information to the brain through neurons for processing; the brain further processes the information to obtain the color, shape and symmetry features of objects, thereby identifying the markers in the environment; after a marker is identified, it is compared with the anchor points memorized by the position cells (place cells) to obtain the current position; the grid cells memorize the paths between all position points, like a map; the road distance between two positions is judged from the depth image; and the head-direction cells analyze the orientation of the head, providing direction information for the map;
(2) performing bionic algorithm design for identifying indoor articles;
extracting the color, shape and symmetry features of the object from color and edge information, and then storing these features in a database that imitates the memory function of the human brain; during identification, the object to be detected is recognized by querying the stored features and performing template matching;
(3) performing bionic positioning navigation algorithm design;
simulating the behaviors of the position cells, head-direction cells and grid cells: on the basis of imitating how the human brain processes visual information and identifies objects, positioning is performed with the indoor object database; meanwhile, the driving path is retrieved from a map database according to the obtained anchor point, thereby realizing navigation.
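The path retrieval in step (3) can be sketched as a search over a topological map whose nodes are the memorized anchor-point landmarks (imitating position cells) and whose edges are remembered paths between them (imitating grid cells). The landmark names and room layout below are illustrative assumptions, not data from the patent:

```python
from collections import deque

def find_route(topo_map, start, goal):
    """Breadth-first search over a landmark topological map.

    topo_map maps each landmark to the landmarks directly reachable
    from it; the returned list is the landmark sequence to drive through.
    """
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo_map.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal landmark was never memorized / is unreachable

# Hypothetical room: door <-> sofa, sofa <-> tv, sofa <-> table.
rooms = {"door": ["sofa"], "sofa": ["door", "tv", "table"],
         "tv": ["sofa"], "table": ["sofa"]}
route = find_route(rooms, "door", "tv")  # ["door", "sofa", "tv"]
```

Because the map stores only landmark adjacency rather than metric coordinates, this search needs none of the accurate measurement the background section criticizes.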
Preferably, in step (1): a bionic method of analogy and simulation is adopted; modeling proceeds from an in-depth study of the behavioral mechanisms by which the human eye acquires information and the brain processes it, and the following physiological behaviors are imitated:
(1) the behavior whereby, after acquiring an environment image, the human eye first performs brightness balancing and edge-contrast enhancement and then obtains the color information and edge information in the image;
(2) the behavior whereby the human eye perceives the distance information of the environment but represents it only fuzzily;
(3) the behavior whereby the human brain identifies markers through color features and edge features;
(4) the behavior whereby the human brain memorizes anchor points through the position cells and realizes positioning through the memorized marker objects;
(5) the behavior whereby the human brain realizes navigation by memorizing paths through the joint action of the position cells and the grid cells;
(6) the behavior whereby the human brain uses head-direction cell information to obtain orientation that aids navigation.
Preferably, step (2) comprises:
image enhancement; color information acquisition; color feature extraction; edge information acquisition; edge vectorization and shape feature extraction; symmetry feature detection; and identification based on a prior item database.
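The "edge vectorization and shape feature extraction" step corresponds to the Freeman chain code of FIG. 3. A minimal sketch of chain-coding an ordered boundary follows; the particular direction table and axis orientation (y growing downward, as in image coordinates) are one common convention and an assumption here:

```python
# The eight chain-code directions as (dx, dy) steps between successive
# boundary pixels (one common Freeman convention).
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def freeman_chain(boundary):
    """Encode an ordered closed boundary (list of (x, y) pixels, last == first)
    as a Freeman chain code: one direction symbol per boundary step."""
    return [DIRS.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

# A 1x1 pixel square traced in (x, y) coordinates:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
code = freeman_chain(square)  # [0, 6, 4, 2]
```

The resulting symbol sequence is the "vectorized" edge: a compact shape feature that can be compared between a detected object and a database template.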
Preferably, the image enhancement specific algorithm comprises the following steps:
(1) performing HSV space conversion on an original image to respectively obtain H, S, V channel data;
(2) carrying out histogram equalization processing on the V channel data;
(3) re-fusing the processed V with H, S;
(4) converting the HSV image obtained in step (3) back to RGB.
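Steps (1)-(4) can be sketched in pure Python, representing the image as a flat list of 8-bit (R, G, B) tuples; this representation is an assumption for illustration, and a real system would use an image-processing library:

```python
import colorsys

def equalize_v_channel(pixels):
    """Steps (1)-(4): RGB -> HSV, histogram-equalize the V channel,
    re-fuse it with H and S, and convert back to RGB."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    # step (2): histogram of the V channel, quantized to 256 bins
    hist = [0] * 256
    for _h, _s, v in hsv:
        hist[round(v * 255)] += 1
    # cumulative distribution normalized to [0, 1]: the equalization map
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total / len(pixels))
    # steps (3)+(4): replace V by its equalized value, convert back to RGB
    out = []
    for h, s, v in hsv:
        r, g, b = colorsys.hsv_to_rgb(h, s, cdf[round(v * 255)])
        out.append((round(r * 255), round(g * 255), round(b * 255)))
    return out
```

Equalizing only V stretches brightness while leaving hue and saturation untouched, which is why the method converts to HSV first instead of equalizing the RGB channels independently.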
Preferably, the algorithm for identifying the item using the database comprises the following steps:
(1) inputting an original image and computing its color histogram;
(2) according to the color distribution in the histogram, starting from the most widely distributed color, finding that color's connected domain in the original image and cropping the corresponding sub-image;
(3) computing the color distribution histogram feature vector of the cropped image to obtain the feature vector of the color to be detected;
(4) searching the database with the most widely distributed color obtained in step (2) as the main color feature, to obtain the item records matching that main color;
(5) matching the color feature vectors of the items obtained in step (4) against the vector to be detected obtained in step (3) by the Hausdorff distance, and selecting the items whose distance is below the threshold 0.9 as having passed the color screening;
(6) computing the shape features of the image cropped in step (2), comparing them by Hausdorff distance with the shape features (stored in the database) of the items screened in step (5), and selecting the items whose distance is below the threshold 1.7 as having passed the screening;
(7) matching the symmetry features of the image cropped in step (2) with the symmetry features of the items screened in step (6); if they cannot be distinguished, selecting as the result the item that satisfies the threshold condition of step (6) with the smallest Hausdorff distance; otherwise, returning to step (2), taking the next most widely distributed color, and repeating steps (2) to (7), cycling in this way.
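A sketch of the matching pipeline of steps (4)-(7). The record layout, the 1-D feature "point sets", and the use of the smallest color distance as the final tie-break (standing in for the symmetry comparison of step (7)) are illustrative assumptions; the patent does not fix the data format:

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 1-D feature point sets."""
    def directed(xs, ys):
        return max(min(abs(x - y) for y in ys) for x in xs)
    return max(directed(a, b), directed(b, a))

def identify(query, database, color_thr=0.9, shape_thr=1.7):
    """Filter by dominant color (step 4), then color distance < 0.9 (step 5),
    then shape distance < 1.7 (step 6); the survivor with the smallest
    color distance is returned (assumed stand-in for the step-7 tie-break)."""
    survivors = []
    for rec in database:
        if rec["dominant"] != query["dominant"]:
            continue                      # step (4): main-color lookup
        dc = hausdorff(query["color"], rec["color"])
        if dc >= color_thr:
            continue                      # step (5): color screening
        if hausdorff(query["shape"], rec["shape"]) >= shape_thr:
            continue                      # step (6): shape screening
        survivors.append((dc, rec["name"]))
    return min(survivors)[1] if survivors else None

# Hypothetical database records and query features:
db = [{"name": "cup",  "dominant": "red", "color": [0.1, 0.2], "shape": [1.0, 2.0]},
      {"name": "book", "dominant": "red", "color": [0.9, 1.5], "shape": [5.0, 6.0]}]
query = {"dominant": "red", "color": [0.15, 0.25], "shape": [1.1, 2.1]}
match = identify(query, db)  # "cup"
```

The coarse-to-fine ordering matters: the cheap dominant-color lookup prunes most records before the more expensive Hausdorff comparisons run.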
Preferably, step (3) comprises:
path identification based on depth information; performing bionic distance representation; and constructing a robot map.
Preferably, the path identification based on the depth information includes a bionic path acquisition method and a bionic distance representation method; the bionic path acquisition method comprises the following specific steps:
(1) calibrating the depth image and the RGB image by using a chessboard calibration method;
(2) transforming the image obtained by the Kinect into an HSV space to obtain color information of a scene in front of the robot;
(3) keeping unchanged the pixels whose color matches the ground color, and setting all other pixels to black, obtaining an area containing only black and the road color;
(4) obtaining the distance of the travelable road space in front of the robot from the depth information: if the depth is less than 40 cm, the area ahead is considered an obstacle and the robot cannot move forward; the depth information is then used to judge whether roads and travelable space exist to the left and right of the obstacle in the image.
The beneficial technical effects of the invention are as follows:
the invention starts from human self in turning angle, adopts a bionic research method to provide a new positioning navigation method, does not depend on accurate measurement and complex mathematical calculation, completely simulates human behavior positioning navigation, and provides a new thought for the research of the indoor robot navigation technology.
Drawings
The invention will be further described with reference to the following detailed description and drawings:
FIG. 1 is a positioning navigation behavior biomimetic model;
FIG. 2 is a shape feature extraction algorithm;
FIG. 3 is a Freeman chain code graph;
FIG. 4 is a block diagram of a bionic positioning navigation method;
Detailed Description
As robots enter the home, the need for autonomous indoor positioning and navigation technology keeps growing. At present, research on indoor robot positioning and navigation at home and abroad still constructs maps through complex mathematical operation and accurate measurement, which causes problems such as complex algorithms, poor real-time performance and inconvenient human-machine interaction, contrary to how humans position and navigate. Positioning and navigation therefore remains a hot problem in indoor robot research, being the core technology by which an indoor robot completes service work.
On the basis of existing visual positioning methods, human visual navigation behavior is first analyzed at the physiological level. When people identify objects, they rely mainly on the color and shape features of the objects together with their own prior knowledge. Human positioning and navigation chiefly acquire and process information through vision and the brain: a person first recognizes and memorizes the landmark objects in an environment, and then completes navigation using the remembered position of each object. The invention therefore provides a positioning and navigation method that fully imitates human behavior without depending on accurate measurement and complex mathematical calculation, providing a new line of thought for research on indoor robot navigation technology.
The positioning navigation method provided by the invention mainly comprises two parts, namely bionic article identification and bionic positioning navigation. The bionic article identification part is realized by an indoor article database based on prior. Firstly, simulating the behavior of human vision to acquire information, and storing the characteristics of human identification articles such as color, shape, symmetry, color texture and the like in a database; then, the article information acquired in real time is searched in a database, and whether a landmark article for positioning exists is judged. The bionic positioning navigation part utilizes a database to form an indoor map to perform navigation tasks. Firstly, a bionic method is adopted to pre-store positioning points and driving paths to form a map; then, the presence or absence of the located landmark article is continuously detected from the visual information. When the visual information acquires the stored positioning points, searching a robot driving path in the database by using the positioning points; and finally, moving according to the retrieved path to finish the positioning and navigation work.
The main work done by the present invention includes the following aspects:
(1) Analyzing the principles, advantages and defects of current visual positioning and navigation methods with reference to the relevant literature, and proposing a bionic positioning and navigation behavior model by analyzing the physiological characteristics of human vision and human positioning and navigation behavior.
(2) From the physiological characteristics of human vision, it is found that people mainly attend to object features through color information and edge information. Four kinds of features are abstracted from these two types of information: color, shape, symmetry and color texture. Identification is completed by matching the features extracted from an object against the prior knowledge in the database.
(3) Experience shows that humans navigate an environment by forming a topological map of markers and the positions between them. Therefore, marker identification is completed by imitating how the human eye acquires and processes color information and depth information, and positioning and path ranging are carried out; distance is judged and represented with a fuzzy-set method, imitating the fuzzy way humans treat distance; finally, the identified markers are used to retrieve a path from the map database, realizing navigation.
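The fuzzy-set distance representation mentioned in (3) can be sketched with triangular membership functions; the three linguistic labels and their breakpoints below are illustrative assumptions, not values from the patent:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_distance(cm):
    """Map a measured distance (cm) to a vague, human-style label."""
    grades = {
        "near":   tri(cm, -1, 0, 100),
        "medium": tri(cm, 50, 150, 250),
        "far":    tri(cm, 200, 400, 10**9),
    }
    return max(grades, key=grades.get)  # label with the highest membership
```

The overlapping memberships are the point of the representation: instead of a precise 137 cm the robot stores "medium", much as a person would report "a few steps away".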
The present invention will be described in detail with reference to the accompanying drawings.
Positioning and navigation method for human
The main information source for human positioning and navigation is visual information. According to the relevant literature, humans recognize the markers in an environment through vision, and the brain memorizes the recognized positions, producing specific neural excitation. As the number of recognized markers increases, the brain's memory of the marker objects grows, and the map of the whole environment constructed in the mind becomes more complete. Humans become familiar with an environment by remembering its marker objects and forming a mental map. On re-entering the environment, a person can localize via the previously memorized markers and then plan a path to realize navigation. The human positioning and navigation mechanism is studied in detail below, and human positioning and navigation behavior is modeled by simulation.
1.1 human visual System
Humans sense the external environment through visual, auditory, olfactory and tactile information, of which approximately 70 to 80 percent is obtained visually. Although some robots are positioned through environmental sound information, visual information better matches the application requirements of indoor robots, so the positioning method adopted here is visual positioning.
As can be seen from a cross-section of the human eyeball, light passes through the pupil, the crystalline lens and the vitreous body and finally strikes the retina. The retina is composed of several layers of vision-related nerve cells; the most important photoreceptors are the cone cells and rod cells, together numbering on the order of 130 million. Cones are densely distributed in the fovea and relatively sparse in the peripheral retina. The foveal cones have one-to-one connections with bipolar cells and ganglion cells, giving the fovea high visual resolution. The cones thus serve photopic (bright-light) vision and can obtain color information and edge information.
Rod cells are absent from the fovea and are mainly distributed in the peripheral retina; their connections to bipolar cells and ganglion cells are convergent, with many rods feeding one pathway. Rods therefore serve scotopic (dim-light) vision: their light sensitivity is high but their resolving power is poor, so in weak light only a rough outline of an object can be seen and color is not perceived. Cones have high spatial resolution, while rods are more sensitive to weak light. Under direct viewing the center of the visual field falls on the fovea, which favors bright conditions and is unfavorable in dim light. The names of the two cell types derive from their morphology; their structures are otherwise similar, and their different functions arise only from the different photopigments they contain. Rhodopsin, the photopigment of rod cells, is formed by the combination of opsin and retinal; it decomposes in bright light and can be synthesized again in the dark. The photopigment of cone cells, iodopsin, instead functions in bright light.
Rod cells play a major role in scotopic vision and can acquire the edge information of objects. Under photopic vision, cone cells acquire color information in addition to contour information. Cones are divided into three types according to the wavelengths they respond to, owing to slightly different photopigment molecular structures: L, M and S cones, referring to long-, medium- and short-wavelength cones, with peak sensitivities at about 560 nm, 530 nm and 420 nm respectively. The human eye's acquisition of environmental information therefore consists mainly of the color information and edge information of the image, and the environment is distinguished through these two kinds of information.
In the retina, the horizontal cell soma lies in the outer part of the inner nuclear layer and gives off many horizontally oriented branches that extend into the outer plexiform layer and form synapses with photoreceptor, bipolar and interplexiform cells. Adjacent horizontal cells are connected by gap junctions. Horizontal cells can be divided into rod-type cells, which contact only rods, and cone-type cells, which contact only cones; functionally they can be classified as luminance (LHC) and chromaticity (CHC) types. CHCs show two common response patterns: biphasic responses, in which the R/G type depolarizes to red light and hyperpolarizes to green light while the G/B type depolarizes to green light and hyperpolarizes to blue light; and triphasic responses, which hyperpolarize to red and blue light and depolarize to green light. LHCs receive excitatory and feedback-inhibitory signals from the red and green cones. In the absence of light, the photoreceptor cells release transmitter and the horizontal cells depolarize; the depolarized horizontal cells then hyperpolarize neighboring photoreceptor cells. Conversely, in the presence of light, the photoreceptor cells reduce transmitter release, the horizontal cells hyperpolarize, and adjacent photoreceptor cells depolarize. The horizontal cells thus provide negative feedback to the photoreceptor cells. By pooling the signal strengths of many photoreceptor cells, the horizontal cells measure the average brightness of the illumination on the retina, and the feedback inhibition adjusts the photoreceptor output to an appropriate level, so that the signal received by the bipolar cells is neither so small that it is submerged in the noise of the neural pathway nor so large that the pathway saturates.
Biologically, the exact mechanism by which depolarized horizontal cells hyperpolarize photoreceptor cells is not yet fully understood, but current research has established that horizontal cells play two roles: first, they adjust the brightness of the visual signal output by the photoreceptor cells, realizing visual brightness adaptation; second, they enhance the contrast of edges in the visual image, highlighting the outline of the scene.
Binocular vision[34] refers to the vision produced when the visual fields of the two eyes overlap. Because the two eyes are separated by the interpupillary distance, slightly different but largely similar images are formed on the two retinas; after the visual signals are transmitted to the brain, the brain integrates the difference between the two images and can thereby judge the precise distance between the eyes and an object. Compared with monocular vision, binocular vision therefore has four distinct advantages:
(1) one eye serves as a backup for the other, reducing the impact of injury on the organism's survival;
(2) the organism's field of view is enlarged;
(3) the superposition of the two eyes' views provides visual compensation;
(4) binocular parallax assists in producing accurate depth vision.
Among them, the last point plays an important role in human navigational behavior.
In summary, after receiving image information, the human visual system uses horizontal cells to adjust brightness and enhance image edges, preventing illumination from corrupting the image information. After this horizontal-cell processing, the photoreceptor cells extract the image information: in bright scenes the cone cells obtain color information and edge information, while in dark environments the color information disappears and the rod cells obtain edge information under scotopic vision. It follows that the main function of the human visual system is to obtain color information and edge information from the environment; human recognition of everyday scenes is accomplished by further processing of this information in the brain. A biomimetic visual system therefore needs to mimic both the acquisition of color and edge information and the way the human brain processes that visual information, further abstracting it into object features. Once the brain associates position information with objects and memorizes them, a visual positioning method that uses marker objects as positioning points is obtained. Human vision also has the ability to acquire distance information, which is convenient for map construction.
1.2 localization of navigational cells in the human brain
The human brain has dedicated cells for its navigational tasks. Place cells were discovered as early as 1971 by John O'Keefe. In 1980, researchers at New York University discovered another navigation cell, the head direction cell, which discerns the direction the head is facing. On October 6, 2014, John O'Keefe of the United States and the Norwegian scientists May-Britt Moser and Edvard Moser were awarded the Nobel Prize in Physiology or Medicine in recognition of their research on grid cells in the brain.
Current biological research indicates that humans owe their excellent positioning and navigation abilities to these three types of navigation cells. Place cells, located in the hippocampus, memorize position information. In 1970, researchers at University College London implanted an electrode recorder in the hippocampal region of a rat's brain and then let the rat move freely around an unfamiliar room. The place cells in the rat's brain fired selectively according to where the rat was: only when the rat moved to a specific location in the room would the cells corresponding to that location become excited, as if each coordinate were given a memory so that the brain remembers where it has been. The human hippocampus works in the same way as the rat's. Head direction cells are sensitive to orientation: for example, one group of cells fires when the head faces north, while another group fires when the head faces south. When an animal explores a space, the grid cells in the entorhinal cortex of its brain exhibit strong spatial firing characteristics: whenever the animal reaches any node of a grid, the corresponding grid cell produces a strong discharge. The receptive field of the grid cells forms a striking hexagonal pattern, similar to the snowflake crystals and honeycombs found in nature, generated entirely by the cerebral cortex.
In summary, an organism's memory of an unfamiliar environment is the result of the combined action of these three cell types in the brain. Human positioning and navigation can therefore be summarized as follows: as an organism moves through an unfamiliar environment, place cells memorize locations of interest, and when the organism passes a memorized location again those place cells become excited; as movement continues, more and more locations are memorized, until the whole environment is no longer unfamiliar. The function of the grid cells is similar to navigating with a map of the environment.
1.3 human positioning navigation method modeling
Human positioning and navigation behavior has the advantage of not depending on precise measurement and mathematical calculation; positioning is carried out through rich visual information. Information and control are important fields of bionics, whose research covers the simulation of sense organs, neurons and neural networks, the information-processing processes in organisms, and the intelligent activities of higher centers. For example, an "autocorrelation velocimeter" modeled on the visual response of the weevil can measure the landing speed of an aircraft, and a device based on the working principle of the lateral-inhibition network in the horseshoe crab's compound-eye retina enhances image contours and improves contrast, aiding the detection of blurred targets. Because the physiological system underlying human positioning and navigation is very complex and cannot be fully simulated at the current level of technology, the invention establishes a positioning and navigation model applicable to an indoor robot by mimicking human positioning and navigation behavior, helping to improve on the precise-measurement and mathematical-calculation methods of current positioning and navigation algorithms.
The invention adopts a bionic method of analogy and simulation, studying and modeling the behavioral mechanisms by which the human eye acquires information and the brain processes it, and mimics the following physiological behaviors:
(1) the behavior of the human eye, which after acquiring an environment image first performs brightness balancing and edge-contrast enhancement, and then extracts the color information and edge information in the image;
(2) the behavior of the human eye in sensing distance information about the environment, but representing it only coarsely;
(3) the behavior of the human brain in identifying markers through color features and edge features;
(4) the behavior of the human brain in memorizing positioning points through place cells and realizing localization through the memorized marker objects;
(5) the behavior of the human brain in realizing navigation by memorizing paths through the combined action of place cells and grid cells;
(6) the behavior of the human brain in using head direction cell information to aid navigation with orientation.
The positioning and navigation model shown in fig. 1 is obtained by analogy with and simulation of human positioning and navigation behavior. First, simulated horizontal cells of the visual system remove illumination effects from the acquired image. The photoreceptor cells then acquire color information and edge information from the environment. The retina passes this information through neurons to the brain for processing. The brain processes the information further to obtain the color, shape and symmetry features of objects, and thereby identifies markers in the environment. After a marker is identified, the current position is obtained by comparing it with the positioning points memorized by the place cells. Grid cells remember the paths between all positioning points, like a map. The road distance between two locations is determined from the depth image. Head direction cells analyze the direction the head is facing and provide orientation information for the map.
The human visual mechanism is studied first; from the literature on the physiological structure of the retina, the human visual system is found to acquire the edge information, color information and distance information of the environment. The human brain memorizes the marker objects identified from this visual information, and grid cells associate with different place cells to search navigation paths. Meanwhile, the distance information a person acquires gives a coarse knowledge of the distances between different positions, and head direction cells play an important role in biological navigation. Based on these human positioning and navigation behaviors, a biomimetic positioning and navigation behavior model is established by simulation.
Two, bionic algorithm design for indoor article identification
A robot system is constructed according to the proposed bionic positioning and navigation behavior model, in which an industrial personal computer runs the corresponding algorithms on the video streams output by the Kinect to identify indoor marker articles and perform positioning. The item identification algorithm is described in detail below.
2.1 image enhancement
At present, methods for handling uneven illumination, both domestic and international, fall mainly into two classes. The first is based on the incident-reflection model of light: an image collected under illumination consists of two parts, the light incident on the observed scene from the source and the light reflected by objects in the scene, where the incident part is called the illumination component and the reflected part the reflection component. This class of method can effectively recover image color by estimating the illumination component and eliminating its influence from the original image, but it has the drawback of heavy computation and is unsuitable for processing robot visual information. The second class processes the image directly, typified by the Histogram Equalization (HE) algorithm and unsharp masking. Histogram equalization directly stretches the gray levels; its operation is relatively simple and its contrast enhancement obvious, but it suffers from over-enhancement and color distortion. Unsharp masking uses a high-pass filter to extract the high-frequency components of an image and adds them back to the original, adjusting the enhancement amplitude through a coefficient to enhance contrast, but it also amplifies noise and introduces artifacts.
Since this step is a preprocessing stage of the robot's visual-information pipeline, from a practical standpoint histogram equalization is used to equalize the under-illuminated parts, which basically meets the requirement of avoiding color distortion. The specific algorithm steps are as follows:
(1) performing HSV space conversion on an original image to respectively obtain H, S, V channel data;
(2) carrying out histogram equalization processing on the V channel data;
(3) re-fusing the processed V with H, S;
(4) convert the HSV image obtained in step (3) back to RGB.
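The four steps above can be sketched as follows. This is a minimal stdlib-only illustration (the patent's implementation uses OpenCV in C++): the V channel is histogram-equalized while H and S are left untouched, then the image is converted back to RGB.

```python
# Equalize the V channel in HSV space, leaving H and S untouched,
# then convert back to RGB. Standard-library sketch using colorsys.
import colorsys

def equalize_v_channel(pixels):
    """pixels: list of (r, g, b) tuples with components in 0..255."""
    # Step 1: RGB -> HSV per pixel
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    # Step 2: histogram-equalize V (quantized to 256 levels)
    levels = [min(255, int(v * 255)) for _, _, v in hsv]
    hist = [0] * 256
    for lv in levels:
        hist[lv] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    n = len(pixels)
    eq = [cdf[lv] / n for lv in levels]          # equalized V in 0..1
    # Steps 3-4: re-fuse with H, S and convert back to RGB
    out = []
    for (h, s, _), v in zip(hsv, eq):
        r, g, b = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(r * 255), round(g * 255), round(b * 255)))
    return out

# A dark image becomes brighter while the hue is preserved:
dark = [(10, 0, 0), (20, 0, 0), (30, 0, 0), (40, 0, 0)]
print(equalize_v_channel(dark))
```

Because only V is modified, the hue of every pixel survives the enhancement, which is exactly why the equalization is done in HSV rather than directly on the RGB channels.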
2.2 color information and color feature vectors
2.2.1 acquisition of color information
The color image acquired by the Kinect is delivered in RGB format by the driver under Windows or Linux. After the influence of illumination on the image is eliminated, distinguishing colors directly from the RGB information is cumbersome, and a more convenient representation is needed. HSV represents the pixels of the RGB color model in a cylindrical coordinate system, which is more intuitive than the Cartesian geometry of RGB. HSV stands for Hue, Saturation, Value (also known as HSB, where B stands for Brightness). Hue (H) is the basic attribute of a color, i.e., the common color name such as red or yellow; saturation (S) is the purity of a color, with higher S indicating a purer color and lower S graying it out, taking values from 0 to 100%; value (V) likewise ranges from 0 to 100%. Converting the image from RGB space to the HSV channels therefore makes colors easier to distinguish. For the conversion from RGB to HSV, let (r, g, b) be the red, green and blue coordinates of a pixel, real numbers between 0 and 1; let max be the maximum of r, g and b, and min the minimum. The (h, s, v) values in HSV space, where h ∈ [0°, 360°) is the hue angle and s, v ∈ [0, 1] are the saturation and value, are then computed from these. After the conversion, color judgment can be performed using the HSV values corresponding to each pixel of the RGB image.
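The conversion described above, with max and min of (r, g, b), can be written out explicitly as the standard RGB-to-HSV formulas:

```python
# RGB -> HSV: (r, g, b) in [0, 1]; returns h in [0, 360) degrees
# and s, v in [0, 1], as described in the text.
def rgb_to_hsv(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    d = mx - mn
    if d == 0:
        h = 0.0                      # achromatic: hue undefined, use 0
    elif mx == r:
        h = (60 * (g - b) / d) % 360
    elif mx == g:
        h = 60 * (b - r) / d + 120
    else:                            # mx == b
        h = 60 * (r - g) / d + 240
    s = 0.0 if mx == 0 else d / mx   # saturation
    return h, s, mx                  # v is simply max(r, g, b)

print(rgb_to_hsv(1.0, 0.0, 0.0))    # pure red
```

In practice this per-pixel computation is what cv::cvtColor performs (up to OpenCV's own scaling of the H, S, V ranges).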
According to the literature, the human eye distinguishes at most 10 color categories: red, orange, yellow, green, blue-green, blue, purple, white, black and gray. The invention obtained the color intervals corresponding to these 10 colors through extensive experiments. First white, black and gray are separated: the black interval is V ≤ 25; the white interval is V ≥ 60 && S ≤ 15; the gray interval is 25 ≤ V ≤ 60 && S ≤ 40. When S and V fall outside these intervals, colors are subdivided using the value of the H channel.
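A sketch of this 10-way classifier follows. The black / white / gray thresholds are the ones given in the text (with S and V on a 0-100 scale); the hue boundaries for the seven chromatic colors are not specified in the patent, so the ranges below are illustrative assumptions only.

```python
# 10-color classifier: black/white/gray from the S,V intervals in the
# text, then subdivision by hue. HUE_BINS values are assumed, not from
# the patent.
HUE_BINS = [          # (upper bound in degrees, name) - assumed values
    (20, "red"), (45, "orange"), (70, "yellow"), (150, "green"),
    (200, "blue-green"), (260, "blue"), (330, "purple"), (360, "red"),
]

def classify_color(h, s, v):
    """h in [0, 360); s, v in [0, 100]."""
    if v <= 25:
        return "black"
    if v >= 60 and s <= 15:
        return "white"
    if 25 <= v <= 60 and s <= 40:
        return "gray"
    for upper, name in HUE_BINS:     # otherwise subdivide by hue
        if h < upper:
            return name
    return "red"

print(classify_color(0, 80, 80))
```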
The color-division algorithm in HSV space can be implemented in C++ under Linux and built with CMake. OpenCV resources shorten the development cycle: the RGB-to-HSV conversion can be realized with the cv::cvtColor(srcImg, hsvImg, CV_BGR2HSV) function provided by OpenCV, where srcImg is the original image and hsvImg the converted matrix whose data lie in HSV space, so that colors can be distinguished according to the color division above.
2.2.2 color feature histogram
Human photoreceptor cells have three different types of receptors for color information, so color information is clearly important for recognizing objects. The algorithm for color acquisition has been described above; this section deals with the expression and acquisition of the color features of an article.
The Color Histogram (CH) is a color-feature representation widely used in image processing; it is simple to compute, effective, and reflects the different colors contained in an article and their distribution. As early as 1991, Swain and Ballard proposed using the color histogram as a representation of the color characteristics of an image. The color histogram is insensitive to geometric transformations such as rotation, translation and small-amplitude scaling, and the degree of image blur has little influence on it. These properties make it suitable for retrieving globally similar image colors, i.e., for classifying by comparing the color-histogram statistics of two images. However, because the histogram represents only the quantitative characteristics of the colors in the image, reflecting their statistical distribution and the basic hue without the spatial location of the pixels, different images may have the same color statistics. Therefore, the invention divides an image into a number of small regions, obtains the color distribution of each region, and concatenates them to obtain a global color histogram. The more regions the image is divided into, the stronger its discriminative power.
Color information is obtained using the HSV-space characteristics, yielding the 10 color categories to which the human eye is sensitive. By counting these 10 kinds of color information, a corresponding color feature vector is obtained for the image of each article, and the color with the largest area is also conveniently found as another feature. The invention uses the color-feature histogram data as the color feature vector for article identification.
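The regional color-feature histogram described above can be sketched as follows. The image is given here already reduced to per-pixel color labels 0-9 (one per perceivable color); it is split into small regions, a 10-bin histogram is computed per region, and the histograms are concatenated into one feature vector. The 2x2 grid of regions is an illustrative choice.

```python
# Concatenated, per-region-normalized 10-bin color histogram.
def color_feature_vector(labels, rows=2, cols=2):
    """labels: 2D list of color codes in 0..9."""
    h, w = len(labels), len(labels[0])
    feat = []
    for br in range(rows):
        for bc in range(cols):
            hist = [0] * 10
            count = 0
            for y in range(br * h // rows, (br + 1) * h // rows):
                for x in range(bc * w // cols, (bc + 1) * w // cols):
                    hist[labels[y][x]] += 1
                    count += 1
            feat.extend(c / count for c in hist)
    return feat

img = [[0, 0, 3, 3],
       [0, 0, 3, 3],
       [7, 7, 9, 9],
       [7, 7, 9, 9]]
vec = color_feature_vector(img)
print(len(vec))   # 4 regions x 10 bins
```

Two images with the same overall color totals but different spatial layouts now produce different vectors, which is precisely the weakness of the plain global histogram that the regional division addresses.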
2.3 edge information and shape feature vectors
2.3.1 acquisition of edge information
Besides acquiring color, the edge information must also be acquired. At present, edge extraction is mostly performed on grayscale images, and commonly used edge-detection operators include the Roberts, Sobel, Prewitt, LoG and Canny operators.
In view of its processing effect, the Canny operator is adopted to extract image edges. In the concrete implementation, the Canny(image, edges, 100, 250) function provided by open-source OpenCV is used directly; it is fast and effective. The first parameter is the input image, which must be single-channel; the second is the output image, in which edge pixels are white and all others black; the third and fourth parameters are thresholds, the smaller of which controls edge linking and the larger the initial detection of strong edges: if the gradient at a pixel exceeds the upper threshold it is accepted as an edge pixel, and if it is below the lower threshold it is discarded. If the gradient lies between the two, the pixel is kept only when it connects to a pixel above the upper threshold; otherwise it is deleted.
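The double-threshold hysteresis rule just described (the final stage of the Canny operator) can be sketched as follows: pixels above the high threshold are strong edges, and pixels between the thresholds are kept only if they connect (8-neighborhood) to a strong edge. This is a minimal illustration, not OpenCV's optimized implementation.

```python
# Hysteresis thresholding on a precomputed gradient-magnitude map.
def hysteresis(grad, low, high):
    """grad: 2D list of gradient magnitudes; returns 2D 0/255 edge map."""
    h, w = len(grad), len(grad[0])
    out = [[0] * w for _ in range(h)]
    stack = [(y, x) for y in range(h) for x in range(w) if grad[y][x] >= high]
    for y, x in stack:
        out[y][x] = 255                      # strong edges seed the search
    while stack:
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and out[ny][nx] == 0
                        and grad[ny][nx] >= low):
                    out[ny][nx] = 255        # weak edge linked to a strong one
                    stack.append((ny, nx))
    return out

grad = [[300,  10,   0],
        [120, 110,   0],
        [  0,   0, 120]]
print(hysteresis(grad, 100, 250))
```

With the document's thresholds (100, 250), the weak pixels in the example survive only because a chain of 8-neighbors connects them to the strong pixel of magnitude 300; an isolated weak pixel is dropped.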
2.3.2 edge information vectorization and shape features
After the illumination influence is eliminated, the features are extracted with the algorithm shown in fig. 2.
After the image is acquired, it is converted to grayscale as required by the Canny operator. Graying can be achieved with cv::cvtColor(srcImg, greyImg, CV_BGR2GRAY), where srcImg is the input image and greyImg the output grayscale image. After the edge information of the article is obtained from greyImg with the Canny operator, the edge can be represented in the form of a Freeman chain code. Freeman coding selects any pixel as a reference point; its eight neighbors occupy eight different positions and are assigned the direction values 0-7, the chain-code directions. A line in the image can then be represented by a string of code values, the Freeman chain code of the line's edge.
If the image resolution is high, the Freeman chain code also grows considerably, adding significant complexity to the subsequent feature-recognition algorithm. The invention therefore represents the edge information by vectorization: the Freeman chain code still supplies the direction information, and for each direction a length value is accumulated with the pixel as the unit length. After the length values in all directions are obtained, they are normalized, eliminating the influence of scaling while reducing computational complexity. The vectorized chain code is taken as the shape feature vector of the object.
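The vectorization step can be sketched as follows: a Freeman chain code (directions 0-7) is reduced to an 8-element vector of per-direction lengths, then normalized so the vector sums to 1, removing the effect of scale. Diagonal steps are counted with unit length here for simplicity; weighting them by sqrt(2) would be an easy refinement.

```python
# Freeman chain code -> normalized 8-direction length vector.
def shape_feature(chain):
    lengths = [0.0] * 8
    for d in chain:          # accumulate length per chain-code direction
        lengths[d] += 1.0
    total = sum(lengths)
    return [v / total for v in lengths] if total else lengths

# Chain code of a small rectangle traced counter-clockwise:
# 2 steps right (0), 1 up (2), 2 left (4), 1 down (6).
rect = [0, 0, 2, 4, 4, 6]
print(shape_feature(rect))
```

Doubling every run of the chain (a 2x scaled rectangle) yields exactly the same normalized vector, which is the scale invariance the text relies on.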
2.3.3 symmetry features
Vectorizing the object's edge information also provides a way to identify symmetry features. As shown in fig. 3, the left figure is symmetrical about the vertical axis of the image and the right figure about the horizontal axis, with the directions of the figure edges marked. It can be seen that if the lengths in directions 6 and 2, 5 and 3, and 1 and 7 are pairwise equal while those in directions 4 and 0 differ, the figure is symmetrical about the vertical axis; if the lengths in directions 0 and 4, 5 and 3, and 1 and 7 are pairwise equal while those in directions 6 and 2 differ, it is symmetrical about the horizontal axis. In this way the symmetry feature of an article can be identified.
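The test just stated can be applied directly to the 8-direction length vector. The sketch below encodes the result as in the item database (1 = vertical axis, 0 = horizontal axis, 2 = both, -1 = asymmetric); the handling of the "both axes" case (all four direction pairs equal) is an assumption, since the text defines only the single-axis rules.

```python
# Symmetry code from the 8-direction length vector, per the rule in the
# text; the "both" case is an assumed extension.
def symmetry_code(lengths, tol=1e-6):
    eq = lambda a, b: abs(lengths[a] - lengths[b]) <= tol
    pairs = eq(5, 3) and eq(1, 7)          # shared by both rules
    if pairs and eq(6, 2) and eq(0, 4):
        return 2                           # symmetric about both axes
    if pairs and eq(6, 2) and not eq(4, 0):
        return 1                           # vertical axis
    if pairs and eq(0, 4) and not eq(6, 2):
        return 0                           # horizontal axis
    return -1                              # asymmetric

print(symmetry_code([1, 2, 3, 2, 5, 2, 3, 2]))
```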
2.4 classifier
After the features of an article are obtained, the corresponding feature vectors must be recognized. Neural networks are widely applied in pattern recognition, and the Support Vector Machine (SVM) is an important tool for feature-vector recognition, but these methods converge slowly, take a long time to train, and require retraining when new feature vectors are added. Considering that articles may be added to or removed from the indoor mobile robot's working environment, the invention completes classification by template matching: the distance between the sample under test and a standard sample is computed and classified against a set threshold.
At present, template matching mostly uses the Euclidean distance, which measures the distance between two vectors in a space to gauge their degree of match. However, when the acquired sample vector is incomplete because of occlusion, Euclidean-distance template matching may produce a large error. The Hausdorff distance measures the distance between proper subsets of a space and can effectively avoid the influence of occlusion on vector matching.
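The Hausdorff distance between sets A and B is H(A, B) = max(h(A, B), h(B, A)), where h(A, B) = max over a in A of the min over b in B of d(a, b). A minimal sketch on 1-D feature values (with d the absolute difference) shows why occlusion is tolerated better than with a pairwise Euclidean comparison:

```python
# Minimal (non-optimized) Hausdorff distance between two sets of scalars.
def hausdorff(a, b):
    def directed(u, v):
        return max(min(abs(x - y) for y in v) for x in u)
    return max(directed(a, b), directed(b, a))

# Dropping one element from a set (e.g. a feature lost to occlusion)
# changes the distance only modestly rather than misaligning every term:
print(hausdorff([0.1, 0.4, 0.5], [0.1, 0.4]))
```

Unlike the Euclidean distance, the two sets need not even have the same number of elements, which matches the occluded-sample scenario described above.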
2.5 Identification method based on a prior database of articles
The invention uses a database to store the features of indoor articles, and completes identification by querying articles and their feature vectors and matching them against the sample under test.
2.5.1 indoor database of items
Table 1 was drawn up according to the actual situation in order to explain the article database more intuitively; its contents match the database exactly, and the invention also refers to such tables as databases. Table 1 lists 8 indoor items stored in the database; Label is the code of each item. The database lists 4 types of features in total, obtained according to the biomimetic article-identification rules above. Besides the features derived from the biomimetic positioning and navigation behavior model, human experience in identifying objects was also considered. From experience, on first seeing an article a person is sensitive first to its overall color, then obtains the shape feature of the article through the color, then judges whether it satisfies the symmetry feature, and only after this general understanding notices the details of the article. This detail portion is represented here by the color-detail feature.
TABLE 1
(Table 1 is reproduced as an image in the original patent.)
The Label field is a convenient code for querying different articles; the Item field is the name of the corresponding article. The Color1, Color2 and Color3 fields represent the main color composition of the article: Color1 is the most widely distributed color, Color2 the second, and Color3 slightly less distributed than the former two; the colors are coded 1 to 10 in the order of the ten colors perceivable by the human eye, namely red, orange, yellow, green, blue-green, blue, purple, white, black and gray, and the code 11 means there is no corresponding color. The Shape field encodes the shape feature vector: for example, "1" means that the data in the locally stored file 1.txt is the shape feature vector of the object. The Symmetry field takes four codes: "0" for a figure symmetric about the horizontal axis, "1" for one symmetric about the vertical axis, "2" for one symmetric about both axes, and "-1" for an asymmetric figure. The Color_Texture field encodes the color-histogram feature of the item: for example, "1" means that the data in the locally stored file 1.txt is the color-histogram feature vector of the item.
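The schema described above can be sketched with SQLite from the Python standard library. Field names follow the text; the sample rows (Desk and Table, both with dominant color white = 8) echo the example discussed later in this section and are illustrative only.

```python
# In-memory sketch of the indoor article database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        Label INTEGER PRIMARY KEY,   -- item code
        Item TEXT,                   -- item name
        Color1 INTEGER,              -- dominant color (1-10, 11 = none)
        Color2 INTEGER,
        Color3 INTEGER,
        Shape TEXT,                  -- file code of shape feature vector
        Symmetry INTEGER,            -- 0 / 1 / 2 / -1 as described above
        Color_Texture TEXT           -- file code of color histogram
    )""")
conn.executemany(
    "INSERT INTO items VALUES (?,?,?,?,?,?,?,?)",
    [(2, "Desk", 8, 11, 11, "2", 1, "2"),
     (3, "Table", 8, 11, 11, "3", 1, "3")])

# Main-color lookup, as in step (4) of the identification algorithm:
# all items whose dominant color is white (code 8).
rows = conn.execute("SELECT Label, Item FROM items WHERE Color1 = 8").fetchall()
print(rows)
```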
2.5.2 identifying items Using a database
In the case where the indoor articles are known, their features are stored in the database. The algorithm for identifying an article using the database therefore has the following steps:
(1) input the original image and compute its color histogram;
(2) according to the color distribution in the histogram, starting from the most widely distributed color, find the connected domain of that color in the original image and crop the corresponding sub-image through the connected domain;
(3) compute the color-distribution histogram feature vector of the cropped image to obtain the color feature vector under test;
(4) using the most widely distributed color from step (2) as the main color feature, search the database to obtain the article records matching that main color;
(5) match the color feature vectors of the articles obtained in step (4) against the vector under test from step (3) by the Hausdorff distance, and keep the articles whose distance is below the threshold 0.9 as having passed the color screening;
(6) compute the shape features of the image cropped in step (2), compare them by the Hausdorff distance with the database-stored shape features of the articles remaining after step (5), and keep the articles whose distance is below the threshold 1.7;
(7) obtain the symmetry features of the image cropped in step (2) and match them against the symmetry features of the articles remaining after step (6); if the symmetry features cannot distinguish the article, select from step (6) the article that satisfies the threshold condition with the smallest Hausdorff distance as the result. Otherwise, return to step (2) with the second most widely distributed color and repeat steps (2) to (7), cycling in turn.
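Steps (4) to (7) above can be compressed into a short sketch: filter database records by dominant color, screen by color-feature distance (threshold 0.9) and shape-feature distance (threshold 1.7), and fall back to the smallest shape distance. The tiny in-memory database and the simple max-absolute-difference metric are illustrative stand-ins for the real feature files and the Hausdorff distance.

```python
# Illustrative identification pipeline with the thresholds from the text.
def dist(u, v):                      # stand-in vector distance
    return max(abs(x - y) for x, y in zip(u, v))

DB = [   # (label, name, dominant color code, color feature, shape feature)
    (2, "Desk",  8, [0.7, 0.2, 0.1], [0.5, 0.0, 0.5, 0.0]),
    (3, "Table", 8, [0.6, 0.3, 0.1], [0.2, 0.3, 0.2, 0.3]),
    (5, "Chair", 9, [0.1, 0.8, 0.1], [0.4, 0.1, 0.4, 0.1]),
]

def identify(dominant, color_feat, shape_feat):
    # step (4): main-color lookup
    cand = [r for r in DB if r[2] == dominant]
    # step (5): color screening, threshold 0.9
    cand = [r for r in cand if dist(color_feat, r[3]) < 0.9]
    # step (6): shape screening, threshold 1.7
    cand = [r for r in cand if dist(shape_feat, r[4]) < 1.7]
    if not cand:
        return None
    # step (7) fallback: smallest distance wins
    return min(cand, key=lambda r: dist(shape_feat, r[4]))[1]

print(identify(8, [0.7, 0.2, 0.1], [0.5, 0.0, 0.5, 0.0]))
```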
Taking an example from the data in the table: suppose the global histogram of the image gives white as the most widely distributed color. Searching the database for white (code 8) returns the articles with codes 2 (Desk) and 3 (Table); since these two articles have different shape-feature codes, the article can then be determined by its shape features.
The article-identification algorithm adopted by the invention is based on the information acquired by the biomimetic human eye. It is easy to see that the identification algorithms adopted are all very mature and computationally light. This mainly reflects the practical implementation constraints of the robot, which is why currently fashionable learning algorithms are not used for the recognition work. The method exploits the convenience of database management to acquire the various features of articles in advance; during identification, the main color information locates the marker article in the natural scene, and the extracted features are matched against those in the database.
This completes the biomimetic model's post-processing of the environment's color and edge information by the simulated human brain: the color, shape and symmetry features of the object are extracted from the color and edge information, and the database then mimics the memory function of the human brain by storing these features. During identification, the queried features are template-matched against the object under test.
Three, bionic positioning navigation algorithm design
On the basis of article identification, the indoor mobile robot can be positioned by means of indoor landmark articles. This process is analogous to the way the human eye is sensitive to landmark buildings in the environment and uses the surrounding landmarks to construct a map of it. This chapter therefore mainly simulates the behaviors of place cells, head-direction cells and grid cells: on the basis of the bionic human brain processing and identifying the visual information, an indoor article database is used for positioning. At the same time, the driving path is retrieved from a map database according to the obtained positioning point, thereby realizing navigation. FIG. 4 is a block diagram of the positioning and navigation method of this chapter.
3.1 Path identification based on depth information
3.1.1 bionic Path acquisition method
As described above, the human eye obtains not only color information but also depth information from the environment. The invention uses a Microsoft Kinect somatosensory camera to obtain the color and depth information of the environment, and processes them to obtain the path information. The specific method comprises the following steps:
(1) calibrating the depth image and the RGB image by using a chessboard calibration method;
(2) transforming the image obtained by the Kinect into an HSV space to obtain color information of a scene in front of the robot;
(3) keeping the pixel values of the parts of the image that are consistent with the ground color unchanged, and setting the inconsistent parts to black, to obtain a region containing only black and the road color;
(4) obtaining the distance of the travelable space on the road in front of the robot from the depth information. If the depth is less than 40 cm, the area in front is considered an obstacle and the robot cannot move forward. The depth information is then used to judge whether roads and travelable space distances exist on the left and right sides of the obstacle in the image.
After this processing, the robot's straight-ahead travel distance and the direction to turn after the robot meets an obstacle can be obtained.
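A rough numpy sketch of steps (2)-(4), assuming the images are already calibrated and leaving out the left/right search; `ground_hue` and `hue_tol` are assumed calibration parameters.

```python
import numpy as np

OBSTACLE_LIMIT_CM = 40  # forward clearance below this is treated as an obstacle

def drivable_distance(hsv_img, depth_cm, ground_hue, hue_tol=10):
    """Mask pixels whose hue matches the ground colour (step 3), then read the
    nearest depth inside that mask as the straight-ahead distance (step 4)."""
    # step (3): keep only road-coloured pixels, everything else is discarded
    road_mask = np.abs(hsv_img[..., 0].astype(int) - ground_hue) <= hue_tol
    if not road_mask.any():
        return 0.0, False
    # step (4): the travelable distance is the closest depth on the road
    dist = float(depth_cm[road_mask].min())
    return dist, dist >= OBSTACLE_LIMIT_CM
```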
3.1.2 bionic distance representation method
People do not have a precise notion of distance, and descriptions of a path are usually given in vague terms. Therefore, after the robot completes path recognition, the travelable distance can be represented by fuzzy sets. Because the working range of the depth vision sensors currently on the market is limited, the limit distance at which the robot can identify indoor articles is set to 3 meters, and the optimal distance for identifying indoor articles is set to 40 cm to 90 cm. From these two constraints, five fuzzy sets are defined for the robot's path distance: d1, d2, d3, d4 and d5, representing "very near", "near", "moderate", "far" and "very far" respectively, over the universe of discourse X = (30, 300) in cm; the respective membership functions are obtained by a reference method.
After the specific distance information x of the path is obtained from the vision sensor, x is substituted into the membership functions of the five fuzzy sets to obtain the corresponding membership degrees; the fuzzy set with the maximum membership degree is the robot's judgment of the environmental path distance.
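As an illustration only, the maximum-membership decision can be realized with hypothetical triangular membership functions over the (30, 300) cm domain (the patent obtains its actual functions by a reference method):

```python
def tri(x, a, b, c):
    """Triangular membership function rising over [a, b] and falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership functions for the five distance fuzzy sets.
FUZZY_SETS = {
    "d1 very near": lambda x: tri(x, 29, 30, 70),
    "d2 near":      lambda x: tri(x, 40, 75, 110),
    "d3 moderate":  lambda x: tri(x, 90, 140, 190),
    "d4 far":       lambda x: tri(x, 170, 220, 270),
    "d5 very far":  lambda x: tri(x, 250, 300, 301),
}

def fuzzify(x_cm):
    """Return the fuzzy set whose membership degree is largest for distance x."""
    return max(FUZZY_SETS, key=lambda name: FUZZY_SETS[name](x_cm))
```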
3.2 robot map
The invention adopts human-like map construction, drawing mainly on the simplicity and efficiency of human navigation: a description of the whole scene can be achieved through the memory of landmark buildings, completing navigation. At present there are four main map representations in robotics: grid maps, feature-point maps, direct-representation maps and topological maps.
3.3 human-like map construction method
The invention realizes the bionic map in the form of a topological map. The positioning points and paths of the topological map are stored in a database, which serves as the robot's memory of the map and is referred to as the indoor path database. The database stores the fuzzy straight-travel distance of the robot, its rotation direction, and the landmark articles. As shown in Table 2: the first "Label" field stores the position code, within the map, of the landmark article where the robot is currently located; the code is consistent with the article corresponding to "Label" in the article database. The "direction" field stores the rotation direction of the robot and takes three codes, "0", "3" and "4", where 0 means stay in place, 3 means turn right and 4 means turn left. The "distance" field stores the fuzzy distance the robot travels straight along the road; to facilitate storage and querying, the distance fuzzy sets are encoded as "1" very near, "2" near, "3" moderate, "4" far and "5" very far. The last "target" field stores the code of the landmark article position to be reached. The whole navigation process is as follows: starting from the landmark article corresponding to the "Label" field, a straight path is obtained through one rotation, and the distance to travel to reach the landmark article of the "target" field is read out. As can be seen from Table 2, the content stored in the database is a miniature of the space, composed of the different landmarks and the paths between them.
TABLE 2
(Table 2 is reproduced as an image in the original publication.)
The nodes and edges of the topological map are stored in the database, a behavior similar to the way humans memorize an environment, and it facilitates path retrieval through the database. Table 3 shows the result of storing the environment paths in the database. From the first record it can be seen that, starting from position 3, a right turn is performed first, then a road on which the robot can go straight is found, whose driving distance is the fuzzy value corresponding to distance code 3; the robot then goes straight by the corresponding distance and finally reaches the target position. It is easy to see that the whole process is consistent with how humans move through an environment: positioning is completed using landmarks, and the layout of the whole environment is then described through the relative positions of several positioning points. A database is adopted for map storage because it facilitates retrieval, is convenient to program and is fast to query.
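The retrieval of a driving route from such records can be sketched as a breadth-first search over the stored (label, direction, distance, target) tuples; the record values below are made up for illustration and do not reproduce Table 3.

```python
from collections import deque

# Toy copy of the indoor path database.  Direction codes follow the text
# (0 = stay in place, 3 = turn right, 4 = turn left); distance codes 1..5
# are the fuzzy labels very near .. very far.  Values are illustrative.
RECORDS = [
    (3, 3, 3, 7),   # from landmark 3: turn right, drive a "moderate" distance, reach 7
    (7, 4, 1, 2),
    (7, 0, 2, 9),
]

def find_route(start, goal):
    """Breadth-first search over the topological records.  Returns the list
    of records to execute in order, or None when the goal is unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rec in RECORDS:
            if rec[0] == node and rec[3] not in seen:
                seen.add(rec[3])
                queue.append((rec[3], path + [rec]))
    return None
```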
TABLE 3
(Table 3 is reproduced as an image in the original publication.)
3.4 MPU9250-assisted navigation
The MPU9250 sensor is used mainly to mimic the function of the head-direction cells of the human brain. Head-direction cells are important auxiliary cells in human positioning and navigation behavior. The sensor is used herein to acquire the orientation information of the robot. It can acquire accelerometer, gyroscope and electronic-compass data.
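As a minimal sketch of the compass function, the heading can be derived from the magnetometer's X/Y components; this ignores tilt compensation and hard/soft-iron calibration, which a real MPU9250 driver must handle.

```python
import math

def heading_deg(mx, my):
    """Heading in degrees, measured clockwise from magnetic north, from the
    magnetometer X/Y readings (a simplified, tilt-free model)."""
    return math.degrees(math.atan2(my, mx)) % 360.0
```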
In summary, a human-like topological map is constructed using a database, and bionic navigation is completed on that basis. On the basis of the vision-based article identification of the previous chapter, the identified landmark articles are taken as positioning points. After studying different robot map types and comparing them with the characteristics of human environmental memory, the topological map is selected. The map comprises nodes and edges, similar to the relationship between positioning points and paths when a human memorizes an environment. Because the map contains only point and line information, it has the advantages of simple construction and convenient querying. Because humans do not describe distance by precise measurement, the distances measured by the mobile robot are fuzzified, which also reduces the data volume. Finally, the map information is stored in a database to facilitate querying and storage. In addition, the introduction of the MPU9250 links the robot's orientation information with the natural (geomagnetic) reference.

Claims (6)

1. A bionic indoor positioning and navigation method for a robot is characterized by comprising the following steps:
(1) modeling a human positioning and navigation method based on a human visual system and positioning and navigation cells in human brain to obtain a positioning and navigation model;
firstly, the horizontal cells of the visual system are simulated to perform illumination-removal processing on the acquired image; the photoreceptor cells then obtain the color information and edge information of the environment; after the retina acquires the information, it is transmitted through neurons to the brain for information processing; the brain further processes the received information to obtain the color features, shape features and symmetry features of the article, so that the markers in the environment are identified; after a marker is identified, it is compared with the positioning points memorized by the place cells to obtain the current position; the grid cells memorize the paths between all position points like a map; a person judges the road distance between two positions through the depth image; the head-direction cells analyze the direction the head faces and provide orientation information for the map;
(2) designing a bionic algorithm for identifying indoor articles: the color features, shape features and symmetry features of the object are extracted from the color and edge information, and the features are then stored using a database that mimics the memory function of the human brain; during identification, the object to be detected is recognized by template matching against the query features;
(3) designing a bionic positioning and navigation algorithm: the behaviors of the place cells, head-direction cells and grid cells are simulated, and on the basis of the bionic human brain processing the visual information and identifying the articles, an indoor article database is used for positioning; at the same time, a travel path is retrieved from a map database according to the obtained positioning point, realizing navigation;
in step (1): a bionic method of analogy and simulation is adopted, and modeling proceeds from an in-depth study of the behavioral mechanisms by which the human eye acquires information and the brain processes it; the following physiological behaviors are imitated:
(1) the behavior of the bionic human eye which, after acquiring the environment image, first performs brightness balancing and edge-contrast enhancement, and then acquires the color information and edge information in the image;
(2) the behavior of the bionic human eye which perceives the distance information of the environment but represents it in a fuzzy manner;
(3) the behavior of the bionic human brain which identifies markers through color features and edge features;
(4) the behavior of the bionic human brain which memorizes positioning points through the place cells and realizes positioning through the memorized landmark articles;
(5) the behavior of the bionic human brain which realizes navigation by memorizing paths through the combined action of the place cells and the grid cells;
(6) the behavior of the bionic human brain which obtains orientation-assisted navigation using the information of the head-direction cells.
2. The method for robot bionic indoor positioning and navigation according to claim 1, wherein the step (2) comprises the following steps:
enhancing the image; acquiring color information; acquiring color characteristics; acquiring edge information; vectorizing edge information and extracting shape features; detecting the symmetric characteristics; and identification based on an item prior database.
3. The method for robot bionic indoor positioning and navigation according to claim 2, characterized in that the specific steps of the image-enhancement algorithm are as follows:
(1) performing HSV space conversion on an original image to respectively obtain H, S, V channel data;
(2) carrying out histogram equalization processing on the V channel data;
(3) re-fusing the processed V with H, S;
(4) converting the HSV image obtained in step (3) back to RGB.
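Steps (2)-(3) of this claim can be sketched on a uint8 HSV array as follows; the RGB-HSV conversions of steps (1) and (4) are left to an image library such as OpenCV.

```python
import numpy as np

def equalize_v_channel(hsv):
    """Equalize the histogram of the V (brightness) channel of a uint8 HSV
    image and re-fuse it with the untouched H and S channels."""
    v = hsv[..., 2]
    hist = np.bincount(v.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first occupied bin
    span = max(int(cdf[-1] - cdf_min), 1)          # avoid division by zero
    # classic equalization: map the cumulative distribution onto 0..255
    lut = np.clip(np.round((cdf - cdf_min) / span * 255), 0, 255).astype(np.uint8)
    out = hsv.copy()
    out[..., 2] = lut[v]                           # step (3): re-fuse V with H, S
    return out
```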
4. The method of claim 2, wherein the algorithm for identifying the object by using the database comprises the following steps:
(1) inputting an original image and solving a color histogram of the original image;
(2) according to the color distribution in the color histogram, starting from the most frequent color, finding the connected domain of that color in the original image and intercepting the corresponding region of the original image through the connected domain;
(3) computing the color-distribution histogram feature vector of the intercepted image to obtain the feature vector of the color to be detected;
(4) searching the database using the most frequent color obtained from the color distribution in step (2) as the main color feature, to obtain the article records matching the main color feature;
(5) matching the color feature vectors of the articles obtained in step (4) against the vector to be detected obtained in step (3) by the Hausdorff distance, and selecting articles whose distance is smaller than the threshold 0.9 as having passed the color screening;
(6) computing the shape features of the image intercepted in step (2) and, by Hausdorff-distance judgment, comparing them with the shape features, stored in the database, of the articles screened in step (5), selecting for further screening the articles whose distance is smaller than the threshold 1.7;
(7) matching the symmetry features of the image intercepted in step (2) with the symmetry features of the articles screened in step (6); if the candidates cannot be distinguished, the article satisfying the threshold condition of step (6) with the minimum Hausdorff distance is selected as the result; otherwise, the process returns to step (2) and steps (2) to (7) are repeated for the second-most-frequent color, and so on in sequence.
5. The method for robot bionic indoor positioning and navigation according to claim 1, characterized in that step (3) comprises:
path identification based on depth information; performing bionic distance representation; and constructing a robot map.
6. The method of claim 5, wherein the method comprises the following steps: the path identification based on the depth information comprises a bionic path acquisition method and a bionic distance representation method; the bionic path acquisition method comprises the following specific steps:
(1) calibrating the depth image and the RGB image by using a chessboard calibration method;
(2) transforming the image obtained by the Kinect into an HSV space to obtain color information of a scene in front of the robot;
(3) keeping the pixel value of the part, which is consistent with the ground color, in the image unchanged, and setting the inconsistent part to be black to obtain an area only with black and road colors;
(4) obtaining the distance of the travelable space on the road in front of the robot from the depth information; if the depth is less than 40 cm, the area in front is considered an obstacle and the robot cannot move forward; the depth information is then used to judge whether roads and travelable space distances exist on the left and right sides of the obstacle in the image.
CN201810595271.4A 2018-06-11 2018-06-11 Bionic indoor positioning and navigation method for robot Expired - Fee Related CN109000655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810595271.4A CN109000655B (en) 2018-06-11 2018-06-11 Bionic indoor positioning and navigation method for robot


Publications (2)

Publication Number Publication Date
CN109000655A CN109000655A (en) 2018-12-14
CN109000655B true CN109000655B (en) 2021-11-26

Family

ID=64600648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810595271.4A Expired - Fee Related CN109000655B (en) 2018-06-11 2018-06-11 Bionic indoor positioning and navigation method for robot

Country Status (1)

Country Link
CN (1) CN109000655B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111044031B (en) * 2019-10-10 2023-06-23 北京工业大学 Cognitive map construction method based on mouse brain hippocampus information transfer mechanism
CN111360829B (en) * 2020-03-13 2023-12-05 苏州三百亿科技有限公司 Medical supplies transporting robot under artificial intelligence big data and control method thereof
CN111551153A (en) * 2020-04-20 2020-08-18 东北师范大学 Ocean profile environmental parameter rapid measurement system
CN113008225B (en) * 2021-03-10 2022-09-02 中科人工智能创新技术研究院(青岛)有限公司 Visual language navigation method and system based on non-local visual modeling
CN115456057A (en) * 2022-08-30 2022-12-09 海尔优家智能科技(北京)有限公司 User similarity calculation method and device based on sweeping robot and storage medium
CN115326078B (en) * 2022-10-17 2023-01-17 深圳赤马人工智能有限公司 Path navigation method and device, intelligent sweeping and washing robot and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101691037A (en) * 2009-10-09 2010-04-07 南京航空航天大学 Method for positioning mobile robot based on active visual perception and chaotic evolution
CN101763429A (en) * 2010-01-14 2010-06-30 中山大学 Image retrieval method based on color and shape features
CN102401656A (en) * 2011-11-08 2012-04-04 中国人民解放军第四军医大学 Place cell bionic robot navigation algorithm
CN106125730A (en) * 2016-07-10 2016-11-16 北京工业大学 A kind of robot navigation's map constructing method based on Mus cerebral hippocampal spatial cell
CN106662452A (en) * 2014-12-15 2017-05-10 iRobot Corporation (US) Robot lawnmower mapping
CN107063260A (en) * 2017-03-24 2017-08-18 北京工业大学 A kind of bionic navigation method based on mouse cerebral hippocampal structure cognitive map
CN107122827A (en) * 2017-04-28 2017-09-01 安徽工程大学 A kind of RatSLAM algorithms based on DGSOM neutral nets


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A bionic autonomous navigation system by using polarization navigation sensor and stereo camera; Xian, Zhiwen et al.; Autonomous Robots; 2017-06-01; Vol. 41, No. 5, pp. 1107-1118 *
Research on several problems of intelligent robots with perception and cognition abilities; Chen Dongyue; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2007-12-15; No. 6, pp. I140-45 *

Also Published As

Publication number Publication date
CN109000655A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN109000655B (en) Bionic indoor positioning and navigation method for robot
CN108496127B (en) Efficient three-dimensional reconstruction focused on an object
CN109597087B (en) Point cloud data-based 3D target detection method
CN111210518B (en) Topological map generation method based on visual fusion landmark
CN104036488B (en) Binocular vision-based human body posture and action research method
CN111080659A (en) Environmental semantic perception method based on visual information
CN110956651A (en) Terrain semantic perception method based on fusion of vision and vibrotactile sense
CN110555412B (en) End-to-end human body gesture recognition method based on combination of RGB and point cloud
CN104463191A (en) Robot visual processing method based on attention mechanism
Cui et al. 3D semantic map construction using improved ORB-SLAM2 for mobile robot in edge computing environment
CN107397658B (en) Multi-scale full-convolution network and visual blind guiding method and device
CN106780631A (en) A kind of robot closed loop detection method based on deep learning
CN110060284A (en) A kind of binocular vision environmental detecting system and method based on tactilely-perceptible
CN113538218B (en) Weak pairing image style migration method based on pose self-supervision countermeasure generation network
CN117612135A (en) Travel area judging method based on transformation point cloud and image fusion
CN117214904A (en) Intelligent fish identification monitoring method and system based on multi-sensor data
CN109202911B (en) Three-dimensional positioning method for cluster amphibious robot based on panoramic vision
CN108469729A (en) A kind of human body target identification and follower method based on RGB-D information
Kress et al. Pose based trajectory forecast of vulnerable road users
CN114397894A (en) Mobile robot target searching method simulating human memory
Atanasov et al. Nonmyopic view planning for active object detection
Kim et al. Three‐dimensional map building for mobile robot navigation environments using a self‐organizing neural network
Chen et al. Global Visual And Semantic Observations for Outdoor Robot Localization
CN115797397B (en) Method and system for all-weather autonomous following of robot by target personnel
He Image recognition technology based on neural network in robot vision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211126