CN109313822B - Virtual wall construction method and device based on machine vision, map construction method and movable electronic equipment - Google Patents


Info

Publication number
CN109313822B
CN109313822B (application CN201780017028.8A)
Authority
CN
China
Prior art keywords
virtual wall
movable electronic
electronic equipment
distance
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780017028.8A
Other languages
Chinese (zh)
Other versions
CN109313822A (en)
Inventor
李北辰
Current Assignee
Zhejiang Qiyuan Robot Co.,Ltd.
Original Assignee
Guangzhou Airob Robot Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Airob Robot Technology Co., Ltd.
Publication of CN109313822A
Application granted
Publication of CN109313822B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G06T 19/006 — Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a machine-vision-based virtual wall construction method and device, a map construction method, and a movable electronic device. The virtual wall construction method performs a matching operation on images acquired in real time; when the matching succeeds, specific feature points on the image are selected as key points, the position of the virtual wall relative to the movable electronic device is calculated from those key points, and the virtual wall is constructed automatically. A boundary dividing the accessible area from the prohibited area can thus be constructed accurately, and this explicit boundary completely prevents the movable electronic device from entering the prohibited area, so the method is simple, practical, and highly reliable. In addition, the scheme requires no additional interactive equipment to set up the virtual wall and no positioning tags at specific locations, giving it a higher degree of intelligence.

Description

Virtual wall construction method and device based on machine vision, map construction method and movable electronic equipment
Technical Field
The invention relates to the field of simultaneous localization and mapping, and in particular to a machine-vision-based virtual wall construction method and device, a map construction method, and a movable electronic device.
Background
Positioning and mapping of mobile devices is an active research problem in robotics. Practical solutions exist for autonomous positioning in a known environment and for map creation from known robot locations. In many environments, however, the mobile device cannot rely on a global positioning system, and a map of the operating environment is difficult or impossible to obtain in advance. The device must then build a map of a completely unknown environment while its own position is uncertain, and simultaneously use that map for autonomous positioning and navigation. This is known as simultaneous localization and mapping (SLAM).
In SLAM, the mobile device uses its on-board sensors to recognize feature marks (such as RFID tags and color-block labels) in an unknown environment, and then distinguishes accessible areas from prohibited areas according to the information carried by those marks, so that it can be guided into designated areas to work according to the user's individual requirements. This existing guiding approach has a defect: accessible and prohibited areas cannot be identified accurately, and false or imprecise identification occurs easily, so the mobile device may enter a prohibited area and be damaged.
Disclosure of Invention
The embodiments of the invention aim to provide a machine-vision-based virtual wall construction method, a map construction method, and a movable electronic device, which can effectively solve the prior-art problem that false and inaccurate identification easily causes the mobile device to enter a prohibited area.
The embodiment of the invention provides a virtual wall construction method based on machine vision, which comprises the following steps:
in the process that the movable electronic equipment traverses the area to be positioned, images of the surrounding environment are collected in real time at a preset frequency through a camera arranged on the movable electronic equipment, and the collected images at each moment are projected to a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
according to a preset image matching algorithm, when a target image collected at any moment matches any marker pattern in the marker pattern library, acquiring x key points of the target image on the photosensitive surface; wherein the key points of the target image on the photosensitive surface all lie on a characteristic straight line, and x ≥ 2;
calculating the distance between a virtual wall and the movable electronic equipment based on key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which forms a preset included angle with the characteristic straight line and is perpendicular to the photosensitive surface according to the distance between the virtual wall and the movable electronic equipment; and when the preset included angle is equal to 0 degree or 180 degrees, constructing the virtual wall on a surface which is parallel to the characteristic straight line and is vertical to the photosensitive surface.
As a refinement of the above embodiment, the method further comprises the steps of:
responding to a calibration instruction, acquiring an image right above the movable electronic equipment at the current moment, and storing the image into the marking pattern library as a new marking pattern;
and acquiring a plurality of characteristic points of the marking pattern and a characteristic descriptor corresponding to each characteristic point through the image matching algorithm.
As an improvement of the above embodiment, when a target image acquired at any time is matched with any marker pattern in the marker pattern library according to a preset image matching algorithm, acquiring x key points of the target image on the photosensitive surface specifically includes:
obtaining a plurality of feature points of the target image at the current moment and a feature descriptor corresponding to each feature point through the image matching algorithm;
acquiring feature points of the target image and the mark pattern in a matching relationship by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the mark pattern, and judging that the target image is matched with any mark pattern in the mark pattern library when the number of the acquired feature points in the matching relationship is greater than a preset threshold value;
and according to the position relation of the characteristic points with the matching relation on the target image, taking the characteristic points with x positions on the same straight line as key points.
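As a toy sketch of this matching procedure, assuming descriptors are plain numeric vectors (the function names and thresholds are illustrative, not from the patent):

```python
import math

def euclidean(d1, d2):
    # Euclidean distance between two feature descriptors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def match_descriptors(target_descs, marker_descs, max_dist=0.5):
    # Pair each target descriptor with its nearest marker descriptor;
    # keep the pair only when the distance falls below max_dist.
    matches = []
    for i, td in enumerate(target_descs):
        j, dist = min(
            ((k, euclidean(td, md)) for k, md in enumerate(marker_descs)),
            key=lambda t: t[1],
        )
        if dist < max_dist:
            matches.append((i, j))
    return matches

def is_marker_matched(matches, count_threshold=10):
    # The patent declares a match when the number of paired feature
    # points exceeds a preset threshold.
    return len(matches) > count_threshold
```

Production systems would use a ratio test and an indexed matcher instead of this brute-force nearest-neighbour loop, but the threshold-on-match-count decision is the same.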
As an improvement of the above embodiment, the image matching algorithm is a scale-invariant feature transform algorithm or an accelerated robust feature algorithm, and each feature descriptor of the marker pattern/target image is obtained by the following steps:
establishing a scale space of a marked image/target image through Gaussian blur, identifying an extreme point in the scale space of the marked image/target image through a Gaussian differential function, checking the extreme point in the scale space of the marked image/target image, and removing an unstable extreme point in the scale space of the marked image/target image, so as to obtain a feature point of the marked image/target image and the scale and position of the feature point;
according to the gradient direction distribution characteristics of the neighborhood pixels of the feature points, giving a direction to each feature point;
and according to the scale, the position and the direction of the feature point, carrying out regional blocking on the surrounding image of the feature point, and calculating an intra-block gradient histogram so as to generate a feature descriptor of the feature point.
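As a drastically simplified illustration of the final step, here is a single block-wise gradient-orientation histogram over a grey-value patch (real SIFT/SURF descriptors add Gaussian weighting, bin interpolation, and normalization):

```python
import math

def gradient_orientation_histogram(patch, bins=8):
    # patch: 2-D list of grey values. Compute finite-difference
    # gradients at interior pixels and accumulate their orientations,
    # weighted by gradient magnitude, into a histogram -- the core of
    # one block of a SIFT-style descriptor.
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]
            dy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(dx, dy)
            ang = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(ang / (2 * math.pi) * bins) % bins] += mag
    return hist
```

Concatenating such histograms over a 4×4 grid of blocks around a feature point yields the familiar 128-dimensional SIFT descriptor.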
As an improvement of the above embodiment, an image collected by a camera is projected to a photosensitive surface of the image sensor through an imaging lens; and the distance between the virtual wall and the movable electronic equipment is calculated by a triangular distance measurement method.
As an improvement of the above embodiment, the calculating the distance between the virtual wall and the movable electronic device based on the key point of the target image on the photosensitive surface specifically includes:
calculating a distance of the virtual wall from the movable electronic device by the following formula:
D=a/b*S*|cosθ|
wherein D is the distance between the virtual wall and the movable electronic device, a is the distance from the upper frame of the virtual wall to the photosensitive surface, b is the distance from the imaging lens to the photosensitive surface, S is the distance from the characteristic straight line to the central point of the photosensitive surface, and θ is the preset included angle.
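The distance formula D = a/b × S × |cosθ| can be computed directly; a minimal sketch, assuming consistent length units and θ given in degrees (the function name is illustrative):

```python
import math

def virtual_wall_distance(a, b, s, theta_deg):
    # D = a / b * S * |cos(theta)|
    # a: distance from the virtual wall's upper frame to the photosensitive surface
    # b: distance from the imaging lens to the photosensitive surface
    # s: distance from the characteristic straight line to the sensor centre
    # theta_deg: preset included angle, in degrees
    return a / b * s * abs(math.cos(math.radians(theta_deg)))
```

Note that |cosθ| makes θ = 0° and θ = 180° equivalent, matching the special case in which the wall is built parallel to the characteristic straight line.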
As a modification of the above embodiment, the image sensor includes a PSD sensor, a CCD sensor, or a CMOS sensor.
As a modification of the above embodiment, the width of the virtual wall is calculated by the following formula:
w = 2a·tan(λ/2)
wherein w is the width of the virtual wall, a is the distance from the upper frame of the virtual wall to the photosensitive surface, and λ is the wide angle of the camera.
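Assuming the width formula is the standard field-of-view relation w = 2a·tan(λ/2), consistent with the variables listed here (the original equation image did not survive extraction), it can be sketched as:

```python
import math

def virtual_wall_width(a, lam_deg):
    # w = 2 * a * tan(lambda / 2): the width spanned at distance a by a
    # camera whose wide angle is lambda (given in degrees).
    return 2 * a * math.tan(math.radians(lam_deg) / 2)
```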
As a refinement of the above embodiment, the method further comprises the steps of:
when the distance between the virtual wall and the movable electronic equipment is smaller than a preset distance, the movable electronic equipment is moved through a preset avoidance strategy so that the distance between the movable electronic equipment and the virtual wall is increased.
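A minimal sketch of this avoidance check; the heading-reversal strategy and the names are illustrative assumptions, since the patent leaves the preset avoidance strategy unspecified:

```python
def needs_avoidance(wall_distance, preset_distance):
    # Trigger the avoidance strategy when the device is closer to the
    # virtual wall than the preset safety distance.
    return wall_distance < preset_distance

def avoidance_heading(current_heading_deg):
    # One simple strategy: reverse the heading so the distance between
    # the device and the virtual wall increases.
    return (current_heading_deg + 180) % 360
```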
As a refinement of the above embodiment, the method further comprises the steps of:
after the virtual wall is constructed, the movable electronic equipment is controlled to penetrate through the virtual wall in a preset path.
As an improvement of the above embodiment, after projecting the captured image onto a photosensitive surface of an image sensor provided in the mobile electronic device to form a target image, the method further includes:
and correcting lens distortion of the target image.
The embodiment of the invention also correspondingly provides a map construction method, which comprises the following steps:
constructing a coordinate system by taking any position or a specific position in the area to be positioned as a coordinate origin, and calculating the displacement and the direction of the movable electronic equipment relative to the coordinate origin in real time in the process that the movable electronic equipment traverses the area to be positioned, so as to obtain the coordinate value of the movable electronic equipment in the coordinate system in real time;
constructing a virtual wall by adopting the machine-vision-based virtual wall construction method described in any one of the above;
and carrying out real-time map construction on the area to be positioned according to the coordinate value of the movable electronic equipment in the coordinate system and the coordinate plane of the virtual wall.
As an improvement of the above embodiment, the constructing of a real-time map of the area to be located according to the coordinate value of the movable electronic device in the coordinate system and the position of the virtual wall includes:
calculating and recording coordinate values of the position of the obstacle when the obstacle is detected by the movable electronic equipment each time based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point;
and constructing a real-time map of the area to be positioned based on the coordinate plane of the virtual wall and the coordinate value of each obstacle position.
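The obstacle-recording steps above can be sketched as a small map-building structure (the data layout and names are illustrative, not from the patent):

```python
import math

def position_from_odometry(start, heading_deg, distance):
    # Convert the device's moving direction and moving distance
    # relative to the starting point into a coordinate value.
    x0, y0 = start
    rad = math.radians(heading_deg)
    return (x0 + distance * math.cos(rad), y0 + distance * math.sin(rad))

class RealTimeMap:
    def __init__(self, wall_plane):
        self.wall_plane = wall_plane  # coordinate plane of the virtual wall
        self.obstacles = []           # coordinate values of obstacle positions

    def record_obstacle(self, start, heading_deg, distance):
        # Record the obstacle coordinate computed from odometry each
        # time the device detects an obstacle.
        self.obstacles.append(position_from_odometry(start, heading_deg, distance))
```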
As an improvement of the above embodiment, at least two positioning tags are arranged in the area to be located, each positioning tag being placed at a specific position of that area, and each positioning tag carries unique coded information identifying its absolute position; the constructing of the real-time map of the area to be located according to the coordinate value of the movable electronic device in the coordinate system and the position of the virtual wall then further includes:
in the traversing process, based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point, calculating the coordinate value of the position of the positioning tag when the movable electronic equipment acquires the positioning tag information each time, and recording the positioning tag information and the corresponding coordinate value;
and constructing a real-time map of the area to be located based on the coordinate plane of the virtual wall, the coordinate value of each obstacle position, and the information and coordinate value of each positioning tag.
As a refinement of the above embodiment, the method further comprises the steps of:
calculating the distance of the movable electronic equipment deviating from the center line of the virtual wall according to the perspective deformation of the target image;
and returning the movable electronic equipment to the midline of the virtual wall along a track parallel to the virtual wall according to the distance of the movable electronic equipment from the midline of the virtual wall.
As a refinement of the above embodiment, the movable electronic device comprises a driving wheel and a driven wheel, the method further comprising the steps of:
in the process that the movable electronic equipment travels along any straight line, when the speed of a driving wheel of the movable electronic equipment is detected to be inconsistent with the speed of a driven wheel at any moment, the smaller value of the speed of the driving wheel and the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be lower than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be higher than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of a driving wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed; and the theoretical speed is obtained by calculation according to the speed of the driving wheel.
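The three wheel-speed rules above reduce to a small selection function (a sketch; the names and signature are illustrative):

```python
def reference_speed(driving, driven, turning, theoretical=None):
    """Pick the reference speed for dead-reckoning.

    Straight travel: if the driving-wheel and driven-wheel speeds
    disagree, use the smaller of the two.
    Turning: use the driven-wheel speed when it falls below the
    theoretical speed (derived from the driving-wheel speed), and the
    driving-wheel speed when it exceeds the theoretical speed.
    """
    if not turning:
        return min(driving, driven)
    if driven < theoretical:
        return driven
    return driving
```

The displacement and direction relative to the coordinate origin are then integrated from this reference speed rather than from either wheel alone, which limits slip-induced odometry error.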
The embodiment of the invention also correspondingly provides a virtual wall construction device based on machine vision, which is arranged on the movable electronic equipment and comprises:
the camera is used for acquiring images of the surrounding environment in real time at a preset frequency in the process that the movable electronic equipment traverses the area to be positioned;
the image sensor is used for receiving the image acquired at each moment and projecting the image on a photosensitive surface of the image sensor to form a target image;
a storage device for pre-storing a plurality of marker patterns;
the controller is used for acquiring x key points of the target image on the photosensitive surface when the target image acquired at any moment is matched with any marker pattern in the marker pattern library according to a preset image matching algorithm; calculating the distance between a virtual wall and the movable electronic equipment based on key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which forms a preset included angle with the characteristic straight line and is perpendicular to the photosensitive surface according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0 degree or 180 degrees, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and is perpendicular to the photosensitive surface; key points of the target image on the photosensitive surface are all located on a characteristic straight line, and x ≥ 2.
As an improvement of the above embodiment, the controller is further configured to, in response to a calibration instruction, acquire an image directly above the mobile electronic device at the current time through the camera, and store the image into the marker pattern library as a new marker pattern; and acquiring a plurality of characteristic points of the marking pattern and a characteristic descriptor corresponding to each characteristic point through the image matching algorithm.
As an improvement of the above embodiment, according to a preset image matching algorithm, when a target image acquired at any time is matched with any marker pattern in the marker pattern library, the acquiring, by the controller, x key points of the target image on the photosensitive surface specifically includes:
acquiring a plurality of feature points of the target image and a feature descriptor corresponding to each feature point through the image matching algorithm;
acquiring feature points of the target image and the mark pattern in a matching relationship by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the mark pattern, and judging that the target image is matched with any mark pattern in the mark pattern library when the number of the acquired feature points in the matching relationship is greater than a preset threshold value;
and according to the positional relationship on the target image of the feature points having the matching relationship, taking x feature points located on the same straight line as key points.
As an improvement of the above embodiment, the image matching algorithm is a scale-invariant feature transformation algorithm or an accelerated robust feature algorithm, and each feature descriptor of the marker pattern/target image is obtained by the controller by adopting the following steps:
establishing a scale space of a marked image/target image through Gaussian blur, identifying an extreme point in the scale space of the marked image/target image through a Gaussian differential function, checking the extreme point in the scale space of the marked image/target image, and removing an unstable extreme point in the scale space of the marked image/target image, so as to obtain a feature point of the marked image/target image and the scale and position of the feature point;
according to the gradient direction distribution characteristics of the neighborhood pixels of the feature points, giving a direction to each feature point;
and according to the scale, the position and the direction of the feature point, carrying out regional blocking on the surrounding image of the feature point, and calculating an intra-block gradient histogram so as to generate a feature descriptor of the feature point.
As an improvement of the above embodiment, the camera further includes an imaging lens, and an image collected by the camera is projected to the photosensitive surface of the image sensor through the imaging lens; and the distance between the virtual wall and the movable electronic equipment is calculated by a triangular distance measurement method.
As an improvement of the above embodiment, the calculating, by the controller, the distance between the virtual wall and the movable electronic device based on the key point of the target image on the photosensitive surface specifically includes:
calculating a distance of the virtual wall from the movable electronic device by the following formula:
D=a/b*S*|cosθ|
wherein D is the distance between the virtual wall and the movable electronic device, a is the distance from the upper frame of the virtual wall to the photosensitive surface, b is the distance from the imaging lens to the photosensitive surface, S is the distance from the characteristic straight line to the central point of the photosensitive surface, and θ is the preset included angle.
As a modification of the above embodiment, the controller calculates the width of the virtual wall based on the following formula:
w = 2a·tan(λ/2)
wherein w is the width of the virtual wall, a is the distance from the upper frame of the virtual wall to the photosensitive surface, and λ is the wide angle of the camera.
As a modification of the above embodiment, the image sensor includes a PSD sensor, a CCD sensor, or a CMOS sensor.
As an improvement of the above embodiment, when the distance between the virtual wall and the movable electronic device is less than a preset distance, the controller is further configured to move the movable electronic device through a preset avoidance strategy so that the distance between the movable electronic device and the virtual wall is increased.
As a modification of the above embodiment, the controller is further configured to control the movable electronic device to pass through the virtual wall in a preset path after the virtual wall is constructed.
As an improvement of the above embodiment, after the captured image is projected onto a photosensitive surface of an image sensor provided in the mobile electronic device to form a target image, the controller is further configured to correct perspective distortion of the target image.
The embodiment of the invention also correspondingly provides the movable electronic equipment, which comprises:
the virtual wall construction device based on the machine vision is used for constructing a virtual wall;
the controller is also used for constructing a coordinate system by taking any position or a specific position in the area to be positioned as a coordinate origin;
the encoder is used for calculating the displacement and the direction of the movable electronic equipment relative to the coordinate origin in real time in the process that the movable electronic equipment traverses the area to be positioned;
the controller is further configured to receive the displacement and the direction of the movable electronic device relative to the origin of coordinates, which are sent by the encoder, and acquire a coordinate value of the movable electronic device in the coordinate system at any time;
the controller is further used for carrying out real-time map construction on the area to be positioned according to the coordinate value of the movable electronic equipment in the coordinate system and the coordinate plane of the virtual wall.
As an improvement of the above embodiment, the mobile electronic device further includes a collision sensor, a laser sensor or an infrared sensor, and when an obstacle is sensed by the collision sensor, the controller takes a coordinate value of a current position of the mobile electronic device as a coordinate value of a position of the obstacle based on a moving direction and a moving distance of the mobile electronic device with respect to the starting point;
when the laser sensor/infrared sensor is used for detecting an obstacle, the controller calculates the position of the obstacle relative to the current movable electronic equipment according to a laser/infrared distance calculation principle, so that the coordinate value of the obstacle at the current moment is calculated according to the moving direction and the moving distance of the movable electronic equipment relative to the starting point at the current moment;
the controller is used for carrying out real-time map construction on the area to be positioned based on the coordinate plane of the virtual wall and the coordinate value of each obstacle position.
As an improvement of the above embodiment, at least two positioning tags are arranged in the area to be located, each positioning tag being placed at a specific position of that area, and each positioning tag carries unique coded information identifying its absolute position; the controller constructs the real-time map of the area to be located according to the coordinate value of the movable electronic device in the coordinate system and the position of the virtual wall through the following steps:
in the traversing process, based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point, calculating the coordinate value of the position of the positioning tag when the movable electronic equipment acquires the positioning tag information each time, and recording the positioning tag information and the corresponding coordinate value;
and constructing a real-time map of the area to be located based on the coordinate plane of the virtual wall, the coordinate value of each obstacle position, and the information and coordinate value of each positioning tag.
As a refinement of the above embodiment, the controller is further configured to calculate a distance of the movable electronic device from a centerline of the virtual wall according to perspective deformation of the target image; and returning the movable electronic equipment to the midline of the virtual wall along a track parallel to the virtual wall according to the distance of the movable electronic equipment from the midline of the virtual wall.
As an improvement of the above embodiment, the movable electronic device includes a driving wheel and a driven wheel, and the controller is further configured to, when it is detected that the speed of the driving wheel of the movable electronic device is inconsistent with the speed of the driven wheel at any time during the travel of the movable electronic device along any straight line, calculate the displacement and direction of the movable electronic device with respect to the origin of coordinates according to a reference speed which is a smaller value of the speed of the driving wheel and the speed of the driven wheel;
when the speed of a driven wheel of the movable electronic equipment is detected to be lower than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be higher than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of a driving wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed; and the theoretical speed is obtained by calculation according to the speed of the driving wheel.
Compared with the prior art, the embodiments of the invention provide a machine-vision-based virtual wall construction method and device, a map construction method, and a movable electronic device. A matching operation is performed on images obtained in real time; when matching succeeds, specific feature points on the image are selected as key points, the position of the virtual wall relative to the movable electronic device is calculated from those key points, and the virtual wall is constructed automatically, so that a boundary dividing the accessible area from the prohibited area can be constructed accurately. In addition, the scheme requires no additional interactive equipment to set up the virtual wall and no positioning tags at specific locations, giving it a higher degree of intelligence.
Drawings
Fig. 1 is a schematic flowchart of a method for constructing a virtual wall based on machine vision according to embodiment 1 of the present invention;
FIG. 2 is a schematic diagram illustrating the calculation of the distance between the virtual wall and the center of the photosurface in embodiment 1 of the invention;
fig. 3 is a schematic diagram of a positional relationship between another virtual wall and a movable electronic device in embodiment 1 of the present invention, as compared with fig. 2;
FIG. 4 is a schematic diagram of another example of calculating the distance between the virtual wall and the center of the photosurface in embodiment 1 of the invention;
FIG. 5 is a top view of the center of the photosurface and a virtual wall corresponding to FIG. 4;
fig. 6 is a schematic diagram of calculating a distance between a virtual wall and a movable electronic device in embodiment 1 of the present invention;
fig. 7 is a schematic diagram of calculating the width of the virtual wall according to embodiment 1 of the present invention;
fig. 8 is a schematic flowchart of a method for constructing a virtual wall based on machine vision according to embodiment 2 of the present invention;
FIG. 9 is a schematic diagram showing the alignment of a marker pattern in example 2 of the present invention;
FIG. 10 is a schematic flow chart of each feature descriptor for calculating a mark pattern/target image in embodiment 2 of the present invention;
fig. 11 is a schematic flowchart of a method for constructing a virtual wall based on machine vision according to embodiment 3 of the present invention;
fig. 12 is a flowchart of a virtual wall construction method based on machine vision according to embodiment 4 of the present invention;
FIG. 13 is a schematic flow chart diagram of a map building method provided in embodiment 5 of the present invention;
FIG. 14 is a flowchart illustrating a map building method according to embodiment 6 of the present invention;
FIG. 15 is a flowchart illustrating a map building method according to embodiment 7 of the present invention;
fig. 16 is a schematic view of turning a movable electronic device in embodiment 7 of the present invention;
fig. 17 is a schematic structural diagram of a virtual wall building apparatus based on machine vision according to embodiment 8 of the present invention;
fig. 18 is a schematic structural diagram of a mobile electronic device according to embodiment 9 of the present invention;
fig. 19 is a schematic structural diagram of a mobile electronic device according to embodiment 10 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic flow chart of a virtual wall construction method based on machine vision provided in embodiment 1 of the present invention includes the steps of:
S11, in the process that the movable electronic equipment traverses the area to be positioned, acquiring images of the surrounding environment in real time at a preset frequency through a camera arranged on the movable electronic equipment, and projecting the image acquired at each moment onto a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
In this step, under natural illumination the surface of an object produces diffuse reflection, and the reflected light is focused by an imaging lens, forming a projected target image on the photosensitive surface of the image sensor. Compared with traditional laser triangulation ranging, no laser needs to be emitted onto the target object first and no reference surface needs to be established; ranging can be performed using only the diffuse reflection of an object (such as a door frame) under natural light, which improves ranging accuracy while reducing cost.
Wherein the image sensor comprises a PSD sensor, a CCD sensor or a CMOS sensor.
S12, according to a preset image matching algorithm, when the target image acquired at any moment matches any marking pattern in the mark pattern library, acquiring x key points of the target image on the photosensitive surface, wherein the key points of the target image on the photosensitive surface are all located on one characteristic straight line, and x ≥ 2;
The marking patterns in the mark pattern library must be stored in the memory of the movable electronic equipment in advance. One way is to read an input picture (such as a vector image) when a pre-storing instruction is received and store it in the mark pattern library for subsequent virtual wall recognition. In addition, the characteristic straight line or key points of the virtual wall must be defined on each marking pattern, which can be done in two ways: one is to add the characteristic straight line information or key point information to the pre-stored picture and then import it into the movable electronic equipment; the other is to perform the defining operation on the imported picture, in response to an instruction for defining the characteristic straight line or key points, according to the definition operation input by the user.
Preferably, the embodiment of the present invention may introduce the mark pattern by the following method: and responding to a calibration instruction, acquiring an image right above the movable electronic equipment at the current moment, and storing the image into the marking pattern library as a new marking pattern.
The matching of images is performed by matching feature vectors of feature points. When a new marking pattern is obtained, the marking pattern is preprocessed and feature points are extracted, and then a plurality of feature points of the marking pattern and a feature descriptor corresponding to each feature point are obtained. Image feature extraction is a prerequisite for image analysis and image recognition, and is the most effective way to simplify and express high-dimensional image data.
Based on the above calibration process, the corresponding key points can preferably be determined in the following two ways: in one specific example, the generated feature points are displayed on the marking pattern, and the definition of the key points of the marking pattern is completed according to a selection instruction of the user for those feature points; in another specific example, after the feature points of the marking pattern are generated, the system selects among them through a preset algorithm to complete the definition of the key points of the marking pattern.
S13, calculating the distance between a virtual wall and the movable electronic equipment based on the key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which is perpendicular to the photosensitive surface and forms a preset included angle with the characteristic straight line according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0° or 180°, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and perpendicular to the photosensitive surface.
In step S13, feature points are obtained by performing feature extraction on the target image using the image matching algorithm, and specific feature points are then selected as key points for constructing the virtual wall. It should be noted that the preset included angle is the angle between the virtual wall to be constructed and the characteristic straight line, and is closely related to the selection of the key points.
It should be noted that the distance between the virtual wall to be constructed and the movable electronic equipment may be calculated by laser focusing or phase focusing, or by phase laser ranging, pulse laser ranging, or triangulation laser ranging. Preferably, this scheme adopts the triangulation ranging method. Compared with traditional triangulation laser ranging, there is no need to first emit laser light onto the target object and a reference surface through a laser and then calculate their distance from the reflected images; hence no reference surface needs to be established, and ranging can be performed using only the diffuse reflection of the object under natural light. This has the advantages of a simple structure, high accuracy, high speed, and flexible use, and further reduces production cost.
In step S13, if the preset included angle is equal to 0° or 180°, the virtual wall to be constructed is parallel to the characteristic straight line.
According to the principle of triangulation, the distance between the virtual wall and the movable electronic equipment is calculated by the following formula:
D=a/b*S*|cosθ|
wherein a is the distance from the upper frame of the virtual wall to the photosensitive surface, b is the distance from the imaging lens to the photosensitive surface, S is the distance from the characteristic straight line to the central point of the photosensitive surface, D is the distance from the virtual wall to the central point of the photosensitive surface, and θ is the preset included angle.
As shown in fig. 2, when θ is 0° or 180°, i.e., when the virtual wall to be constructed is parallel to the characteristic straight line, |cos 0°| = |cos 180°| = 1, and the distance of the virtual wall from the movable electronic equipment can be calculated by the following formula:
D=a/b*S
wherein a is the distance from the upper frame 400 of the virtual wall 100 to the photosensitive surface 301, b is the distance from the imaging lens 303 to the photosensitive surface 301, S is the distance from the characteristic straight line 304 to the central point 302 of the photosensitive surface 301, and D is the distance from the virtual wall 100 to the central point 302 of the photosensitive surface 301. Fig. 2 shows the situation in which the movable electronic equipment directly faces the virtual wall, and fig. 3 shows the situation in which the movable electronic equipment is shifted to the right relative to the virtual wall. It can be understood that, in any orientation of the movable electronic equipment relative to the virtual wall (facing it, facing away from it, shifted to the left, or shifted to the right), as long as the camera detects a sufficient number of matching feature points, reasonable key points can be selected based on the positions of those feature points to construct a virtual wall at an accurate position.
As shown in fig. 4, when the preset included angle is not equal to 0 ° or 180 °, the distance between the virtual wall and the movable electronic device is calculated by the following formula:
D=a/b*S*|cosθ|
wherein a is the distance from the upper frame 400 of the virtual wall 100 to the photosensitive surface 301, b is the distance from the imaging lens 303 to the photosensitive surface 301, S is the distance from the characteristic straight line 304 to the central point 302 of the photosensitive surface 301, D is the distance from the virtual wall 100 to the central point 302 of the photosensitive surface 301, and θ is the preset included angle. As shown in fig. 4, a mapping straight line 402 exists on the ceiling 500 of the room, and the mapping straight line 402 is projected onto the photosensitive surface 301 through the imaging lens 303 to generate the characteristic straight line 304. It can be understood that h = a/b*S is the distance between the central point 302 of the photosensitive surface 301 and the plane 401 in which the mapping straight line 402 lies, and, as can be seen from fig. 5, D = h*|cosθ|.
It should be noted that, in this embodiment of the present invention, the distance D between the virtual wall 100 and the central point 302 of the photosensitive surface 301 is taken as the distance between the virtual wall 100 and the movable electronic equipment. In another preferred embodiment, as shown in fig. 6, assuming that the movable electronic equipment 300 is a regular circle, the distance between the virtual wall 100 and the movable electronic equipment 300 can be defined as the distance from the central point 305 of the movable electronic equipment 300 to the virtual wall 100. Specifically, when the movable electronic equipment 300 directly faces the virtual wall 100, this distance is the sum of the distance between the virtual wall 100 and the central point 302 of the photosensitive surface 301 and the distance between the central point 305 of the movable electronic equipment 300 and the central point 302 of the photosensitive surface 301, i.e., L = D + D', where L is the distance between the virtual wall 100 and the movable electronic equipment 300, D is the distance between the virtual wall 100 and the central point 302 of the photosensitive surface 301, and D' is the distance between the central point 305 of the movable electronic equipment 300 and the central point 302 of the photosensitive surface 301. It can be understood that the above formula is applicable not only when the movable electronic equipment is circular but also when it has other regular shapes, and details are not repeated here.
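The two formulas above can be combined into a short numeric sketch (function names, units, and the sample values in the comment are illustrative only):

```python
import math

def wall_to_photosurface_center(a, b, s, theta_deg):
    """D = a/b * S * |cos(theta)|: distance from the virtual wall to the
    central point of the photosensitive surface, where theta is the preset
    included angle between the virtual wall and the characteristic line."""
    return a / b * s * abs(math.cos(math.radians(theta_deg)))

def wall_to_device(a, b, s, theta_deg, d_prime):
    """L = D + D': adds the distance D' from the device's central point to
    the central point of the photosensitive surface (device facing wall)."""
    return wall_to_photosurface_center(a, b, s, theta_deg) + d_prime

# Example: with a = 2000 mm, b = 4 mm, S = 2 mm and theta = 0 degrees,
# D = 2000/4 * 2 * 1 = 1000 mm.
```

The |cos θ| factor makes the parallel case (θ = 0° or 180°) reduce to D = a/b·S, matching the earlier formula.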
In this embodiment of the present invention, the distance from the upper frame of the virtual wall to the photosensitive surface refers to the distance from the ceiling of the room to the photosensitive surface, or to the distance from the upper frame of a door to the photosensitive surface; the specific value can be preset in the system.
In addition to the position information of the virtual wall, the width information of the virtual wall may also be calculated. Specifically, with reference to fig. 7, the width of the virtual wall is calculated by the following formula:
W=2*a*tan(λ/2)
wherein W is the width of the virtual wall 100, a is the distance from the upper frame of the virtual wall 100 to the photosensitive surface 301, and λ is the wide angle of the camera.
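Under the definitions just given, and assuming the width formula is W = 2·a·tan(λ/2), i.e. the span of the camera's field of view on the plane at distance a (the typeset formula is not reproduced in this text, so this reconstruction is an assumption), the width could be computed as:

```python
import math

def virtual_wall_width(a, wide_angle_deg):
    """W = 2 * a * tan(lambda / 2), with a the distance from the wall's upper
    frame to the photosensitive surface and lambda the camera's wide angle.
    NOTE: the formula is reconstructed from the variable definitions."""
    return 2 * a * math.tan(math.radians(wide_angle_deg) / 2)
```

For example, with a 90° wide-angle camera and a = 1000 mm, the covered width is about 2000 mm.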
In this embodiment, an image of the position of the virtual wall to be constructed is acquired directly, and the picture is preprocessed and its features extracted using a preset image matching algorithm. A matching operation is then performed on pictures acquired in real time; when matching succeeds, specific feature points on the picture are selected as key points, the position of the virtual wall relative to the movable electronic equipment is calculated from those key points, and the virtual wall is constructed automatically. In other words, a boundary dividing the accessible area from the prohibited-access area can be constructed accurately, and this clear boundary can completely prevent the movable electronic equipment from entering the prohibited-access area, so the method has the advantages of simplicity, practicality, and strong reliability. In addition, the scheme in this embodiment requires no additional interactive device to set the virtual wall and no positioning tag at a specific location, so its degree of intelligence is higher.
Referring to fig. 8, a flowchart of a virtual wall building method based on machine vision according to embodiment 2 of the present invention includes the steps of:
S21, in the process that the movable electronic equipment traverses the area to be positioned, acquiring images of the surrounding environment in real time at a preset frequency through a camera arranged on the movable electronic equipment, and projecting the image acquired at each moment onto a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
S22, obtaining a plurality of feature points of the target image at the current moment and a feature descriptor corresponding to each feature point through the image matching algorithm;
S23, acquiring the feature points of the target image and the marking pattern that are in a matching relationship by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the marking pattern, and judging that the target image matches any marking pattern in the mark pattern library when the number of the acquired feature points in a matching relationship is greater than a preset threshold value;
preferably, the embodiment of the present invention may introduce the mark pattern by the following method: and responding to a calibration instruction, acquiring an image right above the movable electronic equipment at the current moment, and storing the image into the marking pattern library as a new marking pattern.
The matching of images is performed by matching feature vectors of feature points. When a new marking pattern is obtained, the marking pattern is preprocessed and feature points are extracted, and then a plurality of feature points of the marking pattern and a feature descriptor corresponding to each feature point are obtained.
S24, according to the positional relationship, on the target image, of the feature points in a matching relationship, taking x feature points located on the same straight line as key points, where x ≥ 2;
According to the biological characteristics of image recognition, the human gaze always focuses on the main features of an image, namely the places where the curvature of the image contour is greatest or where the contour direction changes abruptly, since these places carry the most information. Therefore, in the image recognition process, redundant input information must be eliminated and the key information extracted, which involves the problem of feature extraction. For example, when an image of the door frame directly above the movable electronic equipment is captured, the extracted feature points may be located at the edges of the door frame.
For example, in the calibration process of the marking pattern, the movable electronic equipment is first placed under the upper frame of a door; the calibration button (which may be disposed on the movable electronic equipment or on a remote controller) is pressed, or the calibration option on a third-party interactive terminal (e.g., a mobile phone, tablet computer, PC, etc.) is clicked, and a picture of the current moment is then taken through the camera. As shown in fig. 9, the picture 200 includes the left frame 201 of the door, the right frame 202 of the door, the upper frame 203 of the door, and the top 204 of the room. Feature extraction is performed on the picture through a preset image matching algorithm, yielding a plurality of feature points (x1 to x6 and y1 to y6). During the traversal of the movable electronic equipment, as long as a picture including the door frame is obtained and the number of matched feature points calculated by the preset image matching algorithm is greater than a preset threshold value, the condition for constructing the virtual wall can be considered satisfied. It can be understood that when x2 and x3 are used as key points, the constructed virtual wall is parallel to the characteristic straight line formed by x2 and x3; when y1 and y2 are used as key points, the constructed virtual wall is perpendicular to the characteristic straight line formed by y1 and y2.
It should be noted that when the number of feature points extracted from the marking pattern is smaller than a preset number threshold, subsequent recognition will fail and a virtual wall cannot be constructed at the specific position. Therefore, when the number of feature points is too small, the calibration is deemed unsuccessful, and the user is reminded to recalibrate or to construct the virtual wall in another way.
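The matching criterion of step S23 can be sketched as follows (a NumPy-based illustration; the distance and count thresholds are placeholder values, not taken from the text):

```python
import numpy as np

def matched_feature_count(desc_target, desc_marker, max_dist=0.7):
    """Count target feature points whose nearest marker descriptor lies
    within max_dist in Euclidean distance, i.e. the feature points that
    are 'in a matching relationship'."""
    marker = np.asarray(desc_marker, dtype=float)
    count = 0
    for d in np.asarray(desc_target, dtype=float):
        dists = np.linalg.norm(marker - d, axis=1)  # Euclidean distances
        if dists.min() <= max_dist:
            count += 1
    return count

def target_matches_marker(desc_target, desc_marker, count_threshold=10):
    """Judge a match when the number of matched feature points exceeds
    the preset threshold."""
    return matched_feature_count(desc_target, desc_marker) > count_threshold
```

In practice a ratio test or cross-check is usually added on top of the raw nearest-neighbor distance to suppress ambiguous matches.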
S25, calculating the distance between a virtual wall and the movable electronic equipment based on the key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which is perpendicular to the photosensitive surface and forms a preset included angle with the characteristic straight line according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0° or 180°, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and perpendicular to the photosensitive surface.
As shown in fig. 10, the image matching algorithm is a scale-invariant feature transform algorithm or a speeded-up robust features algorithm, and each feature descriptor of the marking pattern/target image is obtained through the following steps:
S26, establishing a scale space of the marking pattern/target image through Gaussian blur, identifying extreme points in the scale space through a difference-of-Gaussian function, checking the extreme points in the scale space, and removing unstable extreme points, thereby obtaining the feature points of the marking pattern/target image together with their scales and positions;
In step S26, since a local extreme point in the discrete space may not be the true extreme point (the true extreme point may fall in a gap between discrete points), the coordinate position of the feature point can be obtained by interpolating at these gap positions and then solving for the coordinate position of the extreme point.
S27, endowing a direction for each feature point according to the gradient direction distribution characteristics of the neighborhood pixels of the feature points;
In step S27, histogram statistics are performed on the gradient directions of the points in the neighborhood of the feature point, and the direction with the highest weight in the histogram is selected as the principal direction of the feature point.
S28, according to the scale, position, and direction of each feature point, partitioning the image region around the feature point into blocks and calculating the gradient histogram within each block, thereby generating the feature descriptor of the feature point.
In this embodiment, the scale-invariant feature transform (SIFT) algorithm establishes a scale space by convolving the original image with Gaussian kernels and extracts scale-invariant feature points on a difference-of-Gaussian pyramid. The algorithm has a degree of affine invariance, viewpoint invariance, rotation invariance, and illumination invariance, and is therefore widely used for image feature extraction. In addition, the scale-invariant feature transform algorithm uses the difference of Gaussians to approximate the Laplacian of Gaussian, which greatly reduces the amount of computation.
The speeded-up robust features (SURF) algorithm follows almost the same matching process as the scale-invariant feature transform algorithm; the difference is that SURF extracts feature points using approximate Haar wavelets, a blob detection method based on the determinant of the Hessian matrix. Approximate Haar wavelet responses at different scales can be computed efficiently using integral images, which simplifies the construction of the second-order differential templates and improves the efficiency of feature detection in the scale space.
Referring to fig. 11, a flowchart of a virtual wall building method based on machine vision provided in embodiment 3 of the present invention includes the steps of:
S31, in the process that the movable electronic equipment traverses the area to be positioned, acquiring images of the surrounding environment in real time at a preset frequency through a camera arranged on the movable electronic equipment, and projecting the image acquired at each moment onto a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
S32, obtaining a plurality of feature points of the target image at the current moment and a feature descriptor corresponding to each feature point through the image matching algorithm;
S33, acquiring the feature points of the target image and the marking pattern that are in a matching relationship by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the marking pattern, and judging that the target image matches any marking pattern in the mark pattern library when the number of the acquired feature points in a matching relationship is greater than a preset threshold value;
S34, according to the positional relationship, on the target image, of the feature points in a matching relationship, taking x feature points located on the same straight line as key points, where x ≥ 2;
S35, calculating the distance between a virtual wall and the movable electronic equipment based on the key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which is perpendicular to the photosensitive surface and forms a preset included angle with the characteristic straight line according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0° or 180°, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and perpendicular to the photosensitive surface;
S36, when the distance between the virtual wall and the movable electronic equipment is smaller than a preset distance, moving the movable electronic equipment through a preset avoidance strategy to increase the distance between the movable electronic equipment and the virtual wall.
Steps S31 to S35 of this embodiment are substantially the same as steps S21 to S25 shown in fig. 8, and the working process may refer to the detailed description of steps S21 to S25, which is not described herein again.
On the basis of embodiment 2, this embodiment of the invention adds a step that makes the movable electronic equipment travel away from the virtual wall. In a real application scenario, a prohibited-access area needs to be set for the protection of the machine or for other reasons; for example, a sweeping robot may be prohibited from entering a toilet to prevent standing water or excessive water vapor from entering the machine and causing a short circuit. A virtual wall therefore needs to be constructed, and a corresponding avoidance strategy set, to prevent the robot from entering the prohibited-access area by mistake. Preferably, the avoidance strategy is specifically:
adjusting a direction of travel of the movable electronic device to move the movable electronic device in a direction away from the virtual wall.
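A minimal sketch of such an avoidance strategy (the turn-away behaviour, function name, and threshold are illustrative assumptions, not the patent's definitive implementation):

```python
def avoidance_heading(current_heading_deg, wall_distance, min_distance):
    """If the device is closer to the virtual wall than min_distance, reverse
    the direction of travel so the device moves away from the wall; otherwise
    keep the current heading (all headings in degrees)."""
    if wall_distance < min_distance:
        return (current_heading_deg + 180.0) % 360.0
    return current_heading_deg
```

A real controller would likely turn gradually or follow the wall rather than reverse outright, but the principle of steering away once the distance drops below a preset value is the same.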
It is understood that, besides the above-disclosed avoidance strategy, the avoidance strategy of the present embodiment may also adopt other manners, which are not described herein again.
Preferably, since the relative proportions of near and far features change during imaging on the photosensitive surface and the image may be bent or deformed, after the captured image is projected onto the photosensitive surface of the image sensor arranged in the movable electronic equipment to form a target image, the target image needs to be corrected for perspective distortion before being compared with the marking patterns pre-stored in the mark pattern library, so that a more accurate virtual wall is constructed.
For the problem of a movable electronic device identifying a virtual wall, the conventional approach mainly works as follows:
A map is constructed while the movable electronic equipment traverses the entire room area; the constructed map is rasterized and the rasterized map file is uploaded to a computer; a virtual wall is drawn on the map file on the computer; and the rasterized map file with the virtual wall drawn on it is uploaded back to the movable electronic equipment. One disadvantage of this method is that whenever a map is constructed in a new environment, the map file must again be exported, uploaded, annotated with a virtual wall, and imported, which is a cumbersome process; another is that the setting requires an additional interactive device, which is not conducive to the development of intelligence.
In this embodiment, the movable electronic equipment is placed directly under the pre-planned virtual wall (e.g., under a door frame), i.e., positioned so that it forms a 90° angle with the virtual wall, and the camera is used to acquire a picture of the scene directly above it as the marking pattern; the feature points of the marking pattern are extracted to obtain the feature descriptor of each feature point. When, while traversing the room, the movable electronic equipment acquires a picture within the camera's wide-angle range that matches the feature points of the marking pattern, and the number of matched feature points is greater than a preset threshold value, key points lying on the same straight line can be selected from among those feature points, the position information of the virtual wall calculated, and the virtual wall constructed automatically, to serve as the boundary between the subsequently permitted and prohibited areas, or to be used in constructing the map of the whole room. No cumbersome import and export process is needed, the construction process is more flexible, and the method has the advantages of simplicity and practicality. In addition, the scheme in this embodiment requires no additional interactive device to set the virtual wall and no positioning tag at a specific location, so its degree of intelligence is higher; when the virtual wall at a specific position needs to be cancelled, it is only necessary to delete the marking pattern pre-stored in the movable electronic equipment, which is convenient and fast.
Referring to fig. 12, a flow of a virtual wall building method based on machine vision provided in embodiment 4 of the present invention includes the steps of:
S41, in the process that the movable electronic equipment traverses the area to be positioned, acquiring images of the surrounding environment in real time at a preset frequency through a camera arranged on the movable electronic equipment, and projecting the image acquired at each moment onto a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
S42, obtaining a plurality of feature points of the target image at the current moment and a feature descriptor corresponding to each feature point through the image matching algorithm;
S43, acquiring the feature points of the target image and the marking pattern that are in a matching relationship by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the marking pattern, and judging that the target image matches any marking pattern in the mark pattern library when the number of the acquired feature points in a matching relationship is greater than a preset threshold value;
S44, according to the positional relationship, on the target image, of the feature points in a matching relationship, taking x feature points located on the same straight line as key points, where x ≥ 2;
S45, calculating the distance between a virtual wall and the movable electronic equipment based on the key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which is perpendicular to the photosensitive surface and forms a preset included angle with the characteristic straight line according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0° or 180°, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and perpendicular to the photosensitive surface;
S46, after the virtual wall is constructed, controlling the movable electronic equipment to pass through the virtual wall along a preset path.
Steps S41 to S45 of this embodiment are substantially the same as steps S21 to S25 shown in fig. 8, and the working process may refer to the detailed description of steps S21 to S25, which is not described herein again.
On the basis of embodiment 2, this embodiment adds the step of making the movable electronic equipment travel through the virtual wall. In an actual application scenario, after cleaning or other work in a specific area is completed, the movable electronic equipment needs to enter another area to continue working; once the virtual wall is constructed, the movable electronic equipment can be controlled to pass through the virtual wall along a preset path to enter the other area. Preferably, the path may be a straight path passing through the virtual wall and perpendicular to it, or any curved path passing through the virtual wall.
It is understood that the path of the present embodiment may be configured in other forms besides the above-disclosed path, and is not described herein again.
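As a concrete illustration of the matching decision in steps S42–S43, the sketch below compares feature descriptors by Euclidean distance and applies a count threshold. The descriptor dimension and both threshold values are hypothetical choices for illustration, not values given in the patent.

```python
import numpy as np

def match_keypoints(target_desc, marker_desc, dist_thresh=0.2, count_thresh=10):
    """Match descriptors by Euclidean distance (sketch of steps S42-S43).

    target_desc, marker_desc: (N, d) arrays of feature descriptors.
    Returns the list of (target_index, marker_index) pairs whose nearest
    marker descriptor lies within dist_thresh, and the S43 decision: whether
    the number of matched feature points exceeds count_thresh.
    """
    matches = []
    for i, d in enumerate(target_desc):
        dists = np.linalg.norm(marker_desc - d, axis=1)  # Euclidean distances
        if dists.min() < dist_thresh:
            matches.append((i, int(dists.argmin())))
    is_match = len(matches) > count_thresh  # step S43: compare against threshold
    return matches, is_match
```

Step S44 would then keep only matched feature points whose image positions are collinear, and use x ≥ 2 of them as key points.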
Referring to fig. 13, a schematic flow chart of a map building method provided in embodiment 5 of the present invention includes the steps of:
S51, constructing a coordinate system by taking any position or a specific position in the area to be positioned as the coordinate origin, and calculating the displacement and the direction of the movable electronic equipment relative to the coordinate origin in real time in the process that the movable electronic equipment traverses the area to be positioned, so as to obtain the coordinate value of the movable electronic equipment in the coordinate system in real time;
S52, in the process that the movable electronic equipment traverses the area to be positioned, acquiring images of the surrounding environment in real time at a preset frequency through a camera arranged on the movable electronic equipment, and projecting the image acquired at each moment onto a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
S53, according to a preset image matching algorithm, when a target image acquired at any moment matches any marker pattern in the marker pattern library, acquiring x key points of the target image on the photosensitive surface; wherein the key points of the target image on the photosensitive surface are all located on a characteristic straight line, and x ≥ 2;
S54, calculating the distance between a virtual wall and the movable electronic equipment based on the key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which is perpendicular to the photosensitive surface and forms a preset included angle with the characteristic straight line according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0 degrees or 180 degrees, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and perpendicular to the photosensitive surface;
S55, performing real-time map construction on the area to be positioned according to the coordinate value of the movable electronic equipment in the coordinate system and the coordinate plane of the virtual wall.
Steps S52 to S54 of this embodiment are substantially the same as steps S11 to S13 shown in fig. 1, and the working process may refer to the detailed description of steps S11 to S13, which is not described herein again.
In this embodiment, a coordinate system is constructed by using an arbitrary position or a specific position within the area to be positioned as a reference point (the origin of coordinates), and the displacement and direction of the movable electronic equipment relative to the origin of coordinates are then calculated by an encoder provided on the movable electronic equipment, so as to acquire the coordinate values of the movable electronic equipment in the coordinate system. With the virtual wall construction method based on machine vision, once the position information of the virtual wall relative to the movable electronic equipment has been calculated, the coordinate plane of the virtual wall in the coordinate system can be obtained. The movable electronic equipment can then establish a simple framework of the room from the coordinate plane of each virtual wall in the coordinate system, forming a 3D (three-dimensional) map for its own navigation, which has the advantage of being simple and practical.
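The step from the device's coordinate values to a virtual wall's coordinate plane can be sketched in 2D as follows. Representing the pose as (x, y, heading) and assuming the wall lies at distance D directly ahead of the device, perpendicular to its heading, are illustrative simplifications, not the patent's exact construction.

```python
import math

def wall_line_in_world(robot_x, robot_y, robot_heading, D):
    """Place a virtual wall in the map frame (sketch of step S55 geometry).

    Returns the foot of the perpendicular from the device to the wall and the
    wall's direction vector, assuming the wall is at distance D straight ahead.
    """
    # point on the wall closest to the device
    px = robot_x + D * math.cos(robot_heading)
    py = robot_y + D * math.sin(robot_heading)
    # the wall runs perpendicular to the device's heading
    dx, dy = -math.sin(robot_heading), math.cos(robot_heading)
    return (px, py), (dx, dy)
```

With the device at the origin facing along +x and a wall 2 m ahead, this yields the point (2, 0) and the direction (0, 1), i.e. the vertical line x = 2 in the map.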
Preferably, in addition to virtual walls, the map may also be constructed from obstacles and positioning tags encountered during the traversal. Referring to fig. 14, a schematic flow chart of a map building method provided in embodiment 6 of the present invention is shown. The method is applicable to the case where at least two positioning tags are set in the area to be positioned, each positioning tag being arranged at a specific position of the area to be positioned, and each piece of positioning tag information containing unique encoding information used for distinguishing its absolute position in the area to be positioned. The method includes the steps of:
S61, constructing a coordinate system by taking any position or a specific position in the region to be positioned as the coordinate origin, and calculating the displacement and the direction of the movable electronic equipment relative to the coordinate origin in real time in the process that the movable electronic equipment traverses the region to be positioned, so as to obtain the coordinate value of the movable electronic equipment in the coordinate system in real time;
S62, in the process that the movable electronic equipment traverses the area to be positioned, acquiring images of the surrounding environment in real time at a preset frequency through a camera arranged on the movable electronic equipment, and projecting the image acquired at each moment onto a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
S63, obtaining a plurality of feature points of the target image at the current moment and a feature descriptor corresponding to each feature point through the image matching algorithm;
S64, acquiring the feature points of the target image that have a matching relationship with the marker pattern by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the marker pattern, and judging that the target image matches any marker pattern in the marker pattern library when the number of the acquired feature points having a matching relationship is greater than a preset threshold value;
S65, according to the positions of the feature points having a matching relationship on the target image, taking x feature points located on the same straight line as key points, wherein x ≥ 2;
S66, calculating the distance between a virtual wall and the movable electronic equipment based on the key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which is perpendicular to the photosensitive surface and forms a preset included angle with the characteristic straight line according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0 degrees or 180 degrees, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and perpendicular to the photosensitive surface;
S67, calculating and recording the coordinate value of the position of the obstacle each time the movable electronic equipment detects an obstacle, based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point;
S68, in the traversing process, calculating the coordinate value of the position of the positioning tag each time the movable electronic equipment acquires positioning tag information, based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point, and recording the positioning tag information and the corresponding coordinate value;
S69, performing real-time map construction on the area to be positioned based on the coordinate plane of the virtual wall, the coordinate value of each obstacle position, and the information and coordinate value of each positioning tag.
Steps S62 to S66 of this embodiment are substantially the same as steps S21 to S25 shown in fig. 8, and the working process can refer to the detailed description of steps S21 to S25, which is not described herein again.
Compared with embodiment 5, this embodiment further constructs the map from the positions of obstacles and positioning tags. Specifically, obstacles are detected by a collision sensor, a laser sensor or an infrared sensor, and when an obstacle is detected, its coordinate value is obtained from the position of the movable electronic equipment at the current moment. In addition, different sensors are provided according to the type of the positioning tag to acquire its position information; for example, when the positioning tag is a color-block tag, a color sensor is provided for sensing, and when the positioning tag is an RFID tag, an RFID sensor is provided for sensing. In this embodiment, a complete and detailed map can be constructed from the coordinate values of the positioning tags and obstacles together with the coordinate planes of the virtual walls, which facilitates accurate navigation of the movable electronic equipment and the execution of subsequent work.
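A minimal sketch of accumulating the three kinds of map entries described in steps S67–S69 (virtual walls, obstacles, positioning tags) into one structure; the event encoding used here is an assumption made for illustration.

```python
def record_landmarks(events):
    """Accumulate a simple landmark map (sketch of steps S67-S69).

    events: iterable of (kind, payload, xy) tuples produced during the
    traversal, where kind is 'obstacle', 'tag' or 'wall' (assumed encoding):
      - 'obstacle': xy is the obstacle's coordinate value
      - 'tag':      payload is the tag's unique code, xy its coordinate value
      - 'wall':     payload is the wall's coordinate plane
    """
    world_map = {'obstacles': [], 'tags': {}, 'walls': []}
    for kind, payload, xy in events:
        if kind == 'obstacle':
            world_map['obstacles'].append(xy)
        elif kind == 'tag':
            world_map['tags'][payload] = xy
        elif kind == 'wall':
            world_map['walls'].append(payload)
    return world_map
```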
One common application requirement for a robot is that when the robot is off the centerline of the door, the robot needs to be guided back to the centerline of the door to continue traveling. In another preferred embodiment, on the basis of embodiment 6, as shown in fig. 15, the map construction method further includes the steps of:
S71, calculating the distance by which the movable electronic equipment deviates from the center line of the virtual wall according to the perspective deformation of the target image;
Perspective deformation is the apparent bending or distortion produced by the relative scale change between near and far features, so the distance of the movable electronic equipment from the virtual wall center line can be calculated from the ratio of the numbers of pixels that the near and far sides project onto the photosensitive surface.
And S72, returning the movable electronic equipment to the midline of the virtual wall along a track parallel to the virtual wall according to the distance of the movable electronic equipment from the midline of the virtual wall.
Through the step, the movable electronic equipment can be guided to return to the position of the center line of the virtual wall, so that the position of the movable electronic equipment is calibrated, and subsequent quick positioning and re-establishment of a forward path are facilitated.
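One way the pixel-ratio idea of step S71 could be realized is sketched below. The linear mapping from the two pixel spans to a lateral offset is an illustrative assumption, not the patent's exact formula.

```python
def midline_offset(pixels_left, pixels_right, doorway_width):
    """Estimate the lateral offset from the doorway center line (S71 sketch).

    Assumes the door frame projects pixels_left / pixels_right pixel spans on
    either side of the image centre; when the device is on the center line the
    two spans are equal, so the re-centred span fraction scales the doorway
    width into a signed offset (positive = shifted toward the left span).
    """
    total = pixels_left + pixels_right
    # fraction of the doorway seen left of the optical axis, re-centred at 0
    return (pixels_left / total - 0.5) * doorway_width
```

For step S72, the sign of the returned offset tells the device which way to translate, parallel to the virtual wall, to return to the center line.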
In addition, owing to limits on encoder accuracy and similar factors, the relative distance and direction recorded by the encoder inevitably contain errors, which make the constructed map inaccurate. Therefore, in this embodiment, after the map is constructed, the coordinate values of the positioning tags, obstacles and virtual walls are acquired repeatedly by having the movable device traverse the area multiple times, and the coordinate value of each positioning tag, obstacle or virtual wall is then corrected by a recursive estimation algorithm. The more traversals the movable device performs, the more accurate the calculated coordinate values of the positioning tags, obstacles and virtual walls become, until the error is finally reduced to a negligible level.
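The multi-traversal correction can be illustrated with a recursive (running) mean, one simple instance of the recursive algorithms mentioned above; the class name and the assumption of independent noisy observations are illustrative.

```python
class LandmarkEstimate:
    """Refine a landmark coordinate over repeated traversals (recursive mean).

    Each traversal contributes one noisy observation of the landmark; the
    running mean converges as the number of passes grows, matching the claim
    that more traversals give more accurate coordinate values.
    """
    def __init__(self):
        self.n = 0
        self.x = 0.0
        self.y = 0.0

    def update(self, obs_x, obs_y):
        self.n += 1
        # recursive mean: new_mean = old_mean + (obs - old_mean) / n
        self.x += (obs_x - self.x) / self.n
        self.y += (obs_y - self.y) / self.n
        return self.x, self.y
```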
It should further be noted that mobile electronic devices that travel on wheels very commonly slip. For example, when an obstacle is encountered, the driven wheel at the front stops rotating while the driving wheel at the rear is still rotating; the encoder then still records the movable electronic equipment as moving and calculates its relative displacement and distance in real time from the rotation of the driving wheel. This produces a serious error in the travel path, so that the coordinate values of subsequently detected positioning tags, obstacles or virtual walls are wrong, an accurate map cannot be constructed, and accurate navigation cannot be realized. Preferably, whether the movable electronic equipment is in a slipping state can be detected so that an effective corrective action can be taken to avoid subsequent errors:
in the process that the movable electronic equipment travels along any straight line, when the speed of a driving wheel of the movable electronic equipment is detected to be inconsistent with the speed of a driven wheel at any moment, the smaller value of the speed of the driving wheel and the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be lower than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be higher than the theoretical speed at any moment in the process that the movable electronic equipment turns about any central point, the speed of a driving wheel is used as the reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed; the theoretical speed is calculated from the speed of the driving wheel. In the straight-travel state, the linear velocities of all points on the movable electronic equipment are equal, so when unequal speeds are detected at any moment, the movable electronic equipment can be judged to be slipping at that moment. The case where the movable electronic equipment turns about a central point is more complicated, because the velocities of its points are then not uniform. Taking a circular movable electronic device as an example, as shown in fig. 16, when the movable electronic device 300 makes a left turn about point O at any moment, assume that the speed of the left driving wheel K1 is 50 cm/s and the speed of the right driving wheel K2 is 100 cm/s. The speeds of the points are proportional to their distances from point O; for example, if the distance s1 from the front driven wheel K3 to point O is 80 cm and the distance s3 from the right driving wheel K2 to point O is 100 cm, the ratio between the theoretical speed of the driven wheel K3 and the speed of the right driving wheel K2 is 80/100 = 4/5. Therefore, in the normal driving state, the theoretical speed of the front driven wheel K3 should be 80 cm/s, and when the actual speed of the driven wheel K3 at the current moment does not match this theoretical speed of 80 cm/s, it can be determined that the movable electronic device 300 is slipping at the current moment.
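The three rules above, together with the worked example (driven wheel K3 at 80 cm from O, driving wheel K2 at 100 cm from O travelling at 100 cm/s, giving a theoretical speed of 80/100 × 100 = 80 cm/s), can be sketched as follows; the function names are illustrative.

```python
def reference_speed_straight(v_drive, v_driven):
    """Straight travel: all points should have equal linear speed; if the two
    speeds disagree, the device is slipping and the smaller value is taken as
    the reference speed, as stated in the text."""
    return min(v_drive, v_driven)

def theoretical_driven_speed(v_drive, dist_drive, dist_driven):
    """Turning about a centre O: linear speed scales with distance from O,
    so v_driven_theory = v_drive * (dist_driven / dist_drive)."""
    return v_drive * dist_driven / dist_drive

def reference_speed_turn(v_driven_actual, v_drive, dist_drive, dist_driven):
    """Turning: driven wheel slower than theory -> trust the driven wheel;
    driven wheel at or above theory -> trust the driving wheel."""
    v_theory = theoretical_driven_speed(v_drive, dist_drive, dist_driven)
    return v_driven_actual if v_driven_actual < v_theory else v_drive
```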
Referring to fig. 17, a schematic structural diagram of a virtual wall building apparatus based on machine vision according to embodiment 8 of the present invention is shown, where the virtual wall building apparatus is disposed on a movable electronic device, and includes:
the camera 81 is used for acquiring images of the surrounding environment in real time at a preset frequency in the process that the movable electronic equipment traverses the area to be positioned;
the image sensor 82 is used for receiving the image acquired at each moment and projecting it onto the photosensitive surface of the image sensor 82 to form a target image;
a storage device 83 for pre-storing a plurality of marker patterns;
the controller 84 is configured to, according to a preset image matching algorithm, acquire x key points of a target image on the photosensitive surface when the target image acquired at any time matches any one of the marker patterns in the storage device 83; calculating the distance between a virtual wall and the movable electronic equipment based on key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which forms a preset included angle with the characteristic straight line and is perpendicular to the photosensitive surface according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0 degree or 180 degrees, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and is perpendicular to the photosensitive surface; key points of the target image on the photosensitive surface are all located on a characteristic straight line; x is more than or equal to 2.
Preferably, the controller 84 is specifically configured to: first acquire a plurality of feature points of the target image at the current moment and a feature descriptor corresponding to each feature point through the image matching algorithm; then acquire the feature points of the target image that have a matching relationship with the marking pattern by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the marking pattern, and judge that the target image matches any marking pattern in the marking pattern library when the number of the acquired feature points having a matching relationship is greater than a preset threshold value; and then, according to the positions of the feature points having a matching relationship on the target image, take x feature points located on the same straight line as key points, wherein x ≥ 2.
Wherein the controller 84 introduces the marking pattern by:
and responding to a calibration instruction, acquiring an image right above the movable electronic equipment at the current moment, and storing the image into the marking pattern library as a new marking pattern.
The matching of images is performed by matching feature vectors of feature points. When a new marking pattern is obtained, the marking pattern is preprocessed and feature points are extracted, and then a plurality of feature points of the marking pattern and a feature descriptor corresponding to each feature point are obtained.
Wherein the image matching algorithm is a scale invariant feature transform algorithm or an accelerated robust feature algorithm, the controller 84 obtains each feature descriptor of the marker pattern/target image by:
establishing a scale space of a marked image/target image through Gaussian blur, identifying an extreme point in the scale space of the marked image/target image through a Gaussian differential function, checking the extreme point in the scale space of the marked image/target image, and removing an unstable extreme point in the scale space of the marked image/target image, so as to obtain a feature point of the marked image/target image and the scale and position of the feature point;
according to the gradient direction distribution characteristics of the neighborhood pixels of the feature points, giving a direction to each feature point;
and according to the scale, the position and the direction of the feature point, carrying out regional blocking on the surrounding image of the feature point, and calculating an intra-block gradient histogram so as to generate a feature descriptor of the feature point.
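The orientation-assignment step can be illustrated with a much-simplified sketch: the gradients of a neighbourhood patch are binned into a magnitude-weighted orientation histogram, and the peak bin gives the feature point's direction. A full SIFT/SURF implementation additionally applies Gaussian weighting, keypoint-relative sampling and peak interpolation; this is only a schematic.

```python
import numpy as np

def dominant_orientation(patch):
    """Assign a direction from neighbourhood gradients (simplified sketch of
    the orientation step described in the text).

    patch: 2D array of pixel intensities around a feature point.
    Returns the centre (in radians) of the peak bin of a 36-bin,
    magnitude-weighted gradient-orientation histogram.
    """
    gy, gx = np.gradient(patch.astype(float))   # image gradients
    angles = np.arctan2(gy, gx)                 # gradient directions
    mags = np.hypot(gx, gy)                     # gradient magnitudes
    hist, edges = np.histogram(angles, bins=36, range=(-np.pi, np.pi),
                               weights=mags)
    peak = int(hist.argmax())
    return 0.5 * (edges[peak] + edges[peak + 1])  # centre of the peak bin
```

For a patch whose intensity increases purely along x, the dominant gradient direction is close to 0 radians.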
Regarding the working principle and process of the virtual wall building apparatus of this embodiment, reference may be made to the description of embodiment 1 and embodiment 2, which is not repeated herein.
The camera 81 is provided with an imaging lens (or imaging lens group) for focusing, so that light reflected from objects forms a target image on the photosensitive surface of the image sensor 82.
In this embodiment, the distance between the virtual wall and the movable electronic equipment is calculated by triangulation. According to the principle of triangulation, the distance between the virtual wall and the movable electronic equipment is calculated by the following formula:
D = a/b × S × |cos θ|
wherein a is the distance from the upper frame of the virtual wall to the photosensitive surface, b is the distance from the imaging lens to the photosensitive surface, S is the distance from the characteristic straight line to the central point of the photosensitive surface, D is the distance from the virtual wall to the central point of the photosensitive surface, and θ is the preset included angle.
As shown in fig. 2, when θ is 0° or 180°, i.e. when the virtual wall to be constructed is parallel to the characteristic straight line, |cos 0°| = |cos 180°| = 1, and the distance of the virtual wall from the movable electronic equipment may be calculated by the following formula:
D = a/b × S
wherein a is the distance from the upper frame 400 of the virtual wall 100 to the photosensitive surface 301, b is the distance from the imaging lens 303 to the photosensitive surface 301, S is the distance from the characteristic straight line 304 to the central point 302 of the photosensitive surface 301, and D is the distance from the virtual wall 100 to the central point 302 of the photosensitive surface 301. Fig. 2 shows the situation where the movable electronic equipment directly faces the virtual wall, and fig. 3 the situation where it is shifted to the right relative to the virtual wall. It can be understood that, whatever the orientation of the movable electronic equipment relative to the virtual wall (facing it, facing away from it, shifted to the left or shifted to the right), as long as the camera detects a sufficient number of matching feature points, reasonable key points can be selected based on the positions of those feature points to construct a virtual wall at an accurate position.
As shown in fig. 4, when the preset included angle is not equal to 0° or 180°, the distance between the virtual wall and the movable electronic equipment is calculated by the following formula:
D = a/b × S × |cos θ|
wherein a is the distance from the upper frame 400 of the virtual wall 100 to the photosensitive surface 301, b is the distance from the imaging lens 303 to the photosensitive surface 301, S is the distance from the characteristic straight line 304 to the central point 302 of the photosensitive surface 301, D is the distance from the virtual wall 100 to the central point 302 of the photosensitive surface 301, and θ is the preset included angle. As shown in fig. 4, a mapping straight line 402 exists on the ceiling 500 of the room, and the mapping straight line 402 is projected onto the photosensitive surface 301 through the imaging lens 303 to generate the characteristic straight line 304. It can be understood that h = a/b × S is the distance between the central point 302 of the photosensitive surface 301 and the plane 401 where the mapping straight line 402 is located, and, as can be seen from fig. 5, D = h × |cos θ|.
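A numeric sketch of the triangulation formula D = a/b × S × |cos θ| used above; the unit choices (a and D in millimetres, b and S in millimetres on the sensor side) are illustrative.

```python
import math

def wall_distance(a, b, S, theta_deg=0.0):
    """Triangulation distance from the text: D = a/b * S * |cos(theta)|.

    a: virtual-wall upper frame (ceiling / door frame) to photosensitive surface
    b: imaging lens to photosensitive surface
    S: characteristic straight line to the centre of the photosensitive surface
    theta_deg: preset included angle; 0 or 180 degrees reduces to D = a/b * S
    """
    return (a / b) * S * abs(math.cos(math.radians(theta_deg)))
```

For example, with a = 2000, b = 4 and S = 3, the parallel case gives D = 1500, and θ = 60° halves it to 750.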
It should be noted that, in the embodiment of the present invention, the distance D between the virtual wall 100 and the central point 302 of the photosensitive surface 301 is taken as the distance between the virtual wall 100 and the movable electronic equipment. In another preferred embodiment, as shown in fig. 7, assuming that the movable electronic device 300 is a regular circle, the distance between the virtual wall 100 and the movable electronic device 300 can be defined as the distance from the center point 305 of the movable electronic device 300 to the virtual wall 100. Specifically, when the movable electronic device 300 faces the virtual wall 100, this distance may be defined as the sum of the distance between the virtual wall 100 and the central point 302 of the photosensitive surface 301 and the distance between the center point 305 of the movable electronic device 300 and the central point 302 of the photosensitive surface 301, i.e. L = D + D′, where L is the distance between the virtual wall 100 and the movable electronic device 300, D is the distance between the virtual wall 100 and the central point 302 of the photosensitive surface 301, and D′ is the distance between the center point 305 of the movable electronic device 300 and the central point 302 of the photosensitive surface 301. It can be understood that the above formula is applicable when the movable electronic device is circular, and also when it has other regular shapes, and details are not described herein again.
In the embodiment of the present invention, the distance from the upper frame of the virtual wall to the photosurface refers to the distance from the ceiling of a room to the photosurface, or refers to the distance from the upper frame of a door to the photosurface, and specific values thereof can be preset in the system.
In addition to the position information of the virtual wall, the width information of the virtual wall may also be calculated. Specifically, in conjunction with fig. 7, the width of the virtual wall is calculated by the following formula:
W = 2a × tan(λ/2)
wherein W is the width of the virtual wall 100, a is the distance from the upper frame of the virtual wall 100 to the photosensitive surface 301, and λ is the wide angle of the camera 81.
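The source renders its width formula only as an embedded image, so the expression below is an assumption based on the stated quantities: for a camera with wide angle λ viewing a plane at distance a, the covered width is W = 2a·tan(λ/2).

```python
import math

def wall_width(a, wide_angle_deg):
    """Width of the virtual wall from the camera's wide angle (assumed
    field-of-view geometry W = 2 * a * tan(lambda / 2); not verified against
    the source, whose formula is only an image).

    a: distance from the upper frame of the virtual wall to the photosurface
    wide_angle_deg: the camera's wide angle, in degrees
    """
    return 2.0 * a * math.tan(math.radians(wide_angle_deg) / 2.0)
```

For a 90° wide angle at a = 1, this gives a width of 2, i.e. the camera spans one unit to each side of the optical axis.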
The image sensor 82 includes a PSD sensor, a CCD sensor, or a CMOS sensor.
In another preferred embodiment, the controller 84 is further configured to move the mobile electronic device through a preset avoidance strategy when the distance between the virtual wall and the mobile electronic device is less than a preset distance, so that the distance between the mobile electronic device and the virtual wall is increased.
In another preferred embodiment, the controller 84 is further configured to control the movable electronic device to pass through the virtual wall in a preset path after the virtual wall is constructed.
Preferably, after the captured picture is projected to a photosensitive surface of an image sensor 82 provided in the mobile electronic device to form a target image, before the target image is compared with a pre-stored mark pattern in a mark pattern library, the controller 84 is further configured to correct perspective deformation of the target image.
Referring to fig. 18, a schematic structural diagram of a mobile electronic device provided in embodiment 9 of the present invention includes:
the virtual wall construction apparatus 91 of any one of the above, configured to construct a virtual wall;
the controller 84 is further configured to construct a coordinate system with an arbitrary position or a specific position in the region to be located as a coordinate origin;
the encoder 92 is configured to calculate, in real time, a displacement and a direction of the movable electronic device relative to the origin of coordinates in a process in which the movable electronic device traverses the region to be located;
the controller 84 is configured to receive the displacement and the direction of the mobile electronic device relative to the origin of coordinates, which are sent by the encoder 92, and obtain coordinate values of the mobile electronic device in the coordinate system at any time;
the controller 84 is further configured to perform real-time map construction on the area to be located according to the coordinate value of the mobile electronic device in the coordinate system and the coordinate plane of the virtual wall.
The working principle and process of the mobile electronic device of this embodiment for real-time map construction can refer to the description of embodiment 5, and are not described herein again.
It should be noted that the coordinate plane of the virtual wall is calculated according to the coordinate values of the movable electronic device in the coordinate system and the distance between the virtual wall and the movable electronic device.
In this embodiment, the mobile electronic device includes a driving wheel and a driven wheel, and the controller 84 is further configured to, when it is detected that the speed of the driving wheel of the mobile electronic device is inconsistent with the speed of the driven wheel at any time during the mobile electronic device travels along any straight line, calculate the displacement and direction of the mobile electronic device from the origin of coordinates according to a reference speed which is a smaller value of the speed of the driving wheel and the speed of the driven wheel;
when the speed of a driven wheel of the movable electronic equipment is detected to be lower than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be higher than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of a driving wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed; and the theoretical speed is obtained by calculation according to the speed of the driving wheel.
Referring to fig. 19, a schematic structural diagram of a mobile electronic device provided in embodiment 10 of the present invention includes:
the virtual wall building device 101 of any one of the above embodiments, configured to construct a virtual wall;
the controller 84 is further configured to construct a coordinate system by taking any position or a specific position in the area to be positioned as the coordinate origin;
the encoder 102 is configured to calculate, in real time, a displacement and a direction of the movable electronic device relative to the origin of coordinates in a process in which the movable electronic device traverses the region to be positioned;
the controller 84 is configured to receive the displacement and the direction of the mobile electronic device relative to the origin of coordinates, which are sent by the encoder 102, and obtain coordinate values of the mobile electronic device in the coordinate system at any time;
a collision sensor/laser sensor/infrared sensor 103 for detecting an obstacle;
the controller 84 is further configured to map the area to be located in real time based on the coordinate plane of the virtual wall and the coordinate value of each obstacle position.
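Taken together, the components above feed a map built from the virtual wall's coordinate plane plus the recorded obstacle coordinates. The patent does not fix a map representation, so the following is only a minimal occupancy-grid sketch with illustrative names:

```python
def build_map(wall_cells, obstacle_coords, size=10):
    """Minimal occupancy-grid sketch: mark virtual-wall cells as
    impassable ('W') and detected obstacles as 'O' on an otherwise
    free grid ('.'). Purely illustrative; cell coordinates are
    assumed already quantised to grid indices."""
    grid = [["." for _ in range(size)] for _ in range(size)]
    for (cx, cy) in wall_cells:
        grid[cy][cx] = "W"
    for (cx, cy) in obstacle_coords:
        if grid[cy][cx] == ".":
            grid[cy][cx] = "O"
    return grid

# A vertical virtual wall at x=3 and one obstacle at (7, 2).
m = build_map(wall_cells=[(3, y) for y in range(10)], obstacle_coords=[(7, 2)])
assert m[2][7] == "O" and all(m[y][3] == "W" for y in range(10))
```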
For the working principle and process of the mobile electronic device of this embodiment for real-time map construction, reference may be made to the description of embodiment 6 above, and details are not repeated here.
In this embodiment, when an obstacle is sensed by the collision sensor 103, the controller 84 takes the coordinate value of the current position of the movable electronic device as the coordinate value of the position of the obstacle, based on the moving direction and moving distance of the movable electronic device relative to the starting point; when an obstacle is detected by the laser/infrared sensor 103, the controller 84 calculates the position of the obstacle relative to the movable electronic device according to the laser/infrared distance calculation principle, and then calculates the coordinate value of the obstacle from the moving direction and moving distance of the movable electronic device relative to the starting point at the current moment.
In another preferred embodiment, at least two positioning tags are arranged on the area to be positioned, each positioning tag is correspondingly arranged at a specific position of the area to be positioned, and each piece of positioning tag information comprises unique coding information for distinguishing the absolute position of the positioning tag information; the real-time map construction of the area to be located by the controller 84 according to the coordinate values of the movable electronic device in the coordinate system and the position of the virtual wall includes:
in the traversing process, based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point, calculating the coordinate value of the position of the positioning tag when the movable electronic equipment acquires the positioning tag information each time, and recording the positioning tag information and the corresponding coordinate value;
and constructing a real-time map of the area to be positioned based on the coordinate plane of the virtual wall, the coordinate value of each barrier position, the information of each positioning label and the coordinate value thereof.
Preferably, the controller 84 is further configured to calculate a distance of the mobile electronic device from the virtual wall centerline according to the perspective deformation of the target image; and returning the mobile electronic equipment to the midline of the virtual wall along a track parallel to the virtual wall according to the distance of the mobile electronic equipment from the midline of the virtual wall.
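This excerpt does not spell out exactly how the perspective deformation is converted into an offset, but the estimate can be illustrated with the same similar-triangles reasoning the patent uses for distance measurement; the function below is a hedged sketch under that assumption, with illustrative names:

```python
def offset_from_centerline(a, b, s):
    """Estimate the lateral offset from the virtual wall's centerline
    by similar triangles: a marker at height `a` whose feature line is
    imaged at distance `s` from the sensor centre, with lens-to-sensor
    distance `b`, lies a*s/b to the side of the camera axis.

    Sketch only: the patent relates the offset to perspective
    deformation of the target image without giving this exact formula.
    """
    return a * s / b

# Marker 2 m above the sensor, 4 mm lens-to-sensor distance,
# feature line imaged 1 mm off-centre -> 0.5 m lateral offset.
assert abs(offset_from_centerline(2.0, 0.004, 0.001) - 0.5) < 1e-9
```

The controller would then command a translation of that magnitude, parallel to the virtual wall, to bring the device back onto the centerline.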
It should be noted that, in the present specification, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a(n)" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Finally, it should be noted that the series of processes described above includes not only processes performed in time series in the order described herein, but also processes performed in parallel or individually rather than in time series. Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary hardware platform, or entirely by software. With this understanding, all or part of the technical solution of the present invention that contributes over the prior art can be embodied in the form of a software product, which can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to execute the methods according to the embodiments or some parts of the embodiments of the present invention.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (32)

1. A virtual wall construction method based on machine vision is characterized by comprising the following steps:
in the process that the movable electronic equipment traverses the area to be positioned, images of the surrounding environment are collected in real time at a preset frequency through a camera arranged on the movable electronic equipment, and the collected images at each moment are projected to a photosensitive surface of an image sensor arranged in the movable electronic equipment to form a target image;
according to a preset image matching algorithm, when a target image acquired at any time matches any marking pattern in a marking pattern library, acquiring, as key points, x feature points of the target image on the photosensitive surface that are located on the same characteristic straight line, according to the positional relationship of the feature points on the target image that have a matching relationship with the marking pattern; wherein x is greater than or equal to 2, and a characteristic straight line or key points of a virtual wall are defined on each marking pattern;
calculating the distance between a virtual wall and the movable electronic equipment based on key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which forms a preset included angle with the characteristic straight line and is perpendicular to the photosensitive surface according to the distance between the virtual wall and the movable electronic equipment; and when the preset included angle is equal to 0 degree or 180 degrees, constructing the virtual wall on a surface which is parallel to the characteristic straight line and is vertical to the photosensitive surface.
2. The method of machine vision based virtual wall construction according to claim 1, said method further comprising the steps of:
responding to a calibration instruction, acquiring an image right above the movable electronic equipment at the current moment, and storing the image into the marking pattern library as a new marking pattern;
and acquiring a plurality of characteristic points of the marking pattern and a characteristic descriptor corresponding to each characteristic point through the image matching algorithm.
3. The method for constructing a virtual wall based on machine vision according to claim 2, wherein, when a target image acquired at any time is matched with any marker pattern in the marker pattern library according to the preset image matching algorithm, acquiring, as key points, x feature points of the target image on the photosensitive surface that are located on the same feature straight line, according to the positional relationship of the feature points on the target image that have a matching relationship with the marker pattern, is specifically:
obtaining a plurality of feature points of the target image at the current moment and a feature descriptor corresponding to each feature point through the image matching algorithm;
acquiring feature points of the target image and the mark pattern in a matching relationship by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the mark pattern, and judging that the target image is matched with any mark pattern in the mark pattern library when the number of the acquired feature points in the matching relationship is greater than a preset threshold value;
and according to the position relation of the characteristic points with the matching relation on the target image, taking the characteristic points with x positions on the same straight line as key points.
4. The method for constructing a virtual wall based on machine vision according to claim 3, wherein the image matching algorithm is a scale invariant feature transform algorithm or an accelerated robust feature algorithm, and each feature descriptor of the marker pattern/target image is obtained by the following steps:
establishing a scale space of a marked image/target image through Gaussian blur, identifying an extreme point in the scale space of the marked image/target image through a Gaussian differential function, checking the extreme point in the scale space of the marked image/target image, and removing an unstable extreme point in the scale space of the marked image/target image, so as to obtain a feature point of the marked image/target image and the scale and position of the feature point;
according to the gradient direction distribution characteristics of the neighborhood pixels of each feature point in the marked image/target image, giving a direction to each feature point;
and according to the scale, the position and the direction of each feature point in the marked image/target image, calculating an intra-block gradient histogram by carrying out regional blocking on the surrounding image of the feature point, thereby generating a feature descriptor of the feature point.
5. The machine vision-based virtual wall construction method according to claim 1, wherein an image collected by a camera is projected to a photosensitive surface of the image sensor through an imaging lens; and the distance between the virtual wall and the movable electronic equipment is calculated by a triangular distance measurement method.
6. The method for constructing a virtual wall based on machine vision according to claim 5, wherein the calculating the distance between the virtual wall and the movable electronic device based on the key point of the target image on the photosensitive surface is specifically as follows:
calculating a distance of the virtual wall from the movable electronic device by the following formula:
D=a/b*S*|cosθ|
wherein D is the distance between the virtual wall and the movable electronic device, a is the distance from the upper frame of the virtual wall to the photosensitive surface, b is the distance between the imaging lens and the photosensitive surface, S is the distance between the characteristic straight line and the central point of the photosensitive surface, and θ is the preset included angle.
7. The method of claim 1, wherein the image sensor comprises a PSD sensor, a CCD sensor, or a CMOS sensor.
8. The machine-vision-based virtual wall construction method of claim 1, wherein the width of the virtual wall is calculated by the following formula:
w=2*a*tan(λ/2)
w is the width of the virtual wall, a is the distance from the upper frame of the virtual wall to the photosensitive surface, and lambda is the wide angle of the camera.
9. The method of machine vision based virtual wall construction according to claim 1, said method further comprising the steps of:
when the distance between the virtual wall and the movable electronic equipment is smaller than a preset distance, the movable electronic equipment is moved through a preset avoidance strategy so that the distance between the movable electronic equipment and the virtual wall is increased.
10. The method of machine vision based virtual wall construction according to claim 1, said method further comprising the steps of:
after the virtual wall is constructed, the movable electronic equipment is controlled to penetrate through the virtual wall in a preset path.
11. The method for constructing a virtual wall based on machine vision according to claim 1, wherein the step of projecting the captured image onto a photosensitive surface of an image sensor provided in the mobile electronic device to form a target image further comprises:
and correcting the lens deformation of the target image.
12. A map construction method is characterized by comprising the following steps:
Constructing a coordinate system by taking any position or a specific position in a region to be positioned as a coordinate origin, and calculating the displacement and the direction of the movable electronic equipment relative to the coordinate origin in real time in the process that the movable electronic equipment traverses the region to be positioned, so as to obtain the coordinate value of the movable electronic equipment in the coordinate system in real time;
constructing a virtual wall by using the machine vision-based virtual wall construction method according to any one of claims 1 to 11;
and carrying out real-time map construction on the area to be positioned according to the coordinate value of the movable electronic equipment in the coordinate system and the coordinate plane of the virtual wall.
13. The mapping method according to claim 12, wherein the real-time mapping of the area to be located according to the coordinate values of the movable electronic device in the coordinate system and the position of the virtual wall comprises:
calculating and recording coordinate values of the position of the obstacle when the obstacle is detected by the movable electronic equipment each time based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point;
and constructing a real-time map of the area to be positioned based on the coordinate plane of the virtual wall and the coordinate value of each obstacle position.
14. The map construction method according to claim 13, wherein at least two positioning tags are provided on the area to be positioned, each positioning tag is correspondingly provided at a specific position of the area to be positioned, and each positioning tag information includes unique coding information for distinguishing an absolute position thereof; then, the real-time map construction of the area to be located according to the coordinate value of the movable electronic device in the coordinate system and the position of the virtual wall further includes:
in the traversing process, based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point, calculating the coordinate value of the position of the positioning tag when the movable electronic equipment acquires the positioning tag information each time, and recording the positioning tag information and the corresponding coordinate value;
and constructing a real-time map of the area to be positioned based on the coordinate plane of the virtual wall, the coordinate value of each barrier position, the information of each positioning label and the coordinate value thereof.
15. The mapping method according to claim 12, wherein the method further comprises the steps of:
calculating the distance of the movable electronic equipment deviating from the center line of the virtual wall according to the perspective deformation of the target image;
and returning the movable electronic equipment to the midline of the virtual wall along a track parallel to the virtual wall according to the distance of the movable electronic equipment from the midline of the virtual wall.
16. The mapping method of claim 12, wherein the movable electronic device includes a driving wheel and a driven wheel, the method further comprising the steps of:
in the process that the movable electronic equipment travels along any straight line, when the speed of a driving wheel of the movable electronic equipment is detected to be inconsistent with the speed of a driven wheel at any moment, the smaller value of the speed of the driving wheel and the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be lower than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be higher than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of a driving wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed; and the theoretical speed is obtained by calculation according to the speed of the driving wheel.
17. A machine vision-based virtual wall construction apparatus, wherein the virtual wall construction apparatus is provided on a movable electronic device, and comprises:
the camera is used for acquiring images of the surrounding environment in real time at a preset frequency in the process that the movable electronic equipment traverses the area to be positioned;
the image sensor is used for receiving the image acquired at each moment and projecting the image on a photosensitive surface of the image sensor to form a target image;
a storage device for pre-storing a plurality of marker patterns;
the controller is used for acquiring feature points of x positions of the target image on the photosensitive surface on the same feature straight line as key points according to the position relationship of the feature points on the target image, which have the matching relationship with the marker pattern, when the target image acquired at any moment is matched with any marker pattern in the marker pattern library according to a preset image matching algorithm; calculating the distance between a virtual wall and the movable electronic equipment based on key points of the target image on the photosensitive surface, and constructing the virtual wall on a surface which forms a preset included angle with the characteristic straight line and is perpendicular to the photosensitive surface according to the distance between the virtual wall and the movable electronic equipment; when the preset included angle is equal to 0 degree or 180 degrees, the virtual wall is constructed on a surface which is parallel to the characteristic straight line and is perpendicular to the photosensitive surface; x is more than or equal to 2; each mark pattern defines a characteristic line or a key point of a virtual wall.
18. The machine-vision-based virtual wall building apparatus of claim 17, wherein the controller is further configured to control the camera to acquire an image directly above the movable electronic device at the current time in response to a calibration instruction, and store the image in the storage device as a new marking pattern; and acquiring a plurality of characteristic points of the marking pattern and a characteristic descriptor corresponding to each characteristic point through the image matching algorithm.
19. The machine vision-based virtual wall construction apparatus according to claim 18, wherein, when a target image acquired at any time is matched with any marker pattern in the marker pattern library according to the preset image matching algorithm, acquiring, as key points, x feature points of the target image on the photosensitive surface that are located on the same feature straight line, according to the positional relationship of the feature points on the target image that have a matching relationship with the marker pattern, is specifically:
acquiring a plurality of feature points of the target image and a feature descriptor corresponding to each feature point through the image matching algorithm;
acquiring feature points of the target image and the mark pattern in a matching relationship by calculating the Euclidean distance between each feature descriptor of the target image and each feature descriptor of the mark pattern, and judging that the target image is matched with any mark pattern in the mark pattern library when the number of the acquired feature points in the matching relationship is greater than a preset threshold value;
and according to the position relation of the characteristic points with the matching relation on the target image, taking the characteristic points with x positions on the same straight line as key points.
20. The machine-vision-based virtual wall construction apparatus of claim 19, wherein the image matching algorithm is a scale invariant feature transform algorithm or an accelerated robust feature algorithm, and each feature descriptor of the marker pattern/target image is obtained by a controller by the following steps:
establishing a scale space of a marked image/target image through Gaussian blur, identifying an extreme point in the scale space of the marked image/target image through a Gaussian differential function, checking the extreme point in the scale space of the marked image/target image, and removing an unstable extreme point in the scale space of the marked image/target image, so as to obtain a feature point of the marked image/target image and the scale and position of the feature point;
according to the gradient direction distribution characteristics of the neighborhood pixels of each feature point in the marked image/target image, giving a direction to each feature point;
and according to the scale, the position and the direction of each feature point in the marked image/target image, calculating an intra-block gradient histogram by carrying out regional blocking on the surrounding image of the feature point, thereby generating a feature descriptor of the feature point.
21. The machine vision-based virtual wall building apparatus of claim 17, wherein the camera further comprises an imaging lens through which an image captured by the camera is projected to a photosensitive surface of the image sensor; and the distance between the virtual wall and the movable electronic equipment is calculated by a triangular distance measurement method.
22. The machine-vision-based virtual wall building apparatus of claim 21, wherein the controller calculates the distance between the virtual wall and the movable electronic device based on the key point of the target image on the photosensitive surface by:
calculating a distance of the virtual wall from the movable electronic device by the following formula:
D=a/b*S*|cosθ|
wherein D is the distance between the virtual wall and the movable electronic device, a is the distance from the upper frame of the virtual wall to the photosensitive surface, b is the distance between the imaging lens and the photosensitive surface, S is the distance between the characteristic straight line and the central point of the photosensitive surface, and θ is the preset included angle.
23. The machine-vision-based virtual wall construction apparatus of claim 22, wherein the controller calculates the width of the virtual wall based on the following formula:
w=2*a*tan(λ/2)
w is the width of the virtual wall, a is the distance from the upper frame of the virtual wall to the photosensitive surface, and lambda is the wide angle of the camera.
24. The machine-vision-based virtual wall construction apparatus of claim 17, wherein the image sensor comprises a PSD sensor, a CCD sensor, or a CMOS sensor.
25. The machine-vision-based virtual wall construction apparatus of claim 17, wherein when the distance between the virtual wall and the movable electronic device is less than a preset distance, the controller is further configured to move the movable electronic device through a preset avoidance strategy such that the distance between the movable electronic device and the virtual wall is increased.
26. The machine-vision-based virtual wall construction apparatus of claim 17, wherein the controller is further configured to control the movable electronic device to pass through the virtual wall along a preset path after the virtual wall is constructed.
27. The machine-vision-based virtual wall construction apparatus of claim 17, wherein the controller is further configured to correct perspective distortion of the target image after projecting the captured image onto a photosensitive surface of an image sensor disposed in the mobile electronic device to form the target image.
28. A movable electronic device, comprising:
a machine vision based virtual wall construction apparatus as claimed in any one of claims 17 to 27 for constructing a virtual wall;
the controller is also used for constructing a coordinate system by taking any position or a specific position in the area to be positioned as a coordinate origin;
the encoder is used for calculating the displacement and the direction of the movable electronic equipment relative to the coordinate origin in real time in the process that the movable electronic equipment traverses the area to be positioned;
the controller is further configured to receive the displacement and the direction of the movable electronic device relative to the origin of coordinates, which are sent by the encoder, and acquire a coordinate value of the movable electronic device in the coordinate system at any time;
the controller is further used for carrying out real-time map construction on the area to be positioned according to the coordinate value of the movable electronic equipment in the coordinate system and the coordinate plane of the virtual wall.
29. The movable electronic device according to claim 28, further comprising an impact sensor, a laser sensor, or an infrared sensor, wherein when an obstacle is sensed by the impact sensor, the controller takes the coordinate value of the current position of the movable electronic device as the coordinate value of the position of the obstacle based on the moving direction and the moving distance of the movable electronic device with respect to the starting point;
when the laser sensor/infrared sensor is used for detecting an obstacle, the controller calculates the position of the obstacle relative to the current movable electronic equipment according to a laser/infrared distance calculation principle, so that the coordinate value of the obstacle at the current moment is calculated according to the moving direction and the moving distance of the movable electronic equipment relative to the starting point at the current moment;
the controller is used for carrying out real-time map construction on the area to be positioned based on the coordinate plane of the virtual wall and the coordinate value of each obstacle position.
30. The movable electronic device of claim 29, wherein at least two positioning tags are disposed on the area to be positioned, each positioning tag is correspondingly disposed at a specific position of the area to be positioned, and each piece of positioning tag information includes unique coded information for distinguishing an absolute position thereof; the controller constructs the map of the area to be positioned in real time according to the coordinate value of the movable electronic device in the coordinate system and the position of the virtual wall, and comprises the following steps:
in the traversing process, based on the moving direction and the moving distance of the movable electronic equipment relative to the starting point, calculating the coordinate value of the position of the positioning tag when the movable electronic equipment acquires the positioning tag information each time, and recording the positioning tag information and the corresponding coordinate value;
and constructing a map based on the coordinate plane of the virtual wall, the coordinate value of each obstacle position, the information of each positioning label and the coordinate value of the positioning label.
31. A mobile electronic device as in claim 28 wherein said controller is further configured to calculate a distance of said mobile electronic device from a centerline of said virtual wall based on perspective distortion of said target image; and returning the movable electronic equipment to the midline of the virtual wall along a track parallel to the virtual wall according to the distance of the movable electronic equipment from the midline of the virtual wall.
32. A mobile electronic device as recited in claim 28, wherein said mobile electronic device includes a driving wheel and a driven wheel, and said controller is further configured to calculate a displacement and a direction of said mobile electronic device from said origin of coordinates based on a reference speed, which is a smaller value of a speed of said driving wheel and a speed of said driven wheel, when it is detected that the speed of said driving wheel of said mobile electronic device is not identical to the speed of said driven wheel at any time during travel of said mobile electronic device along any straight line;
when the speed of a driven wheel of the movable electronic equipment is detected to be lower than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of the driven wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed;
when the speed of a driven wheel of the movable electronic equipment is detected to be higher than the theoretical speed at any moment in the process that the movable electronic equipment turns at any central point, the speed of a driving wheel is used as a reference speed, and the displacement and the direction of the movable electronic equipment relative to the origin of coordinates are calculated according to the reference speed; and the theoretical speed is obtained by calculation according to the speed of the driving wheel.
CN201780017028.8A 2017-12-13 2017-12-13 Virtual wall construction method and device based on machine vision, map construction method and movable electronic equipment Active CN109313822B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/116015 WO2019113859A1 (en) 2017-12-13 2017-12-13 Machine vision-based virtual wall construction method and device, map construction method, and portable electronic device

Publications (2)

Publication Number Publication Date
CN109313822A CN109313822A (en) 2019-02-05
CN109313822B true CN109313822B (en) 2020-04-10

Family

ID=65207680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780017028.8A Active CN109313822B (en) 2017-12-13 2017-12-13 Virtual wall construction method and device based on machine vision, map construction method and movable electronic equipment

Country Status (2)

Country Link
CN (1) CN109313822B (en)
WO (1) WO2019113859A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112596508B (en) * 2019-08-29 2022-04-12 美智纵横科技有限责任公司 Control method and device of sensor and storage medium
CN111070212B (en) * 2020-01-06 2021-06-01 中联恒通机械有限公司 Vehicle-mounted manipulator control system and method
CN114343507A (en) * 2022-01-28 2022-04-15 深圳市优必选科技股份有限公司 Map data generation method and device and sweeping robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631262A (en) * 2012-08-29 2014-03-12 Ecovacs Robotics Technology (Suzhou) Co., Ltd. System and method for restricting the walking range of an automatically moving robot
US9658616B2 (en) * 2013-06-13 2017-05-23 Samsung Electronics Co., Ltd. Cleaning robot and method for controlling the same
CN106774294A (en) * 2015-11-20 2017-05-31 Shenyang SIASUN Robot & Automation Co., Ltd. Mobile robot virtual wall design method
CN106843230A (en) * 2017-03-24 2017-06-13 Shanghai Slamtec Co., Ltd. Virtual wall system applied to mobile devices and implementation method thereof
CN107063242A (en) * 2017-03-24 2017-08-18 Shanghai Slamtec Co., Ltd. Positioning and navigation device with virtual wall function, and robot
CN107063117A (en) * 2017-03-15 2017-08-18 Shanghai University Underwater laser synchronous scanning triangulation ranging imaging system and method based on light-field imaging

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7571511B2 (en) * 2002-01-03 2009-08-11 Irobot Corporation Autonomous floor-cleaning robot
CN101882313B (en) * 2010-07-14 2011-12-21 中国人民解放军国防科学技术大学 Calibration method of correlation between single line laser radar and CCD (Charge Coupled Device) camera
CN204240979U (en) * 2014-08-15 2015-04-01 上海思岚科技有限公司 Micro-optical scanning distance measuring equipment, system and optical ranging system
CN105136434B (en) * 2015-08-12 2019-09-20 North University of China Two-dimensional motion law test device for planar mechanisms
US10451740B2 (en) * 2016-04-26 2019-10-22 Cepton Technologies, Inc. Scanning lidar systems for three-dimensional sensing


Also Published As

Publication number Publication date
CN109313822A (en) 2019-02-05
WO2019113859A1 (en) 2019-06-20

Similar Documents

Publication Publication Date Title
Veľas et al. Calibration of rgb camera with velodyne lidar
CN104848858B (en) Quick Response Code and be used for robotic vision-inertia combined navigation system and method
US20180005018A1 (en) System and method for face recognition using three dimensions
Steder et al. Robust on-line model-based object detection from range images
CN107741234A (en) The offline map structuring and localization method of a kind of view-based access control model
Goncalves et al. A visual front-end for simultaneous localization and mapping
Taylor et al. Multi‐modal sensor calibration using a gradient orientation measure
KR20180125010A (en) Control method of mobile robot and mobile robot
JP2011174879A (en) Apparatus and method of estimating position and orientation
CN109313822B (en) Virtual wall construction method and device based on machine vision, map construction method and movable electronic equipment
Maier et al. Vision-based humanoid navigation using self-supervised obstacle detection
Kanezaki et al. Fast object detection for robots in a cluttered indoor environment using integral 3D feature table
CN108303094A (en) The Position Fixing Navigation System and its positioning navigation method of array are merged based on multiple vision sensor
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
CN112184793B (en) Depth data processing method and device and readable storage medium
CN109416251B (en) Virtual wall construction method and device based on color block labels, map construction method and movable electronic equipment
CN108544494A (en) A kind of positioning device, method and robot based on inertia and visual signature
Beauvisage et al. Multi-spectral visual odometry for unmanned air vehicles
JP6410231B2 (en) Alignment apparatus, alignment method, and computer program for alignment
CN104166995B (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
KR101456172B1 (en) Localization of a mobile robot device, method and mobile robot
Gonzalez-Jimenez et al. Improving 2d reactive navigators with kinect
CN114972491A (en) Visual SLAM method, electronic device, storage medium and product
KR101979246B1 (en) Method and apparatus for for calibrating sensor coordinate system with game zone
KR102131493B1 (en) Indoor Positioning Method using Smartphone with QR code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210113

Address after: 510555 room 621, No.2, Tengfei 1st Street, Huangpu District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou Xiaoluo robot Co.,Ltd.

Address before: 510555 room 621, Tengfei Street 2, Guangzhou knowledge city, Guangzhou, Guangdong.

Patentee before: GUANGZHOU AIROB ROBOT TECHNOLOGY Co.,Ltd.

CP03 Change of name, title or address

Address after: 315100 East 1st Road, Science Park, Jiangshan Town, Yinzhou District, Ningbo City, Zhejiang Province

Patentee after: Zhejiang Qiyuan Robot Co.,Ltd.

Address before: 510555 room 621, No.2, Tengfei 1st Street, Huangpu District, Guangzhou City, Guangdong Province

Patentee before: Guangzhou Xiaoluo robot Co.,Ltd.
