CN114998426B - Robot ranging method and device - Google Patents

Robot ranging method and device

Info

Publication number
CN114998426B
Authority
CN
China
Prior art keywords
target object
line
straight line
coordinate system
target
Prior art date
Legal status
Active
Application number
CN202210941707.7A
Other languages
Chinese (zh)
Other versions
CN114998426A
Inventor
兰婷婷
张瑞琪
曾祥永
支涛
Current Assignee
Beijing Yunji Technology Co Ltd
Original Assignee
Beijing Yunji Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yunji Technology Co Ltd filed Critical Beijing Yunji Technology Co Ltd
Priority to CN202210941707.7A
Publication of CN114998426A
Application granted
Publication of CN114998426B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/14 Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 Interpretation of pictures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12 Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to the technical field of robot ranging and provides a robot ranging method. The method includes: acquiring, through an image acquisition device arranged on a robot, a target image containing a target object and the environment surrounding the target object, where that environment includes the walls on both sides of the target object and the ground below it; detecting the boundary line between each side wall and the ground in the target image using a lane line detection algorithm to obtain a first straight line and a second straight line, and determining the line equations of the first and second straight lines in the coordinate system of the target image; determining a first position of the target object on the target image using a target detection algorithm; determining a second position of the target object in the coordinate system of the image acquisition device based on the line equations of the first and second straight lines and the first position; and determining the distance between the robot and the target object based on the second position.

Description

Robot ranging method and device
Technical Field
The disclosure relates to the technical field of robot ranging, in particular to a robot ranging method and device.
Background
A robot often needs to obtain the distance between itself and a specific object in order to judge its own position. Existing robot ranging techniques rely on devices such as high-precision lidar, depth cameras, and infrared rangefinders. High-precision lidar is expensive and tracks unstably in narrow environments; a depth camera has a small field of view and a limited measuring range and is easily affected by illumination; an infrared rangefinder can be harmful to human eyes. All of these devices add cost. To reduce cost, it is desirable to complete robot ranging with an inexpensive ordinary color camera, but using an ordinary color camera leads to low ranging accuracy. How to guarantee ranging accuracy while using an ordinary color camera is a key technical challenge in the field of robot ranging.
In the course of implementing the disclosed concept, the inventors found at least the following technical problem in the related art: in robot ranging, the cost of the ranging device mounted on the robot and the ranging accuracy of the robot cannot both be satisfied at the same time.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a robot ranging method, apparatus, electronic device, and computer-readable storage medium, to solve the prior-art problem that, in robot ranging, the cost of the ranging device mounted on the robot and the ranging accuracy of the robot cannot both be satisfied at the same time.
In a first aspect of the embodiments of the present disclosure, a robot ranging method is provided, including: acquiring, through an image acquisition device arranged on a robot, a target image containing a target object and the environment surrounding the target object, where that environment includes the walls on both sides of the target object and the ground below the target object; detecting the boundary line between each side wall and the ground in the target image using a lane line detection algorithm to obtain a first straight line and a second straight line, and determining the line equations of the first and second straight lines in the coordinate system of the target image; determining a first position of the target object on the target image using a target detection algorithm; determining a second position of the target object in the coordinate system of the image acquisition device based on the line equations of the first and second straight lines and the first position; and determining the distance between the robot and the target object based on the second position.
In a second aspect of the embodiments of the present disclosure, there is provided a robot ranging apparatus including: an acquisition module configured to acquire a target image including a target object and an environment around the target object through an image acquisition device provided on the robot, wherein the environment around the target object includes: walls on both sides of the target object and the ground below the target object; the detection module is configured to detect the boundary line between each side wall and the ground in the target image by using a lane line detection algorithm to obtain a first straight line and a second straight line, and determine a straight line equation of the first straight line and the second straight line in a coordinate system of the target image; a first determination module configured to determine a first position of a target object on a target image using a target detection algorithm; a second determination module configured to determine a second position of the target object in the coordinate system of the image acquisition device based on the line equations of the first and second lines and the first position; a third determination module configured to determine a distance of the robot from the target object based on the second position.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects: a target image containing a target object and the environment surrounding the target object is acquired through an image acquisition device arranged on the robot, where that environment includes the walls on both sides of the target object and the ground below the target object; the boundary line between each side wall and the ground in the target image is detected using a lane line detection algorithm to obtain a first straight line and a second straight line, and the line equations of the first and second straight lines are determined in the coordinate system of the target image; a first position of the target object on the target image is determined using a target detection algorithm; a second position of the target object in the coordinate system of the image acquisition device is determined based on the line equations of the first and second straight lines and the first position; and the distance between the robot and the target object is determined based on the second position. With these technical means, the prior-art problem that the cost of the ranging device mounted on the robot and the ranging accuracy of the robot cannot both be satisfied can be solved, preserving ranging accuracy while reducing ranging cost.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed for the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art without inventive efforts.
Fig. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of a robot ranging method provided in an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a target image according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of a robot ranging device provided in an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A robot ranging method and apparatus according to an embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of an application scenario of an embodiment of the present disclosure. The application scenario may include terminal devices 101, 102, and 103, server 104, and network 105.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting communication with the server 104, including but not limited to smartphones, robots, laptop computers, desktop computers, and the like (e.g., 102 may be a robot); when they are software, they may be installed in the electronic devices above. The terminal devices 101, 102, and 103 may be implemented as a plurality of software or software modules, or as a single software or software module, which the embodiments of the present disclosure do not limit. Further, various applications may be installed on the terminal devices 101, 102, and 103, such as data processing applications, instant messaging tools, social platform software, search applications, shopping applications, and the like.
The server 104 may be a server providing various services, for example, a backend server receiving a request sent by a terminal device establishing a communication connection with the server, and the backend server may receive and analyze the request sent by the terminal device and generate a processing result. The server 104 may be a server, may also be a server cluster composed of a plurality of servers, or may also be a cloud computing service center, which is not limited in this disclosure.
The server 104 may be hardware or software. When the server 104 is hardware, it may be various electronic devices that provide various services to the terminal devices 101, 102, and 103. When the server 104 is software, it may be multiple software or software modules providing various services for the terminal devices 101, 102, and 103, or may be a single software or software module providing various services for the terminal devices 101, 102, and 103, which is not limited by the embodiment of the present disclosure.
The network 105 may be a wired network connected by coaxial cable, twisted pair, or optical fiber, or may be a wireless network that interconnects communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), or infrared, which the embodiments of the present disclosure do not limit.
The target user can establish a communication connection with the server 104 via the network 105 through the terminal devices 101, 102, and 103 to receive or transmit information or the like. It should be noted that the specific types, numbers and combinations of the terminal devices 101, 102 and 103, the server 104 and the network 105 may be adjusted according to the actual requirements of the application scenario, and the embodiment of the present disclosure does not limit this.
Fig. 2 is a schematic flow chart of a robot ranging method provided in an embodiment of the present disclosure. The robot ranging method of fig. 2 may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the robot ranging method includes:
s201, acquiring a target image containing a target object and a target object surrounding environment through image acquisition equipment arranged on the robot, wherein the target object surrounding environment comprises: walls on both sides of the target object and the ground below the target object;
s202, detecting a boundary line between each side wall and the ground in the target image by using a lane line detection algorithm to obtain a first straight line and a second straight line, and determining a straight line equation of the first straight line and the second straight line in a coordinate system of the target image;
s203, determining a first position of a target object on a target image by using a target detection algorithm;
s204, determining a second position of the target object in the coordinate system of the image acquisition equipment based on the linear equation of the first straight line and the second straight line and the first position;
and S205, determining the distance between the robot and the target object based on the second position.
In the prior art, in order to improve the ranging accuracy of a robot, devices such as high-precision lidar, depth cameras, and infrared rangefinders are often mounted on the robot. These devices provide higher-accuracy ranging results but also increase cost. The present disclosure therefore aims to use a low-cost ordinary color camera, reducing hardware cost while guaranteeing the robot's ranging accuracy through an algorithm.
The image acquisition device may be a low-cost device such as an ordinary color camera. The target image is an image of the target object and its surrounding environment. Each side wall meets the ground along a boundary line, so there are two boundary lines between the two walls and the ground, recorded as the first straight line and the second straight line respectively. Since the target image is captured by the image acquisition device, that is, the present disclosure computes from the viewpoint of the image acquisition device, the equations of the first and second straight lines must first be determined in the coordinate system of the target image; the second position of the target object in the coordinate system of the image acquisition device is then determined, and from it the distance between the robot and the target object.
According to the technical solution provided by the embodiments of the present disclosure, a target image containing a target object and the environment surrounding the target object is acquired through an image acquisition device arranged on the robot, where that environment includes the walls on both sides of the target object and the ground below the target object; the boundary line between each side wall and the ground in the target image is detected using a lane line detection algorithm to obtain a first straight line and a second straight line, and the line equations of the first and second straight lines are determined in the coordinate system of the target image; a first position of the target object on the target image is determined using a target detection algorithm; a second position of the target object in the coordinate system of the image acquisition device is determined based on the line equations of the first and second straight lines and the first position; and the distance between the robot and the target object is determined based on the second position. With these technical means, the prior-art problem that, in robot ranging, the cost of the ranging device mounted on the robot and the ranging accuracy of the robot cannot both be satisfied can be solved, preserving ranging accuracy while reducing ranging cost.
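To make the flow of steps S201 to S205 concrete, the following Python sketch strings the steps together end to end. It is a minimal illustration under stated assumptions, not the patented implementation: the two detector functions are hypothetical stand-ins for the neural network models described later, and the internal reference matrix K, the camera height, and all pixel values are invented example numbers. The geometric sub-steps are expanded one by one in the sketches that follow.

```python
import numpy as np

def detect_boundary_lines(image):
    # Hypothetical stand-in for the first (lane line detection) model:
    # returns homogeneous coefficients (a, b, c) of the two wall-ground
    # boundary lines, each satisfying a*x + b*y + c = 0 in image coordinates.
    return np.array([0.5, -1.0, 240.0]), np.array([-0.5, -1.0, 560.0])

def detect_target(image):
    # Hypothetical stand-in for the second (target detection) model:
    # returns the pixel position Q of the target object's foot point.
    return np.array([350.0, 430.0])

def robot_ranging(image, K, height):
    l1, l2 = detect_boundary_lines(image)        # S202: first and second lines
    Q = detect_target(image)                     # S203: first position
    P = np.cross(l1, l2)                         # S204: vanishing point P = l1 x l2
    P = P / P[2]
    n = K.T @ np.array([0.0, 1.0, -P[1]])        # ground normal from vanishing line y = P_y
    n = n / np.linalg.norm(n)
    m = np.linalg.inv(K) @ np.append(Q, 1.0)     # direction vector m = K^{-1} Q
    X = (height / (n @ m)) * m                   # second position: ray meets ground
    return np.linalg.norm(X)                     # S205: distance to camera origin

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])                  # assumed example intrinsics
print(f"distance: {robot_ranging(None, K, 1.2):.2f} m")
```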
Fig. 3 is a schematic diagram of a target image according to an embodiment of the present disclosure. As shown in fig. 3, the target image includes a target object and a surrounding environment of the target object; the surrounding environment of the target object comprises walls on two sides of the target object and the ground below the target object; two lines of intersection exist between the walls and the ground on the two sides, and are respectively marked as a first straight line l1 and a second straight line l2.
In step S204, determining the second position of the target object in the coordinate system of the image acquisition device based on the line equations of the first and second straight lines and the first position includes: calculating the coordinates of the vanishing point of the first straight line and the second straight line based on their line equations; calculating the line equation of the vanishing line through the vanishing point based on the coordinates of the vanishing point; and determining the second position of the target object in the coordinate system of the image acquisition device based on the line equation of the vanishing line and the first position.
The line equations of the first straight line and the second straight line in the coordinate system of the target image are determined as

l_1 = (a_1, b_1, c_1)^T, i.e. a_1 x + b_1 y + c_1 = 0,

l_2 = (a_2, b_2, c_2)^T, i.e. a_2 x + b_2 y + c_2 = 0,

where x and y are the two coordinate axes of the coordinate system of the target image, T denotes transposition, and a_1, b_1, c_1, a_2, b_2, c_2 are the line-equation coefficients. The cross product

P = l_1 × l_2

is a vector perpendicular to the first line and the second line; read as a homogeneous point, it corresponds to a point at infinity in the plane of the ground, and after normalization it gives the vanishing point

P = (P_x, P_y, 1)^T.

The vanishing line is a straight line passing through the vanishing point and can be expressed as

y = P_y

or, in homogeneous form,

l_v = (0, 1, -P_y)^T,

where P_y is the y coordinate of the point P.
Vanishing points and vanishing lines are commonly used in the field of camera calibration; as terms of art they are not described in detail here. In brief, the vanishing point is the point where the first and second straight lines visually intersect (in fig. 3 the two lines show a tendency to intersect), and the vanishing line is a line passing through the vanishing point.
Through the above processing, the coordinate system of the target image and the coordinate system of the image acquisition device can be related, so that the second position of the target object in the coordinate system of the image acquisition device can be determined based on the line equation of the vanishing line in the coordinate system of the target image and the first position.
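As a minimal numeric sketch of this sub-step, assuming the lane line detector has already produced the coefficient triples of the two boundary lines (the numbers are invented for illustration):

```python
import numpy as np

# Homogeneous coefficients (a, b, c) of the two wall-ground boundary
# lines, a*x + b*y + c = 0 in the target image; example values only.
l1 = np.array([0.5, -1.0, 240.0])
l2 = np.array([-0.5, -1.0, 560.0])

P = np.cross(l1, l2)     # vanishing point as the homogeneous intersection
P = P / P[2]             # normalize to P = (P_x, P_y, 1)

l_v = np.array([0.0, 1.0, -P[1]])   # vanishing line y = P_y in homogeneous form
print(f"vanishing point: ({P[0]:.0f}, {P[1]:.0f}); vanishing line: y = {P[1]:.0f}")
```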
Determining the second position of the target object in the coordinate system of the image acquisition device based on the line equation of the vanishing line and the first position includes: acquiring the internal reference matrix of the image acquisition device; calculating the normal vector perpendicular to the ground based on the internal reference matrix and the line equation of the vanishing line; determining the plane equation of the ground in the coordinate system of the image acquisition device using the normal vector; and determining the second position of the target object in the coordinate system of the image acquisition device based on the plane equation of the ground and the first position.
The normal vector perpendicular to the ground is

n = K^T l_v,

where K is the internal reference matrix of the image acquisition device. The plane equation of the ground in the coordinate system of the image acquisition device can be expressed as

n^T (x, y, z)^T = d,

where x, y and z are the three coordinate axes of the coordinate system of the image acquisition device (these x and y are to be distinguished from the x and y above: the two pairs belong to different coordinate systems, x being the transverse axis and y the longitudinal axis of each). The height of the image acquisition device above the ground is known and denoted H; the final derivation fixes the plane offset by that height,

n^T (x, y, z)^T = H ||n||

(equivalently, n^T (x, y, z)^T = H once n is normalized to unit length).
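Continuing the sketch with an assumed internal reference matrix K and camera height H (illustrative values, not parameters disclosed in this patent), the ground plane follows directly from the vanishing line:

```python
import numpy as np

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])     # assumed internal reference matrix
H = 1.2                             # assumed camera height above the ground, meters
l_v = np.array([0.0, 1.0, -400.0])  # vanishing line y = 400 from the previous sketch

n = K.T @ l_v                       # normal vector perpendicular to the ground
n = n / np.linalg.norm(n)           # unit normal

# Ground plane in the camera coordinate system: n^T (x, y, z)^T = d,
# with |d| = H for the unit normal since the camera sits at height H.
d = H
print("unit ground normal:", np.round(n, 4), " plane offset d =", d)
```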
Determining the second position of the target object in the coordinate system of the image acquisition device based on the plane equation of the ground and the first position includes: calculating a direction vector of the target object based on the first position and the internal reference matrix; and determining the second position of the target object in the coordinate system of the image acquisition device based on the plane equation of the ground and the direction vector.
A first position of the target object on the target image is determined as Q using a target detection algorithm.
From the projection relation

Q ∼ K m,

it can be derived that

m = K^{-1} Q,

where m is the direction vector. The second position of the target object in the coordinate system of the image acquisition device is determined based on the plane equation of the ground and this direction vector: the intersection point of the straight line along m with the ground is the second position of the target object in the coordinate system of the image acquisition device. Here m is the direction vector of the straight line connecting the origin of the coordinate system of the image acquisition device and the foot point of the target object, and the robot is the origin of the coordinate system of the image acquisition device.
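A sketch of this sub-step under the same assumed K, unit normal n and height H as above: the first position Q is back-projected to the ray m = K^{-1} Q, and the ray is intersected with the ground plane to obtain the second position.

```python
import numpy as np

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([0.0, 0.9662, -0.2577])  # unit ground normal from the previous sketch
d = 1.2                               # plane offset = camera height H

Q = np.array([350.0, 430.0, 1.0])     # first position (homogeneous pixel); example

m = np.linalg.inv(K) @ Q              # direction vector m = K^{-1} Q
s = d / (n @ m)                       # ray-plane intersection: n^T (s*m) = d
X = s * m                             # second position in the camera frame
print("second position:", np.round(X, 2))
```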
The lane line detection algorithm is implemented by a first neural network model. After training, the first neural network model can identify the boundary line between a wall and the ground in an image and can determine the line equation of that boundary line in a preset coordinate system, the preset coordinate system including the coordinate system of the target image.
After training, the first neural network model can identify the boundary line between a wall and the ground in an image and determine the line equation of that boundary line in the preset coordinate system. This can be understood as follows: through training, the model learns and stores the correspondence between an image and the wall-ground boundary line in that image, as well as the correspondence between an image and the line equation of that boundary line in the preset coordinate system. One way for the model to determine the line equation is to determine the coordinates of any two points on the boundary line and then compute the line equation from those two points' coordinates.
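A minimal sketch of the two-point construction mentioned above, with invented point coordinates: in homogeneous coordinates, the line through two image points is their cross product.

```python
import numpy as np

p1 = np.array([100.0, 290.0, 1.0])  # two points on a detected boundary line
p2 = np.array([300.0, 390.0, 1.0])  # (homogeneous pixel coordinates; examples)

line = np.cross(p1, p2)             # coefficients (a, b, c): a*x + b*y + c = 0
a, b, c = line
assert abs(a * p1[0] + b * p1[1] + c) < 1e-9   # p1 satisfies the equation
assert abs(a * p2[0] + b * p2[1] + c) < 1e-9   # p2 satisfies the equation
print(f"boundary line: {a:.0f}*x + {b:.0f}*y + {c:.0f} = 0")
```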
The target detection algorithm is implemented by a second neural network model. After training, the second neural network model can determine the position of an object on an image; that is, through training it learns and stores the correspondence between an image and the position of the object in that image. The position of the object in the image may be the coordinates of the object on the image (in a coordinate system established separately for the image).
The first position is related to the internal reference matrix through the camera projection model, so the direction vector of the target object can be calculated from the first position and the internal reference matrix.
The neural network models in this disclosure may be any commonly used neural network model, and deep learning is the preferred training method.
In step S205, determining a distance between the robot and the target object based on the second position includes: determining a distance between the robot and the target object by using a point-to-point distance formula based on the origin and the second position, wherein the robot is the origin of a coordinate system of the image acquisition device.
The origin and the second position have respective coordinates, and then the distance between the robot and the target object is determined using a point-to-point distance formula.
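Since the robot is taken as the origin of the camera coordinate system, the point-to-point distance formula reduces to the Euclidean norm of the second position; a short sketch continuing the numbers above:

```python
import numpy as np

X = np.array([1.24, 7.87, 24.86])          # second position from the previous sketch
origin = np.zeros(3)                       # the robot, at the camera origin
distance = np.linalg.norm(X - origin)      # sqrt(dx^2 + dy^2 + dz^2)
print(f"robot-to-target distance: {distance:.2f} m")
```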
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 4 is a schematic diagram of a robot ranging device provided in an embodiment of the present disclosure. As shown in fig. 4, the robot ranging apparatus includes:
an acquiring module 401 configured to acquire, by an image acquiring device provided on the robot, a target image including a target object and an environment around the target object, where the environment around the target object includes: walls on both sides of the target object and the ground below the target object;
a detection module 402, configured to detect a boundary line between each side wall and the ground in the target image by using a lane line detection algorithm, obtain a first straight line and a second straight line, and determine a straight line equation of the first straight line and the second straight line in a coordinate system of the target image;
a first determining module 403 configured to determine a first position of a target object on a target image using a target detection algorithm;
a second determining module 404 configured to determine a second position of the target object in the coordinate system of the image acquisition device based on the line equations of the first and second lines and the first position;
a third determination module 405 configured to determine a distance of the robot from the target object based on the second position.
In the prior art, in order to improve the ranging accuracy of a robot, devices such as high-precision lidar, depth cameras, and infrared rangefinders are often mounted on the robot. These devices provide higher-accuracy ranging results but also increase cost. The present disclosure therefore aims to use a low-cost ordinary color camera, reducing hardware cost while guaranteeing the robot's ranging accuracy through an algorithm.
The image acquisition device may be a low-cost device such as an ordinary color camera. The target image is an image of the target object and its surrounding environment. Each side wall meets the ground along a boundary line, so there are two boundary lines between the two walls and the ground, recorded as the first straight line and the second straight line respectively. Since the target image is captured by the image acquisition device, that is, the present disclosure computes from the viewpoint of the image acquisition device, the equations of the first and second straight lines must first be determined in the coordinate system of the target image; the second position of the target object in the coordinate system of the image acquisition device is then determined, and from it the distance between the robot and the target object.
According to the technical solution provided by the embodiments of the present disclosure, a target image containing a target object and the environment surrounding the target object is acquired through an image acquisition device arranged on the robot, where that environment includes the walls on both sides of the target object and the ground below the target object; the boundary line between each side wall and the ground in the target image is detected using a lane line detection algorithm to obtain a first straight line and a second straight line, and the line equations of the first and second straight lines are determined in the coordinate system of the target image; a first position of the target object on the target image is determined using a target detection algorithm; a second position of the target object in the coordinate system of the image acquisition device is determined based on the line equations of the first and second straight lines and the first position; and the distance between the robot and the target object is determined based on the second position. With these technical means, the prior-art problem that, in robot ranging, the cost of the ranging device mounted on the robot and the ranging accuracy of the robot cannot both be satisfied can be solved, preserving ranging accuracy while reducing ranging cost.
Fig. 3 is a schematic diagram of a target image according to an embodiment of the present disclosure. As shown in fig. 3, the target image includes a target object and a surrounding environment of the target object; the surrounding environment of the target object comprises walls on two sides of the target object and the ground below the target object; two lines of intersection exist between the walls and the ground on the two sides, and are respectively marked as a first straight line l1 and a second straight line l2.
Optionally, the second determining module 404 is further configured to calculate the coordinates of the vanishing point of the first straight line and the second straight line based on their line equations; calculate the line equation of the vanishing line through the vanishing point based on the coordinates of the vanishing point; and determine the second position of the target object in the coordinate system of the image acquisition device based on the line equation of the vanishing line and the first position.
Optionally, the second determining module 404 is further configured to determine the line equations of the first and second straight lines in the coordinate system of the target image:

l_1 = (a_1, b_1, c_1)^T, i.e. a_1 x + b_1 y + c_1 = 0,

l_2 = (a_2, b_2, c_2)^T, i.e. a_2 x + b_2 y + c_2 = 0,

where x and y are the two coordinate axes of the coordinate system of the target image, T denotes transposition, and a_1, b_1, c_1, a_2, b_2, c_2 are the line-equation coefficients. The cross product

P = l_1 × l_2

is a vector perpendicular to the first line and the second line; read as a homogeneous point, it corresponds to a point at infinity in the plane of the ground, and after normalization it gives the vanishing point

P = (P_x, P_y, 1)^T.

The vanishing line is a straight line passing through the vanishing point and can be expressed as

y = P_y

or, in homogeneous form,

l_v = (0, 1, -P_y)^T,

where P_y is the y coordinate of the point P.
Vanishing points and vanishing lines are commonly used in the field of camera calibration; as terms of art they are not described in detail here. In brief, the vanishing point is the point where the first and second straight lines visually intersect (in fig. 3 the two lines show a tendency to intersect), and the vanishing line is a line passing through the vanishing point.
Through the above processing, the coordinate system of the target image and the coordinate system of the image acquisition device can be related, so that the second position of the target object in the coordinate system of the image acquisition device can be determined based on the line equation of the vanishing line in the coordinate system of the target image and the first position.
Optionally, the second determining module 404 is further configured to acquire the internal reference matrix of the image acquisition device; calculate the normal vector perpendicular to the ground based on the internal reference matrix and the line equation of the vanishing line; determine the plane equation of the ground in the coordinate system of the image acquisition device using the normal vector; and determine the second position of the target object in the coordinate system of the image acquisition device based on the plane equation of the ground and the first position.
The normal vector perpendicular to the ground is

n = K^T l_v,

where K is the internal reference matrix of the image acquisition device. The plane equation of the ground in the coordinate system of the image acquisition device can be expressed as

n^T (x, y, z)^T = d,

where x, y and z are the three coordinate axes of the coordinate system of the image acquisition device (these x and y are to be distinguished from the x and y above: the two pairs belong to different coordinate systems, x being the transverse axis and y the longitudinal axis of each). The height of the image acquisition device above the ground is known and denoted H; the final derivation fixes the plane offset by that height,

n^T (x, y, z)^T = H ||n||

(equivalently, n^T (x, y, z)^T = H once n is normalized to unit length).
Optionally, the second determining module 404 is further configured to calculate a direction vector of the target object based on the first position and the internal reference matrix, and to determine the second position of the target object in the coordinate system of the image acquisition device based on the plane equation of the ground and the direction vector.
A first position of the target object on the target image is determined as Q using a target detection algorithm.
From the projection relation

Q ∼ K m,

it can be derived that

m = K^{-1} Q,

where m is the direction vector. The second position of the target object in the coordinate system of the image acquisition device is determined based on the plane equation of the ground and this direction vector: the intersection point of the straight line along m with the ground is the second position of the target object in the coordinate system of the image acquisition device. Here m is the direction vector of the straight line connecting the origin of the coordinate system of the image acquisition device and the foot point of the target object, and the robot is the origin of the coordinate system of the image acquisition device.
The lane line detection algorithm is implemented by a first neural network model. After training, the first neural network model can identify the boundary line between a wall and the ground in an image and can determine the line equation of that boundary line in a preset coordinate system, the preset coordinate system including the coordinate system of the target image.
After training, the first neural network model can identify the boundary line between a wall and the ground in an image and determine the line equation of that boundary line in the preset coordinate system. This can be understood as follows: through training, the model learns and stores the correspondence between an image and the wall-ground boundary line in that image, as well as the correspondence between an image and the line equation of that boundary line in the preset coordinate system. One way for the model to determine the line equation is to determine the coordinates of any two points on the boundary line and then compute the line equation from those two points' coordinates.
The target detection algorithm is implemented by a second neural network model. After training, the second neural network model can determine the position of an object on an image; that is, through training it learns and stores the correspondence between an image and the position of the object in that image. The position of the object in the image may be the coordinates of the object on the image (in a coordinate system established separately for the image).
The first position is related to the internal reference matrix through the camera projection model, so the direction vector of the target object can be calculated from the first position and the internal reference matrix.
The neural network models in this disclosure may be any commonly used neural network model, and deep learning is the preferred training method.
Optionally, the third determining module 405 is further configured to determine the distance of the robot from the target object using a point-to-point distance formula based on the origin and the second position, wherein the robot is the origin of the coordinate system of the image acquisition device.
The origin and the second position have respective coordinates, and then the distance between the robot and the target object is determined using a point-to-point distance formula.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 5 is a schematic diagram of an electronic device 5 provided in an embodiment of the present disclosure. As shown in fig. 5, the electronic apparatus 5 of this embodiment includes: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and operable on the processor 501. The steps in the various method embodiments described above are implemented when the processor 501 executes the computer program 503. Alternatively, the processor 501 implements the functions of each module/unit in each apparatus embodiment described above when executing the computer program 503.
Illustratively, the computer program 503 may be partitioned into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to carry out the present disclosure. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 503 in the electronic device 5.
The electronic device 5 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another electronic device. The electronic device 5 may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of the electronic device 5 and does not constitute a limitation; it may include more or fewer components than shown, combine certain components, or use different components; for example, the electronic device may also include input/output devices, network access devices, buses, and the like.
The Processor 501 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 502 may be an internal storage unit of the electronic device 5, for example, a hard disk or memory of the electronic device 5. The memory 502 may also be an external storage device of the electronic device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 5. Further, the memory 502 may include both an internal storage unit and an external storage device of the electronic device 5. The memory 502 is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the apparatus/electronic device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, the present disclosure may realize all or part of the flow of the methods of the above embodiments through a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the steps of the above method embodiments may be realized. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be subject to appropriate additions or deletions according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.

Claims (9)

1. A robot ranging method, comprising:
acquiring a target image containing a target object and a surrounding environment of the target object through image acquisition equipment arranged on a robot, wherein the surrounding environment of the target object comprises: walls on both sides of the target object and a ground below the target object;
detecting a boundary line between each side wall and the ground in the target image by using a lane line detection algorithm to obtain a first straight line and a second straight line, and determining a straight line equation of the first straight line and the second straight line in a coordinate system of the target image;
determining a first position of the target object on the target image using a target detection algorithm;
determining a second position of the target object in a coordinate system of the image acquisition device based on a line equation of the first line and the second line and the first position;
determining a distance of the robot from the target object based on the second position;
wherein determining a second position of the target object in the coordinate system of the image acquisition device based on the line equations of the first straight line and the second straight line and the first position comprises: calculating coordinates of a vanishing point of the first straight line and the second straight line based on the straight line equations of the first straight line and the second straight line; calculating a linear equation of a vanishing line of the vanishing point based on the coordinates of the vanishing point, wherein the linear equation of the vanishing line is y = P_y, P_y being the ordinate of the vanishing point; and determining a second position of the target object in a coordinate system of the image acquisition device based on the linear equation of the vanishing line and the first position.
2. The method of claim 1, wherein determining the second position of the target object in the coordinate system of the image acquisition device based on the linear equation of the vanishing line and the first position comprises:
acquiring an internal reference matrix of the image acquisition equipment;
calculating a normal vector perpendicular to the ground based on the internal reference matrix and a linear equation of the vanishing line;
determining a plane equation of the ground in a coordinate system of the image acquisition equipment by using the normal vector;
determining a second position of the target object in a coordinate system of the image acquisition device based on the planar equation of the ground and the first position.
3. The method of claim 2, wherein determining the second position of the target object in the coordinate system of the image acquisition device based on the plane equation of the ground and the first position comprises:
calculating a direction vector of the target object based on the first position and the intrinsic parameter matrix;
determining the second position of the target object in the coordinate system of the image acquisition device based on the plane equation of the ground and the direction vector.
4. The method of claim 1, wherein the lane line detection algorithm is implemented by a first neural network model that has been trained to recognize a boundary line between a wall and the ground in an image and to determine a straight line equation of the boundary line in a predetermined coordinate system, the predetermined coordinate system including the coordinate system of the target image.
5. The method of claim 1, wherein the target detection algorithm is implemented by a second neural network model that has been trained to determine a position of an object in an image.
6. The method of claim 1, wherein the determining the distance of the robot from the target object based on the second position comprises:
determining the distance between the robot and the target object by using a point-to-point distance formula based on an origin and the second position, wherein the robot is located at the origin of the coordinate system of the image acquisition device.
7. A robot ranging device, comprising:
an acquisition module configured to acquire a target image including a target object and an environment around the target object by an image acquisition device provided on a robot, wherein the environment around the target object includes: walls on both sides of the target object and a ground below the target object;
a detection module configured to detect a boundary line between each side wall and the ground in the target image by using a lane line detection algorithm to obtain a first straight line and a second straight line, and to determine straight line equations of the first straight line and the second straight line in a coordinate system of the target image;
a first determination module configured to determine a first position of the target object on the target image using a target detection algorithm;
a second determination module configured to determine a second position of the target object in a coordinate system of the image acquisition device based on the straight line equations of the first straight line and the second straight line and the first position;
a third determination module configured to determine a distance of the robot from the target object based on the second position;
wherein the second determination module is further configured to: calculate coordinates of a vanishing point of the first straight line and the second straight line based on the straight line equations of the first straight line and the second straight line; calculate a straight line equation of the vanishing line through the vanishing point based on the coordinates of the vanishing point, wherein the straight line equation of the vanishing line is y = P_y, where P_y is the ordinate of the vanishing point; and determine the second position of the target object in the coordinate system of the image acquisition device based on the straight line equation of the vanishing line and the first position.
8. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
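Editor's note: the claims above recite the geometry in prose only. As a reading aid for claim 1, the following minimal Python/NumPy sketch shows one way the vanishing point of the two wall-ground boundary lines could be computed. It assumes the lane line detector returns each boundary line in slope-intercept form y = kx + b in image coordinates, which the claims do not specify; the function names and values are illustrative, not part of the patent.

```python
import numpy as np

def vanishing_point(k1, b1, k2, b2):
    """Intersect the boundary lines y = k1*x + b1 and y = k2*x + b2.

    The walls on both sides of a corridor are parallel in the world, so
    their wall-ground boundary lines meet at a common vanishing point in
    the image (assumes k1 != k2, i.e. the image lines are not parallel).
    """
    x = (b2 - b1) / (k1 - k2)
    y = k1 * x + b1
    return np.array([x, y])

# The vanishing line of the ground is then the horizontal image line
# y = P_y through that point, as recited in claim 1.
P = vanishing_point(k1=0.35, b1=120.0, k2=-0.40, b2=610.0)
P_y = P[1]  # ordinate of the vanishing point
```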
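Claims 2, 3 and 6 then turn that vanishing line into a metric position. The hedged sketch below follows common monocular ground-plane ranging practice: the ground normal in camera coordinates is taken as n ∝ Kᵀl (the standard projective-geometry relation between a plane's normal and its horizon line), the target pixel is back-projected along d ∝ K⁻¹p, and the viewing ray is intersected with the ground plane. The patent does not state how the scale of the plane equation is fixed; the sketch assumes a known camera mounting height, and all names and values are illustrative.

```python
import numpy as np

def range_target(K, P_y, pixel, cam_height):
    """Estimate a ground target's camera-frame position and distance.

    K          -- 3x3 intrinsic parameter matrix of the camera
    P_y        -- ordinate of the vanishing point (vanishing line: y = P_y)
    pixel      -- (u, v) image position where the target touches the ground
    cam_height -- camera height above the ground in metres (assumed known;
                  the claims leave the plane equation's scale implicit)
    """
    # Vanishing line in homogeneous image coordinates: 0*u + 1*v - P_y = 0.
    l = np.array([0.0, 1.0, -P_y])

    # Ground-plane normal in camera coordinates, n ∝ K^T l -- the
    # "normal vector perpendicular to the ground" of claim 2.
    n = K.T @ l
    n /= np.linalg.norm(n)

    # Direction vector of the target (claim 3): back-project the pixel.
    p = np.array([pixel[0], pixel[1], 1.0])
    d = np.linalg.inv(K) @ p

    # Intersect the viewing ray t*d with the ground plane |n . X| = cam_height.
    t = cam_height / abs(n @ d)
    X = t * d  # second position: the target in the camera coordinate system

    # Claim 6: point-to-point distance from the origin (the robot/camera).
    return X, float(np.linalg.norm(X))

# Illustrative use with an assumed 640x480 pinhole camera:
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
position, distance = range_target(K, P_y=240.0, pixel=(360.0, 400.0), cam_height=0.5)
```

Claim 6's distance is then simply the Euclidean norm of the recovered position, since the robot (camera) sits at the origin of its own coordinate system.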
CN202210941707.7A 2022-08-08 2022-08-08 Robot ranging method and device Active CN114998426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210941707.7A CN114998426B (en) 2022-08-08 2022-08-08 Robot ranging method and device

Publications (2)

Publication Number Publication Date
CN114998426A (en) 2022-09-02
CN114998426B (en) 2022-11-04

Family

ID=83023015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210941707.7A Active CN114998426B (en) 2022-08-08 2022-08-08 Robot ranging method and device

Country Status (1)

Country Link
CN (1) CN114998426B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6963661B1 (en) * 1999-09-09 2005-11-08 Kabushiki Kaisha Toshiba Obstacle detection system and method therefor
WO2022160266A1 (en) * 2021-01-29 2022-08-04 深圳市锐明技术股份有限公司 Vehicle-mounted camera calibration method and apparatus, and terminal device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927283A (en) * 2021-01-29 2021-06-08 成都安智杰科技有限公司 Distance measuring method and device, storage medium and electronic equipment
CN113111707A (en) * 2021-03-07 2021-07-13 上海赛可出行科技服务有限公司 Preceding vehicle detection and distance measurement method based on convolutional neural network
CN113469133A (en) * 2021-07-26 2021-10-01 奥特酷智能科技(南京)有限公司 Deep learning-based lane line detection method
CN113866783A (en) * 2021-09-10 2021-12-31 杭州鸿泉物联网技术股份有限公司 Vehicle distance measurement method and system
CN114565510A (en) * 2022-02-23 2022-05-31 山东新一代信息产业技术研究院有限公司 Lane line distance detection method, device, equipment and medium

Also Published As

Publication number Publication date
CN114998426A (en) 2022-09-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant