CN113269824B - Image-based distance determination method and system - Google Patents

Image-based distance determination method and system

Info

Publication number
CN113269824B
CN113269824B
Authority
CN
China
Prior art keywords
image
target
real
coordinate information
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110592461.2A
Other languages
Chinese (zh)
Other versions
CN113269824A (en)
Inventor
刘旭健
宋承继
戚娜
梁菲菲
王芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Polytechnic Institute
Original Assignee
Shaanxi Polytechnic Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Polytechnic Institute
Priority to CN202110592461.2A
Publication of CN113269824A
Application granted
Publication of CN113269824B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides an image-based distance determination method and system. A fixed camera mechanism photographs a target area in real time, so that the earlier camera calibration remains valid over the long term and the accuracy of the distance measurement can be ensured. From the real-time image, the region image of the target area and the target image of the target object are determined, together with the region image coordinate information of the region image in the real-time image and the target image coordinate information of the target image in the real-time image. The target image coordinate information is corrected in real time using the region image coordinate information and preset image coordinate information of the target area in a preset image, and the distance between the target object and a reference position is then determined with a preset conversion relation. Compared with other approaches, the method has clear advantages in processing flow and in the amount of information to be processed, and can obtain and output the distance measurement of the target object in a short time, so that operating efficiency is improved, application scenarios with high real-time requirements can be met, and the method has high robustness.

Description

Image-based distance determination method and system
Technical Field
The application relates to the technical field of image processing, in particular to a distance determining method and system based on images.
Background
With the rapid development of image processing technology, its fields of application continue to expand. Current image processing technology offers various ways to determine the distance of an object in an image, for example distance determination based on binocular vision, depth estimation based on structured light, depth estimation based on laser speckle, and even distance determination for objects in an image based on neural network models.
However, these approaches are relatively expensive to implement and must process a large amount of information, so their real-time performance is poor. For application scenarios with strict real-time requirements, those requirements can only be met by upgrading the hardware, which makes the cost prohibitive and hinders popularization and application.
Disclosure of Invention
The embodiments of the application aim to provide an image-based distance determination method and system which, using monocular vision, can meet the real-time requirements of an application scenario at lower cost while still satisfying the required distance measurement precision.
In order to achieve the above object, the embodiments of the present application are realized as follows:
in a first aspect, an embodiment of the present application provides an image-based distance determining method, where an image-based distance determining system includes an electronic device and an image capturing mechanism, where a spatial position of the image capturing mechanism is fixed, and a capturing range covers a target area, where the target area is one of moving ranges of a target object, and the method is applied to the electronic device, and includes: acquiring a real-time image containing the target area, wherein the target object is positioned in the target area; determining an area image of the target area and a target image of the target object according to the real-time image, determining area image coordinate information of the area image in the real-time image, and determining target image coordinate information of the target image in the real-time image; correcting the target image coordinate information in real time according to the region image coordinate information and preset image coordinate information of a target region in a preset image; and determining the distance information of the target object and the reference position according to the corrected target image coordinate information and a preset conversion relation, wherein the preset conversion relation is used for converting coordinates in an image coordinate system into coordinates in a world coordinate system.
In this embodiment of the application, a fixedly mounted camera mechanism whose shooting range covers the target area (one of the moving ranges of the target object) photographs the target area in real time. Parameters such as the shooting focal length and shooting angle of the camera mechanism can therefore all be fixed, so the earlier camera calibration remains applicable over the long term and the accuracy of the distance measurement can be guaranteed. The real-time image is used to determine the region image of the target area and the target image of the target object, together with the region image coordinate information of the region image in the real-time image and the target image coordinate information of the target image in the real-time image. The region image coordinate information and the preset image coordinate information of the target area in a preset image can then be used to correct the target image coordinate information in real time, after which the distance between the target object and the reference position can be determined with the preset conversion relation. In this way, the camera mechanism and a specific actuator can be arranged separately (for example, on a production line the actuator may be a mechanical gripper, and in a real-time target interception scenario it may be an interception device; this is not limited here). Compared with binocular vision and similar approaches, the method can obtain and output the distance measurement of the target object in a short time, which improves operating efficiency, meets application scenarios with higher real-time requirements, and provides stronger robustness. In addition, because the target image coordinate information is corrected in real time using the region image coordinate information and the preset image coordinate information of the target area in the preset image, the accuracy of the distance between the target object and the reference position can be ensured.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the determining, according to the real-time image, a region image of the target region and a target image of the target object includes: performing region detection on the real-time image, determining the outline of the target region, and determining the image in the outline of the target region as the region image; and carrying out target detection on the area image, determining the outline of the target object, and determining the image in the outline of the target object as the target image.
In this implementation, region detection is first performed on the real-time image to determine the outline of the target area (yielding the region image), and target detection is then performed on the region image to determine the outline of the target object (yielding the target image). This improves the efficiency of target object detection, because the outline of the target area is relatively fixed and easy to detect, which narrows the search range for the target object. The region image and the target image can therefore be determined quickly and accurately, ensuring both the operating efficiency of the overall scheme and its real-time performance.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the correcting, in real time, the target image coordinate information according to the region image coordinate information and preset image coordinate information of a target region in a preset image includes: matching the region image coordinate information with preset image coordinate information of a target region in a preset image, and determining an image coordinate difference between the region image coordinate information and the preset image coordinate information; and determining a correction focal length coefficient, an image translation amount and an image deflection amount according to the image coordinate difference so as to correct the target image coordinate information based on the correction focal length coefficient, the image translation amount and the image deflection amount, wherein the correction focal length coefficient is used for correcting the zoom difference between the area outline of the target area in the real-time image and the area outline in the preset image, and the image translation amount and the image deflection amount are used for correcting the image position difference between the area outline of the target area in the real-time image and the area outline in the preset image.
In this implementation manner, by matching the region image coordinate information with the preset image coordinate information of the target region in the preset image, an image coordinate difference between the region image coordinate information and the preset image coordinate information is determined, and a correction focal length coefficient (for correcting a scaling difference between a region contour of the target region in the real-time image and a region contour of the target region in the preset image), an image translation amount, and an image deflection amount (for correcting an image position difference between the region contour of the target region in the real-time image and the region contour of the target region in the preset image) are further determined. Therefore, the coordinate information of the target image can be comprehensively corrected in real time, and the accuracy of distance information measurement based on monocular vision is guaranteed.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the preset conversion relationship is determined by: calibrating the fixedly arranged camera shooting mechanism to obtain camera internal parameters and camera external parameters of the camera shooting mechanism; determining a conversion relation between an image coordinate system of the image pickup mechanism and a world coordinate system based on the camera internal parameters and the camera external parameters; acquiring actual world coordinate parameters of the target area in the world coordinate system, acquiring calibration image coordinate information of the target area in a calibration image, and converting the calibration image coordinate information into theoretical world coordinate parameters in the world coordinate system using the conversion relation; and performing parameter fitting correction on the conversion relation according to the plurality of groups of world coordinate parameter pairs obtained in this way to obtain the preset conversion relation, where one group of world coordinate parameter pairs includes one theoretical world coordinate parameter and the corresponding actual world coordinate parameter.
In this implementation, on the basis of the traditional monocular vision calibration process (which yields the conversion relation), several groups of world coordinate parameter pairs (each pairing a theoretical world coordinate parameter, obtained by converting with the conversion relation, and the corresponding actual world coordinate parameter) are additionally used, and parameter fitting correction is applied to the conversion relation to obtain the preset conversion relation. The measured distance information is thereby made more accurate and the measured values more stable and reliable, which addresses the insufficient robustness of existing monocular vision distance measurement approaches.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the method further includes: according to a plurality of calibration images used when the camera shooting mechanism is calibrated, determining coordinate information of a calibration image where a target area is located in each calibration image; and solving an average value of the coordinate information of the target area in each calibration image to obtain the preset image coordinate information.
In this implementation, the coordinate information of the target area in each calibration image is determined from the plurality of calibration images used when the camera shooting mechanism is calibrated, and the coordinate information of the target area across the calibration images is then averaged to obtain the preset image coordinate information. The averaging spreads errors across images, so the accuracy and stability of the basis used for real-time correction can be ensured.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the image-based distance determining system further includes an actuator, where the actuator is disposed at the reference position, and the method further includes: judging whether the distance information is in a preset range or not; when the distance information is in a preset range, a control instruction is generated based on the distance information and is sent to the executing mechanism, so that the executing mechanism executes actions based on the control instruction.
In this implementation, when the distance information is within the preset range, a control instruction is generated based on the distance information and sent to the actuator, so that the actuator performs an action based on the control instruction. A suitable actuator can therefore be configured for the actual application scenario to act on the distance information, for example grabbing a moving workpiece or intercepting a moving target.
In a second aspect, an embodiment of the present application provides an image-based distance determining system, including an electronic device and an image capturing mechanism, where a spatial position of the image capturing mechanism is fixed, and a capturing range covers a target area, and the image capturing mechanism is configured to capture a real-time image including the target area, where the target area is one of moving ranges of a target object; the electronic device is configured to perform target detection on the real-time image, determine whether a target object is included in a target area of the real-time image, and execute the image-based distance determining method according to the first aspect or any one of possible implementation manners of the first aspect if the target object is included in the real-time image.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an image-based distance determining system according to an embodiment of the present application.
Fig. 2 is a flowchart of an image-based distance determining method according to an embodiment of the present application.
Icon: 10-an image-based distance determination system; 11-a camera mechanism; 12-an electronic device; 13-actuator.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of an image-based distance determining system 10 according to an embodiment of the present application.
In the present embodiment, the image-based distance determination system 10 may include an electronic device 12 and an image capturing mechanism 11.
In the present embodiment, the spatial position of the imaging mechanism 11 (a monocular-vision imaging mechanism 11, i.e. visual processing is performed on images captured by a single camera) is fixed and its capturing range covers the target area. As seen by the imaging mechanism 11, the target area may be a regular area, for example a rectangular, circular, strip-shaped, or triangular area, or an irregular area, for example one bounded by a combination of several curves, several straight lines, or curves and straight lines; for convenience of explanation, a rectangular target area is taken as the example, and the present embodiment is not limited thereto. The mounting position of the imaging mechanism 11 may, depending on the actual application, be directly above the target area (above its center), obliquely above it, below it, to its side, or elsewhere, and is likewise not limited. In addition, for convenience of explanation, the imaging mechanism 11 in this embodiment captures real-time images of the target area at a fixed spatial position, a fixed angle, and a fixed focal length; in other possible implementations the spatial position, capturing angle, focal length, and the like may change, which is not limited here.
In this embodiment, the target object may be homogeneous (e.g., a workpiece of a known type) or non-homogeneous (e.g., an interception target, where it cannot be known in advance what kind of object will need to be intercepted); this is not limited here.
In the present embodiment, the electronic device 12 is connected to the imaging mechanism 11 (may be connected by wire or wirelessly), and is not limited thereto. The electronic device 12 may be a terminal (e.g., a personal computer, a smart phone, etc.) or a server (e.g., a cloud server, a web server, a server cluster, etc.), which is not limited herein.
Of course, in some other possible application scenarios, the image-based distance determination system 10 may also include an actuator 13, where the actuator 13 is coupled to the electronic device 12. Since the structure, function, executed action, etc. of the actuator 13 have great differences in different scenes, a detailed description is omitted in this embodiment. The actuator 13 may be provided in the target area or may be provided on the periphery of the target area, and is not limited thereto.
To implement distance determination based on images (specifically, distance determination to a target object based on monocular vision), electronic device 12 may perform an image-based distance determination method.
Referring to fig. 2, fig. 2 is a flowchart of an image-based distance determining method according to an embodiment of the present application. The image-based distance determining method may include step S10, step S20, step S30, and step S40.
In the present embodiment, the image capturing mechanism 11 may capture a real-time image including the target area and transmit it to the electronic apparatus 12.
After receiving the real-time image transmitted from the imaging means 11, the electronic device 12 performs target detection on the real-time image to determine whether or not the target area of the real-time image contains a target object.
For example, the electronic device 12 may perform edge detection on the real-time image to obtain the contour of the target area in the real-time image, and compare the contours found inside the target-area contour with a preset contour to determine whether a new contour has appeared, thereby determining whether a target object is present in the target area (this approach works both for homogeneous target objects and for target objects of varying kinds). Of course, for homogeneous target objects, the contours in the real-time image may also be extracted directly and each contour matched against a preset target-object contour (the contour of the target object) to determine whether the target object's contour is present and, if so, whether the matched contour lies within the contour of the target area. With this kind of check, when the target area does not contain the target object, the steps that invoke the target-area contour can be reduced, further improving operating efficiency.
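As a concrete illustration of the contour-comparison check just described, the sketch below uses OpenCV. It is only an assumed implementation: the patent does not prescribe particular functions, and the Canny thresholds, the shape-matching threshold, and the helper name contains_target are illustrative choices.

```python
# Illustrative sketch (not from the patent): check whether a contour resembling a
# preset target-object contour appears inside the target-area contour.
import cv2
import numpy as np

def contains_target(frame_gray, region_contour, preset_target_contour,
                    match_threshold=0.15):
    """Return True if the target area of the real-time image contains a contour
    that matches the preset target-object contour."""
    edges = cv2.Canny(frame_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        # Shape similarity between the candidate and the preset object contour.
        score = cv2.matchShapes(c, preset_target_contour,
                                cv2.CONTOURS_MATCH_I1, 0.0)
        if score < match_threshold:
            # Keep the match only if it lies inside the target-area contour.
            cx, cy = c.reshape(-1, 2).mean(axis=0)
            inside = cv2.pointPolygonTest(region_contour,
                                          (float(cx), float(cy)), False)
            if inside >= 0:
                return True
    return False
```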
Upon determining that the real-time image contains the target object, the electronic device 12 may execute step S10.
Step S10: acquiring a real-time image containing the target area, where the target object is located in the target area and the real-time image is captured by the imaging mechanism.
Since the electronic device 12 has already received the real-time image before performing contour detection, acquiring a real-time image containing the target area captured by the imaging mechanism 11 may here be understood as taking the previously received real-time image and confirming that it is a real-time image containing the target area (with the target object located in the target area). Of course, in other possible implementations, auxiliary sensing components such as infrared sensors or ultrasonic detectors may be provided; after they determine that the target object has entered the target area, the imaging mechanism 11 is controlled to capture an image, and the electronic device 12 then acquires the real-time image by receiving the image captured by the imaging mechanism 11. This is not limited here.
After acquiring the real-time image containing the target area, the electronic device 12 may perform step S20.
Step S20: determining the region image of the target area and the target image of the target object from the real-time image, determining the region image coordinate information of the region image in the real-time image, and determining the target image coordinate information of the target image in the real-time image.
In this embodiment, the electronic device 12 may perform region detection on the real-time image, determine the outline of the target region, and determine the image within the outline of the target region as the region image. The electronic device 12 may then perform target detection on the area image, determine the outline of the target object, and determine the image within the outline of the target object as the target image.
Performing region detection on the real-time image to determine the outline of the target area (yielding the region image) and then performing target detection on the region image to determine the outline of the target object (yielding the target image) improves the efficiency of target object detection, because the outline of the target area is relatively fixed and easy to detect, which narrows the search range for the target object. The region image and the target image can therefore be determined quickly and accurately, ensuring both the operating efficiency of the overall scheme and its real-time performance.
After determining the region image of the target region and the target image of the target object, the electronic device 12 may determine the region image coordinate information in which the region image is located in the real-time image, and the target image coordinate information in which the target image is located in the real-time image.
For example, the electronic device 12 may determine, as the region image coordinate information, the coordinates of each pixel point on the contour of the region image in the real-time image according to the position of the contour in the real-time image. Similarly, the electronic device 12 may determine, according to the position of the outline of the target image in the real-time image, the coordinates of each pixel point on the outline in the real-time image, as the coordinate information of the target image.
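A minimal sketch of collecting those contour-pixel coordinates is given below, assuming binary masks of the target area and of the target object have already been obtained from the detection steps above; the helper name and the mask inputs are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch: contour-pixel coordinates used as the "region image
# coordinate information" and "target image coordinate information".
import cv2
import numpy as np

def contour_coordinates(mask):
    """Return the (x, y) pixel coordinates of the largest external contour in a
    binary mask as an (N, 2) array in image coordinates."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2).astype(np.float64)

# region_coords = contour_coordinates(region_mask)   # region image coordinate information
# target_coords = contour_coordinates(target_mask)   # target image coordinate information
```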
After determining the region image coordinate information and the target image coordinate information, the electronic device 12 may perform step S30.
Step S30: correcting the target image coordinate information in real time according to the region image coordinate information and the preset image coordinate information of the target area in the preset image.
In this embodiment, the electronic device 12 may correct the target image coordinate information in real time according to the region image coordinate information and the preset image coordinate information of the target area in the preset image. The preset image coordinate information may be understood as a set of relatively fixed, predetermined data.
For example, the electronic device 12 may match the region image coordinate information with preset image coordinate information of the target region in the preset image, and determine an image coordinate difference between the region image coordinate information and the preset image coordinate information.
The electronic device 12 may then determine a correction focal length coefficient, an image translation amount, and an image deflection amount from the image coordinate difference, and correct the target image coordinate information based on the correction focal length coefficient, the image translation amount, and the image deflection amount.
The correction focal length coefficient is used to correct the zoom difference between the outline of the target area in the real-time image and its outline in the preset image. That is, the image coordinate difference may indicate that the target-area outline in the real-time image is scaled relative to the outline in the preset image (the overall size differs). This may be because the focal length of the imaging mechanism 11 has changed (which spreads the target-area outline in the real-time image and thus enlarges it), or because the mounting height of the imaging mechanism 11 has changed, altering the overall size of the target-area outline. Applying the correction focal length coefficient keeps the target-area outline in the real-time image at the same size as the outline in the preset image.
The image translation amount and the image deflection amount are used to correct the image position difference between the outline of the target area in the real-time image and its outline in the preset image. That is, the image coordinate difference may indicate that the target-area outline in the real-time image has an angular deviation and/or a positional deviation relative to the outline in the preset image, possibly because the shooting angle of the imaging mechanism 11 has changed, altering the angle and/or position of the target area in the real-time image.
By matching the region image coordinate information with the preset image coordinate information of the target region in the preset image, the difference of the region image coordinate information and the preset image coordinate information is determined, and a correction focal length coefficient (used for correcting the zoom difference between the region outline of the target region in the real-time image and the region outline in the preset image), an image translation amount and an image deflection amount (used for correcting the image position difference between the region outline of the target region in the real-time image and the region outline in the preset image) are further determined. Therefore, the coordinate information of the target image can be comprehensively corrected in real time, and the accuracy of distance information measurement based on monocular vision is guaranteed.
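One possible realization of this correction is sketched below: a similarity transform (uniform scale, rotation, translation) is estimated between matched points of the target-area outline in the real-time image and in the preset image, and then applied to the target coordinates. The use of cv2.estimateAffinePartial2D and of the four rectangle corners as the matched points are assumptions made for illustration; the patent does not name a specific estimator.

```python
# Illustrative sketch (assumption, not the patented algorithm): correct the target
# image coordinates with a similarity transform estimated from matched corners of
# the rectangular target area in the live and preset images.
import cv2
import numpy as np

def correct_target_coords(region_corners, preset_corners, target_coords):
    """region_corners, preset_corners: (4, 2) matched corner points of the target
    area in the real-time and preset images; target_coords: (N, 2) target contour
    points in the real-time image."""
    M, _ = cv2.estimateAffinePartial2D(region_corners.astype(np.float32),
                                       preset_corners.astype(np.float32))
    scale = float(np.hypot(M[0, 0], M[1, 0]))                     # correction focal length coefficient
    deflection = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))  # image deflection amount
    translation = M[:, 2].copy()                                  # image translation amount
    pts = target_coords.reshape(-1, 1, 2).astype(np.float32)
    corrected = cv2.transform(pts, M).reshape(-1, 2)              # coords in the preset frame
    return corrected, scale, deflection, translation
```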
In this embodiment, the preset conversion relation may be determined as follows: the fixedly arranged imaging mechanism 11 is calibrated to obtain the camera internal parameters and camera external parameters of the imaging mechanism 11.
The conversion relation between the image coordinate system of the imaging mechanism 11 and the world coordinate system (i.e., the conversion relation among the pixel coordinate system, the image coordinate system, the camera coordinate system, and the world coordinate system under monocular vision) is determined based on the camera internal parameters and camera external parameters. Since the monocular camera calibration process is well established, it is not described in detail here.
After determining the conversion relation between the image coordinate system and the world coordinate system, the electronic device 12 may obtain the actual world coordinate parameters of the target area in the world coordinate system, and obtain the calibration image coordinate information of the target area in the calibration image, and convert the calibration image coordinate information into the theoretical world coordinate parameters in the world coordinate system by using the conversion relation. The electronic device 12 may then perform a parameter fitting correction on the conversion relation according to the plurality of sets of world coordinate parameter pairs obtained in this way, to obtain the preset conversion relation. Here, a set of world coordinate parameter pairs includes one theoretical world coordinate parameter and the corresponding actual world coordinate parameter.
On the basis of the traditional monocular vision calibration process (which yields the conversion relation), several groups of world coordinate parameter pairs (each pairing a theoretical world coordinate parameter, obtained by converting with the conversion relation, and the corresponding actual world coordinate parameter) are additionally used, and parameter fitting correction is applied to the conversion relation to obtain the preset conversion relation. The measured distance information is thereby made more accurate and the measured values more stable and reliable, which addresses the insufficient robustness of existing monocular vision distance measurement approaches.
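The calibration-plus-fitting idea can be sketched as follows. The planar (Z = 0) homography used as the pixel-to-world conversion and the linear least-squares correction model are simplifying assumptions made for this illustration; the patent only states that a conversion relation is obtained from the camera internal and external parameters and is then corrected by fitting against measured world coordinate pairs.

```python
# Illustrative sketch (assumptions noted above): pixel->world conversion for a
# planar work surface plus a fitted correction from (theoretical, actual) pairs.
import cv2
import numpy as np

# 1) Standard monocular calibration (e.g. chessboard views) yields K, dist, R, t:
#    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)
#    R, _ = cv2.Rodrigues(rvecs[0]); t = tvecs[0]

def plane_homography(K, R, t):
    """World (X, Y, Z = 0) -> pixel homography for one calibrated view."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t.reshape(3)))

def pixel_to_world(H, uv):
    """Convert a pixel coordinate to theoretical world coordinates on the plane."""
    xyw = np.linalg.solve(H, np.array([uv[0], uv[1], 1.0]))
    return xyw[:2] / xyw[2]

def fit_correction(theoretical, actual):
    """Least-squares affine correction fitted from several world coordinate pairs.
    theoretical, actual: (N, 2) arrays with N >= 3."""
    A = np.hstack([theoretical, np.ones((len(theoretical), 1))])
    coeffs, *_ = np.linalg.lstsq(A, actual, rcond=None)   # shape (3, 2)
    return coeffs

def apply_correction(coeffs, world_xy):
    """Map a theoretical world coordinate to the corrected (preset) estimate."""
    return np.append(world_xy, 1.0) @ coeffs
```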
In this embodiment, the electronic device 12 may further determine, from the plurality of calibration images used when calibrating the imaging mechanism 11, the coordinate information of the target area in each calibration image, and then average the coordinate information of the target area across the calibration images to obtain the preset image coordinate information. The averaging spreads errors across images, so the accuracy and stability of the basis used for real-time correction can be ensured.
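A minimal sketch of that averaging step, assuming the target-area contour is sampled with the same number and ordering of points in every calibration image (an assumption made purely for the illustration):

```python
# Illustrative sketch: average the per-image target-area coordinates to obtain
# the preset image coordinate information.
import numpy as np

def preset_region_coordinates(per_image_coords):
    """per_image_coords: list of (N, 2) arrays, one per calibration image, with
    matching point order. Returns the element-wise mean, shape (N, 2)."""
    return np.mean(np.stack(per_image_coords, axis=0), axis=0)
```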
After correcting the target image coordinate information in real time, the electronic device 12 may perform step S40.
Step S40: determining the distance information between the target object and the reference position according to the corrected target image coordinate information and the preset conversion relation, where the preset conversion relation is used for converting coordinates in the image coordinate system into coordinates in the world coordinate system.
In this embodiment, the electronic device 12 may determine the distance information between the target object and the reference position according to the corrected target image coordinate information and the preset conversion relationship.
For example, the electronic device 12 may determine the coordinate information of the target object in the world coordinate system according to the corrected target image coordinate information and the preset conversion relation, and then calculate the distance between the coordinate information of the reference position (which may or may not be the origin of the world coordinate system) and the coordinate information of the target object in the world coordinate system, so as to obtain the distance information between the target object and the reference position.
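Continuing the earlier sketches (same assumptions: planar homography plus fitted correction, and the target contour centroid taken as the object's image position, none of which are prescribed by the patent), the distance could be computed as follows.

```python
# Illustrative sketch: corrected target image coordinates -> world coordinates ->
# distance to the reference position. pixel_to_world() and apply_correction()
# are the helpers from the calibration sketch above.
import numpy as np

def distance_to_reference(corrected_target_coords, H, coeffs, reference_world_xy):
    """corrected_target_coords: (N, 2) corrected target contour points (pixels)."""
    centroid_uv = corrected_target_coords.mean(axis=0)       # object position in the image
    theoretical_xy = pixel_to_world(H, centroid_uv)          # preset conversion relation
    actual_xy = apply_correction(coeffs, theoretical_xy)     # fitted correction applied
    return float(np.linalg.norm(actual_xy - np.asarray(reference_world_xy, dtype=float)))
```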
In the present embodiment, in the case where the image-based distance determination system 10 includes the actuator 13 (which may be provided at the reference position), the electronic device 12 may also determine whether the distance information is within a preset range. When the distance information is within the preset range, the electronic device 12 may generate a control instruction based on the distance information and send it to the actuator 13, so that the actuator 13 performs an action based on the control instruction.
In this way, the corresponding actuator 13 can be set according to the actual application scenario to perform actions based on the distance information, for example, grabbing a moving workpiece, intercepting a moving target, and the like.
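A small sketch of the range check and control step follows; the preset range, the instruction format, and the send_to_actuator() transport are placeholders invented for the illustration rather than anything defined by the patent.

```python
# Illustrative sketch with placeholder values and a hypothetical transport helper.
def maybe_trigger_actuator(distance_m, preset_range=(0.05, 0.80)):
    """Issue a control instruction when the measured distance lies in the range."""
    low, high = preset_range
    if low <= distance_m <= high:
        instruction = {"action": "execute", "distance_m": round(distance_m, 4)}
        send_to_actuator(instruction)   # hypothetical transport: serial, TCP, PLC write, ...
        return instruction
    return None
```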
In summary, the embodiments of the application provide an image-based distance determination method and system. The imaging mechanism 11 is fixedly arranged so that its shooting range covers the target area (one of the moving ranges of the target object), so parameters such as its shooting focal length and shooting angle can all be fixed; the earlier camera calibration can therefore be applied over the long term and the accuracy of the distance measurement is ensured. The real-time image is used to determine the region image of the target area and the target image of the target object, together with the region image coordinate information of the region image in the real-time image and the target image coordinate information of the target image in the real-time image. The region image coordinate information and the preset image coordinate information of the target area in the preset image can then be used to correct the target image coordinate information in real time, after which the distance between the target object and the reference position can be determined with the preset conversion relation. In this way, the imaging mechanism 11 and a specific actuator 13 can be arranged separately (for example, on a production line the actuator 13 may be a mechanical gripper, and in a real-time target interception scenario it may be an interception device; this is not limited here). Compared with binocular vision and similar approaches, the scheme has clear advantages in processing flow and in the amount of information to be processed, and the distance measurement of the target object can be obtained and output in a short time, so operating efficiency is improved, application scenarios with high real-time requirements are met, and robustness is strong. Moreover, because the target image coordinate information is corrected in real time using the region image coordinate information and the preset image coordinate information of the target area in the preset image, the accuracy of the distance between the target object and the reference position can be ensured.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (5)

1. An image-based distance determination method, characterized in that an image-based distance determination system includes an electronic device and an image capturing mechanism, the spatial position of the image capturing mechanism is fixed, and a shooting range covers a target area, the target area being one of the moving ranges of a target object, the method being applied to the electronic device, comprising:
acquiring a real-time image containing the target area, wherein the target object is positioned in the target area;
determining an area image of the target area and a target image of the target object according to the real-time image, determining area image coordinate information of the area image in the real-time image, and determining target image coordinate information of the target image in the real-time image;
correcting the target image coordinate information in real time according to the region image coordinate information and preset image coordinate information of a target region in a preset image;
determining the distance information of the target object and the reference position according to the corrected target image coordinate information and a preset conversion relation, wherein the preset conversion relation is used for converting coordinates in an image coordinate system into coordinates in a world coordinate system;
wherein, according to the real-time image, determining the area image of the target area and the target image of the target object includes:
performing region detection on the real-time image, determining the outline of the target region, and determining the image in the outline of the target region as the region image; performing target detection on the area image, determining the outline of the target object, and determining an image in the outline of the target object as the target image;
the real-time correction of the target image coordinate information according to the region image coordinate information and the preset image coordinate information of the target region in the preset image includes:
matching the region image coordinate information with preset image coordinate information of a target region in a preset image, and determining an image coordinate difference between the region image coordinate information and the preset image coordinate information; and determining a correction focal length coefficient, an image translation amount and an image deflection amount according to the image coordinate difference so as to correct the target image coordinate information based on the correction focal length coefficient, the image translation amount and the image deflection amount, wherein the correction focal length coefficient is used for correcting the zoom difference between the area outline of the target area in the real-time image and the area outline in the preset image, and the image translation amount and the image deflection amount are used for correcting the image position difference between the area outline of the target area in the real-time image and the area outline in the preset image.
2. The image-based distance determination method according to claim 1, wherein the manner of determining the preset conversion relationship is:
calibrating the fixedly arranged camera shooting mechanism to obtain a camera internal parameter and a camera external parameter of the camera shooting mechanism;
determining a conversion relation between an image coordinate system of the image pickup mechanism and a world coordinate system based on the camera internal parameters and the camera external parameters;
acquiring actual world coordinate parameters of the target area in a world coordinate system, acquiring calibration image coordinate information of the target area in a calibration image, and converting the calibration image coordinate information into theoretical world coordinate parameters in the world coordinate system by utilizing the conversion relation;
and carrying out parameter fitting correction on the conversion relation according to the plurality of groups of world coordinate parameter pairs obtained in the mode to obtain the preset conversion relation, wherein one group of world coordinate parameter pairs comprises one theoretical world coordinate parameter and a corresponding actual world coordinate parameter.
3. The image-based distance determination method of claim 2, further comprising:
according to a plurality of calibration images used when the camera shooting mechanism is calibrated, determining coordinate information of a calibration image where a target area is located in each calibration image;
and solving an average value of the coordinate information of the target area in each calibration image to obtain the preset image coordinate information.
4. The image-based distance determination method of claim 1, wherein the image-based distance determination system further comprises an actuator disposed at the reference location, the method further comprising:
judging whether the distance information is in a preset range or not;
when the distance information is in a preset range, a control instruction is generated based on the distance information and is sent to the executing mechanism, so that the executing mechanism executes actions based on the control instruction.
5. An image-based distance determining system is characterized by comprising an electronic device and a camera mechanism,
the space position of the image pickup mechanism is fixed, and the shooting range covers a target area and is used for shooting a real-time image containing the target area, wherein the target area is one of the moving ranges of a target object;
the electronic device is configured to perform target detection on the real-time image, determine whether a target object is included in a target area of the real-time image, and execute the image-based distance determining method according to any one of claims 1 to 4 if the target object is included in the real-time image.
CN202110592461.2A 2021-05-28 2021-05-28 Image-based distance determination method and system Active CN113269824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110592461.2A CN113269824B (en) 2021-05-28 2021-05-28 Image-based distance determination method and system

Publications (2)

Publication Number Publication Date
CN113269824A CN113269824A (en) 2021-08-17
CN113269824B (en) 2023-07-07

Family

ID=77233475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110592461.2A Active CN113269824B (en) 2021-05-28 2021-05-28 Image-based distance determination method and system

Country Status (1)

Country Link
CN (1) CN113269824B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018043437A1 (en) * 2016-09-01 2018-03-08 公立大学法人会津大学 Image distance calculation device, and computer-readable non-transitory recording medium with an image distance calculation program recorded thereon
WO2019184885A1 (en) * 2018-03-30 2019-10-03 杭州海康威视数字技术股份有限公司 Method, apparatus and electronic device for calibrating extrinsic parameters of camera
CN108981672A (en) * 2018-07-19 2018-12-11 华南师范大学 Hatch door real-time location method based on monocular robot in conjunction with distance measuring sensor
CN110648367A (en) * 2019-08-15 2020-01-03 大连理工江苏研究院有限公司 Geometric object positioning method based on multilayer depth and color visual information
CN112771575A (en) * 2020-03-30 2021-05-07 深圳市大疆创新科技有限公司 Distance determination method, movable platform and computer readable storage medium
CN112232279A (en) * 2020-11-04 2021-01-15 杭州海康威视数字技术股份有限公司 Personnel spacing detection method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Monitoring activities from multiple video streams: establishing a common coordinate frame; L. Lee et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence (No. 8); 758-767 *
DSP-based lane departure detection and forward vehicle distance detection; Liu Jinqing et al.; Computer Systems & Applications (No. 03); 269-276 *
Target recognition and localization of a mobile robot based on information fusion; Huang Chaomei et al.; Computer Measurement & Control (No. 11); 192-195 *

Also Published As

Publication number Publication date
CN113269824A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
Shah et al. A simple calibration procedure for fish-eye (high distortion) lens camera
CN107270810B (en) The projector calibrating method and device of multi-faceted projection
JP5075757B2 (en) Image processing apparatus, image processing program, image processing method, and electronic apparatus
US20150262346A1 (en) Image processing apparatus, image processing method, and image processing program
JP2015203652A (en) Information processing unit and information processing method
JP2010041417A (en) Image processing unit, image processing method, image processing program, and imaging apparatus
KR101589167B1 (en) System and Method for Correcting Perspective Distortion Image Using Depth Information
CN112184811B (en) Monocular space structured light system structure calibration method and device
CN108362205B (en) Space distance measuring method based on fringe projection
JPWO2016135856A1 (en) Three-dimensional shape measurement system and measurement method thereof
CN106846395B (en) Method and system for calculating area of target graph in photo
CN113269824B (en) Image-based distance determination method and system
CN110619664B (en) Laser pattern-assisted camera distance posture calculation method and server
US20210243422A1 (en) Data processing apparatus, data processing method, and program
Wang et al. Distance measurement using single non-metric CCD camera
CN109328459B (en) Intelligent terminal, 3D imaging method thereof and 3D imaging system
JP2010198554A (en) Device and method for correcting image
JP2010041416A (en) Image processing unit, image processing method, image processing program, and imaging apparatus
JP2014235063A (en) Information processing apparatus and information processing method
CN110728714B (en) Image processing method and device, storage medium and electronic equipment
CN117455914B (en) Intelligent control method and system for production line of photovoltaic junction box
CN112197701B (en) Three-dimensional data extraction method applied to large-breadth workpiece
TWI643498B (en) Method and image capture device for computing a lens angle in a single image
CN105872319B (en) A kind of depth of field measurement method
CN117917689A (en) Robot camera calibration method, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant