CN111174722A - Three-dimensional contour reconstruction method and device - Google Patents

Three-dimensional contour reconstruction method and device

Info

Publication number
CN111174722A
CN111174722A (application number CN201811345014.1A)
Authority
CN
China
Prior art keywords
image
processed
light spot
calibration
coordinate value
Prior art date
Legal status
Pending
Application number
CN201811345014.1A
Other languages
Chinese (zh)
Inventor
刘文涛
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd filed Critical Zhejiang Uniview Technologies Co Ltd
Priority to CN201811345014.1A
Publication of CN111174722A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides a three-dimensional contour reconstruction method and device. An image to be processed, collected by an image acquisition device, is acquired; the image to be processed is the light spot image formed on the road surface to be detected by the lattice structured light that a lattice structured light generation device projects onto that road surface. The image is then processed, the three-dimensional space coordinate information of the processed image is obtained with a pre-established functional relationship model, and the three-dimensional contour reconstruction information of the road surface to be detected is derived from that coordinate information. The method and device combine an image acquisition device, lattice structured light and a curve-fitting calibration algorithm to extract information from the image under test and so realize three-dimensional contour reconstruction.

Description

Three-dimensional contour reconstruction method and device
Technical Field
The invention relates to the technical field of intelligent path detection, in particular to a three-dimensional contour reconstruction method and device.
Background
Unmanned driving has broad application prospects in intelligent transportation, automotive safety-assisted driving, automatic or remote-controlled driving of vehicles, and patrols of factories and warehouses. The unmanned vehicle, also known as a wheeled robot, likewise has important military and aerospace applications. The ultimate goal of intelligent-vehicle development is to replace manual driving with automated driving technology. An unmanned vehicle can drive continuously without interruption, reducing traffic accidents caused by fatigue driving and eliminating drunk driving; it can follow traffic rules according to a preset program, effectively reducing the accident rate caused by rule violations.
In recent years, intelligent vehicle active safety systems based on computer vision have gradually become a research hotspot in various automobile manufacturers and scientific research institutions. As an indispensable part of an active safety system for intelligent vehicle visual navigation, research on three-dimensional detection of obstacles is also widely regarded.
Disclosure of Invention
In view of the above, the present invention provides a three-dimensional contour reconstruction method and apparatus to address the above problems.
The embodiment of the invention provides a three-dimensional contour reconstruction method, which is applied to data processing equipment in a three-dimensional contour reconstruction system, wherein the three-dimensional contour reconstruction system also comprises a lattice structured light generating device and an image acquisition device which is in communication connection with the data processing equipment, and the method comprises the following steps:
acquiring an image to be processed collected by the image acquisition device, wherein the image to be processed is the light spot image formed on the road surface to be detected by the lattice structured light that the lattice structured light generation device projects onto that road surface;
and processing the image to be processed, acquiring the three-dimensional space coordinate information of the image to be processed after image processing by utilizing a pre-established functional relation model, and acquiring the three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information.
Further, the functional relationship model is obtained by:
acquiring a calibration image collected by the image acquisition device, wherein the calibration image is the light spot image formed on a calibration plate by the lattice structured light that the lattice structured light generation device projects onto the plate;
and establishing a functional relation model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate.
Furthermore, the calibration plate is scribed with coordinate axes, and the step of establishing a functional relationship model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate comprises:
obtaining a first calibration coordinate value of the light spot projected on the calibration plate by reading it off against the scribed coordinate axes;
acquiring a distance value between the image acquisition device and the calibration plate;
combining the first calibration coordinate value and the distance value to obtain a three-dimensional space coordinate value of the light spot projected on the calibration plate;
performing image processing on the calibration image to obtain pixel coordinate values of light spots in the calibration image;
and establishing a functional relation model between the three-dimensional space coordinate value of the light spot projected on the calibration plate and the pixel coordinate value of the light spot in the calibration image.
Further, the step of performing image processing on the image to be processed, obtaining three-dimensional space coordinate information of the image to be processed after the image processing by using a pre-established functional relationship model, and obtaining three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information includes:
extracting a first coordinate value of a light spot in the image to be processed;
obtaining a second coordinate value of the image to be processed in the three-dimensional space according to the first coordinate value of the light spot in the image to be processed and a pre-established functional relation model;
and performing three-dimensional space surface fitting according to a second coordinate value of the image to be processed in the three-dimensional space to obtain three-dimensional contour reconstruction information of the road surface to be detected.
Further, the step of extracting the first coordinate value of the light spot in the image to be processed includes:
performing threshold segmentation on the image to be processed to filter out a background image contained in the image to be processed;
performing edge detection on the image to be processed after threshold segmentation to obtain the outer edge of the light spot in the image to be processed;
and obtaining coordinates of all pixel points contained in the outer edge of the light spot, and calculating a first coordinate value of the light spot of the image to be processed by using the coordinates of all the pixel points.
An embodiment of the invention also provides a three-dimensional contour reconstruction apparatus, applied to data processing equipment in a three-dimensional contour reconstruction system that further comprises a lattice structured light generation device and an image acquisition device in communication connection with the data processing equipment. The three-dimensional contour reconstruction apparatus comprises a first image acquisition module and an image processing module:
the first image acquisition module is used for acquiring an image to be processed collected by the image acquisition device, wherein the image to be processed is the light spot image formed on the road surface to be detected by the lattice structured light that the lattice structured light generation device projects onto that road surface;
the image processing module is used for processing the image to be processed, obtaining the three-dimensional space coordinate information of the image to be processed after the image processing by utilizing a pre-established functional relation model, and obtaining the three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information.
Further, the three-dimensional contour reconstruction apparatus includes a second image acquisition module and a data processing module, which are configured to obtain the functional relationship model:
the second image acquisition module is configured to acquire a calibration image acquired by the image acquisition device, where the calibration image is a light spot image formed on a calibration plate by lattice structured light projected to the calibration plate by the lattice structured light generation device acquired by the image acquisition device;
and the data processing module is used for establishing a functional relation model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate.
Further, the data processing module comprises a first acquisition unit, a second acquisition unit, a first data processing unit, an image processing unit and a second data processing unit;
the first obtaining unit is used for obtaining a first calibration coordinate value of the light spot projected on the calibration plate by reading it off against the scribed coordinate axes;
the second acquisition unit is used for acquiring a distance value between the image acquisition device and the calibration plate;
the first data processing unit is used for combining the first calibration coordinate value and the distance value to obtain a three-dimensional space coordinate value of the light spot projected on the calibration plate;
the image processing unit is used for carrying out image processing on the calibration image to obtain pixel coordinate values of light spots in the calibration image;
and the second data processing unit is used for establishing a functional relationship model between the three-dimensional space coordinate value of the light spot projected on the calibration plate and the pixel coordinate value of the light spot in the calibration image.
Further, the image processing module comprises an extraction unit, a third data processing unit and a fitting unit;
the extraction unit is used for extracting a first coordinate value of a light spot in the image to be processed;
the third data processing unit is used for obtaining a second coordinate value of the image to be processed in the three-dimensional space according to the first coordinate value of the light spot in the image to be processed and a pre-established functional relation model;
and the fitting unit is used for performing three-dimensional space surface fitting according to a second coordinate value of the image to be processed in a three-dimensional space to obtain three-dimensional contour reconstruction information of the road surface to be detected.
Further, the extraction unit comprises a first image processing subunit, a second image processing subunit and a data processing subunit;
the first image processing subunit is configured to perform threshold segmentation on the image to be processed to filter out a background image included in the image to be processed;
the second image processing subunit is configured to perform edge detection on the image to be processed after the threshold segmentation, so as to obtain an outer edge of a light spot in the image to be processed;
and the data processing subunit is used for obtaining the coordinates of all pixel points contained in the outer edge of the light spot and calculating a first coordinate value of the light spot of the image to be processed by using the coordinates of all the pixel points.
The embodiment of the invention provides a three-dimensional contour reconstruction method and apparatus. An image to be processed, collected by an image acquisition device, is acquired; the image is the light spot image formed on the road surface to be detected by the lattice structured light that a lattice structured light generation device projects onto that road surface. The image is then processed, the three-dimensional space coordinate information of the processed image is obtained with a pre-established functional relationship model, and the three-dimensional contour reconstruction information of the road surface to be detected is derived from that coordinate information. The method and apparatus combine an image acquisition device, lattice structured light and a curve-fitting calibration algorithm to extract information from the image under test, thereby realizing three-dimensional contour reconstruction.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a three-dimensional contour reconstruction system according to an embodiment of the present invention.
Fig. 2 is a block diagram of a data processing apparatus according to an embodiment of the present invention.
Fig. 3 is a flowchart of a three-dimensional contour reconstruction method applied to the three-dimensional contour reconstruction system shown in fig. 1 according to an embodiment of the present invention.
Fig. 4 is a distribution diagram of the lattice-structured light emitted by the lattice-structured light generating apparatus according to the embodiment of the present invention.
Fig. 5 is a flowchart of the sub-steps of step S21 in fig. 3.
Fig. 6 is a flowchart of steps for establishing a functional relationship model according to an embodiment of the present invention.
Fig. 7 is a functional block diagram of a three-dimensional contour reconstruction apparatus according to an embodiment of the present invention.
Fig. 8 is another functional block diagram of a three-dimensional contour reconstruction apparatus according to an embodiment of the present invention.
Fig. 9 is a functional block diagram of the data processing module in fig. 8.
Fig. 10 is a functional block diagram of the image processing module of fig. 7.
Fig. 11 is a functional block diagram of the extraction unit in fig. 10.
Reference numerals: 1 - three-dimensional contour reconstruction system; 10 - image acquisition device; 20 - data processing device; 21 - memory; 22 - processor; 23 - three-dimensional contour reconstruction apparatus; 231 - first image acquisition module; 232 - image processing module; 2321 - extraction unit; 23211 - first image processing subunit; 23212 - second image processing subunit; 23213 - data processing subunit; 2322 - third data processing unit; 2323 - fitting unit; 233 - second image acquisition module; 234 - data processing module; 2341 - first obtaining unit; 2342 - second obtaining unit; 2343 - first data processing unit; 2344 - image processing unit; 2345 - second data processing unit; 30 - lattice structured light generating device.
Detailed Description
The inventor has found that existing methods for three-dimensional obstacle detection mainly comprise the optical time-of-flight method, the stereoscopic vision method, the laser line-scanning method and the structured-light scanning method.
The optical time-of-flight method detects obstacles with a laser radar (lidar) and can acquire the distance, relative speed and azimuth angle of an obstacle ahead. However, lidar is bulky, technologically complex and expensive, which limits its popularization.
The stereoscopic vision method is commonly used for obstacle detection in intelligent-vehicle visual navigation, but the classic image-recognition and matching algorithms are complex and computationally heavy, making real-time requirements difficult to meet.
The laser line-scanning method is a relatively mature three-dimensional acquisition approach: a laser source projects a thin line of laser light onto an object, a camera photographs it, and the line laser deformed on the object is extracted to obtain and store two-dimensional information about the object. For intelligent-vehicle applications it demands a high scanning speed and high medium-to-long-range accuracy and is costly, so it is not easy to popularize.
In the structured-light scanning method, a projector projects a coded pattern onto the surface of an object, a camera photographs the deformed pattern on that surface, the codes in the image are matched using the coding information, and the spatial coordinates of the surface are finally computed with the triangulation principle. The projector must generate variable structured light, and the intensity of the structured light struggles to meet detection requirements in natural environments.
Based on the research findings, the embodiment of the invention provides a three-dimensional contour reconstruction method and device, the method and device adopt an image acquisition device and a dot matrix structured light combined curve fitting calibration algorithm to extract information of an image to be detected so as to realize three-dimensional contour reconstruction, and the scheme has the advantages of simple structure, low calculation complexity and easiness in realization and popularization.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Referring to fig. 1 and fig. 2, an embodiment of the present invention provides a three-dimensional contour reconstruction system 1 based on the above research, where the three-dimensional contour reconstruction system 1 includes a data processing device 20, a lattice structured light generating device 30, and an image collecting device 10. The image acquisition device 10 is in communication connection with the data processing device 20.
The communication connection may be a serial connection, a parallel connection, or a wireless connection, for example via the Universal Serial Bus (USB) protocol, the Small Computer System Interface (SCSI) protocol, or the ZigBee protocol.
The data processing device 20 comprises a memory 21, a processor 22 and a three-dimensional contour reconstruction means 23.
The memory 21 and the processor 22 are electrically connected directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The three-dimensional contour reconstruction device 23 includes at least one software functional module which can be stored in the memory 21 in the form of software or firmware (firmware). The processor 22 is configured to execute an executable computer program stored in the memory 21, such as a software functional module and a computer program included in the three-dimensional contour reconstruction apparatus 23, so as to implement the three-dimensional contour reconstruction method.
The memory 21 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 21 is configured to store a program, and the processor 22 executes the program after receiving an execution instruction.
The processor 22 may be an integrated circuit chip having signal processing capability. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; or a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. It may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor 22 may be any conventional processor.
It will be appreciated that the configuration shown in fig. 2 is merely illustrative; the data processing apparatus 20 may include more or fewer components than shown in fig. 2, or have a configuration different from that shown. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Alternatively, the specific type of the data processing device 20 is not limited, and may be, for example, but not limited to, a smart phone, a Personal Computer (PC), a tablet PC, a Personal Digital Assistant (PDA), a Mobile Internet Device (MID), a web server, a data server, and the like having a processing function.
Fig. 3 is a flowchart of a three-dimensional contour reconstruction method applied to the data processing device 20 in the three-dimensional contour reconstruction system 1 shown in fig. 1, and the steps included in the method will be described in detail below.
In step S10, the image to be processed collected by the image acquisition device 10 is acquired.
The image to be processed is the light spot image formed on the road surface to be detected by the lattice structured light that the lattice structured light generating device 30 projects onto that road surface, as collected by the image collecting device 10.
Specifically, in the embodiment of the present invention, the image capturing device 10 is a monocular camera. The distance between the lattice structured light generating device 30 and the road surface to be detected is adjustable, as are the spatial arrangement order and the density of the light spots of the lattice structured light. The lattice structured light generating device 30 comprises a light source and a grating, the grating being disposed on the light path from the light source to the road surface to be detected.
Referring to fig. 4, a distribution diagram of the lattice structured light: in the embodiment of the present invention, measurement can be performed even under poor illumination conditions to obtain accurate three-dimensional profile reconstruction information of the road surface to be detected. The grating projects a high-brightness laser lattice, and the lattice density is increased or decreased by changing the distance of the lattice structured light generating device 30 from the road surface, so that measurements at different spatial resolutions can be made and the measurement is more flexible.
And step S20, performing image processing on the image to be processed, obtaining the three-dimensional space coordinate information of the image to be processed after the image processing by using a pre-established functional relation model, and obtaining the three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information.
In the embodiment of the present invention, step S20 further includes step S21, step S22, and step S23.
Referring to fig. 5, in step S21 of the embodiment of the present invention, the image to be processed undergoes image processing and a first coordinate value of the light spot in the image is extracted. This step may comprise step S211, step S212, and step S213.
Step S211, performing threshold segmentation on the image to be processed to filter out a background image included in the image to be processed.
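A minimal sketch of this thresholding step in Python (the threshold value and the list-of-lists image representation are illustrative assumptions; the patent does not fix either):

```python
# Hypothetical sketch of step S211: zero out background pixels below a
# chosen threshold. The threshold 128 and the list-of-lists grey-value
# image representation are illustrative assumptions, not from the patent.
def threshold_segment(image, t=128):
    """Return a copy of the image with sub-threshold pixels set to 0."""
    return [[v if v >= t else 0 for v in row] for row in image]

img = [[10, 200], [130, 40]]
print(threshold_segment(img))  # → [[0, 200], [130, 0]]
```

In practice the "better threshold" the patent mentions would be chosen adaptively (e.g. from the image histogram) rather than fixed.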
Step S212, edge detection is carried out on the image to be processed after threshold segmentation, so as to obtain the outer edge of the light spot in the image to be processed.
Threshold segmentation is applied to the whole image captured by the camera: a suitable threshold is chosen and interference in the background is filtered out. Edge detection is then performed on the thresholded image with a gradient-extremum (Canny) operator. The image to be processed f(x, y) is first denoised to obtain an image I(x, y):

I(x, y) = f(x, y) * G(x, y)

where * denotes the convolution operation, f(x, y) is the image to be processed, I(x, y) is the image obtained after processing, and G(x, y) is the convolution template of the Canny operator.

The gradient magnitude M(i, j) and direction N(i, j) at every point I(i, j) are then computed from the x- and y-direction components:

M(i, j) = sqrt(Gx(i, j)^2 + Gy(i, j)^2)
N(i, j) = arctan(Gy(i, j) / Gx(i, j))

where Gx and Gy are the convolution templates in the x and y directions, respectively.
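As a rough illustration of the gradient computation above, here is a sketch assuming the standard 3x3 Sobel templates for Gx and Gy (the patent does not name the templates it uses, so the kernels are an assumption):

```python
import math

# Illustrative sketch: gradient magnitude M(i, j) and direction N(i, j)
# from x- and y-direction convolutions at one interior pixel. The 3x3
# Sobel kernels below are an assumption; the patent does not specify them.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient(img, i, j):
    """Return (M, N) at interior pixel (i, j) of a 2-D grey-value list."""
    gx = sum(GX[a][b] * img[i - 1 + a][j - 1 + b]
             for a in range(3) for b in range(3))
    gy = sum(GY[a][b] * img[i - 1 + a][j - 1 + b]
             for a in range(3) for b in range(3))
    return math.hypot(gx, gy), math.atan2(gy, gx)

# A vertical edge: the gradient points along x, so the direction is 0.
edge = [[0, 0, 10], [0, 0, 10], [0, 0, 10]]
print(gradient(edge, 1, 1))  # → (40.0, 0.0)
```

atan2 is used in place of the plain arctan of the formula so that the direction is well defined when Gx is zero.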
Step S213, obtaining coordinates of all pixel points included in the outer edge of the light spot, and calculating a first coordinate value of the light spot of the image to be processed by using the coordinates of all pixel points.
Specifically, the gradient-extremum (Canny) operator finds the outer edge of the light spot; the vertical and horizontal coordinates of all its pixel points are each summed and averaged to obtain the coordinate value of the spot's centre point.
Assuming that there are four pixel points in one light spot, and the pixel coordinates of the four pixel points are (5, 10), (6, 10), (5, 11) and (6, 11), respectively, the pixel coordinate of the centroid of the light spot is (5.5, 10.5).
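The centroid computation of step S213 can be sketched as follows (representing edge pixels as (x, y) tuples is an assumed format; the patent does not prescribe one):

```python
# Sketch of step S213: the spot centre is the mean of the edge-pixel
# coordinates. Representing pixels as (x, y) tuples is an assumption.
def spot_centroid(edge_pixels):
    """Average the x and y coordinates of one spot's edge pixels."""
    n = len(edge_pixels)
    cx = sum(p[0] for p in edge_pixels) / n
    cy = sum(p[1] for p in edge_pixels) / n
    return (cx, cy)

# The four-pixel example from the text:
print(spot_centroid([(5, 10), (6, 10), (5, 11), (6, 11)]))  # → (5.5, 10.5)
```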
In the embodiment of the invention, in order to obtain the three-dimensional space coordinate information of the image to be processed after image processing, a functional relation model needs to be established in advance. The step of establishing the functional relation model comprises the following steps:
and acquiring a calibration image acquired by the image acquisition device 10, wherein the calibration image is a light spot image formed on a calibration plate by the lattice structured light projected to the calibration plate by the lattice structured light device acquired by the image acquisition device 10.
And establishing a functional relation model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate.
Referring to fig. 6, the step of establishing a functional relationship model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate further includes step S100, step S200, step S300, step S400, and step S500.
Step S100, obtaining a first calibration coordinate value of the light spot projected on the calibration plate by reading it off against the scribed coordinate axes.
Step S200, obtaining a distance value between the image capturing device 10 and the calibration board.
And step S300, combining the first calibration coordinate value and the distance value to obtain a three-dimensional space coordinate value of the light spot projected on the calibration plate.
And step S400, carrying out image processing on the calibration image to obtain the pixel coordinate value of the light spot in the calibration image.
Step S500, establishing a functional relation model between the three-dimensional space coordinate value of the light spot projected on the calibration plate and the pixel coordinate value of the light spot in the calibration image.
In the embodiment of the invention, the first calibration coordinate value of each light spot projected on the calibration plate is read off against the coordinate axes; this value is the two-dimensional spatial information of the light spot in the calibration image, and there are multiple light spots.
The distance value between the image acquisition device and the calibration plate is acquired according to the triangulation principle, and the first calibration coordinate value is combined with the distance value to obtain the three-dimensional space coordinate value of the light spot projected on the calibration plate.
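A hedged sketch of these two steps, under the usual laser-triangulation assumption that the projector sits at a known baseline from the camera (the baseline, focal length and pixel offset below are hypothetical; the patent names only the triangulation principle):

```python
# Laser-triangulation distance sketch (hypothetical parameters):
# a spot's image displacement u from the optical axis, together with the
# baseline b and focal length f, gives the camera-to-plate distance.
def triangulate_distance(u_pixels, f_pixels, b_meters):
    """Distance z = f * b / u for a projector parallel to the image plane."""
    return f_pixels * b_meters / u_pixels

def to_3d(calib_xy, z):
    """Combine the plate coordinates (X, Y) with the distance value Z."""
    x, y = calib_xy
    return (x, y, z)

z = triangulate_distance(u_pixels=100.0, f_pixels=1000.0, b_meters=0.1)
print(to_3d((0.25, 0.40), z))  # (0.25, 0.4, 1.0)
```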
Two-dimensional curve fitting is performed on the pixel coordinate values of the light spots in each calibration image, and three-dimensional curve fitting is performed on the three-dimensional space coordinate values of the light spots; the three-dimensional and two-dimensional fitting data are then repeatedly subdivided until the mathematical correspondence between them, namely the functional relationship model, is obtained.
It should be noted that both the two-dimensional and the three-dimensional curve fitting are based on a curve-fitting algorithm configured in the data processing device 20; any existing curve-fitting method may be adopted, and details are not repeated here.
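The mapping from pixel coordinates to space coordinates can be sketched with a per-axis least-squares fit. A minimal illustration assuming a simple linear relation (the patent does not name its curve-fitting algorithm, and the calibration numbers below are hypothetical; a polynomial or spline fit could be substituted):

```python
# Minimal per-axis calibration sketch: least-squares linear fit of the
# spot's plate coordinate X against its pixel coordinate u.
def fit_linear(us, xs):
    """Least-squares fit x ~ a*u + b; returns (a, b)."""
    n = len(us)
    mu, mx = sum(us) / n, sum(xs) / n
    a = sum((u - mu) * (x - mx) for u, x in zip(us, xs)) / \
        sum((u - mu) ** 2 for u in us)
    return a, mx - a * mu

# Calibration data: pixel u-coordinates of spots vs. their measured X
# positions on the calibration plate (hypothetical values, in metres).
us = [100, 200, 300, 400]
xs = [0.0, 0.1, 0.2, 0.3]
a, b = fit_linear(us, xs)
# The fitted model maps a new pixel coordinate to a space coordinate:
print(a * 250 + b)  # ~ 0.15
```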
Step S22, obtaining the three-dimensional space coordinate information of the image to be processed after the image processing by using the pre-established functional relationship model.
A second coordinate value of the image to be processed in three-dimensional space is obtained from the first coordinate value of the light spot in the image to be processed and the pre-established functional relationship model.
Step S23, obtaining the three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information.
Three-dimensional surface fitting is performed on the second coordinate values of the image to be processed in three-dimensional space to obtain the three-dimensional contour reconstruction information of the road surface to be detected.
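The surface-fitting step can be sketched with inverse-distance-weighted interpolation standing in for the unspecified fitting algorithm (the sample spot coordinates below are hypothetical; `scipy.interpolate.griddata` or a spline surface would be natural alternatives):

```python
# Reconstructing the road profile from sparse 3-D spot coordinates:
# inverse-distance-weighted interpolation of height z at a query (x, y).
def idw_height(points, x, y, p=2):
    """Interpolate z at (x, y) from (x, y, z) samples by inverse-distance weighting."""
    num = den = 0.0
    for px, py, pz in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return pz                     # query hits a sample exactly
        w = 1.0 / d2 ** (p / 2)
        num += w * pz
        den += w
    return num / den

# Four reconstructed spot coordinates (hypothetical, in metres):
spots = [(0, 0, 0.00), (1, 0, 0.02), (0, 1, 0.02), (1, 1, 0.04)]
print(round(idw_height(spots, 0.5, 0.5), 3))  # 0.02 (symmetric midpoint)
```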
It should be noted that, in the embodiment of the present invention, the functional relationship model is established in advance, in a laboratory or similar setting, by using a calibration plate, calibration paper provided with coordinate axes, a dot-matrix structured light generating device, a monocular camera and other hardware facilities, in combination with the model-establishing steps above. The established functional relationship model is stored in the data processing device 20 to complete the three-dimensional contour reconstruction.
Referring to fig. 7, fig. 7 is a functional block diagram of a three-dimensional contour reconstruction device 23 applied to the three-dimensional contour reconstruction system 1 according to an embodiment of the present invention. The three-dimensional contour reconstruction device 23 comprises a first image acquisition module 231 and an image processing module 232.
The first image obtaining module 231 is configured to obtain the image to be processed collected by the image acquisition device 10, wherein the image to be processed is a light spot image formed on the road surface to be detected by the lattice structured light projected onto the road surface by the lattice structure light generating device 30, as captured by the image acquisition device 10.
The image processing module 232 is configured to perform image processing on the image to be processed, obtain three-dimensional space coordinate information of the image to be processed after the image processing by using a pre-established functional relationship model, and obtain three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information.
Referring to fig. 8, the three-dimensional contour reconstructing apparatus 23 further includes a second image obtaining module 233 and a data processing module 234, which together complete the establishment of the functional relationship model.
The second image obtaining module 233 is configured to obtain a calibration image collected by the image collecting device 10. The calibration image is a light spot image formed on the calibration plate by the lattice structured light projected onto the calibration plate by the lattice structured light generating device, as captured by the image collecting device 10.
The data processing module 234 is configured to establish a functional relationship model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate.
Referring to fig. 9, the data processing module 234 includes a first acquiring unit 2341, a second acquiring unit 2342, a first data processing unit 2343, an image processing unit 2344, and a second data processing unit 2345.
The first obtaining unit 2341 is configured to obtain, with respect to the coordinate axis, a first calibration coordinate value of the light spot projected on the calibration plate on the coordinate axis.
The second obtaining unit 2342 is configured to obtain a distance value between the image capturing apparatus 10 and the calibration board.
The first data processing unit 2343 is configured to combine the first calibration coordinate value and the distance value to obtain a three-dimensional spatial coordinate value of the light spot projected on the calibration board.
The image processing unit 2344 is configured to perform image processing on the calibration image to obtain pixel coordinate values of the light spot in the calibration image.
The second data processing unit 2345 is configured to establish a functional relationship model between the three-dimensional spatial coordinate value of the light spot projected on the calibration plate and the pixel coordinate value of the light spot in the calibration image.
Referring to fig. 10, the image processing module 232 includes an extracting unit 2321, a third data processing unit 2322 and a fitting unit 2323.
The extracting unit 2321 is configured to extract a first coordinate value of a light spot in the image to be processed.
The third data processing unit 2322 is configured to obtain a second coordinate value of the image to be processed in the three-dimensional space according to the first coordinate value of the light spot in the image to be processed and a pre-established functional relationship model.
The fitting unit 2323 is configured to perform three-dimensional space surface fitting according to the second coordinate value of the image to be processed in the three-dimensional space, so as to obtain three-dimensional contour reconstruction information of the road surface to be detected.
Referring to fig. 11, specifically, the extracting unit 2321 includes a first image processing sub-unit 23211, a second image processing sub-unit 23212 and a data processing sub-unit 23213.
The first image processing subunit 23211 is configured to perform threshold segmentation on the image to be processed, so as to filter out a background image included in the image to be processed.
The second image processing subunit 23212 is configured to perform edge detection on the image to be processed after the threshold segmentation, so as to obtain an outer edge of the light spot in the image to be processed.
The data processing subunit 23213 is configured to obtain coordinates of all pixel points included in the outer edge of the light spot, and calculate a first coordinate value of the light spot of the image to be processed by using the coordinates of all pixel points.
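The three subunits above mirror the extraction steps of the method. A toy sketch of that pipeline on a tiny grayscale image, eliding the Canny edge-detection step (for this symmetric toy spot the centroid of all foreground pixels equals the centroid of its edge pixels; `extract_spot_center` is a hypothetical helper, and a real implementation would use e.g. OpenCV thresholding and edge detection):

```python
# Toy extraction pipeline: threshold segmentation filters out the
# background, then the spot centre is the centroid of the remaining pixels.
def extract_spot_center(image, threshold):
    """Return the centroid (x, y) of pixels brighter than the threshold."""
    pts = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v > threshold]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# A 4x4 grayscale image with one bright 2x2 spot (hypothetical values):
img = [[0, 0,   0,   0],
       [0, 200, 210, 0],
       [0, 205, 220, 0],
       [0, 0,   0,   0]]
print(extract_spot_center(img, 128))  # (1.5, 1.5)
```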
In the embodiment of the invention, the three-dimensional contour reconstruction information obtained by the method and the device can be transmitted to an intelligent vehicle decision system to provide reference information for intelligent path selection, thereby supporting unmanned driving. Compared with navigation systems based on laser, radar and GPS, the machine-vision-based navigation and control system of the method and the device has the following advantages: the obtained information is intuitive, and the requirements of both autonomous and remote-controlled operation can be met.
Heading, position and obstacle measurement can essentially be accomplished by the vision system alone, which further meets the positioning, path planning, motion control and local obstacle avoidance requirements of a navigation and control system. Compared with laser and radar navigation systems, there is no problem of mutual interference between radiation and signals. The specific intelligent vehicle decision method can be implemented with the prior art and is not described here again.
In summary, the embodiments of the present invention provide a three-dimensional contour reconstruction method and device. The method comprises obtaining an image to be processed collected by an image acquisition device 10, wherein the image to be processed is a light spot image formed on a road surface to be detected by the lattice structured light projected onto the road surface by a lattice structure light generating device 30, as captured by the image acquisition device 10. The image to be processed is then processed, the three-dimensional space coordinate information of the processed image is obtained by using a pre-established functional relationship model, and the three-dimensional contour reconstruction information of the road surface to be detected is obtained from the three-dimensional space coordinate information. The method and the device combine a monocular image acquisition device and dot-matrix structured light with a curve-fitting calibration algorithm to extract information from the image to be processed, thereby realizing three-dimensional contour reconstruction and providing three-dimensional contour reconstruction information for path detection. The scheme has a simple structure and low computational complexity, and is easy to implement and popularize.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A three-dimensional contour reconstruction method is applied to data processing equipment in a three-dimensional contour reconstruction system, the three-dimensional contour reconstruction system further comprises a lattice structured light generating device and an image acquisition device which is in communication connection with the data processing equipment, and the method comprises the following steps:
acquiring an image to be processed acquired by the image acquisition device, wherein the image to be processed is a light spot image formed on a road surface to be detected by the lattice structured light projected onto the road surface by the lattice structured light generation device, as captured by the image acquisition device;
and processing the image to be processed, acquiring the three-dimensional space coordinate information of the image to be processed after image processing by utilizing a pre-established functional relation model, and acquiring the three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information.
2. The three-dimensional contour reconstruction method according to claim 1, wherein the functional relationship model is obtained by:
acquiring a calibration image acquired by the image acquisition device, wherein the calibration image is a light spot image formed on a calibration plate by lattice structured light projected to the calibration plate by the lattice structured light generation device acquired by the image acquisition device;
and establishing a functional relation model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate.
3. The method for reconstructing a three-dimensional contour according to claim 2, wherein the calibration plate is scribed with coordinate axes, and the step of establishing a functional relationship model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate comprises:
obtaining a first calibration coordinate value, on the coordinate axes, of the light spot projected on the calibration plate, by reference to the coordinate axes;
acquiring a distance value between the image acquisition device and the calibration plate;
combining the first calibration coordinate value and the distance value to obtain a three-dimensional space coordinate value of the light spot projected on the calibration plate;
performing image processing on the calibration image to obtain pixel coordinate values of light spots in the calibration image;
and establishing a functional relation model between the three-dimensional space coordinate value of the light spot projected on the calibration plate and the pixel coordinate value of the light spot in the calibration image.
4. The three-dimensional contour reconstruction method according to claim 1, wherein the step of performing image processing on the image to be processed, obtaining three-dimensional space coordinate information of the image to be processed after the image processing by using a pre-established functional relationship model, and obtaining three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information comprises:
extracting a first coordinate value of a light spot in the image to be processed;
obtaining a second coordinate value of the image to be processed in the three-dimensional space according to the first coordinate value of the light spot in the image to be processed and a pre-established functional relation model;
and performing three-dimensional space surface fitting according to a second coordinate value of the image to be processed in the three-dimensional space to obtain three-dimensional contour reconstruction information of the road surface to be detected.
5. The three-dimensional contour reconstruction method according to claim 4, wherein the step of extracting the first coordinate value of the light spot in the image to be processed comprises:
performing threshold segmentation on the image to be processed to filter out a background image contained in the image to be processed;
performing edge detection on the image to be processed after threshold segmentation to obtain the outer edge of the light spot in the image to be processed;
and obtaining coordinates of all pixel points contained in the outer edge of the light spot, and calculating a first coordinate value of the light spot of the image to be processed by using the coordinates of all the pixel points.
6. The three-dimensional contour reconstruction device is applied to data processing equipment in a three-dimensional contour reconstruction system, the three-dimensional contour reconstruction system further comprises a lattice structured light generating device and an image acquisition device which is in communication connection with the data processing equipment, and the three-dimensional contour reconstruction device comprises a first image acquisition module and an image processing module:
the first image acquisition module is used for acquiring an image to be processed acquired by the image acquisition device, wherein the image to be processed is a light spot image formed on a road surface to be detected by the lattice structured light projected onto the road surface by the lattice structured light generation device, as captured by the image acquisition device;
the image processing module is used for processing the image to be processed, obtaining the three-dimensional space coordinate information of the image to be processed after the image processing by utilizing a pre-established functional relation model, and obtaining the three-dimensional contour reconstruction information of the road surface to be detected according to the three-dimensional space coordinate information.
7. The three-dimensional contour reconstruction apparatus according to claim 6, further comprising a second image acquisition module and a data processing module for obtaining the functional relationship model:
the second image acquisition module is configured to acquire a calibration image acquired by the image acquisition device, where the calibration image is a light spot image formed on a calibration plate by lattice structured light projected to the calibration plate by the lattice structured light generation device acquired by the image acquisition device;
and the data processing module is used for establishing a functional relation model according to the pixel coordinate information of the light spot in the calibration image and the three-dimensional space coordinate information of the light spot projected on the calibration plate.
8. The three-dimensional contour reconstruction device according to claim 7, wherein the calibration plate is scribed with coordinate axes, and the data processing module comprises a first obtaining unit, a second obtaining unit, a first data processing unit, an image processing unit and a second data processing unit;
the first obtaining unit is used for obtaining a first calibration coordinate value of the light spot projected on the calibration plate on the coordinate axis according to the coordinate axis; (these have been confirmed by the inventors here and are therefore not modified)
The second acquisition unit is used for acquiring a distance value between the image acquisition device and the calibration plate;
The first data processing unit is used for combining the first calibration coordinate value and the distance value to obtain a three-dimensional space coordinate value of the light spot projected on the calibration plate;
the image processing unit is used for carrying out image processing on the calibration image to obtain pixel coordinate values of light spots in the calibration image;
and the second data processing unit is used for establishing a functional relationship model between the three-dimensional space coordinate value of the light spot projected on the calibration plate and the pixel coordinate value of the light spot in the calibration image.
9. The three-dimensional contour reconstruction apparatus according to claim 6, wherein the image processing module includes an extraction unit, a third data processing unit, and a fitting unit;
the extraction unit is used for extracting a first coordinate value of a light spot in the image to be processed;
the third data processing unit is used for obtaining a second coordinate value of the image to be processed in the three-dimensional space according to the first coordinate value of the light spot in the image to be processed and a pre-established functional relation model;
and the fitting unit is used for performing three-dimensional space surface fitting according to a second coordinate value of the image to be processed in a three-dimensional space to obtain three-dimensional contour reconstruction information of the road surface to be detected.
10. The three-dimensional contour reconstruction apparatus according to claim 9, wherein the extraction unit includes a first image processing subunit, a second image processing subunit, and a data processing subunit;
the first image processing subunit is configured to perform threshold segmentation on the image to be processed to filter out a background image included in the image to be processed;
the second image processing subunit is configured to perform edge detection on the image to be processed after the threshold segmentation, so as to obtain an outer edge of a light spot in the image to be processed;
and the data processing subunit is used for obtaining the coordinates of all pixel points contained in the outer edge of the light spot and calculating a first coordinate value of the light spot of the image to be processed by using the coordinates of all the pixel points.
CN201811345014.1A 2018-11-13 2018-11-13 Three-dimensional contour reconstruction method and device Pending CN111174722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811345014.1A CN111174722A (en) 2018-11-13 2018-11-13 Three-dimensional contour reconstruction method and device


Publications (1)

Publication Number Publication Date
CN111174722A true CN111174722A (en) 2020-05-19

Family

ID=70646163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811345014.1A Pending CN111174722A (en) 2018-11-13 2018-11-13 Three-dimensional contour reconstruction method and device

Country Status (1)

Country Link
CN (1) CN111174722A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739138A (en) * 2020-06-23 2020-10-02 广东省航空航天装备技术研究所 Three-dimensional imaging method, three-dimensional imaging apparatus, electronic device, and storage medium
CN111739153A (en) * 2020-06-23 2020-10-02 广东省航空航天装备技术研究所 Processing method, device and equipment based on three-dimensional imaging and storage medium
CN111798524A (en) * 2020-07-14 2020-10-20 华侨大学 Calibration system and method based on inverted low-resolution camera
CN111854642A (en) * 2020-07-23 2020-10-30 浙江汉振智能技术有限公司 Multi-line laser three-dimensional imaging method and system based on random dot matrix
CN112515660A (en) * 2020-11-30 2021-03-19 居天科技(深圳)有限公司 Laser radar human body modeling method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1116703A (en) * 1994-08-08 1996-02-14 陈明彻 Non-contact 3-D profile real-time measuring method and system
CN102538708A (en) * 2011-12-23 2012-07-04 北京理工大学 Measurement system for three-dimensional shape of optional surface
CN102589476A (en) * 2012-02-13 2012-07-18 天津大学 High-speed scanning and overall imaging three-dimensional (3D) measurement method
CN103322937A (en) * 2012-03-19 2013-09-25 联想(北京)有限公司 Method and device for measuring depth of object using structured light method
CN103411553A (en) * 2013-08-13 2013-11-27 天津大学 Fast calibration method of multiple line structured light visual sensor
CN104034263A (en) * 2014-07-02 2014-09-10 北京理工大学 Non-contact measurement method for sizes of forged pieces
CN104487800A (en) * 2012-07-15 2015-04-01 巴特里有限责任公司 Portable three-dimensional metrology with data displayed on the measured surface
CN104897142A (en) * 2015-06-11 2015-09-09 湖北工业大学 Three-dimensional target for binocular or multi-view vision dimension measuring
CN106600647A (en) * 2016-06-30 2017-04-26 华南理工大学 Binocular visual multi-line projection structured light calibration method
CN106705898A (en) * 2017-01-24 2017-05-24 浙江四点灵机器人股份有限公司 Method for measuring planeness through lattice structure light
CN107039885A (en) * 2017-05-04 2017-08-11 深圳奥比中光科技有限公司 The laser array being imaged applied to 3D


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张三喜: 《弹道特征参数摄像测量》 (Videometric Measurement of Ballistic Characteristic Parameters), National Defense Industry Press, 31 March 2014 *
张广军: 《光电测试技术》 (Photoelectric Testing Technology), China Metrology Press, 31 March 2008 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739138A (en) * 2020-06-23 2020-10-02 广东省航空航天装备技术研究所 Three-dimensional imaging method, three-dimensional imaging apparatus, electronic device, and storage medium
CN111739153A (en) * 2020-06-23 2020-10-02 广东省航空航天装备技术研究所 Processing method, device and equipment based on three-dimensional imaging and storage medium
CN111798524A (en) * 2020-07-14 2020-10-20 华侨大学 Calibration system and method based on inverted low-resolution camera
CN111798524B (en) * 2020-07-14 2023-07-21 华侨大学 Calibration system and method based on inverted low-resolution camera
CN111854642A (en) * 2020-07-23 2020-10-30 浙江汉振智能技术有限公司 Multi-line laser three-dimensional imaging method and system based on random dot matrix
CN111854642B (en) * 2020-07-23 2021-08-10 浙江汉振智能技术有限公司 Multi-line laser three-dimensional imaging method and system based on random dot matrix
CN112515660A (en) * 2020-11-30 2021-03-19 居天科技(深圳)有限公司 Laser radar human body modeling method

Similar Documents

Publication Publication Date Title
US11320833B2 (en) Data processing method, apparatus and terminal
CN111174722A (en) Three-dimensional contour reconstruction method and device
JP6931096B2 (en) Methods and devices for calibrating external parameters of onboard sensors, and related vehicles
EP3876141A1 (en) Object detection method, related device and computer storage medium
CN109751973B (en) Three-dimensional measuring device, three-dimensional measuring method, and storage medium
CN107133985B (en) Automatic calibration method for vehicle-mounted camera based on lane line vanishing point
CN110988912A (en) Road target and distance detection method, system and device for automatic driving vehicle
WO2018020954A1 (en) Database construction system for machine-learning
Perrollaz et al. A visibility-based approach for occupancy grid computation in disparity space
Pinggera et al. High-performance long range obstacle detection using stereo vision
US11783507B2 (en) Camera calibration apparatus and operating method
CN112567264A (en) Apparatus and method for acquiring coordinate transformation information
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
CN112634359A (en) Vehicle anti-collision early warning method and device, terminal equipment and storage medium
CN111627001A (en) Image detection method and device
EP3029602A1 (en) Method and apparatus for detecting a free driving space
CN114091521B (en) Method, device and equipment for detecting vehicle course angle and storage medium
CN111767843B (en) Three-dimensional position prediction method, device, equipment and storage medium
US20230162513A1 (en) Vehicle environment modeling with a camera
Li et al. On automatic and dynamic camera calibration based on traffic visual surveillance
CN112639822A (en) Data processing method and device
CN114677660A (en) Model training and road detection method and device
CN114384486A (en) Data processing method and device
Klappstein Optical-flow based detection of moving objects in traffic scenes
CN115236696B (en) Method and device for determining obstacle, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200519