CN114283053A - Binocular point cloud determination method, device and equipment

Info

Publication number: CN114283053A
Authority: CN (China)
Prior art keywords: image area, image, processed, point cloud, area
Legal status: Pending
Application number: CN202010987332.9A
Other languages: Chinese (zh)
Inventors: 杨焕星, 王珂, 刘树明
Current and original assignee: Navinfo Co Ltd
Filing date / priority date: 2020-09-18
Publication date: 2022-04-05

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, a device and equipment for determining a binocular point cloud, where the method includes the following steps: acquiring a left eye image and a right eye image respectively; dividing the left eye image to obtain a first image area and a second image area, and dividing the right eye image to obtain a third image area and a fourth image area; scaling the second image area and the fourth image area respectively to obtain a processed second image area and a processed fourth image area, where the second image area and the fourth image area contain close-range images; and determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area. The binocular point cloud determining method, device and equipment can improve the accuracy of the binocular point cloud.

Description

Binocular point cloud determination method, device and equipment
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method, a device and equipment for determining binocular point cloud.
Background
With the development of science and technology, automatic driving technology has come into practical use. The reliability of the automatic driving map is key to improving the safety of automatic driving.
Improving the reliability of the automatic driving map hinges on improving the accuracy of the generated binocular point cloud. Existing binocular point clouds are obtained by acquiring a high-resolution binocular image, where the binocular image comprises a left eye image and a right eye image; the left eye image and the right eye image are then stereo-matched to obtain a disparity map, and the binocular point cloud is generated from the disparity map.
However, when the close-range image in the left eye image and the close-range image in the right eye image are stereo-matched, the large disparity lowers the accuracy of the generated disparity map, which in turn lowers the accuracy of the binocular point cloud.
Disclosure of Invention
The application provides a method, a device and equipment for determining binocular point cloud, which can improve the accuracy of the binocular point cloud.
In a first aspect, the present application provides a method for determining a binocular point cloud, including: acquiring a left eye image and a right eye image respectively; dividing the left eye image to obtain a first image area and a second image area, and dividing the right eye image to obtain a third image area and a fourth image area; scaling the second image area and the fourth image area respectively to obtain a processed second image area and a processed fourth image area, where the second image area and the fourth image area contain close-range images; and determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area.
Optionally, determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area includes: cropping the first image area and the third image area respectively to obtain a processed first image area and a processed third image area; determining first point cloud data according to the processed first image area and the processed third image area; determining second point cloud data according to the processed second image area and the processed fourth image area; and determining the binocular point cloud data according to the first point cloud data and the second point cloud data.
By this method, the sizes of the first image area and the third image area can be reduced, which speeds up the generation of the first point cloud data and hence of the binocular point cloud data.
Optionally, determining the first point cloud data according to the processed first image area and the processed third image area, including: determining a first disparity map according to the processed first image area and the processed third image area; configuring a first parameter of a binocular camera according to the processed first image area and the processed third image area, wherein a left eye image and a right eye image are images shot by two cameras of the binocular camera; and determining first point cloud data according to the first disparity map and the first parameters.
By the method, parameters of the binocular camera can be adjusted according to changes of the first image area and the third image area, so that the new parameters of the binocular camera are matched with the processed first image area and the processed third image area, and accuracy of generating the first point cloud data can be improved.
Optionally, determining the second point cloud data according to the processed second image area and the processed fourth image area, including: determining a second disparity map according to the processed second image area and the processed fourth image area; configuring a second parameter of the binocular camera according to the processed second image area and the processed fourth image area, wherein the left eye image and the right eye image are images shot by two cameras of the binocular camera; and determining second point cloud data according to the second disparity map and the second parameter.
By the method, parameters of the binocular camera can be adjusted according to changes of the second image area and the fourth image area, so that the new parameters of the binocular camera are matched with the processed second image area and the processed fourth image area, and the accuracy of generating the second point cloud data can be improved.
Optionally, the processed second image area is 1/k of the second image area, the processed fourth image area is 1/k of the fourth image area, and k is any number greater than 1 and less than or equal to 10.
By the method, more accurate parallax can be obtained.
Optionally, the respectively performing cropping processing on the first image area and the third image area to obtain a processed first image area and a processed third image area includes: respectively cutting the edge parts of the first image area and the third image area to obtain the central part of the first image area and the central part of the third image area; a central portion of the first image area is determined as a processed first image area, and a central portion of the third image area is determined as a processed third image area.
By the method, the number of pixels of the first image area and the third image area can be reduced, the processing speed is further increased, and the efficiency of determining the binocular point cloud is improved.
Optionally, the first image area is an upper half area of the left eye image, and the second image area is a lower half area of the left eye image; the third image area is the upper half area of the right eye image, and the fourth image area is the lower half area of the right eye image.
In general, the upper half of an image is a distant view and the lower half is a close-range view. By this method, the left eye image and the right eye image can each be divided into an upper part and a lower part, i.e., a distant-view part and a close-range part, and the two parts are then processed in corresponding ways such as cropping or scaling, improving the efficiency and accuracy of determining the binocular point cloud.
In a second aspect, the present application provides a binocular point cloud determining apparatus, including:
and the acquisition module is used for respectively acquiring the left eye image and the right eye image.
And the processing module is used for dividing the left eye image to obtain a first image area and a second image area, and dividing the right eye image to obtain a third image area and a fourth image area.
The processing module is further configured to perform scaling processing on the second image area and the fourth image area respectively to obtain a processed second image area and a processed fourth image area, where the second image area and the fourth image area include a close-range image.
And the determining module is used for determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area.
Optionally, the determining module is specifically configured to perform cropping processing on the first image area and the third image area respectively to obtain a processed first image area and a processed third image area; determining first point cloud data according to the processed first image area and the processed third image area; determining second point cloud data according to the processed second image area and the processed fourth image area; and determining binocular point cloud data according to the first point cloud data and the second point cloud data.
Optionally, the determining module is specifically configured to determine the first disparity map according to the processed first image region and the processed third image region; configuring a first parameter of a binocular camera according to the processed first image area and the processed third image area, wherein a left eye image and a right eye image are images shot by two cameras of the binocular camera; and determining first point cloud data according to the first disparity map and the first parameters.
Optionally, the determining module is specifically configured to determine the second disparity map according to the processed second image region and the processed fourth image region; configuring a second parameter of the binocular camera according to the processed second image area and the processed fourth image area, wherein the left eye image and the right eye image are images shot by two cameras of the binocular camera; and determining second point cloud data according to the second disparity map and the second parameter.
Optionally, the processed second image area is 1/k of the second image area, the processed fourth image area is 1/k of the fourth image area, and k is any number greater than 1 and less than or equal to 10.
Optionally, the determining module is specifically configured to respectively crop edge portions of the first image area and the third image area to obtain a central portion of the first image area and a central portion of the third image area; a central portion of the first image area is determined as a processed first image area, and a central portion of the third image area is determined as a processed third image area.
Optionally, the first image area is an upper half area of the left eye image, and the second image area is a lower half area of the left eye image; the third image area is the upper half area of the right eye image, and the fourth image area is the lower half area of the right eye image.
In a third aspect, the present application provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect or the alternatives of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method as described in the first aspect or the alternatives thereof when executed by a processor.
According to the binocular point cloud determining method, device and equipment, the left eye image and the right eye image are acquired respectively; the left eye image is divided to obtain the first image area and the second image area, and the right eye image is divided to obtain the third image area and the fourth image area, so that the left eye image and the right eye image are divided into four image areas. The second image area and the fourth image area are then scaled respectively to obtain the processed second image area and the processed fourth image area, where the second image area and the fourth image area contain close-range images. Finally, the binocular point cloud data is determined according to the first image area, the third image area, the processed second image area and the processed fourth image area. Because the second image area and the fourth image area, which contain the close-range images of the left eye image and the right eye image, are scaled, the search space of these areas in the epipolar direction is reduced and the disparity of the close-range images contained in the left and right eye images is reduced; therefore, when the processed second image area and the processed fourth image area are stereo-matched and the corresponding point cloud data is determined, the accuracy of the binocular point cloud data can be improved.
Drawings
Fig. 1 is a schematic view of a left eye image provided in the present application;
FIG. 2 is a schematic diagram of a right eye image provided herein;
FIG. 3 is a schematic diagram of an annotated left eye image provided herein;
FIG. 4 is a schematic diagram of an annotated right eye image provided herein;
fig. 5 is a schematic diagram of a binocular point cloud determination system provided by the present application;
fig. 6 is a schematic flowchart of a method for determining a binocular point cloud according to the present disclosure;
fig. 7 is a schematic view of binocular imaging provided herein;
fig. 8 is a schematic view of another binocular imaging provided herein;
fig. 9 is a schematic flowchart of a method for determining a binocular point cloud according to the present disclosure;
FIG. 10 is a schematic diagram of image cropping provided herein;
fig. 11 is a schematic diagram of a disparity map provided in the present application;
FIG. 12 is a schematic view of a binocular point cloud provided herein;
fig. 13 is a schematic structural diagram of a binocular point cloud determining apparatus provided in the present application;
fig. 14 is a schematic structural diagram of an electronic device provided in the present application.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the following, related concepts related to the embodiments of the present application are explained.
Left eye image: when two image acquisition devices are fixed together and capture images at the same moment, each device obtains one image; the image captured by the device on the left is called the left eye image.
Right eye image: likewise, the image captured by the device on the right at the same moment is called the right eye image.
Binocular image: the left eye image and the right eye image together form a binocular image.
Binocular stereo matching: the process of matching along epipolar lines using the left and right rectified images. Stereo matching usually takes the left image as the reference and finds, for each pixel in the left image, the position of the corresponding pixel in the right image.
Point cloud: a massive set of points expressing the spatial distribution and surface characteristics of a target under the same spatial reference system; the point set obtained after acquiring the spatial coordinates of each sampling point on the object surface is called a point cloud.
Original image: the raw image acquired by an image acquisition device such as a camera or video camera.
Rectified image: the image obtained after the original image acquired by the image acquisition device is corrected using calibration parameters.
Disparity map: a point in the physical world projects to one point on the left eye image and one point on the right eye image. After the image acquisition devices are calibrated, the corrected left eye image and corrected right eye image are obtained. If the pixel position of the point on the corrected left eye image is (x1, y1) and on the corrected right eye image is (x2, y2), the epipolar geometric constraint gives y1 = y2; the pixel difference x1 - x2 is called the disparity, and the image formed by the disparities of all pixels is called a disparity map.
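As a minimal numeric illustration of this definition (all coordinate values below are hypothetical, not taken from the application):

```python
# Hypothetical pixel coordinates of one physical point projected onto the
# rectified left and right images.
x1, y1 = 850.0, 410.0  # projection on the corrected left eye image
x2, y2 = 812.0, 410.0  # projection on the corrected right eye image
assert y1 == y2        # epipolar geometric constraint: y1 = y2
disparity = x1 - x2    # 38.0 pixels for this point
```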
With the development of science and technology, automatic driving technology has come into practical use, and the reliability of the automatic driving map is key to improving the safety of automatic driving. To improve the reliability of the automatic driving map, the accuracy of the generated binocular point cloud needs to be improved. Existing binocular point clouds are obtained by acquiring a high-resolution binocular image, i.e., a left eye image and a right eye image, where the left eye image is shown in Fig. 1 and the right eye image is shown in Fig. 2; the left eye image and the right eye image are then stereo-matched to obtain a disparity map, and the binocular point cloud is generated from the disparity map. However, in this method, when the close-range image in the left eye image and the close-range image in the right eye image of the high-resolution binocular pair are stereo-matched, the large disparity lowers the accuracy of the generated disparity map, and hence the accuracy of the generated binocular point cloud. Specifically, the labeled left eye image shown in Fig. 3 is formed by setting a plurality of labeled points on the left eye image shown in Fig. 1 and obtaining the pixel coordinates of each labeled point on the left eye image. For any labeled point, the disparity corresponding to that point is obtained from its pixel coordinates on the left eye image and the disparity map; the obtained disparity is superimposed on the pixel coordinates of the labeled point on the left eye image to give the pixel coordinates of the labeled point on the right eye image, and the corresponding position on the right eye image is labeled, forming the labeled right eye image shown in Fig. 4. As can be seen from Fig. 4, after the disparity is superimposed, the distant labeled points essentially coincide with the corresponding lane-line positions on the right eye image and the matching degree is high; however, the closer the distance, the larger the deviation of the labeled points becomes, showing that as the disparity grows the accuracy of the generated disparity map falls, and with it the accuracy of the generated binocular point cloud.
Based on the above description, if the close-range image in the left eye image and the close-range image in the right eye image are each scaled, the search space in the epipolar direction becomes smaller, the disparity becomes smaller, and the stereo matching effect improves. On this basis, the application provides a binocular point cloud determining method: a left eye image and a right eye image are acquired respectively; the left eye image is divided to obtain a first image area and a second image area, and the right eye image is divided to obtain a third image area and a fourth image area, so that the left eye image and the right eye image are divided into four image areas. The second image area and the fourth image area, which contain close-range images, are scaled respectively to obtain a processed second image area and a processed fourth image area. Binocular point cloud data is then determined according to the first image area, the third image area, the processed second image area and the processed fourth image area. Because the close-range images contained in the left eye image and the right eye image are scaled, the disparity of those close-range images is reduced; when the disparity map is obtained by stereo-matching the processed second image area and the processed fourth image area and the corresponding point cloud data is obtained, the improved stereo matching improves the accuracy of the disparity map and hence the accuracy of the binocular point cloud data.
The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 5 is a schematic diagram of a binocular point cloud determination system provided by the present application. As shown in Fig. 5, the system may be applied to scenarios such as automatic driving high-precision map generation, automatic driving, and assisted driving. The system includes a left image acquisition device 11, a right image acquisition device 12 and an electronic device 13, where the relative positions of the left image acquisition device 11 and the right image acquisition device 12 are fixed.
Specifically, the left and right image acquisition devices may be cameras, video cameras, and the like. The left image acquisition device 11 is used to acquire the left eye image, and the right image acquisition device 12 is used to acquire the right eye image.
The electronic device 13, which may be a server, a terminal device or a processor chip, is configured to generate the binocular point cloud from the left eye image and the right eye image. Specifically, the electronic device 13 may correct the original left eye image and original right eye image acquired by the image acquisition devices to obtain rectified images. The electronic device 13 is further configured to divide the left eye image into a first image area and a second image area, and to divide the right eye image into a third image area and a fourth image area. Since the upper half of an image shot in a natural scene is a distant view and the lower half is a close-range view, the left eye image and the right eye image are each divided into an upper part and a lower part. The distant view in the upper half is then processed correspondingly to generate first point cloud data for the upper half of the image; for example, the upper half is cropped and the first point cloud data is generated from the cropped image, which reduces the number of pixels and improves the efficiency of generating the first point cloud data. The close-range view in the lower half is processed correspondingly to generate second point cloud data for the lower half of the image; for example, the lower half is scaled and the second point cloud data is generated from the scaled image, which reduces the search space in the epipolar direction, reduces the disparity, and improves the accuracy of the second point cloud data. The electronic device 13 is further configured to determine the binocular point cloud data after stitching the first point cloud data and the second point cloud data.
Fig. 6 is a schematic flowchart of a method for determining a binocular point cloud provided by the present application. The method is executed by a binocular point cloud determining apparatus, which may be an electronic device or a part of one. As shown in Fig. 6, the method includes:
s601, respectively acquiring a left eye image and a right eye image.
The binocular point cloud determining apparatus acquires the left eye image and the right eye image, which may be captured by an image acquisition component integrated on the apparatus, for example a camera integrated on the device, or may be obtained from a physical storage medium or a network storage medium.
S602, dividing the left eye image to obtain a first image area and a second image area, and dividing the right eye image to obtain a third image area and a fourth image area.
For example, the left eye image and the right eye image may each be divided into an upper part and a lower part. Specifically, the left eye image and the right eye image may be divided into upper and lower parts in the same ratio, so that the resulting regions of the two images correspond to each other.
When dividing either image, the division ratio can be set according to the actual situation; for example, the left eye image can be divided into an upper part and a lower part in a 1:1 ratio, or in a 2:1 ratio.
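As a rough sketch of this division step in Python with OpenCV and NumPy slicing (the file names and the default 1:1 ratio below are illustrative assumptions, not requirements of the application):

```python
import cv2

def split_image(img, ratio=0.5):
    """Split an image into upper and lower regions at the given height ratio.

    ratio is the fraction of the height assigned to the upper region,
    e.g. 0.5 for a 1:1 split, or 2/3 for a 2:1 split."""
    cut = int(img.shape[0] * ratio)
    return img[:cut], img[cut:]

# Hypothetical file names; any rectified binocular pair works here.
left = cv2.imread("left_rectified.png")
right = cv2.imread("right_rectified.png")
first_area, second_area = split_image(left)    # upper: distant view, lower: close-range
third_area, fourth_area = split_image(right)
```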
And S603, respectively carrying out zooming processing on the second image area and the fourth image area to obtain a processed second image area and a processed fourth image area, wherein the second image area and the fourth image area comprise a close-range image.
For example, the second image area is scaled to 1/2 of the original second image area and the fourth image area is scaled, at the same ratio, to 1/2 of the original fourth image area, resulting in the processed second image area and the processed fourth image area.
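Continuing the sketch above, the scaling step might look as follows, assuming OpenCV's resize; the factor k = 2 matches this example, while a later embodiment prefers k = 4:

```python
import cv2

k = 2  # example scale factor; the application allows 1 < k <= 10
processed_second = cv2.resize(second_area, None, fx=1 / k, fy=1 / k,
                              interpolation=cv2.INTER_AREA)
processed_fourth = cv2.resize(fourth_area, None, fx=1 / k, fy=1 / k,
                              interpolation=cv2.INTER_AREA)
```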
It can be understood that, in a natural scene, in general, a near view image is mostly located in the lower half of an image captured by a camera. Correspondingly, the second image area and the fourth image area can respectively correspond to the lower half area of the left eye image and the right eye image.
S604, determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area.
Specifically, stereo matching can be performed on the first image area and the third image area through a convolutional neural network to determine the disparity map corresponding to them, and a first group of point cloud data corresponding to the first image area and the third image area is then determined from that disparity map. Likewise, stereo matching is performed on the processed second image area and the processed fourth image area through a convolutional neural network to determine the corresponding second disparity map, and a second group of point cloud data is determined from it. Finally, the binocular point cloud data corresponding to the left eye image and the right eye image is determined from the two groups of point cloud data.
The point cloud is a massive point set which expresses target space distribution and target surface characteristics under the same spatial reference system, and after the spatial coordinates of each sampling point on the surface of the object are obtained, a point set is obtained and is called as the point cloud.
Through step S603, the second image area and the fourth image area are each scaled, so that the search space in the epipolar direction is reduced and the disparity between the second image area and the fourth image area is reduced. Therefore, when the second disparity map is obtained by stereo-matching the processed second image area and the processed fourth image area, the stereo matching effect is improved and the accuracy of the disparity map increases.
This is described below with reference to the drawings. Fig. 7 is a schematic view of binocular imaging provided by the present application. As shown in Fig. 7, 71 denotes the second image region and 72 denotes the fourth image region; point P denotes any point in physical space, P1 denotes the projection of point P in the second image region, and P2 denotes the projection of point P in the fourth image region; C1 denotes the optical center of the camera corresponding to the second image region, and C2 denotes the optical center of the camera corresponding to the fourth image region. The plane P C1 C2 formed by points P, C1 and C2 is called the epipolar plane, and the intersections of this plane with the second and fourth image regions are called epipolar lines: as shown, L1 denotes the epipolar line corresponding to point P in the second image region, and L2 denotes the epipolar line corresponding to point P in the fourth image region. P1 and P2 can be represented by pixels. As can be seen from Fig. 7, the projections of any point in physical space onto the second and fourth image regions always lie on a pair of epipolar lines.
Based on this, for any point P1 in the second image region, the corresponding point in the fourth image region can be found by searching only along the epipolar line L2 that pairs with the epipolar line L1 on which P1 lies, without searching the whole fourth image region. Based on the principle shown in Fig. 7, when the second and fourth image regions are scaled in step S603 to obtain the processed second and fourth image regions, Fig. 8 is obtained. Fig. 8 is another schematic view of binocular imaging provided by the present application. As shown in Fig. 8, if for example the second image region is scaled to 1/2 of its original size and the fourth image region is scaled to 1/2 of its original size, then as the processed second and fourth image regions shrink, the epipolar line L1 corresponding to point P1 in the processed second image region is shortened, and correspondingly the paired epipolar line L2 is shortened as well. Therefore, when determining the point in the processed fourth image region corresponding to point P1 in the processed second image region, the search space is reduced accordingly because the length of epipolar line L2 has been shortened.
Binocular stereo matching is the process of matching along epipolar lines using the left and right rectified images; it usually takes the left image as the reference and finds, for each pixel in the left image, the position of the corresponding pixel in the right image, and any pair of corresponding epipolar lines in the left and right rectified images lies on the same straight line. From the above analysis, the corresponding pixel in the right image for any pixel in the left image necessarily lies on the epipolar line pair on which that pixel sits. That is, if the pixel position of the projection of any physical-world point on the corrected left eye image is (x1, y1) and on the corrected right eye image is (x2, y2), the disparity of that point between the corrected left and right eye images is x1 - x2. As the epipolar line length is reduced, the search space shrinks and the disparity is correspondingly reduced.
According to the method, the left eye image and the right eye image are acquired respectively; the left eye image is divided to obtain a first image area and a second image area, and the right eye image is divided to obtain a third image area and a fourth image area, so that the left eye image and the right eye image are divided into four image areas. The second image area and the fourth image area, which contain close-range images, are scaled respectively to obtain a processed second image area and a processed fourth image area, and binocular point cloud data is determined according to the first image area, the third image area, the processed second image area and the processed fourth image area. Because the close-range images contained in the left eye image and the right eye image are scaled, their search space in the epipolar direction is reduced and their disparity is reduced; therefore, when the disparity map is obtained by stereo-matching the processed second image area and the processed fourth image area and the corresponding point cloud data is obtained, the improved stereo matching improves the accuracy of the disparity map and hence the accuracy of the binocular point cloud data.
The determination of the binocular point cloud data according to the first image area, the third image area, the processed second image area, and the processed fourth image area in S604 is further described below with an embodiment.
Fig. 9 is a schematic flowchart of a method for determining a binocular point cloud provided by the present application, and as shown in fig. 9, the method includes:
and S901, respectively acquiring a left eye image and a right eye image.
S901 is similar to S601, and specific description can refer to S601, which is not described herein.
S902, dividing the left eye image to obtain a first image area and a second image area, and dividing the right eye image to obtain a third image area and a fourth image area.
In general, as shown in Fig. 1, in an image captured in a natural scene the upper half is mostly a distant view and the lower half is mostly a close-range view, and the left eye image and the right eye image can be divided according to this rule. For example, the left eye image may be divided into an upper half area and a lower half area, where the first image area is the upper half area of the left eye image and the second image area is the lower half area of the left eye image; the right eye image is likewise divided into an upper half area and a lower half area, where the third image area is the upper half area of the right eye image and the fourth image area is the lower half area of the right eye image. In this way, the left eye image and the right eye image can each be divided into two areas containing different content.
And S903, respectively carrying out zooming processing on the second image area and the fourth image area to obtain a processed second image area and a processed fourth image area, wherein the second image area and the fourth image area comprise close-range images.
It should be noted that when the binocular point cloud determining apparatus executes S902 and S903, S902 may be executed first, and then S903 may be executed; or, first execute S903, then execute S902; or, S902 and S903 are executed simultaneously, and as to the execution sequence of the steps, the embodiment of the present application is not limited.
Illustratively, the processed second image area is 1/k of the second image area and the processed fourth image area is 1/k of the fourth image area, where k is any number greater than 1 and less than or equal to 10; preferably, k may be 4, i.e., the processed second image area may be 1/4 of the second image area and the processed fourth image area may be 1/4 of the fourth image area. In that case, the accuracy of the second disparity map generated from the processed second image area and the processed fourth image area is high, and accordingly, when the second point cloud data is determined from the processed second image area and the processed fourth image area, the high accuracy of the disparity map yields high stereo matching accuracy and high accuracy of the generated second point cloud data.
And S904, respectively carrying out cutting processing on the first image area and the third image area to obtain a processed first image area and a processed third image area.
Images acquired by a high-precision camera or similar device are often large, with a correspondingly large data volume; if the first point cloud data were obtained by processing these images directly, the large data volume would slow the device's operation. Cropping reduces the data volume and thus increases the operation speed of the device.
In practical applications, in an image captured by a vehicle-mounted camera the key information lies in the central area of the image. To ensure that the key information in the first image area and the third image area is not lost, in one possible implementation the cropping removes the edge portions of the first image area and the third image area, leaving the central portion of the first image area and the central portion of the third image area; the central portion of the first image area is determined as the processed first image area, and the central portion of the third image area is determined as the processed third image area.
For example, fig. 10 is a schematic diagram of image cropping provided in the present application, and as shown in fig. 10, an edge portion of a first image area is cropped to obtain a central portion of the first image area; the central portion of the first image area is determined as the processed first image area.
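A minimal sketch of such center cropping, continuing the example above (the 80% keep ratio is an illustrative assumption, not a value from the application):

```python
def center_crop(img, keep=0.8):
    """Keep the central `keep` fraction of the image in each dimension,
    trimming the edges equally on all sides."""
    h, w = img.shape[:2]
    dh, dw = int(h * (1 - keep) / 2), int(w * (1 - keep) / 2)
    return img[dh:h - dh, dw:w - dw]

processed_first = center_crop(first_area)
processed_third = center_crop(third_area)
```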
By the method, the first image area and the third image area are cut, so that on one hand, the image data volume can be reduced; on the other hand, as the key information is more in the central area of the image in the image shot by the vehicle-mounted camera, the method can be used for cutting, and the key information corresponding to the first image area and the third image area can be ensured not to be lost.
S905, determining first point cloud data according to the processed first image area and the processed third image area.
For example, when determining the first point cloud data, a first disparity map may be determined according to the processed first image region and the processed third image region, a first parameter of the binocular camera may be configured according to the processed first image region and the processed third image region, a left eye image and a right eye image are images captured by two cameras of the binocular camera, and then the first point cloud data may be determined according to the first disparity map and the first parameter.
Fig. 11 is a schematic diagram of a disparity map provided in the present application. A possible implementation of determining the first disparity map from the processed first image region and the processed third image region is to input them into a trained convolutional neural network (CNN) for binocular stereo matching, obtaining the first disparity map.
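The application describes a trained CNN for this step; as a stand-in, the sketch below uses OpenCV's classical semi-global block matcher, since any matcher that turns a rectified pair into a disparity map fills the same role here:

```python
import cv2

# Classical SGBM as a substitute for the trained CNN described in the text.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
gray_first = cv2.cvtColor(processed_first, cv2.COLOR_BGR2GRAY)
gray_third = cv2.cvtColor(processed_third, cv2.COLOR_BGR2GRAY)
# StereoSGBM returns fixed-point disparities scaled by 16.
first_disparity = matcher.compute(gray_first, gray_third).astype("float32") / 16.0
```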
The processed first image area and the processed third image area are obtained by cropping the first image area and the third image area; therefore, when the first parameter of the binocular camera is configured according to the processed first image area and the processed third image area, it can be configured according to the cropping ratio.
For example, the parameters of the binocular camera may be configured according to the following equations (1) and (2):
new_cx=k1*old_cx (1)
new_cy=k2*old_cy (2)
where k1 and k2 denote the cropping ratios. Specifically, k1 and k2 may be the same or different: when k1 and k2 are the same, the first image area and the third image area are cropped in the same proportion; when they differ, the two areas are cropped in different proportions. For example, k1 = k2 = 80% indicates that the processed first image area and the processed third image area are 80% of the first image area and the third image area, respectively; k1 = 50%, k2 = 80% means that the processed first image area is 50% of the first image area and the processed third image area is 80% of the third image area. old_cx denotes the x-axis parameter of the binocular camera corresponding to the first and third image areas, old_cy the y-axis parameter corresponding to the first and third image areas, new_cx the x-axis parameter corresponding to the processed first and third image areas, and new_cy the y-axis parameter corresponding to the processed first and third image areas.
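A direct transcription of equations (1) and (2); the numeric intrinsics below are illustrative assumptions, not values from the application:

```python
def crop_adjusted_principal_point(old_cx, old_cy, k1, k2):
    """Equations (1) and (2): new_cx = k1 * old_cx, new_cy = k2 * old_cy."""
    return k1 * old_cx, k2 * old_cy

# e.g. keeping 80% of the region in both dimensions (hypothetical values):
new_cx, new_cy = crop_adjusted_principal_point(old_cx=960.0, old_cy=300.0,
                                               k1=0.8, k2=0.8)
```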
And S906, determining second point cloud data according to the processed second image area and the processed fourth image area.
For example, when determining the second point cloud data, the second disparity map may be determined according to the processed second image area and the processed fourth image area; the second parameter of the binocular camera is configured according to the processed second image area and the processed fourth image area, where the left eye image and the right eye image are images captured by the two cameras of the binocular camera; the second point cloud data is then determined according to the second disparity map and the second parameter.
One possible implementation manner of determining the second disparity map according to the processed second image region and the processed fourth image region is to input the processed second image region and the processed fourth image region into a trained convolutional neural network model, and perform binocular stereo matching to obtain the second disparity map.
Configuring the second parameter of the binocular camera according to the processed second image area and the processed fourth image area may mean configuring it according to the ratio by which the second image area and the fourth image area were scaled in the pixel coordinate system, setting the focal length, the x-axis parameter and the y-axis parameter of the camera according to formulas (3), (4) and (5):
new_fx=fx*k2 (3)
new_cx=cx*k2 (4)
new_cy=cy*k2 (5)
where k2 denotes the scaling ratio; for example, k2 = 1/4 indicates that the processed second image area is 1/4 of the original second image area and the processed fourth image area is 1/4 of the original fourth image area. new_fx denotes the configured camera focal length, new_cx the configured x-axis camera parameter, and new_cy the configured y-axis camera parameter; fx, cx and cy denote the camera focal length, x-axis parameter and y-axis parameter before configuration.
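Equations (3) to (5) transcribe similarly; the numeric intrinsics below are again illustrative assumptions:

```python
def scale_adjusted_intrinsics(fx, cx, cy, k2):
    """Equations (3)-(5): scale the focal length and principal point by k2."""
    return fx * k2, cx * k2, cy * k2

# k2 = 1/4 matches the preferred scaling; fx, cx, cy are hypothetical values.
new_fx, new_cx, new_cy = scale_adjusted_intrinsics(fx=1400.0, cx=960.0,
                                                   cy=540.0, k2=0.25)
```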
And S907, determining binocular point cloud data according to the first point cloud data and the second point cloud data.
The first point cloud data and the second point cloud data are stitched to obtain the binocular point cloud data, where the first point cloud data corresponds to the upper half of the binocular point cloud and the second point cloud data corresponds to the lower half.
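A sketch of this reprojection-and-stitching step, assuming OpenCV's reprojectImageTo3D; Q_first and Q_second are hypothetical 4x4 reprojection matrices assembled from the first and second parameter sets configured above, and second_disparity is assumed to come from the matching step for the lower regions:

```python
import cv2
import numpy as np

def disparity_to_points(disparity, Q):
    """Reproject a disparity map to 3-D points using a 4x4 matrix Q built
    from the (re)configured camera parameters; drop unmatched pixels."""
    points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3
    return points[disparity > 0].reshape(-1, 3)

first_cloud = disparity_to_points(first_disparity, Q_first)    # upper half
second_cloud = disparity_to_points(second_disparity, Q_second)  # lower half
binocular_cloud = np.concatenate([first_cloud, second_cloud], axis=0)
```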
Fig. 12 is a schematic diagram of a binocular point cloud provided by the present application. As shown in fig. 12, the image is composed of a plurality of points, wherein the upper half point corresponds to the first point cloud data and the lower half point corresponds to the second point cloud data.
On the basis of the above embodiment, the first image area and the third image area are further cropped to obtain a processed first image area and a processed third image area, the first point cloud data is determined from the processed first image area and the processed third image area, the second point cloud data is determined from the processed second image area and the processed fourth image area, and the binocular point cloud data is determined from the first point cloud data and the second point cloud data, which reduces the amount of image data to be processed and speeds up the determination of the binocular point cloud.
Fig. 13 is a schematic structural diagram of a determination apparatus for a binocular point cloud provided in the present application, and as shown in fig. 13, the determination apparatus for the binocular point cloud includes:
the acquiring module 131 is configured to acquire the left eye image and the right eye image respectively.
The processing module 132 is configured to divide the left eye image to obtain a first image area and a second image area, and divide the right eye image to obtain a third image area and a fourth image area.
The processing module 132 is further configured to perform scaling processing on the second image area and the fourth image area respectively to obtain a processed second image area and a processed fourth image area, where the second image area and the fourth image area include a close-range image.
The determining module 133 is configured to determine binocular point cloud data according to the first image area, the third image area, the processed second image area, and the processed fourth image area.
Optionally, the determining module 133 is specifically configured to perform cropping processing on the first image area and the third image area respectively to obtain a processed first image area and a processed third image area; determining first point cloud data according to the processed first image area and the processed third image area; determining second point cloud data according to the processed second image area and the processed fourth image area; and determining binocular point cloud data according to the first point cloud data and the second point cloud data.
Optionally, the determining module 133 is specifically configured to determine the first disparity map according to the processed first image region and the processed third image region; configuring a first parameter of a binocular camera according to the processed first image area and the processed third image area, wherein a left eye image and a right eye image are images shot by two cameras of the binocular camera; and determining first point cloud data according to the first disparity map and the first parameters.
Optionally, the determining module 133 is specifically configured to determine the second disparity map according to the processed second image region and the processed fourth image region; configuring a second parameter of the binocular camera according to the processed second image area and the processed fourth image area, wherein the left eye image and the right eye image are images shot by two cameras of the binocular camera; and determining second point cloud data according to the second disparity map and the second parameter.
Optionally, the processed second image area is 1/k of the second image area, the processed fourth image area is 1/k of the fourth image area, k is any number greater than 1 and less than or equal to 10, and specifically, k may be 4.
Optionally, the determining module 133 is specifically configured to respectively crop edge portions of the first image area and the third image area to obtain a central portion of the first image area and a central portion of the third image area; a central portion of the first image area is determined as a processed first image area, and a central portion of the third image area is determined as a processed third image area.
Optionally, the first image area is an upper half area of the left eye image, and the second image area is a lower half area of the left eye image; the third image area is the upper half area of the right eye image, and the fourth image area is the lower half area of the right eye image.
The determination device of the binocular point cloud can execute the determination method of the binocular point cloud, and the content and the effect of the determination device of the binocular point cloud can refer to the embodiment part of the method, which is not described again.
Fig. 14 is a schematic structural diagram of an electronic device provided in the present application, and as shown in fig. 14, the electronic device of this embodiment includes: a processor 141, a memory 142; processor 141 is communicatively coupled to memory 142. The memory 142 is used to store computer programs. Processor 141 is used to call the computer program stored in memory 142 to implement the method in the above-described method embodiments.
Optionally, the electronic device further comprises: a transceiver 143 for enabling communication with other devices.
The electronic device may execute the above binocular point cloud determining method, and the content and effect thereof may refer to the method embodiment section, which is not described again.
The application also provides a computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and the computer-executable instructions are used for realizing the determination method of the binocular point cloud.
The content and effect of the method for determining the binocular point cloud can be referred to in the embodiment of the method, and details are not repeated here.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims. It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A binocular point cloud determination method is characterized by comprising the following steps:
respectively acquiring a left eye image and a right eye image;
dividing the left eye image to obtain a first image area and a second image area, and dividing the right eye image to obtain a third image area and a fourth image area;
scaling the second image area and the fourth image area respectively to obtain a processed second image area and a processed fourth image area, wherein the second image area and the fourth image area comprise a close-range image;
and determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area.
2. The method according to claim 1, wherein the first image region is an upper half region of the left eye image and the second image region is a lower half region of the left eye image;
the third image area is the upper half area of the right eye image, and the fourth image area is the lower half area of the right eye image.
3. The method according to claim 1 or 2, wherein determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area comprises:
respectively cutting the first image area and the third image area to obtain a processed first image area and a processed third image area;
determining first point cloud data according to the processed first image area and the processed third image area;
determining second point cloud data according to the processed second image area and the processed fourth image area;
and determining the binocular point cloud data according to the first point cloud data and the second point cloud data.
4. The method of claim 3, wherein determining first point cloud data from the processed first image region and the processed third image region comprises:
determining a first disparity map according to the processed first image area and the processed third image area;
configuring a first parameter of a binocular camera according to the processed first image area and the processed third image area, wherein the left eye image and the right eye image are images shot by two cameras of the binocular camera;
and determining the first point cloud data according to the first disparity map and the first parameter.
5. The method of claim 3, wherein determining second point cloud data from the processed second image region and the processed fourth image region comprises:
determining a second disparity map according to the processed second image area and the processed fourth image area;
configuring a second parameter of a binocular camera according to the processed second image area and the processed fourth image area, wherein the left eye image and the right eye image are images shot by two cameras of the binocular camera;
and determining the second point cloud data according to the second disparity map and the second parameter.
6. The method according to claim 1, wherein the processed second image region is 1/k of the second image region, the processed fourth image region is 1/k of the fourth image region, and k is any number greater than 1 and less than or equal to 10.
7. The method according to claim 3, wherein the performing the cropping processing on the first image area and the third image area respectively to obtain a processed first image area and a processed third image area comprises:
respectively cutting the edge parts of the first image area and the third image area to obtain the central part of the first image area and the central part of the third image area;
determining a central portion of the first image area as a processed first image area, and determining a central portion of the third image area as a processed third image area.
8. A binocular point cloud determination apparatus, comprising:
the acquisition module is used for respectively acquiring a left eye image and a right eye image;
the processing module is used for dividing the left eye image to obtain a first image area and a second image area, and dividing the right eye image to obtain a third image area and a fourth image area;
the processing module is further configured to perform scaling processing on the second image region and the fourth image region respectively to obtain a processed second image region and a processed fourth image region, where the second image region and the fourth image region include a close-range image;
and the determining module is used for determining binocular point cloud data according to the first image area, the third image area, the processed second image area and the processed fourth image area.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium having computer-executable instructions stored thereon, which when executed by a processor, are configured to implement the method of any one of claims 1 to 7.
Priority application: CN202010987332.9A, filed 2020-09-18 (priority date 2020-09-18) - Binocular point cloud determination method, device and equipment.
Publication: CN114283053A (zh), published 2022-04-05. Legal status: Pending.
Family ID: 80867407.


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination