WO2018214151A1 - Image processing method, terminal device and computer storage medium - Google Patents

Image processing method, terminal device and computer storage medium

Info

Publication number
WO2018214151A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
contour line
pixels
entropy
Prior art date
Application number
PCT/CN2017/086100
Other languages
English (en)
Chinese (zh)
Inventor
阳光
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to CN201780028671.0A priority Critical patent/CN109478326B/zh
Priority to PCT/CN2017/086100 priority patent/WO2018214151A1/fr
Publication of WO2018214151A1 publication Critical patent/WO2018214151A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present invention relates to the field of computer vision, and in particular to an image processing method, a terminal device, and a computer storage medium.
  • Recognizing corresponding points on two images is an important form of machine vision. Based on the principle of parallax, an imaging device acquires two images of a measured object from different positions, and three-dimensional geometric information of the object is obtained by calculating the positional deviation between corresponding points of the images. Binocular stereo vision of this kind has been widely used in robot navigation, precision industrial measurement, object recognition, virtual reality, scene reconstruction, and surveying.
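  • As a concrete illustration of the parallax principle (a standard stereo-geometry relation, not text from this publication): for a rectified binocular pair with focal length f, baseline B, and disparity d between a pair of corresponding points, the depth Z of the imaged point is

```latex
Z = \frac{f \cdot B}{d}
```

so finding corresponding points accurately and quickly is the core of recovering three-dimensional geometry.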
  • In a straightforward approach, every pixel in one image is matched against every pixel in the other image. This method is feasible, but the large amount of computation greatly reduces the image processing speed.
  • The technical problem to be solved by the present invention is to provide an image processing method, a terminal device, and a computer storage medium that can greatly improve the speed of image processing.
  • To solve the above technical problem, one technical solution adopted by the present invention is to provide an image processing method, including: acquiring a first image and a second image, wherein the first image and the second image have a correspondence; performing edge calculation on the first image and the second image respectively to determine a first contour line in the first image and a second contour line in the second image; determining pixels in the second contour line corresponding to pixels in the first contour line, so as to determine a correspondence between the first contour line and the second contour line; and using the correspondence to determine pixels in the second image corresponding to pixels in the area of the first image other than the first contour line.
  • Another technical solution adopted by the present invention is to provide a terminal device comprising a processor and a memory, the memory storing computer operation instructions and data, the processor executing the computer operation instructions to: acquire a first image and a second image from the memory, wherein the first image has a correspondence with the second image; perform edge calculation on the first image and the second image respectively to determine a first contour line in the first image and a second contour line in the second image; determine pixels in the second contour line corresponding to pixels in the first contour line, so as to determine a correspondence between the first contour line and the second contour line; and use the correspondence to determine pixels in the second image corresponding to pixels in the area of the first image other than the first contour line.
  • Yet another technical solution adopted by the present invention is to provide a computer storage medium that stores program data, the program data being executable to implement the corresponding method above.
  • The invention has the beneficial effect that, during image processing, the correspondence between the second contour line and the first contour line is determined first, and that correspondence is then used to determine the pixels in the second image corresponding to pixels in the area of the first image other than the first contour line.
  • By exploiting the correspondence between the first contour line and the second contour line, the technical solution of the present invention reduces the number of pixels that must be compared when computing the corresponding pixels of pixels outside the first contour line; in turn, the amount of calculation is reduced and the speed of image processing is improved.
  • FIG. 1 is a schematic flowchart of an embodiment of an image processing method according to the present invention;
  • FIG. 2 is a schematic flowchart of step S13 in an embodiment of the image processing method of the present invention;
  • FIG. 3 is a schematic diagram of an example of acquiring an epipolar line in an embodiment of the image processing method of the present invention;
  • FIG. 4 is a schematic flowchart of step S14 in an embodiment of the image processing method of the present invention;
  • FIG. 5 is a schematic diagram of an example of step S14 in an embodiment of the image processing method of the present invention;
  • FIG. 6 is a schematic flowchart of step S132 in an embodiment of the image processing method of the present invention;
  • FIG. 7 is a schematic flowchart of step S132 in another embodiment of the image processing method of the present invention;
  • FIG. 8 is a schematic diagram of an example of entropy calculation in an embodiment of the image processing method of the present invention;
  • FIG. 9 is a schematic flowchart of step S132 in a further embodiment of the image processing method of the present invention;
  • FIG. 10 is a schematic flowchart of step S143 in an embodiment of the image processing method of the present invention;
  • FIG. 11 is a schematic flowchart of step S143 in another embodiment of the image processing method of the present invention;
  • FIG. 12 is a schematic flowchart of step S143 in a further embodiment of the image processing method of the present invention;
  • FIG. 13 is a schematic frame diagram of an embodiment of the terminal device of the present invention;
  • FIG. 14 is a schematic frame diagram of another embodiment of the terminal device of the present invention.
  • Referring to FIG. 1, an embodiment of the image processing method of the present invention includes the following steps:
  • Step S11: acquiring a first image and a second image;
  • In one implementation, the first image and the second image are two images acquired respectively by the two cameras of a binocular camera. The two images exhibit both differences and a correspondence, and based on that correspondence the images can be subjected to operations such as matching.
  • When acquiring the two images, a binocular camera can capture them directly.
  • Alternatively, the two images need not come from a binocular camera: any two images with the same or a similar correspondence can be used. For example, a user may photograph with a terminal device having a camera function.
  • In that case, the scenes in the two images need to have overlapping parts.
  • Specifically, the user may photograph the same object or scene from different angles with a camera or similar device, thereby obtaining a first image and a second image having a correspondence on which image processing can be performed.
  • Step S12: performing edge calculation on the first image and the second image respectively to obtain a first contour line in the first image and a second contour line in the second image;
  • Edge calculation detects the edge contours of the objects in an image; the resulting contour lines can then be used to divide the image into different regions.
  • There may be one or more first contour lines and second contour lines, and the area bounded by a contour line may be closed or open; this is not specifically limited.
  • There are many families of edge detection methods, such as differential operator methods, template matching methods, wavelet detection methods, and neural network methods, and each family contains different specific techniques. Edge recognition based on differential operators is currently the most common approach, usually using first or second derivatives to detect edges; examples include the Roberts, Sobel, Prewitt, Canny, Laplacian, and LoG operators, which can also be simulated in MATLAB. In practice, different operators can be selected according to the actual situation; the specific algorithm is not limited by the present invention. A minimal sketch using one such detector follows.
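  • For illustration only (not part of the publication), the edge maps described above can be produced with an off-the-shelf detector; the file names and Canny thresholds below are placeholder assumptions:

```python
import cv2

# Load the two views as grayscale images (file names are placeholders).
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# Canny edge detection; the hysteresis thresholds (100, 200) are
# illustrative values that would be tuned to the actual images.
edges1 = cv2.Canny(img1, 100, 200)  # first contour line(s), in the first image
edges2 = cv2.Canny(img2, 100, 200)  # second contour line(s), in the second image
```

Any of the other operators named above could be substituted here; the publication leaves the choice open.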
  • Step S13: determining pixels in the second contour line corresponding to pixels in the first contour line, so as to determine a correspondence between the first contour line and the second contour line;
  • In this embodiment, the first contour line is a line whose width is a first preset value, and the second contour line is a line whose width is a second preset value.
  • A width equal to the first preset value means a width covering a certain number of pixels, for example 5 pixels or 10 pixels, without being limited thereto; the same applies to a width equal to the second preset value.
  • The second preset value may be the same as or different from the first preset value. In one application scenario, the second preset value is greater than the first preset value, so that when determining the correspondence between the first contour line and the second contour line, pixels from a relatively larger range of the second image can be used, improving the accuracy of the correspondence.
  • Any image correspondence recognition technique may be adopted when determining the pixels in the second contour line corresponding to pixels in the first contour line; this is not limited here.
  • Step S14: using the correspondence to determine pixels in the second image corresponding to pixels in the area of the first image other than the first contour line.
  • In this way, the correspondence between the second contour line and the first contour line is determined first, and that correspondence is then used to determine the pixels in the second image corresponding to pixels in the area of the first image outside the first contour line.
  • Exploiting the correspondence between the first contour line and the second contour line reduces the number of pixels that must be compared when computing the corresponding pixels for pixels outside the first contour line; in turn, the amount of calculation is reduced and the speed of image processing is improved.
  • step S13 includes: sub-step S131 and sub-step S132.
  • Sub-step S131: acquiring a first constraint line of a first pixel of the first contour line in the first image and the corresponding second constraint line in the second image, and obtaining the intersection of the second constraint line with the second contour line;
  • The constraint line may specifically be an epipolar line, where an epipolar line is the intersection of the epipolar plane of epipolar geometry with the plane of each of the two images.
  • As shown in FIG. 3, let a first pixel J be taken in the first image, and let the projection centers corresponding to the first image and the second image be C₁ and C₂ respectively. The intersection lines x and y of the plane JC₁C₂ with the planes α and β of the first image and the second image are then the epipolar lines corresponding to the first pixel J in the two images, that is, the first constraint line and the second constraint line.
  • The intersection of the second constraint line with the second contour line in the second image can then be obtained straightforwardly; a sketch of the epipolar-line computation follows.
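  • A minimal sketch of that computation (assuming, beyond what the publication states, that the fundamental matrix F between the two views has already been estimated, for example with cv2.findFundamentalMat):

```python
import cv2
import numpy as np

def epipolar_line_in_second_image(F, pixel_j):
    """Return (a, b, c) with ax + by + c = 0, the epipolar line in the
    second image corresponding to pixel_j = (x, y) in the first image."""
    pts = np.array([[pixel_j]], dtype=np.float32)  # shape (1, 1, 2)
    # whichImage=1 indicates pixel_j lies in the first image.
    lines = cv2.computeCorrespondEpilines(pts, 1, F)
    return lines.reshape(3)

# The intersection with the second contour line can then be found by
# rasterizing this line and keeping the pixels where edges2 is nonzero.
```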
  • Sub-step S132: obtaining, one by one, the mutual information between the first pixel and the pixels at the intersection, and determining the second pixel of the second contour line in the second image corresponding to the first pixel, so as to determine the correspondence between the first contour line and the second contour line.
  • Because the contour line has a width, the intersection of the second constraint line with the second contour line is in effect a small line segment of a certain length; obtaining mutual information between the first pixel and the pixels at the intersection therefore means obtaining mutual information between the first pixel and the pixels on that small segment.
  • the mutual information indicates the degree of correlation between a certain pixel in the first image and a certain pixel in the second image, wherein the greater the mutual information, the greater the degree of correlation between the two pixels.
  • Since the first pixel is located on the first contour line, it necessarily lies at an intersection of the first contour line and the first constraint line; by the correspondence, the second pixel corresponding to the first pixel in the second image likewise lies at the intersection of the second contour line and the second constraint line, that is, at some pixel of the small line segment. In this case, mutual information need only be obtained between the first pixel and the pixels at that intersection, and the pixel of the second image yielding the maximum mutual information value is taken as the second pixel corresponding to the first pixel. A sketch of such a search is given below.
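  • The publication does not prescribe an implementation, but a minimal sketch of such a maximum-mutual-information search could compare windowed intensity histograms; the window size, bin count, and function names below are assumptions for the sketch:

```python
import numpy as np

def mutual_information(img1, img2, p, q, win=7, bins=16):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) between the intensities of a
    win x win window around pixel p = (x, y) in img1 and q = (x, y)
    in img2. Window size and bin count are illustrative choices."""
    r = win // 2
    a = img1[p[1]-r:p[1]+r+1, p[0]-r:p[0]+r+1].ravel()
    b = img2[q[1]-r:q[1]+r+1, q[0]-r:q[0]+r+1].ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()

    def entropy(pv):
        pv = pv[pv > 0]
        return -np.sum(pv * np.log2(pv))

    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))
            - entropy(pxy.ravel()))

# The second pixel is the candidate on the intersection segment with the
# largest mutual information relative to the first pixel j:
# second_pixel = max(candidates,
#                    key=lambda q: mutual_information(img1, img2, j, q))
```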
  • step S14 includes: sub-step S141, sub-step S142, and sub-step S143.
  • Sub-step S141: acquiring a third constraint line of a third pixel in the area outside the first contour line in the first image and the corresponding fourth constraint line in the second image, to obtain all first intersecting line segments formed by the intersection of the first contour line with the third constraint line, and all second intersecting line segments formed by the intersection of the second contour line with the fourth constraint line;
  • Sub-step S142: acquiring, according to the correspondence, the second intersecting line segment corresponding to the first intersecting line segment on which the third pixel is located, and acquiring the endpoints of that second intersecting line segment;
  • The correspondence here is the correspondence between the first contour line in the first image and the second contour line in the second image. Since that correspondence is known, the positional relationship between the first intersecting line segments and the first contour line, and between the second intersecting line segments and the second contour line, can be determined, and hence the second intersecting line segment in the second image corresponding to the first intersecting line segment on which the third pixel is located in the first image.
  • Sub-step S143: obtaining, one by one, the mutual information between the third pixel and the pixels of the corresponding second intersecting line segment with its endpoints removed, and determining the fourth pixel in the second image corresponding to the third pixel, so as to acquire the pixels in the second image corresponding to pixels in the area of the first image outside the first contour line.
  • The pixels on the second contour line already correspond to pixels on the first contour line in the first image; therefore, when acquiring the pixels corresponding to pixels in the area outside the first contour line, the pixels on the second contour line need not be considered, that is, the pixels at the endpoints of the second intersecting line segments can be ignored.
  • As shown in FIG. 5, the constraint line 212 of the pixel M in the first image 21 and the corresponding second constraint line 222 in the second image 22 are obtained by a prior-art epipolar line calculation. Then, according to steps S131 and S132 above, the pixels in the second image 22 corresponding to all pixels on the first contour line 211 of the first image 21 are found one by one, yielding the contour line 223 in the second image 22 that corresponds to the first contour line 211 in the first image 21.
  • The first intersecting line segments include the segments K₁G₁, G₁H₁, H₁I₁, I₁E₁, E₁F₁ and F₁L₁; the second intersecting line segments include the segments K₂G₂, G₂H₂, H₂I₂, I₂E₂, E₂F₂ and F₂L₂.
  • The pixel in the second image 22 corresponding to the pixel M is accordingly confined to the second intersecting line segment E₂F₂, lying between the contour lines 223a and 223b, which corresponds to the first intersecting line segment E₁F₁ on which M lies in the first image 21.
  • Therefore, when acquiring the pixel of the second image 22 corresponding to the pixel M, it suffices to obtain mutual information between M and the pixels of E₂F₂ with its endpoints removed; it is not necessary to obtain mutual information between M and all pixels of the corresponding second intersecting line segment E₂F₂.
  • In this way, the search for the pixel in the second image corresponding to the third pixel can be further limited to the second intersecting line segment, with endpoints removed, that corresponds to the first intersecting line segment on which the third pixel is located, thereby greatly reducing the amount of calculation and significantly improving the speed of image processing.
  • step S132 includes: sub-step S1321, sub-step S1322:
  • Sub-step S1321: using all pixels in the first image and all pixels in the second image, calculating a first edge entropy of the first pixel, a first edge entropy of the fifth pixel of the intersection, and a first joint entropy of the first pixel and the fifth pixel;
  • Sub-step S1322: calculating the mutual information between the first pixel and the fifth pixel from the first edge entropy of the first pixel, the first edge entropy of the fifth pixel, and the first joint entropy of the first pixel and the fifth pixel, and determining the second pixel of the second contour line in the second image corresponding to the first pixel, so as to obtain the correspondence between the first contour line and the second contour line.
  • Here the mutual information is computed as MI(x, y) = H(x) + H(y) - H(x, y), where H(x) and H(y) denote the edge (marginal) entropies of pixels x and y respectively, and H(x, y) denotes the joint entropy of pixel x and pixel y. The entropy referred to here is information entropy: the greater the uncertainty of a variable, the greater its entropy. These quantities are written out below.
  • the pixel having the largest mutual information value is the second pixel corresponding to the first pixel.
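  • Written out explicitly (the publication only names these quantities; the formulas below are the standard information-theoretic definitions, with p denoting the relevant pixel-value distributions):

```latex
H(x)    = -\sum_{a} p_x(a) \log p_x(a) \qquad
H(y)    = -\sum_{b} p_y(b) \log p_y(b)
H(x,y)  = -\sum_{a,b} p_{xy}(a,b) \log p_{xy}(a,b)
MI(x,y) = H(x) + H(y) - H(x,y)
```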
  • step S132 includes: sub-step S1323, sub-step S1324;
  • Sub-step S1323: using the first contour line pixels in the first image and the second contour line pixels in the second image, calculating a second edge entropy of the first pixel, a second edge entropy of the sixth pixel of the intersection, and a second joint entropy of the first pixel and the sixth pixel;
  • Sub-step S1324: calculating the mutual information between the first pixel and the sixth pixel from the second edge entropy of the first pixel, the second edge entropy of the sixth pixel, and the second joint entropy of the first pixel and the sixth pixel, and determining the second pixel of the second contour line in the second image corresponding to the first pixel, so as to obtain the correspondence between the first contour line and the second contour line.
  • The edge contour of an object corresponds to the most significant local intensity changes in the image, that is, the regions where pixel values change abruptly. In the remaining regions, outside the contour lines, pixel values change very little and are relatively uniform, to the point that the variation can in some cases be neglected.
  • In this embodiment, the area enclosed by a contour line is therefore regarded as blank, that is, the pixels within it are ignored in the entropy calculation.
  • For example, the regions i′, j′, k′ enclosed by the first contour line in the first image 21 and by the second contour line in the second image 22 may be treated in this way.
  • step S132 includes: sub-step S1325, sub-step S1326, sub-step S1327;
  • Sub-step S1325: obtaining the respective mean values of the pixels of each region enclosed by the first contour line in the first image and by the second contour line in the second image, and setting all pixels of each such region to a single pixel having the corresponding mean value;
  • Sub-step S1326: using the first contour line pixels in the first image together with the single mean-valued pixels of the regions enclosed by the first contour lines, and the second contour line pixels in the second image together with the single mean-valued pixels of the regions enclosed by the second contour lines, calculating a third edge entropy of the first pixel, a third edge entropy of the seventh pixel of the intersection, and a third joint entropy of the first pixel and the seventh pixel;
  • Sub-step S1327: calculating the mutual information between the first pixel and the seventh pixel from the third edge entropy of the first pixel, the third edge entropy of the seventh pixel, and the third joint entropy of the first pixel and the seventh pixel, and determining the second pixel in the second image corresponding to the first pixel, so as to obtain the correspondence between the first contour line and the second contour line.
  • The mean value of a region is obtained by averaging the pixel values of all pixels in the region enclosed by the contour line; the averages so obtained are the respective mean values of the enclosed regions.
  • A single pixel means that a region enclosed by a first or second contour line is regarded as one pixel, whose pixel value is the mean value of that enclosed region.
  • As noted above, the pixel values in the regions other than the contour lines change very little and are relatively uniform, so such a region can, to some extent, be treated as a set of pixels sharing the same value.
  • By treating each region enclosed by the contour lines as a single pixel, the present embodiment needs only the contour line pixels and these single pixels when performing the entropy calculations involved in obtaining mutual information, so the amount of computation is reduced.
  • Moreover, because the value of each single pixel is the average of the pixel values of all pixels in the enclosed region, the mutual information calculation does not incur much error. That is, the present embodiment preserves the accuracy of the mutual information calculation while greatly reducing the amount of calculation and improving the speed of image processing. A sketch of this region-mean simplification follows.
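  • A minimal sketch of this simplification, assuming (beyond the text) that connected components of the non-edge mask delimit the regions:

```python
import cv2
import numpy as np

def collapse_regions_to_means(img, edges):
    """Replace every region bounded by the edge map with a single
    representative value: the mean intensity of that region."""
    # Label the connected regions separated by the contour lines;
    # label 0 is the edge set itself.
    non_edge = (edges == 0).astype(np.uint8)
    n_labels, labels = cv2.connectedComponents(non_edge)

    means = np.zeros(n_labels, dtype=np.float64)
    for lbl in range(1, n_labels):
        means[lbl] = img[labels == lbl].mean()

    simplified = img.astype(np.float64).copy()
    mask = labels > 0
    simplified[mask] = means[labels[mask]]
    return simplified

# Entropies and mutual information are then computed over the contour
# pixels plus one value per region, instead of over every pixel.
```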
  • step S143 includes: sub-step S1431, sub-step S1432:
  • Sub-step S1431: using all pixels in the first image and all pixels in the second image, calculating a fourth edge entropy of the third pixel, a fourth edge entropy of the eighth pixel of the corresponding second intersecting line segment with its endpoints removed, and a fourth joint entropy of the third pixel and the eighth pixel;
  • Sub-step S1432: calculating the mutual information between the third pixel and the eighth pixel from the fourth edge entropy of the third pixel, the fourth edge entropy of the eighth pixel, and the fourth joint entropy of the third pixel and the eighth pixel, and determining the fourth pixel in the second image corresponding to the third pixel, so as to acquire the pixels in the second image corresponding to pixels in the area of the first image outside the first contour line.
  • The method for calculating mutual information, the relationship between edge entropy and joint entropy, and the beneficial effects of the present embodiment are similar to those of the embodiments above; for details, refer to those embodiments, which are not repeated here.
  • step S143 includes: sub-step S1433, sub-step S1434;
  • Sub-step S1433: using the first contour line pixels in the first image and the second contour line pixels in the second image, calculating a fifth edge entropy of the third pixel, a fifth edge entropy of the ninth pixel of the corresponding second intersecting line segment with its endpoints removed, and a fifth joint entropy of the third pixel and the ninth pixel;
  • Sub-step S1434: calculating the mutual information between the third pixel and the ninth pixel from the fifth edge entropy of the third pixel, the fifth edge entropy of the ninth pixel, and the fifth joint entropy of the third pixel and the ninth pixel, and determining the fourth pixel in the second image corresponding to the third pixel, so as to acquire the pixels in the second image corresponding to pixels in the area of the first image outside the first contour line.
  • step S143 includes: sub-step S1435, sub-step S1436, and sub-step S1437;
  • Sub-step S1435: obtaining the respective mean values of the pixels of each region enclosed by the first contour line in the first image and by the second contour line in the second image, and setting all pixels of each such region to a single pixel having the corresponding mean value;
  • Sub-step S1436: using the first contour line pixels in the first image together with the single mean-valued pixels of the regions enclosed by the first contour lines, and the second contour line pixels in the second image together with the single mean-valued pixels of the regions enclosed by the second contour lines, calculating a sixth edge entropy of the third pixel, a sixth edge entropy of the tenth pixel of the corresponding second intersecting line segment with its endpoints removed, and a sixth joint entropy of the third pixel and the tenth pixel;
  • Sub-step S1437: calculating the mutual information between the third pixel and the tenth pixel from the sixth edge entropy of the third pixel, the sixth edge entropy of the tenth pixel, and the sixth joint entropy of the third pixel and the tenth pixel, and determining the fourth pixel in the second image corresponding to the third pixel, so as to acquire the pixels in the second image corresponding to pixels in the area of the first image outside the first contour line.
  • An embodiment of the terminal device of the present invention includes a processor 31 and a memory 32, the memory 32 being coupled to the processor 31.
  • The memory 32 stores computer operation instructions and data.
  • The processor 31 executes the computer operation instructions to: acquire the first image and the second image from the memory 32, wherein the first image and the second image have a correspondence; perform edge calculation on the first image and the second image respectively to determine a first contour line in the first image and a second contour line in the second image; determine the pixels in the second contour line corresponding to pixels in the first contour line, so as to determine the correspondence between the first contour line and the second contour line; and use the correspondence to determine the pixels in the second image corresponding to pixels in the area of the first image outside the first contour line.
  • Optionally, the processor 31, when executing the computer operation instructions, is further configured to: acquire a first constraint line of the first pixel of the first contour line in the first image and the corresponding second constraint line in the second image, and obtain the intersection of the second constraint line with the second contour line; and obtain, one by one, mutual information between the first pixel and the pixels at the intersection, and determine the second pixel of the second contour line in the second image corresponding to the first pixel, so as to determine the correspondence between the first contour line and the second contour line.
  • In this way the processor 31 determines the pixels in the second contour line corresponding to the first contour line pixels, and thereby the correspondence between the first contour line and the second contour line.
  • When using the correspondence, the processor 31 is further configured to: acquire a third constraint line of the third pixel in the area outside the first contour line in the first image and the corresponding fourth constraint line in the second image, to obtain all first intersecting line segments formed by the intersection of the first contour line with the third constraint line and all second intersecting line segments formed by the intersection of the second contour line with the fourth constraint line; acquire, according to the correspondence, the second intersecting line segment corresponding to the first intersecting line segment on which the third pixel is located, and acquire the endpoints of that second intersecting line segment; and obtain, one by one, mutual information between the third pixel and the pixels of the corresponding second intersecting line segment with its endpoints removed, and determine the fourth pixel in the second image corresponding to the third pixel, so as to acquire the pixels in the second image corresponding to pixels in the area of the first image outside the first contour line.
  • Optionally, when obtaining mutual information between the first pixel and the pixels at the intersection, the processor 31 is configured to: using all pixels in the first image and all pixels in the second image, calculate a first edge entropy of the first pixel, a first edge entropy of the fifth pixel of the intersection, and a first joint entropy of the first pixel and the fifth pixel; and calculate the mutual information between the first pixel and the fifth pixel from the first edge entropy of the first pixel, the first edge entropy of the fifth pixel, and the first joint entropy of the first pixel and the fifth pixel.
  • Optionally, when obtaining mutual information between the first pixel and the pixels at the intersection, the processor 31 is configured to: using the first contour line pixels in the first image and the second contour line pixels in the second image, calculate a second edge entropy of the first pixel, a second edge entropy of the sixth pixel of the intersection, and a second joint entropy of the first pixel and the sixth pixel; and calculate the mutual information between the first pixel and the sixth pixel from the second edge entropy of the first pixel, the second edge entropy of the sixth pixel, and the second joint entropy of the first pixel and the sixth pixel.
  • Optionally, when obtaining mutual information between the first pixel and the pixels at the intersection, the processor 31 is configured to: obtain the respective mean values of the regions enclosed by the first contour line in the first image and by the second contour line in the second image, set all pixels of each such region to a single pixel having the corresponding mean value, and compute the edge entropies and the joint entropy from the contour line pixels and these single pixels, as described above.
  • Optionally, when obtaining mutual information between the third pixel and the pixels of the corresponding second intersecting line segment with its endpoints removed, the processor 31 is configured to: using all pixels in the first image and all pixels in the second image, calculate a fourth edge entropy of the third pixel, a fourth edge entropy of the eighth pixel of the corresponding second intersecting line segment with its endpoints removed, and a fourth joint entropy of the third pixel and the eighth pixel; and calculate the mutual information between the third pixel and the eighth pixel from the fourth edge entropy of the third pixel, the fourth edge entropy of the eighth pixel, and the fourth joint entropy of the third pixel and the eighth pixel.
  • Optionally, when obtaining mutual information between the third pixel and the pixels of the corresponding second intersecting line segment with its endpoints removed, the processor 31 is configured to: using the first contour line pixels in the first image and the second contour line pixels in the second image, calculate a fifth edge entropy of the third pixel, a fifth edge entropy of the ninth pixel of the corresponding second intersecting line segment with its endpoints removed, and a fifth joint entropy of the third pixel and the ninth pixel; and calculate the mutual information between the third pixel and the ninth pixel from the fifth edge entropy of the third pixel, the fifth edge entropy of the ninth pixel, and the fifth joint entropy of the third pixel and the ninth pixel.
  • Optionally, when obtaining mutual information between the third pixel and the pixels of the corresponding second intersecting line segment with its endpoints removed, the processor 31 is configured to: obtain the respective mean values of the regions enclosed by the first contour line in the first image and by the second contour line in the second image, and set all pixels of each such region to a single pixel having the corresponding mean value; using the first contour line pixels in the first image together with the single mean-valued pixels of the regions enclosed by the first contour lines, and the second contour line pixels in the second image together with the single mean-valued pixels of the regions enclosed by the second contour lines, calculate a sixth edge entropy of the third pixel, a sixth edge entropy of the tenth pixel of the corresponding second intersecting line segment with its endpoints removed, and a sixth joint entropy of the third pixel and the tenth pixel; and calculate the mutual information between the third pixel and the tenth pixel from the sixth edge entropy of the third pixel, the sixth edge entropy of the tenth pixel, and the sixth joint entropy of the third pixel and the tenth pixel.
  • Optionally, the terminal device further includes a first camera 33 and a second camera 34, respectively configured to acquire the first image and the second image and to store the first image and the second image in the memory 32.
  • The present invention also provides a computer storage medium storing program data that can be executed to implement the methods of the image processing method embodiments of the present invention.
  • The computer storage medium may be at least one of a floppy disk, a hard disk drive, a CD-ROM, a magneto-optical disk, random access memory (RAM), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image processing method, a terminal device, and a computer storage medium. The method includes: acquiring a first image and a second image (S11), the first image having a correspondence with the second image; performing edge calculation on the first image and the second image respectively, so as to obtain a first contour line in the first image and a second contour line in the second image (S12); determining pixels in the second contour line corresponding to pixels in the first contour line, so as to determine a correspondence between the first contour line and the second contour line (S13); and determining, using the correspondence, pixels in the second image corresponding to pixels in an area outside the first contour line in the first image (S14). By implementing the method, the speed of image processing can be considerably improved.
PCT/CN2017/086100 2017-05-26 2017-05-26 Image processing method, terminal device and computer storage medium WO2018214151A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780028671.0A CN109478326B (zh) 2017-05-26 2017-05-26 Image processing method, terminal device and computer storage medium
PCT/CN2017/086100 WO2018214151A1 (fr) 2017-05-26 2017-05-26 Image processing method, terminal device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/086100 WO2018214151A1 (fr) 2017-05-26 2017-05-26 Image processing method, terminal device and computer storage medium

Publications (1)

Publication Number Publication Date
WO2018214151A1 (fr)

Family

ID=64395197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/086100 WO2018214151A1 (fr) 2017-05-26 2017-05-26 Image processing method, terminal device and computer storage medium

Country Status (2)

Country Link
CN (1) CN109478326B (fr)
WO (1) WO2018214151A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100166319A1 (en) * 2008-12-26 2010-07-01 Fujifilm Corporation Image processing apparatus, image processing method, and image processing program
CN103761708A (zh) 2013-12-30 2014-04-30 浙江大学 Image inpainting method based on contour matching
CN104021568A (zh) 2014-06-25 2014-09-03 山东大学 Automatic registration method for visible-light and infrared images based on contour polygon fitting
CN105957009A (zh) 2016-05-06 2016-09-21 安徽伟合电子科技有限公司 Image stitching method based on interpolation transition

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040101184A1 (en) * 2002-11-26 2004-05-27 Radhika Sivaramakrishna Automatic contouring of tissues in CT images
JP4490987B2 (ja) 2007-04-26 2010-06-30 株式会社東芝 Resolution enhancement apparatus and method
CN101312539B (zh) 2008-07-03 2010-11-10 浙江大学 Hierarchical image depth extraction method for three-dimensional television
JP6015267B2 (ja) 2012-09-13 2016-10-26 オムロン株式会社 Image processing apparatus, image processing program, computer-readable recording medium recording the same, and image processing method

Also Published As

Publication number Publication date
CN109478326B (zh) 2021-11-05
CN109478326A (zh) 2019-03-15

Similar Documents

Publication Publication Date Title
CN106558080B (zh) Online calibration method for extrinsic parameters of a monocular camera
JP7173772B2 (ja) Image processing method and apparatus using depth value estimation
Gomez-Ojeda et al. PL-SVO: Semi-direct monocular visual odometry by combining points and line segments
KR102647351B1 (ko) Modeling method and modeling apparatus using a three-dimensional point cloud
US10559090B2 (en) Method and apparatus for calculating dual-camera relative position, and device
CN107077744B (zh) Method and system for three-dimensional model generation using edges
US8447099B2 (en) Forming 3D models using two images
CN107507277B (zh) Three-dimensional point cloud reconstruction method and device, server, and readable storage medium
WO2015135323A1 (fr) Camera tracking method and device
CN106570913B (zh) Feature-based fast initialization method for monocular SLAM
US20120177284A1 (en) Forming 3d models using multiple images
JP2018523881A (ja) Method and system for aligning data
CN110310331B (zh) Pose estimation method combining straight-line features and point cloud features
CN107274483A (zh) Method for constructing a three-dimensional model of an object
JPWO2016208404A1 (ja) Information processing apparatus and method, and program
JP6557640B2 (ja) Camera calibration apparatus, camera calibration method, and camera calibration program
US11475629B2 (en) Method for 3D reconstruction of an object
WO2019019160A1 (fr) Image information acquisition method, image processing device, and computer storage medium
CN105931231A (zh) Stereo matching method based on joint energy minimization with a fully connected random field
WO2018214151A1 (fr) Image processing method, terminal device and computer storage medium
CN111210476B (zh) Method and device for simultaneous localization and mapping
CN109341530B (zh) Method and system for locating object points in binocular stereo vision
JP2017085297A (ja) Image processing apparatus, image processing method, and program
Du et al. Optimization of stereo vision depth estimation using edge-based disparity map
Sun et al. NCC feature matching optimized algorithm based on constraint fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910996

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910996

Country of ref document: EP

Kind code of ref document: A1