WO2019019160A1 - Image information acquisition method, image processing device and computer storage medium - Google Patents
- Publication number
- WO2019019160A1 (PCT/CN2017/094932)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- image
- detected
- feature value
- depth
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
Definitions
- the invention belongs to the technical field of information analysis, and in particular relates to an image information acquisition method, an image processing device and a computer storage medium.
- Binocular stereo vision is an important branch of computer vision. It simulates the principle of human vision and uses a computer to passively perceive distance: two identical cameras image the same object from different positions to obtain a stereoscopic image pair of the object; according to the pixel matching relationship between the images, the offset between matching pixels is calculated by the principle of triangulation to obtain the three-dimensional information of the object and hence its depth information. From this, the actual distance between the object and the camera, the three-dimensional size of the object, and the actual distance between two points can be calculated.
- In the prior art, cameras are generally used to image the object from multiple angles, and the depth information of the occluded pixels is then restored from the multiple images captured by multiple cameras at multiple angles, thereby obtaining the depth of the occluded portion of the object and eliminating blind spots.
- That is, cameras are added to eliminate the blind spots.
- Embodiments of the present invention provide an image information acquisition method, an image processing device, and a computer storage medium for reducing unnecessary hardware expenditures in multi-view stereo vision detection.
- A first aspect of the embodiments of the present invention provides a method for acquiring image information, including: acquiring an actual image feature value of a to-be-detected point, where the to-be-detected point is a pixel in the first image or the second image and is not included in the matching area, the matching area is an area included in both the first image and the second image, the first image is captured by a first camera, the second image is captured by a second camera, and the first image and the second image are images obtained by capturing the same target at different angles; acquiring a feature value set corresponding to the first image and the second image, where the feature value set includes the actual image feature values of the pixels in the matching area; finding a target pixel in the matching area according to the actual image feature value of the to-be-detected point and the feature value set, where the difference rate between the actual image feature value of the target pixel and the actual image feature value of the to-be-detected point is smaller than a first preset difference rate value; and taking the depth of the target pixel as the depth of the to-be-detected point.
- a second aspect of the embodiments of the present invention provides an image processing device, where the image processing device includes:
- the memory is configured to store an operation instruction
- the processor is configured to: acquire an actual image feature value of the to-be-detected point, where the to-be-detected point is a pixel in the first image or the second image and is not included in the matching area, the matching area is an area included in both the first image and the second image, the first image is captured by a first camera, the second image is captured by a second camera, and the first image and the second image are images obtained by capturing the same target at different angles; acquire a feature value set corresponding to the first image and the second image, where the feature value set includes the actual image feature values of the pixels in the matching region; find a target pixel in the matching area according to the actual image feature value of the to-be-detected point and the feature value set, where the difference rate between the actual image feature value of the target pixel and the actual image feature value of the to-be-detected point is smaller than the first preset difference rate value; and take the depth of the target pixel as the depth of the to-be-detected point;
- the sensor is configured to acquire the first image and the second image.
- a third aspect of an embodiment of the present invention provides a computer storage medium comprising instructions which, when run on a computer, cause the computer to perform the methods described in the above aspects.
- In the embodiments of the present invention, the first camera and the second camera respectively capture the same target at different angles to obtain the first image and the second image. A pixel that exists only in the first image or the second image is called a point to be detected, and the area included in both the first image and the second image is referred to as the matching area. The actual image feature value of the point to be detected is obtained by detecting the image at the point to be detected, and the actual image feature values of the pixels in the matching region are obtained by detecting the first image and the second image, yielding a feature value set. The target pixel is found according to the actual image feature value of the point to be detected and the feature value set, such that the difference rate between the actual image feature value of the target pixel and that of the point to be detected is smaller than the first preset value, and the depth of the target pixel is taken as the depth of the point to be detected. Since no additional camera is required to recover the depth of the point to be detected, unnecessary hardware expenditure is reduced.
- FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present invention.
- FIG. 2A is a flowchart of an embodiment of an image information acquiring method according to an embodiment of the present invention;
- FIG. 2B is a schematic diagram of depth calculation according to an embodiment of the present invention;
- FIG. 3A is a flowchart of another embodiment of an image information acquiring method according to an embodiment of the present invention;
- FIG. 3B is a schematic diagram of a method for determining a polar line according to an embodiment of the present invention;
- FIG. 4 is a device diagram of an embodiment of an image processing apparatus according to an embodiment of the present invention.
- The embodiment of the invention is applicable to the application scenario shown in FIG. 1.
- Point a and point b on object A are projected onto sensor 1 through lens 1, and point a and point b are occluded for the line of sight of lens 2, so the effective depths of point a and point b cannot be calculated.
- In the prior art, additional cameras are used to shoot from multiple angles to eliminate blind spots; however, when an object is occluded in multiple directions, correspondingly many cameras are required, which increases hardware costs.
- In the embodiments of the present invention, the point to be detected in the occluded portion is instead restored by finding a target pixel in the unoccluded portion, that is, the matching region, which reduces unnecessary hardware expenditure.
- an embodiment of the image information acquiring method in the embodiment of the present invention includes:
- the first camera captures the first image
- the second camera captures the second image. Since the first image and the second image are shot of the same target from different angles, the two images contain both a common area and differing areas. For convenience of description, the common area of the first image and the second image is referred to as the matching area; in practical applications it may also be called the coincident area.
- The device may detect the image at the point to be detected by an image analysis method to obtain the actual image feature value of the point to be detected, where the point to be detected is a pixel in the first image or the second image that is not included in the matching area.
- the actual image feature value may be an actual ambiguity value or an actual sharpness value, etc., which is not limited herein.
- The actual image feature value s of a point (x, y) is computed from the gray values of the image, where f(x, y) represents the gray value of the image at the point (x, y), and Nx and Ny respectively represent the width and height of the image.
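- The specific form of s is not reproduced in this text; as a purely illustrative stand-in, the following is a minimal sketch of a gradient-energy sharpness measure over a local gray-value window around a point (the function name and the normalization by Nx·Ny are assumptions, not the patent's formula):

```python
import numpy as np

def sharpness_feature(gray: np.ndarray) -> float:
    """Gradient-energy sharpness of a local gray-value window f(x, y).

    gray: 2-D array of gray values; its shape gives the height Ny and width Nx.
    Returns a scalar feature value s (larger means sharper, i.e. less blur).
    """
    g = gray.astype(np.float64)
    ny, nx = g.shape
    dx = np.diff(g, axis=1)          # horizontal gray-level differences
    dy = np.diff(g, axis=0)          # vertical gray-level differences
    # Average squared gradient, normalised by the window size Nx * Ny.
    return (np.sum(dx ** 2) + np.sum(dy ** 2)) / (nx * ny)
```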
- The embodiment of the present invention can be applied to multi-view stereo vision technology, that is, systems that include at least two cameras; the embodiment of the present invention uses two cameras as an example for description.
- In step 202, the first image and the second image are detected by image analysis to obtain a feature value set corresponding to the first image and the second image, where the feature value set includes the actual image feature value of each pixel in the matching area, that is, the area included in both the first image and the second image.
- the device obtains the actual image feature value of the point to be detected in step 201, and obtains the feature value set according to step 202.
- the two processes do not have a sequence relationship.
- Step 201 may be performed first, step 202 may be performed first, or the two may be performed at the same time, which is not specifically limited here.
- After obtaining the feature value set and the actual image feature value of the point to be detected, the device finds the target pixel in the matching area according to the two, where the difference rate between the actual image feature value of the target pixel and that of the point to be detected is smaller than the first preset difference rate value.
- For example, if the actual image feature value of a pixel in the matching area is a and the actual image feature value of the point to be detected is b, the difference rate can be obtained by the formula (a − b)/a; the obtained difference rate is compared with the first preset difference rate value, and if it is smaller than the first preset difference rate value, the pixel of the matching area is determined to be the target pixel, as sketched below.
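- A minimal sketch of this comparison (taking the absolute value of the difference rate and the tie-breaking policy are assumptions beyond the text):

```python
import numpy as np

def find_target_pixel(feature_set, b, first_preset_rate):
    """Return the index of a matching-area pixel whose actual image feature
    value a satisfies |(a - b) / a| < first_preset_rate, or None if none does.

    feature_set:       1-D array of actual image feature values of the matching area.
    b:                 actual image feature value of the point to be detected.
    first_preset_rate: the first preset difference-rate value.
    """
    a = np.asarray(feature_set, dtype=np.float64)
    rates = np.abs((a - b) / a)                 # difference rate (a - b) / a
    candidates = np.flatnonzero(rates < first_preset_rate)
    if candidates.size == 0:
        return None
    # One reasonable policy: pick the candidate with the smallest difference rate.
    return int(candidates[np.argmin(rates[candidates])])
```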
- In step 204, the depth of the target pixel is used as the depth of the point to be detected. Specifically, the device may determine the depth of the target pixel according to a preset algorithm and take that depth as the depth of the point to be detected.
- depth refers to the distance from a point in the scene to the XY plane where the camera center is located.
- a depth map can be used to represent the depth information of each point in the scene, that is, each pixel in the depth map records the distance from a certain point in the scene to the XY plane where the camera center is located.
- A special hardware device can be used to actively acquire the depth information of each pixel in the image, for example by using an infrared pulse light source to transmit a signal to the scene and then detecting the reflected signal with an infrared sensor.
- Alternatively, the image pair or the plurality of viewpoint images can be stereo-matched to restore the depth information of the object, including: (1) performing stereo matching on the image pair to obtain the parallax of corresponding points; and (2) calculating the depth according to the relationship between the parallax and the depth of the corresponding points.
- In the standard triangulation relation Z = B·f/(x − x′), Z is used to indicate the depth of the pixel, B is used to indicate the distance between the optical center of the first camera and the optical center of the second camera (the baseline), f is used to indicate the focal length of the first camera or the second camera, x and x′ are the distances between the pixel and the projection of the camera center on the respective image planes, and their difference (x − x′) is used to represent the parallax of the pixel.
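- A direct transcription of this triangulation relation (a sketch; units must be consistent, e.g. f and the parallax in pixels and B in metres):

```python
def depth_from_disparity(B: float, f: float, x: float, x_prime: float) -> float:
    """Z = B * f / (x - x'): depth of a pixel from its parallax.

    B:          distance between the optical centres of the two cameras (baseline).
    f:          focal length of the first or second camera.
    x, x_prime: positions of the pixel relative to the projection of the
                camera centre in the first and second image planes.
    """
    disparity = x - x_prime
    if disparity == 0:
        raise ValueError("zero parallax: the point is effectively at infinity")
    return B * f / disparity
```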
- In this embodiment, the device finds the target pixel in the matching area by using the feature value set corresponding to the first image and the second image together with the actual image feature value of the point to be detected, and takes the depth of that pixel as the depth of the point to be detected. The depth of the point to be detected is thus obtained without adding any camera, which reduces unnecessary hardware expenditure.
- FIG. 3A is a flowchart of another embodiment of an image information acquiring method according to an embodiment of the present invention.
- steps 301 to 302 in FIG. 3A are similar to steps 201 to 202 in FIG. 2A, and details are not described herein again.
- In step 303, it is determined whether a target actual image feature value exists in the feature value set, that is, a value whose difference rate from the actual image feature value of the point to be detected is smaller than the first preset difference rate value. If yes, step 304 is performed; if no, step 306 is performed.
- In step 304, the device selects one pixel from the pixels corresponding to the target actual image feature value as the target pixel. When the device determines that a target actual image feature value exists in the feature value set, it can be understood that there may be one or more target actual image feature values, and correspondingly one or more pixels; the device may therefore randomly select one of the corresponding pixels as the target pixel. In practical applications there are various ways to select the target pixel; for example, the pixel whose actual image feature value differs least from that of the point to be detected may be selected. The selection method is not limited here.
- In step 305, the depth of the target pixel is taken as the depth of the point to be detected.
- the step 305 in FIG. 3A is similar to the step 204 in FIG. 2A, and details are not described herein again.
- The device selects one pixel from the matching region, which may be referred to as the first reference point in the embodiment of the present invention, and obtains a reference value of the first reference point, where the reference value includes at least a reference actual image feature value and a reference theoretical image feature value. The reference actual image feature value of the first reference point may be obtained by image detection technology, in a manner similar to obtaining the actual image feature value of the point to be detected in step 201 of the embodiment shown in FIG. 2A, and details are not described herein again.
- the reference theoretical image feature value of the first reference point can be obtained by a preset calculation formula.
- The device calculates the theoretical image feature value of the point to be detected according to the reference value of the first reference point and the detected actual image feature value of the point to be detected. The calculation may proceed as follows: let the reference actual image feature value of the first reference point be R1, the reference theoretical image feature value of the first reference point be M1, the actual image feature value of the point to be detected be R2, and the theoretical image feature value of the point to be detected be M2. In practical applications it can be considered that R1/M1 ≈ R2/M2, so the theoretical image feature value of the point to be detected can be estimated from this proportion, as sketched below.
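- A one-line sketch of this proportional estimate (function and variable names are illustrative, not from the patent):

```python
def theoretical_feature_of_point(r1: float, m1: float, r2: float) -> float:
    """Estimate M2 from the proportion R1 / M1 ≈ R2 / M2.

    r1: reference actual image feature value of the first reference point (R1).
    m1: reference theoretical image feature value of the first reference point (M1).
    r2: actual image feature value of the point to be detected (R2).
    """
    return r2 * m1 / r1                     # M2 = R2 * M1 / R1
```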
- the depth of the point to be detected is calculated according to a preset formula.
- the depth of the point to be detected may be calculated as follows:
- In the preset formula, n is the aperture value of the camera, c is the theoretical ambiguity (blur) value of the point to be detected, U is the depth of the point to be detected, F is the focal length of the camera lens, d is a parameter of the camera lens (d is fixed when the camera system is fixed), and m is the depth of field, where the depth of field can be understood as the distance between the nearest and farthest parts of the subject that appear acceptably sharp, measured from the camera lens or other imager front edge. Since n, c, F, d, and m are all known, the depth U of the point to be detected can be calculated.
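- The preset formula itself is not reproduced in this text. Purely as an illustration of how a blur value can be mapped to a depth, the sketch below inverts the standard thin-lens circle-of-confusion relation; treating the focus distance s_focus as an additional known camera parameter is an assumption, and this is not necessarily the patent's own formula:

```python
def depth_from_blur(n: float, c: float, F: float, s_focus: float):
    """Invert the thin-lens relation c = F**2 * |U - s_focus| / (n * U * (s_focus - F))
    to recover candidate depths U of the point to be detected.

    n:       aperture value (f-number) of the camera.
    c:       theoretical blur (ambiguity) value, as a blur-circle diameter.
    F:       focal length of the camera lens.
    s_focus: focus distance of the camera (assumed known).

    Returns (near, far): the candidate depths in front of and behind the focus plane.
    """
    k = n * c * (s_focus - F)
    near = F ** 2 * s_focus / (F ** 2 + k)
    far = F ** 2 * s_focus / (F ** 2 - k) if F ** 2 > k else float("inf")
    return near, far
```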
- The depth of the point to be detected is then verified to ensure its reliability.
- the area of the matching area in the first image is referred to as a first area
- the area of the matching area in the second image is referred to as a second area
- The device may use a contour extraction method in the prior art, the purpose of which is to obtain the peripheral contour features of the image.
- The steps of the contour extraction method may include: first, finding any point on the contour of the extracted edge image as the starting point; then, starting from this point, searching its neighborhood in one direction and continuously finding the next contour boundary point of the detected image; and finally obtaining the complete contour area and the closed edge of the contour area, as sketched below.
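- A minimal sketch of this step using OpenCV's contour finding as a readily available stand-in (the Otsu binarisation and the 8-bit single-channel input are assumptions; any operator producing a binary edge image would serve):

```python
import cv2
import numpy as np

def closed_edges(region_gray: np.ndarray):
    """Extract closed outer contours of an 8-bit gray region.

    Returns a list of contours, each an (N, 1, 2) array of boundary points
    tracing a closed edge of the region.
    """
    _, binary = cv2.threshold(region_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return list(contours)
```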
- When the first closed edge and the second closed edge match, the first world point and the second world point are found according to the first closed edge (or the second closed edge) and the polar plane.
- the depth of the target intersection is taken as the target value.
- Since the device finds the first closed edge in the first region and the second closed edge in the second region, and the number of first closed edges and second closed edges may each be one or more, it is necessary to determine a second closed edge that matches the first closed edge.
- Specifically, the first closed edge and the second closed edge may be matched by a preset matching algorithm; for example, the points on the first closed edge may be correlated with the points on each second closed edge, the correlation values of every point on the first closed edge with every point on each second closed edge are accumulated to obtain an accumulated value, and the second closed edge with the largest accumulated value is considered to match the first closed edge.
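- A sketch of this accumulation-based matching, assuming each edge point carries a small local descriptor (for example a flattened gray patch); the descriptor choice and the zero-mean normalisation are assumptions beyond the text:

```python
import numpy as np

def match_closed_edge(first_edge_desc, candidate_edge_descs):
    """Pick the candidate second closed edge with the largest accumulated
    correlation against the first closed edge.

    first_edge_desc:      (N, D) array, one descriptor per point of the first edge.
    candidate_edge_descs: list of (M_k, D) arrays, one per candidate second edge.
    Returns the index of the best-matching candidate.
    """
    a = np.asarray(first_edge_desc, dtype=np.float64)
    a = a - a.mean(axis=1, keepdims=True)        # zero-mean descriptors
    scores = []
    for desc in candidate_edge_descs:
        b = np.asarray(desc, dtype=np.float64)
        b = b - b.mean(axis=1, keepdims=True)
        # Accumulate the correlation of every point pair between the two edges.
        scores.append(float((a @ b.T).sum()))
    return int(np.argmax(scores))
```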
- The projection of a spatial point P on the imaging plane of the first camera is P1, and its projection on the imaging plane of the second camera is P2. C1 and C2 are the optical centers of the first camera and the second camera, respectively, that is, the origins of the two camera coordinate systems, and the line connecting C1 and C2 is the baseline.
- The intersection of the baseline with the imaging plane of the first camera is called e1, the pole of the first camera; the intersection of the baseline with the imaging plane of the second camera is called e2, the pole of the second camera. The two poles are the projections of the optical centers C1 and C2 onto the imaging plane of the opposite camera. The triangular plane formed by P, C1 and C2 is called the polar plane. The intersection lines l1 and l2 of the polar plane with the two camera imaging planes are called polar lines (epipolar lines); l1 is the polar line corresponding to the point P1, and l2 is the polar line corresponding to the point P2.
- A point M is taken from the first closed edge, and the polar plane formed by the point M, the optical center of the first camera, and the optical center of the second camera is determined.
- The intersection line of this polar plane with the imaging plane of the first camera is an epipolar line; in the two-dimensional image plane, this polar line has at least two intersection points with the first closed edge.
- The at least two intersection points are referred to as the first world point and the second world point. In the quadrilateral region formed by the four points consisting of the first world point, the second world point, the optical center of the first camera, and the optical center of the second camera, the point where the diagonals of the quadrilateral intersect is found and referred to as the target intersection point.
- The depth of the target intersection point is determined as the target value used to verify the depth of the point to be detected, as sketched below. It can be understood that, in practical applications, a point may instead be taken from the second closed edge to form the plane with the two optical centers, which is not limited herein.
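- A small geometric sketch of the target intersection point: the diagonals of the quadrilateral are taken here as the segments first-world-point–C2 and second-world-point–C1 (that pairing is an assumption), and their crossing is solved within the polar plane:

```python
import numpy as np

def target_intersection(w1, w2, c1, c2):
    """Intersection of the diagonals of the quadrilateral (w1, c1, w2, c2).

    w1, w2: first and second world points; c1, c2: the two optical centres.
    All four points lie in the polar plane, so the two diagonal lines meet.
    """
    w1, w2, c1, c2 = (np.asarray(p, dtype=np.float64) for p in (w1, w2, c1, c2))
    d1, d2 = c2 - w1, c1 - w2
    # Solve w1 + t*d1 = w2 + s*d2 for (t, s); exact when the points are coplanar.
    A = np.stack([d1, -d2], axis=1)
    t, _ = np.linalg.lstsq(A, w2 - w1, rcond=None)[0]
    return w1 + t * d1
```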
- It is then determined whether the depth of the point to be detected is greater than the target value. When the depth of the point to be detected is greater than the target value, step 314 is performed; when it is not greater than the target value, step 315 is performed.
- the device confirms that the depth of the point to be detected is reliable, that is, confirms that the depth of the point to be detected passes the verification.
- When the depth of the point to be detected is not greater than the target value, the device takes the average of the depths of the pixels adjacent to the point to be detected as the depth of the point to be detected. The adjacent pixels include multiple pixels, which may include pixels in the matching area, whose depths are known, and may also include pixels outside the matching area. It should be noted that the pixels outside the matching area must be points whose depths have already been estimated and verified; that is, if a pixel adjacent to the point to be detected is outside the matching area and its estimated depth has not passed verification, its depth is not taken into account in the calculation.
- The first image and the second image are combined into a depth image, that is, the image obtained by matching the matching regions of the two images. After the first region of the first image and the second region of the second image are matched, the first region and the second region are overlapped to obtain a superimposed image, that is, the depth image, which includes the matching area and the occlusion area; the occlusion area includes the area of the first image excluding the first region and the area of the second image excluding the second region.
- Since the gray level of each pixel in the gray image is determined by the depth of the pixel, when the depth distribution of a certain region in the occlusion area does not correspond to the gray value distribution of the corresponding image, the device performs smoothing pre-processing on that region.
- Such a region is referred to as a sub-occlusion region, that is, a region whose depth distribution does not correspond to its gray value distribution; for example, the depth distribution of the sub-occlusion region increases monotonically while its gray value distribution first decreases and then increases. The device therefore performs smoothing pre-processing on the sub-occlusion region: a target occlusion point is selected in the sub-occlusion region, where the depth difference between the depth of the target occlusion point and that of its adjacent points is greater than a preset difference.
- After the device determines the target occlusion point, the depths of the points adjacent to the target occlusion point are obtained and averaged to obtain the depth average of the adjacent points, and the device takes this depth average as the depth of the target occlusion point, as sketched below.
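- A minimal sketch of this smoothing pre-processing over a depth map (the 3×3 neighbourhood and the validity mask are assumptions beyond the text):

```python
import numpy as np

def smooth_sub_occlusion(depth, valid, preset_diff):
    """Replace target occlusion points by the average depth of their neighbours.

    depth:       2-D array of depths of the depth image.
    valid:       boolean 2-D array, True where a depth is known or already verified.
    preset_diff: preset depth-difference threshold identifying target occlusion points.
    """
    out = depth.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w):
            if not valid[y, x]:
                continue
            ys, xs = np.mgrid[max(0, y - 1):min(h, y + 2), max(0, x - 1):min(w, x + 2)]
            neigh = valid[ys, xs] & ~((ys == y) & (xs == x))
            if not neigh.any():
                continue
            avg = depth[ys, xs][neigh].mean()
            if abs(depth[y, x] - avg) > preset_diff:   # a target occlusion point
                out[y, x] = avg
    return out
```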
- Through steps 315 to 317, the device processes the first image and the second image to obtain the depth image; this process is an optional step and is not limited herein.
- the present invention further provides an apparatus.
- Referring to FIG. 4, it is a device diagram of an image processing device according to an embodiment of the present invention.
- the device 40 includes a memory 410, a processor 420, and a sensor 430.
- the memory 410 is configured to store an operation instruction
- the processor 420 is configured to perform the steps of the method in the foregoing embodiments by calling the operation instructions stored in the memory 410;
- the sensor 430 is configured to acquire the first image and the second image.
- Processor 420 may also be referred to as a central processing unit (Central Processing Unit, CPU).
- The memory 410 is configured to store operation instructions and data, so that the processor 420 can invoke the above operation instructions to implement corresponding operations, and may include a read-only memory and a random access memory. A portion of the memory 410 may also include a non-volatile random access memory (Non-Volatile Random Access Memory, NVRAM).
- The apparatus 40 also includes a bus system 440 that couples the various components of the device 40, including the memory 410, the processor 420, and the sensor 430. In addition to the data bus, the bus system 440 may also include a power bus, a control bus, a status signal bus, and the like; however, for clarity of description, the various buses are labeled as the bus system 440 in the figure.
- Processor 420 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 420 or by instructions in the form of software.
- The processor 420 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention may be implemented or carried out.
- the general purpose processor may be a microprocessor or the processor or any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
- the software modules can be located in a conventional computer storage medium of the art, such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
- the computer storage medium is located in memory 410, and processor 420 reads the information in memory 410 and, in conjunction with its hardware, performs the steps of the above method.
- The specific implementation in which the processor 420 finds the target pixel in the matching area according to the actual image feature value of the to-be-detected point and the feature value set may be: determining whether a target actual image feature value exists in the feature value set, where the difference rate between the target actual image feature value and the actual image feature value of the to-be-detected point is less than the first preset difference rate value; and if yes, selecting one pixel from the pixels corresponding to the target actual image feature value as the target pixel.
- processor 420 may also invoke an operation instruction in the memory 410 to perform the following steps:
- obtaining the reference theoretical image feature value of the first reference point according to a preset calculation formula, where the first reference point is a pixel in the matching region; calculating the theoretical image feature value of the point to be detected according to the reference value of the first reference point and the actual image feature value of the point to be detected, where the reference value of the first reference point includes the reference actual image feature value and the reference theoretical image feature value of the first reference point; and calculating the depth of the point to be detected according to the theoretical image feature value of the point to be detected.
- processor 420 may also invoke an operation instruction in the memory 410 to perform the following steps:
- the average value of the depths of the pixel points adjacent to the point to be detected is taken as the depth of the point to be detected.
- processor 420 may also invoke an operation instruction in the memory 410 to perform the following steps:
- finding the first closed edge and the second closed edge in the first region and the second region, respectively, by the contour extraction method, where the first region is the area of the matching region in the first image and the second region is the area of the matching region in the second image;
- when the first closed edge and the second closed edge match, finding the first world point and the second world point according to the first closed edge (or the second closed edge) and the polar plane;
- taking the depth of the target intersection as the target value.
- processor 420 may also invoke an operation instruction in the memory 410 to perform the following steps:
- combining the first image and the second image into a depth image, the depth image including a matching area and an occlusion area, the occlusion area including the area of the first image excluding the first region and the area of the second image excluding the second region;
- performing smoothing pre-processing on a sub-occlusion region, the sub-occlusion region being included in the occlusion area.
- the specific implementation of the pre-processing of the sub-occlusion area by the processor 420 in the above embodiment may be:
- selecting a target occlusion point in the sub-occlusion region, where the depth difference between the target occlusion point and its adjacent points is greater than the preset difference; and taking the depth average of the points adjacent to the target occlusion point as the depth of the target occlusion point.
- In summary, the first camera and the second camera respectively capture the same target at different angles to obtain the first image and the second image. A pixel that exists only in the first image or the second image is referred to as a point to be detected, and the area included in both the first image and the second image is referred to as the matching area. The actual image feature value of the point to be detected is obtained by detecting the image at the point to be detected, and the actual image feature values of the pixels in the matching region are obtained by detecting the first image and the second image. The target pixel is found such that the difference rate between its actual image feature value and that of the point to be detected is smaller than the first preset value, and the depth of the target pixel is taken as the depth of the point to be detected.
- No camera needs to be added to eliminate the occluded portion, that is, the portion that exists only in the first image or the second image: the point to be detected in the occluded portion is restored by finding the target pixel in the unoccluded portion, that is, the matching region, thereby reducing unnecessary hardware expenditure.
- the disclosed system, apparatus, and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- The division into units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
- The storage medium includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
- The foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Abstract
An embodiment of the present invention relates to an image information acquisition method, an image processing device and a computer storage medium, which are used during multi-view stereo vision detection so as to reduce unnecessary hardware expenditure. The method in the embodiment of the present invention comprises: acquiring an actual image feature value of a point to be detected, the point to be detected being a pixel in a first image or a second image and not being contained in a matching region, the matching region being a region included in both the first image and the second image; acquiring a feature value set corresponding to the first image and the second image, the feature value set comprising the actual image feature values of each pixel in the matching region; according to the actual image feature value of the point to be detected and the feature value set, searching the matching region for a target pixel, the difference rate between the actual image feature value of the target pixel and the actual image feature value of the point to be detected being smaller than a first preset difference rate value; and using the depth of the target pixel as the depth of the point to be detected.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2017/094932 (WO2019019160A1) | 2017-07-28 | 2017-07-28 | Image information acquisition method, image processing device and computer storage medium |
| CN201780092646.9A (CN110800020B) | 2017-07-28 | 2017-07-28 | Image information acquisition method, image processing device and computer storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2019019160A1 | 2019-01-31 |
Family
ID=65039334
Country Status (2)

| Country | Link |
|---|---|
| CN (1) | CN110800020B |
| WO (1) | WO2019019160A1 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111627061B (zh) * | 2020-06-03 | 2023-07-11 | 如你所视(北京)科技有限公司 | 位姿检测方法、装置以及电子设备、存储介质 |
CN113484852B (zh) * | 2021-07-07 | 2023-11-07 | 烟台艾睿光电科技有限公司 | 一种测距方法及系统 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102204262A (zh) * | 2008-10-28 | 2011-09-28 | 皇家飞利浦电子股份有限公司 | 图像特性的遮挡数据的生成 |
US20130135439A1 (en) * | 2011-11-29 | 2013-05-30 | Fujitsu Limited | Stereoscopic image generating device and stereoscopic image generating method |
CN103679739A (zh) * | 2013-12-26 | 2014-03-26 | 清华大学 | 基于遮挡区域检测的虚拟视图生成方法 |
CN104063702A (zh) * | 2014-07-16 | 2014-09-24 | 中南大学 | 一种基于遮挡修复和局部相似性匹配的三维步态识别方法 |
CN104574331A (zh) * | 2013-10-22 | 2015-04-29 | 中兴通讯股份有限公司 | 一种数据处理方法、装置、计算机存储介质及用户终端 |
CN105184780A (zh) * | 2015-08-26 | 2015-12-23 | 京东方科技集团股份有限公司 | 一种立体视觉深度的预测方法和系统 |
CN105279786A (zh) * | 2014-07-03 | 2016-01-27 | 顾海松 | 物体三维模型的获取方法和系统 |
CN106355570A (zh) * | 2016-10-21 | 2017-01-25 | 昆明理工大学 | 一种结合深度特征的双目立体视觉匹配方法 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117575886A (zh) * | 2024-01-15 | 2024-02-20 | 之江实验室 | 一种图像边缘检测器、检测方法、电子设备、介质 |
CN117575886B (zh) * | 2024-01-15 | 2024-04-05 | 之江实验室 | 一种图像边缘检测器、检测方法、电子设备、介质 |
Also Published As
Publication number | Publication date |
---|---|
CN110800020A (zh) | 2020-02-14 |
CN110800020B (zh) | 2021-07-09 |
Legal Events

| Code | Title | Description |
|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17919379; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17919379; Country of ref document: EP; Kind code of ref document: A1 |