WO2020051747A1 - Method for Acquiring an Object Contour, Image Processing Device, and Computer Storage Medium - Google Patents

Method for Acquiring an Object Contour, Image Processing Device, and Computer Storage Medium

Info

Publication number
WO2020051747A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
viewpoint
light source
depth information
shadow
Prior art date
Application number
PCT/CN2018/104893
Other languages
English (en)
French (fr)
Inventor
阳光
Original Assignee
深圳配天智能技术研究院有限公司
Priority date
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to PCT/CN2018/104893 priority Critical patent/WO2020051747A1/zh
Publication of WO2020051747A1 publication Critical patent/WO2020051747A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present invention relates to the technical field of object contour acquisition, and in particular, to a method for acquiring an object contour, an image processing device, and a computer storage medium.
  • Obtaining the surface contour information of an object plays a vital role in areas such as automated assembly or product surface treatment.
  • using 3D sensors to obtain images of objects is currently the easiest approach to implement.
  • the accuracy of a 3D sensor is limited by its matching accuracy and pixel count, and also by occlusion caused by an insufficient viewing angle.
  • the general method is to sweep around the object to obtain its surface contour. This method takes a long time and cannot quickly obtain the surface contour; because the sensor needs to move, positioning is difficult, which is unfavorable for engineering applications; the system cost is too high, since a corresponding displacement table is needed and the demands on the mechanical setup are high; in addition, the accuracy of this approach is still limited by the accuracy of the sensor.
  • the technical problem to be solved by the embodiments of the present invention is to provide a method for obtaining an outline of an object, an image processing device, and a computer storage medium, which can quickly obtain a highly accurate outline of an object.
  • An embodiment of the present invention provides a method for acquiring an outline of an object.
  • the method includes:
  • acquiring multiple object images collected of an object under different lighting angles; and
  • calculating the depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the multiple object images.
  • an embodiment of the present invention further provides a device for acquiring an outline of an object, including:
  • a light source array arranged on the periphery of the object, the light source array including a plurality of light sources for lighting the object at different angles;
  • an image acquisition component, which includes at least one camera, each of the cameras being configured to acquire, at the same viewpoint, multiple object images collected of the object under different lighting angles;
  • a controller respectively coupled to the light source array and the image acquisition component, for controlling the light source array to illuminate the object at different angles, and controlling the image acquisition component to acquire multiple object images collected of the object under different lighting angles;
  • an image processing device, respectively coupled to the light source array, the image acquisition component, and the controller, and configured to calculate the depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the multiple object images.
  • an embodiment of the present invention further provides an image processing apparatus including a memory and a processor coupled to each other;
  • the memory is used for storing computer instructions and data
  • the processor executes the computer instructions for implementing a function of an image processing device in the apparatus for acquiring an outline of an object as described above.
  • an embodiment of the present invention further provides a computer-readable storage medium, where the computer storage medium stores program data, and the program data can be executed to implement the method described above.
  • by acquiring multiple object images collected of an object under different lighting angles and using the positional relationship of the object's contour shadows among them, the depth information of the object's contour can be calculated; this avoids acquiring images of the object while it is in motion, which speeds up image acquisition while improving the accuracy of the acquired contour depth information.
  • FIG. 1 is a schematic flowchart of an embodiment of a method for obtaining an outline of an object according to the present invention
  • FIG. 2a and FIG. 2b are lighting schematic diagrams of an application scenario of a method for obtaining an outline of an object according to the present invention
  • FIG. 3 is a schematic diagram of a method for obtaining an outline of an object according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of another embodiment of a method for acquiring an outline of an object according to the present invention.
  • FIG. 5 is a schematic flowchart of another embodiment of a method for acquiring an outline of an object according to the present invention.
  • FIG. 6 is a schematic diagram of a contour acquisition principle of another application scenario of the method for acquiring the contour of an object according to the present invention.
  • FIG. 7 is a schematic flowchart of step S503 shown in FIG. 5 in still another embodiment.
  • FIG. 8 is a schematic structural block diagram of an embodiment of a device for acquiring an outline of an object according to the present invention.
  • FIG. 9 is a schematic block diagram of a structure of an embodiment of an image processing apparatus according to the present invention.
  • FIG. 10 is a schematic block diagram of a computer storage medium according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an embodiment of a method for obtaining an outline of an object according to the present invention.
  • the method is used to obtain the contour of an object, and is executed by a contour acquisition device, and includes the following steps:
  • S101 Acquire multiple object images acquired from objects at different lighting angles.
  • the multiple object images are all obtained by collecting and processing the same target space. Due to different lighting angles, the positions of the contour shadows of the objects in the collected multiple object images may be different.
  • the contour acquisition device includes a light source array and an image collector arranged on the periphery of an object.
  • S101 specifically includes: controlling each light source group in the light source array located on the periphery of the object to light the object in turn, wherein each light source group includes a light source in at least one position, and different light source groups in the light source array differ in the position of at least one light source; and acquiring the object images collected of the object at each lighting.
  • the light source array includes light sources L1, L2, L3, and so on.
  • as shown in FIG. 2a, the contour acquisition device first controls the light source L1 to light the object, and the image collector 21 captures the object to obtain the first object image T1 under the lighting of L1; as shown in FIG. 2b, the light source L1 is then turned off and the light source L2 is turned on, and the image collector 21 captures the object to obtain the second object image T2 under the lighting of L2.
  • in this manner, each light source in the light source array is controlled to light the object in turn, and the light sources can also form different light source groups, for example turning on the light sources L1 and L2 at the same time, or turning on the light sources L1, L2, and L3 at the same time, and so on. It can be seen that the contour shadow of the object in the first object image T1 and the contour shadow in the second object image T2 are at different positions in the image.
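The capture sequence described above (light each source or source group in turn, collect one image per configuration) can be sketched as follows. This is a minimal illustration only: the `LightSourceArray` and `Camera` classes are hypothetical stand-ins, since the patent does not specify any hardware interface.

```python
# Hypothetical stubs for the hardware in S101; a real driver API would differ.
class LightSourceArray:
    def __init__(self, n):
        self.n = n
        self.on = set()
    def light(self, group):
        self.on = set(group)          # turn on exactly the sources in `group`

class Camera:
    def capture(self, array):
        # Stand-in: a real camera returns an image; here we only record which
        # sources were lit, which is what distinguishes the captured shadows.
        return {"lit_sources": frozenset(array.on)}

def acquire_images(array, camera, groups):
    images = []
    for group in groups:              # light each light source group in turn ...
        array.light(group)
        images.append(camera.capture(array))   # ... and capture one object image
    return images

# e.g. L1 alone, L2 alone, then L1 and L2 together:
groups = [{0}, {1}, {0, 1}]
images = acquire_images(LightSourceArray(2), Camera(), groups)
print(len(images))  # one object image per lighting configuration -> 3
```

Each group differing in at least one light source position guarantees that the contour shadows shift between successive images, which is what the later matching step exploits.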
  • S102 Use the positional relationship between the outline shadows of the objects among the multiple object images to calculate the depth information of the outlines of the objects.
  • referring to FIG. 3, take a point P on the object surface as an example: when the light source L1 lights the object, the obtained object image includes a shadow point P1 corresponding to the point P;
  • when the light source L5 lights the object, the obtained object image includes a shadow point P5 corresponding to the point P.
  • since P1 and P5 are the shadow positions of the point P under different lighting angles, the intersection of the straight line L1P1 and the straight line L5P5 is the position of the point P, so the depth information of the point P can be calculated. Similarly, the depth information of other points on the surface of the object can be calculated, yielding the depth information of the contour of the object.
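The intersection construction for the point P can be illustrated numerically. This is only a 2D sketch with made-up coordinates (the patent works in the full 3D target space): the surface point lies on the line from each light source through its shadow point, so two lighting angles suffice.

```python
def line_intersection(a1, a2, b1, b2):
    """Intersect the line through a1, a2 with the line through b1, b2 in 2D."""
    (x1, y1), (x2, y2) = a1, a2
    (x3, y3), (x4, y4) = b1, b2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        raise ValueError("lines are parallel; the lighting angles are too close")
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

# Made-up example: the point P = (2, 1) casts shadows on the background plane y = 0.
L1, L5 = (0.0, 5.0), (6.0, 5.0)   # two light source positions
P1 = (2.5, 0.0)                   # shadow point of P under L1
P5 = (1.0, 0.0)                   # shadow point of P under L5
P = line_intersection(L1, P1, L5, P5)
print(P)  # -> (2.0, 1.0): the recovered position of P, i.e. its height above the plane
```

Repeating this for every matched shadow point yields the depth information of the whole contour, as the passage above describes.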
  • in this embodiment, the depth information of the contours of the object can be calculated in this way. This avoids the need, found in other acquisition methods, to acquire images of the object during movement; it speeds up image acquisition while improving the accuracy of the acquired contour depth information.
  • the method for obtaining an outline of an object in the present invention includes the following steps:
  • Step S401 Acquire multiple object images obtained by collecting objects at different lighting angles; the multiple object images include multiple single-view object images obtained by collecting objects at different lighting angles at the same viewpoint.
  • Step S401 may be as described in S101 above, and is not repeated here. The difference is that multiple object images in this embodiment are acquired at the same viewpoint position.
  • S402 Find out the outline shadow of the object in each single-viewpoint object image by using the image data features of the object's light and shadow.
  • the contour shadow of the object includes a contour line of the object and a contour line of the object shadow.
  • the material of the background surface where the object is located is different from the material of the object itself.
  • the reflection coefficient of the light of the object itself is greater than that of the background surface.
  • when the light source L1 lights the object, the reflected light intensity of the object's lit surface is greater than that of the background surface; ignoring diffraction and similar effects, the reflected light intensity in the part of the background surface shadowed by the object is 0. Therefore, the first object image T1 collected by the image collector 21 contains areas of different brightness: a bright area corresponding to the object itself, a dark area corresponding to the part of the background surface not blocked by the object, and a shadow area corresponding to the part of the background surface blocked by the object.
  • the object contour line m1 connecting the bright area and the shadow area is the object contour line corresponding to the high points of the object in the target space, and the shadow contour line n1 connecting the shadow area and the dark area is the contour line of the object's shadow cast by those high points.
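The three brightness regions and the two contour lines m1 and n1 described above can be illustrated on a synthetic scanline. The thresholds and intensity values below are assumptions chosen for the sketch, not values from the patent.

```python
def classify(intensity, t_shadow=0.05, t_bright=0.6):
    """Assumed two-threshold split: near-zero pixels are shadow, very bright
    pixels are the object's lit surface, the rest is unoccluded background."""
    if intensity <= t_shadow:
        return "shadow"
    return "bright" if intensity >= t_bright else "dark_background"

# One synthetic scanline: object (bright), its cast shadow, then background.
row = [0.9, 0.9, 0.8, 0.02, 0.01, 0.02, 0.3, 0.3, 0.3]
labels = [classify(v) for v in row]

# Boundaries between adjacent, differently-labelled pixels:
edges = [(i, labels[i], labels[i + 1])
         for i in range(len(labels) - 1) if labels[i] != labels[i + 1]]
m1 = next(i for i, a, b in edges if {a, b} == {"bright", "shadow"})           # object contour
n1 = next(i for i, a, b in edges if {a, b} == {"shadow", "dark_background"})  # shadow contour
print(m1, n1)  # -> 2 5: m1 between pixels 2 and 3, n1 between pixels 5 and 6
```

In a real image the same classification would run per pixel over 2D data, and m1/n1 would be curves rather than single indices.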
  • S403 Calculate position information of the contour of the object corresponding to the contour shadow by using the association relationship of the contour shadow between all the single-viewpoint object images.
  • it can be understood that, at a single viewpoint, each object image can only provide, for the corresponding lighting angle, the object contour line and the shadow contour line corresponding to the high points on one contour line of the object. Therefore, the object images must be further matched against each other so that all the shadow contour lines corresponding to the high points on any contour line of the object are associated, thereby obtaining position information related to those high points.
  • the contour acquisition device performs pixel matching on the object images acquired at each lighting angle, respectively.
  • using a set matching algorithm, the pixel P1 in the first object image T1 (lit by the light source L1) and the pixel P5 in the second object image T2 (lit by the light source L5), which both correspond to the same spatial point P, are matched and finally associated, so as to obtain position information related to the point P.
  • the pixel matching in the above process uses existing techniques, for example a grayscale-based template matching algorithm; it is not an inventive point of the present invention and is not limited here.
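As noted, the matching step can rely on any existing grayscale template matching algorithm; the sketch below uses one common variant, sum-of-squared-differences over a sliding window, on tiny nested-list "images" (all data made up for illustration).

```python
def ssd_match(image, template):
    """Return the (x, y) offset in `image` where `template` fits best,
    measured by sum of squared grayscale differences (lower is better)."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            score = sum((image[y + j][x + i] - template[j][i]) ** 2
                        for j in range(th) for i in range(tw))
            if score < best:
                best, best_pos = score, (x, y)
    return best_pos

# A patch around the shadow point P1 taken from image T1, searched for in T2:
T2 = [[0, 0, 0, 0],
      [0, 5, 6, 0],
      [0, 7, 8, 0]]
patch = [[5, 6],
         [7, 8]]
print(ssd_match(T2, patch))  # -> (1, 1): the matched location of P5 in T2
```

Production systems would typically use a library routine (e.g. normalized cross-correlation) instead of this quadratic search, but the association of P1 with P5 is the same idea.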
  • S404 Calculate depth information of the contour of the object according to the position information of the contour.
  • the position information related to the contour of the object can be obtained according to step S403. Then, the depth information of the contour can be further calculated according to the position information of the contour. The method can be as described in S102 above, and will not be repeated here. Finally, the contour depth image of the object can be further formed according to the obtained depth information of the contour of the object.
  • in this embodiment, multiple single-viewpoint object images of the object under different lighting angles are acquired at the same viewpoint, the position information of the contour corresponding to the object's contour shadows is calculated by using the contour shadows among all the single-viewpoint object images, and the depth information of the contour is then calculated from that position information, yielding the contour depth image of the object. This enables rapid acquisition of the contour depth image under monocular conditions and takes little time.
  • the method for obtaining an outline of an object in the present invention includes the following steps:
  • Step S501 Acquire multiple object images collected of the object under different lighting angles; the multiple object images include multiple groups of single-viewpoint object images corresponding to multiple viewpoints, where the group of single-viewpoint object images for each viewpoint includes multiple single-viewpoint object images collected of the object at that viewpoint under different lighting angles.
  • Step S501 may be as described in S401 above, and is not repeated here. The difference is that multiple object images in this embodiment are acquired at multiple viewpoint positions.
  • step S502 Use the positional relationship of the object's contour shadows among the group of single-viewpoint object images corresponding to each viewpoint to calculate the depth information of the contour of the object for that viewpoint. It may be understood that step S502 may be as described in S402 to S404 above, and details are not repeated here.
  • S503 Integrate the depth information of the contours of the objects corresponding to all viewpoints to obtain the final depth information of the contours of the objects.
  • affected by the viewpoint angle and the shape of the object's contour, the contour depth information obtained at different viewpoints may be of poor accuracy for some contours, or unobtainable; the depth information of the contours corresponding to multiple viewpoints can therefore be integrated to improve the accuracy of the final contour depth information.
  • referring to FIG. 6, the contour acquisition device uses the image collectors 21 and 22 set at viewpoints A and B to collect the object: the first object image T1 (at viewpoint A) and the third object image T3 (at viewpoint B) are acquired when the light source L1 lights the object, and the second object image T2 (at viewpoint A) and the fourth object image T4 (at viewpoint B) are acquired when the light source L5 lights the object.
  • the pixel PA1 and the pixel PA2, corresponding to the same spatial point PA in T1 and T2, are matched and associated, so as to obtain the position information of the point PA at viewpoint A and further the depth information of PA at viewpoint A; the depth information of the point PB at viewpoint A can also be obtained through the first object image T1 and the second object image T2, but its accuracy will be poor.
  • likewise, matching the pixel PB3 and the pixel PB4, corresponding to the same spatial point PB in the third object image T3 (lit by L1) and the fourth object image T4 (lit by L5), yields the depth information of the point PB at viewpoint B; the accuracy of the depth information of the point PA at viewpoint B may be poor, or PA may be impossible to obtain there.
  • therefore, T1 and T2 give higher-accuracy depth information for PA and lower-accuracy depth information for PB at viewpoint A, while T3 and T4 give higher-accuracy depth information for PB and lower-accuracy depth information for PA at viewpoint B.
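One simple way to realize this integration is to keep, for each contour point, the estimate from whichever viewpoint sees it with higher confidence. The patent leaves the exact integration rule open, so this is only one plausible reading, with made-up numbers mirroring the PA/PB example above.

```python
def integrate(per_view_estimates):
    """per_view_estimates: list of dicts mapping point_id -> (depth, confidence).
    Keep the highest-confidence estimate per point (one assumed policy)."""
    merged = {}
    for view in per_view_estimates:
        for pid, (depth, conf) in view.items():
            if pid not in merged or conf > merged[pid][1]:
                merged[pid] = (depth, conf)
    return {pid: d for pid, (d, _) in merged.items()}

view_A = {"PA": (12.0, 0.9), "PB": (7.5, 0.2)}   # PB poorly observed from A
view_B = {"PA": (11.0, 0.1), "PB": (8.0, 0.95)}  # PA poorly observed from B
final = integrate([view_A, view_B])
print(final)  # -> {'PA': 12.0, 'PB': 8.0}
```

A weighted average of the per-view estimates would be an equally valid policy; what matters is that each point ends up with the most reliable depth available across viewpoints.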
  • the multiple groups of single-viewpoint object images have at least overlapping portions; the overlapping portion between object images of different viewpoints makes it possible to quickly determine the matching relationships of the surrounding parts.
  • the depth information is integrated to obtain the final depth information of the contour of the object, and further the final depth image of the contour of the object.
  • in this embodiment, multiple single-viewpoint object images of the object under different lighting angles are acquired at multiple viewpoints; the positional relationship of the object's contour shadows among the group of single-viewpoint object images corresponding to each viewpoint is used to calculate the depth information of the contour for that viewpoint; the depth information of the contours corresponding to all viewpoints is then integrated to obtain the final depth information of the contour of the object and further the final contour depth image, achieving a high-precision contour depth image of the object under multi-view conditions.
  • S701 Obtain BRDF data of the object according to the contour shadows of the object, the information of the different lighting angles, and the position information of the multiple viewpoints.
  • in the three-dimensional world, an angle can be approximated on a sphere, and the relationship between the incident plane and the tangent plane is a key parameter of the BRDF (Bidirectional Reflectance Distribution Function) data.
  • after the final depth image of the contour of the object is obtained, the corresponding BRDF data can be derived from the contour of the object, the position of each light source in the light source array, the information of each light source's incident light, the position of each viewpoint, and the groups of single-viewpoint object images corresponding to each viewpoint.
  • S702 Perform a normal calculation on the surface of the object according to the BRDF data, and set the characteristic parameters of the object's surface material, to obtain a final image of the contour of the object.
  • the surface material of an object affects how light is reflected and absorbed; the above BRDF data can therefore be used to further refine the final depth information of the object's contour, yielding a final contour image with higher accuracy.
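The patent does not give a formula for this normal calculation. As a hedged stand-in, the sketch below uses the classic Lambertian special case (a constant BRDF), where the intensities observed under three known light directions determine the albedo-scaled normal through a small linear solve; all numbers are made up for the example.

```python
# Lambertian model: I_k = rho * dot(l_k, n), so stacking three light
# directions gives a 3x3 system L * (rho * n) = I, solved here by Cramer's rule.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def det3(m):
    c = cross(m[1], m[2])
    return sum(m[0][i] * c[i] for i in range(3))  # scalar triple product

def solve3(L, I):
    """Solve the 3x3 linear system L x = I by Cramer's rule."""
    d = det3(L)
    x = []
    for col in range(3):
        M = [row[:] for row in L]
        for r in range(3):
            M[r][col] = I[r]
        x.append(det3(M) / d)
    return x

# Three known light directions and the intensities they produce at one pixel,
# fabricated for a surface with true normal (0, 0, 1) and albedo 0.8:
L = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
I = [0.0, 0.0, 0.8]
g = solve3(L, I)                     # g = rho * n
rho = sum(v * v for v in g) ** 0.5   # albedo is the magnitude of g
n = [v / rho for v in g]             # unit surface normal
print(n, rho)  # approximately [0, 0, 1] and 0.8
```

A general (non-Lambertian) BRDF would make the per-pixel problem nonlinear, which is presumably where the material characteristic parameters mentioned in S702 enter.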
  • FIG. 8 is a schematic block diagram of a structure of an embodiment of an apparatus for acquiring an outline of an object according to the present invention.
  • An embodiment of the present invention provides a device 80 for acquiring an outline of an object.
  • the device 80 may include a light source array 82, an image acquisition component 84, a controller 86, and an image processing device 88 coupled to each other.
  • the light source array 82 is disposed on the periphery of the object.
  • the light source array 82 includes several light sources (not shown) for lighting the object at different angles.
  • the image acquisition component 84 includes at least one camera (not shown), each camera being used to acquire, at the same viewpoint, multiple object images collected of the object under different lighting angles; the controller 86 is used to control the light source array 82 to light the object at different angles, and to control the image acquisition component 84 to acquire the multiple object images collected under different lighting angles; the image processing device 88 is used to calculate the depth information of the contour of the object by using the positional relationship of the object's contour shadows among the multiple object images acquired by the image acquisition component 84.
  • the controller 86 is specifically configured to control each group of light source groups in the light source array 82 around the object to light the object in turn.
  • each light source group includes a light source in at least one position, and different light source groups in the light source array 82 differ in the position of at least one light source.
  • in an embodiment, the multiple object images acquired by the image acquisition component 84 are multiple single-viewpoint object images collected by the same camera at the same viewpoint under different lighting angles; the image processing device 88 is specifically configured to find the contour shadow of the object in each single-viewpoint object image by using the image data features of the object's light and shadow, calculate the position information of the contour corresponding to the contour shadow by using the contour shadows among all the single-viewpoint object images, and then calculate the depth information of the contour from that position information, thereby obtaining the contour depth image of the object.
  • in another embodiment, the multiple object images acquired by the image acquisition component 84 include multiple groups of single-viewpoint object images corresponding to multiple viewpoints, where the group of single-viewpoint object images of each viewpoint includes multiple single-viewpoint object images collected by one camera of the image acquisition component 84 at the corresponding viewpoint under different lighting angles; the image processing device 88 is specifically configured to use the positional relationship of the object's contour shadows among the group of single-viewpoint object images corresponding to each viewpoint to calculate the depth information of the contour of the object for that viewpoint, and then integrate the depth information of the contours corresponding to all viewpoints to obtain the final depth information of the contour of the object.
  • the positions of the multiple viewpoints can be set in various forms; when the viewing directions of the multiple viewpoints are not parallel to each other, the accuracy of inclined portions of the object's surface contour is better obtained.
  • the image processing device 88 in the embodiment of the present invention may also be used as the controller 86, that is, the image processing device 88 may also implement the functions of the controller 86. Therefore, the division of the functional parts in the embodiment of the present invention is only a logical function division. In actual implementation, there may be another division manner. For example, multiple functional parts may be combined or integrated into several modules, or It is the separate physical existence of each functional part and so on.
  • in a further embodiment, the image processing device 88 is further configured to obtain the BRDF data of the object according to the contour shadows obtained by the image acquisition component 84, the different lighting angles and light source information provided by the light source array 82, and the position information of the image acquisition component 84; after setting the characteristic parameters of the object's surface material, it performs a normal calculation on the surface of the object according to the BRDF data to obtain a final image of the contour of the object.
  • the apparatus 80 for acquiring an outline of an object provided by the embodiment of the present invention may perform the steps of the foregoing method.
  • the light source array 82 is used to light the object at different angles, the image acquisition component 84 acquires multiple object images collected of the object under different lighting angles, and the image processing device 88 uses the positional relationship of the object's contour shadows among the multiple object images to calculate the depth information of the object's contour. This avoids acquiring images of the object in motion, reduces image acquisition time, and improves the accuracy of the acquired contour depth information, yielding a high-precision contour image of the object.
  • FIG. 9 is a schematic block diagram of an image processing apparatus according to an embodiment of the present invention.
  • the image processing device 90 in this embodiment is an image processing device in the above-mentioned device for acquiring an outline of an object.
  • the image processing apparatus 90 includes a memory 92 and a processor 94 that are coupled to each other.
  • the memory 92 is configured to store computer instructions and data, and the processor 94 executes the computer instructions.
  • the processor 94 can implement the functions of the image processing device in the apparatus for acquiring an outline of an object as described above. For related content, please refer to the detailed description in the above device for obtaining the outline of an object, which will not be repeated here.
  • the present invention also provides a computer storage medium.
  • the computer storage medium 100 in this embodiment stores program data 102 executable by a processor, and the program data 102 can be executed to implement the method as described above.
  • the computer storage medium 100 may specifically be the above-mentioned memory 92, or may be another medium, including a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
  • in the above schemes, by acquiring multiple object images of the object under different lighting angles, the depth information of the object's contours can be calculated without needing to acquire images of the object during a movement process, which improves the accuracy of the acquired contour depth information. At the same viewpoint, the multiple single-viewpoint object images allow the contour depth information to be acquired quickly, so the monocular case takes little time; with multiple viewpoints, the contour depth information obtained in each monocular case can be integrated, improving the accuracy of the final contour depth information; and by further acquiring the BRDF data of the object, the contour depth information can be refined to obtain a final contour image with higher accuracy.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention relate to the field of object contour acquisition, and disclose a method for acquiring an object contour, an image processing apparatus, and a computer storage medium. The method includes: acquiring multiple object images collected of an object under different lighting angles; and calculating depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the multiple object images. Embodiments of the present invention can quickly acquire a high-precision object contour.

Description

Method for Acquiring an Object Contour, Image Processing Apparatus, and Computer Storage Medium [Technical Field]
The present invention relates to the technical field of object contour acquisition, and in particular to a method for acquiring an object contour, an image processing apparatus, and a computer storage medium.
[Background Art]
Acquiring surface contour information of an object plays a vital role in fields such as automated assembly and product surface treatment. In three-dimensional imaging and three-dimensional artificial intelligence, using a 3D sensor to obtain images of an object is currently the easiest approach to implement.
However, the accuracy of a 3D sensor is limited by its matching accuracy and pixel count, and also by occlusion caused by an insufficient viewing angle. The usual practice is to sweep around the object to obtain its surface contour. Such a method is time-consuming and cannot acquire the surface contour quickly; because the sensor must move, positioning becomes difficult, which is unfavorable for engineering applications; the system is too costly, requiring a matching displacement table and placing high demands on the mechanical setup; moreover, the accuracy of this approach is still limited by the accuracy of the sensor.
[Summary of the Invention]
The technical problem to be solved by the embodiments of the present invention is to provide a method for acquiring an object contour, an image processing apparatus, and a computer storage medium that can quickly acquire a high-precision object contour.
An embodiment of the present invention provides a method for acquiring an object contour, the method including:
acquiring multiple object images collected of an object under different lighting angles;
calculating depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the multiple object images.
Correspondingly, an embodiment of the present invention further provides an apparatus for acquiring an object contour, including:
a light source array arranged on the periphery of the object, the light source array including several light sources for lighting the object from different angles;
an image acquisition component including at least one camera, each camera being used to acquire, at the same viewpoint, multiple object images collected of the object under different lighting angles;
a controller respectively coupled to the light source array and the image acquisition component, for controlling the light source array to light the object from different angles and controlling the image acquisition component to acquire multiple object images collected of the object under different lighting angles;
an image processing device respectively coupled to the light source array, the image acquisition component, and the controller, for calculating depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the multiple object images.
Correspondingly, an embodiment of the present invention further provides an image processing apparatus, including a memory and a processor coupled to each other;
the memory is used to store computer instructions and data;
the processor executes the computer instructions to implement the functions of the image processing device in the apparatus for acquiring an object contour described above.
Correspondingly, an embodiment of the present invention further provides a computer-readable storage medium storing program data that can be executed to implement the method described above.
In the above scheme, by acquiring multiple object images collected of an object under different lighting angles and using the positional relationship of the object's contour shadows among those images, the depth information of the object's contour can be calculated. This avoids acquiring images of the object while in motion, speeding up image acquisition while improving the accuracy of the acquired contour depth information.
[Brief Description of the Drawings]
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an embodiment of a method for acquiring an object contour according to the present invention;
FIG. 2a and FIG. 2b are lighting schematic diagrams of an application scenario of a method for acquiring an object contour according to the present invention;
FIG. 3 is a schematic diagram of the contour acquisition principle of an application scenario of a method for acquiring an object contour according to the present invention;
FIG. 4 is a schematic flowchart of another embodiment of a method for acquiring an object contour according to the present invention;
FIG. 5 is a schematic flowchart of yet another embodiment of a method for acquiring an object contour according to the present invention;
FIG. 6 is a schematic diagram of the contour acquisition principle of another application scenario of a method for acquiring an object contour according to the present invention;
FIG. 7 is a schematic flowchart of step S503 shown in FIG. 5 in a further embodiment;
FIG. 8 is a schematic structural block diagram of an embodiment of an apparatus for acquiring an object contour according to the present invention;
FIG. 9 is a schematic structural block diagram of an embodiment of an image processing apparatus according to the present invention;
FIG. 10 is a schematic structural block diagram of an embodiment of a computer storage medium according to the present invention.
[Detailed Description of the Embodiments]
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", etc. (if present) in the specification, claims, and above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented, for example, in orders other than those illustrated or described here. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to such a process, method, product, or device.
请参阅图1,图1是本发明一种获取物体轮廓的方法一实施例的流程示意图。本实施例中,该方法用于获取物体的轮廓,由一轮廓获取装置执行,包括以下步骤:
S101:获取对在不同打光角度下的物体采集得到的多张物体图像。
其中,所述多张物体图像均是对同一目标空间进行采集处理得到,由 于打光角度不同,采集到的多张物体图像中的物体的轮廓阴影的位置会存在差异。
具体地,该轮廓获取装置包括设置在物体外围的光源阵列和图像采集器。例如,该S101具体包括:控制位于物体外围的光源阵列中的各组光源组轮流对物体进行打光,其中,每组光源组包括至少一个位置上的光源,光源阵列中不同的光源组间至少存在一个光源的位置不同;获取每次打光时对物体采集得到的多张物体图像。
请参阅图2a和图2b,光源阵列包括光源L1、L2、L3……。如图2a,轮廓获取装置先控制光源L1对物体进行打光,图像采集器21对物体进行图像采集得到光源L1打光时的第一物体图像T1;如图2b,然后关闭光源L1,打开光源L2,图像采集器21对物体进行采集得到光源L2打光时的第二物体图像T2,按照这样的方式,控制光源阵列中的各光源轮流对物体进行打光,而且各光源可以组成不同的光源组,例如,同时开启光源L1和光源L2,或者同时开启光源L1、L2和L3,等等。可以发现,第一物体图像T1中物体的轮廓阴影与第二物体图像T2中物体的轮廓阴影处于图像中的不同位置。
S102:利用所述多张物体图像间所述物体的轮廓阴影的位置关系,计算得到所述物体的轮廓的深度信息。
请参阅图3,以物体表面的P点为例,当光源L1对物体打光时,获得的物体图像中包括与P点相对应的阴影点P1,而当光源L5对物体进行打光时,获得的物体图像中包括与P点相对应的阴影点P5,由于P1点和P5点均为P点在不同打光角度下对应的阴影位置,则可以得出:直线L1P1和直线L5P5的交点即为P点的位置,即能够求算出P点的深度信息。同样的,也能够求算出物体表面其他点的深度信息,进而得到物体的轮廓的深度信息。
In this embodiment, by acquiring a plurality of object images captured of an object under different lighting angles and using the positional relationship of the contour shadows of the object among the images, the depth information of the contour of the object can be calculated. This avoids other acquisition methods that must capture images of the object during motion, speeding up image acquisition while improving the accuracy of the acquired depth information of the object contour.
Referring to FIG. 4, in another embodiment, the method for acquiring an object contour of the present invention includes the following steps:
S401: Acquire a plurality of object images captured of an object under different lighting angles; the plurality of object images include a plurality of single-viewpoint object images captured, from a same viewpoint, of the object under different lighting angles. Step S401 may be performed as described above for S101, which is not repeated here; the difference is that in this embodiment the plurality of object images are captured from the same viewpoint position.
S402: Find the contour shadow of the object in each single-viewpoint object image by using image data features of the object's illumination shadow.
When the object is lit from a certain angle, because light travels in straight lines, some regions of the target location of the object inevitably receive weaker illumination than other regions, or none at all, due to occlusion by the object. When imaging this target region, the weakly illuminated regions appear darker in the image, forming the shadow portion, and the strongly illuminated regions appear brighter, forming the lit portion. It can be understood that the junction between the shadow portion and the lit portion is the contour shadow of the object.
Specifically, the contour shadow of the object includes the contour line of the object and the contour line of the object's shadow. As shown in FIG. 2a, the material of the background surface on which the object rests differs from that of the object itself, the reflection coefficient of the object being greater than that of the background surface. Therefore, when light source L1 lights the object, the intensity of the light reflected from the lit face of the object is greater than that reflected from the background surface; and, ignoring diffraction and similar effects, the reflected light intensity in the shadowed portion of the background surface is 0 because the object blocks the light. Consequently, the first object image T1 captured by the image collector 21 contains regions of different brightness: a bright region corresponding to the object itself, a dim region corresponding to the unoccluded part of the background surface, and a shadow portion corresponding to the occluded part of the background surface. It can then be concluded that the object contour line m1 where the bright region meets the shadow region is the object contour line corresponding to the high points of the object in the target space, and the shadow contour line n1 where the shadow region meets the dim region is the contour line of the object's shadow cast by those high points.
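A hedged sketch of this three-region reasoning on a single scanline (the thresholds, 8-bit intensities, and row layout below are my illustrative assumptions, not values from the disclosure): pixels brighter than a high threshold are treated as the lit object, pixels below a low threshold as shadow, and mid-gray pixels as the unoccluded background; the object contour line m1 sits where a bright run meets the shadow, and the shadow contour line n1 where the shadow meets the mid-gray background.

```python
# Illustrative three-region segmentation of one scanline. Thresholds are
# assumed 8-bit values, not parameters given in the disclosure.
T_HIGH, T_LOW = 180, 40

def classify(v):
    if v >= T_HIGH:
        return "object"      # directly lit object surface (bright region)
    if v <= T_LOW:
        return "shadow"      # background occluded by the object
    return "background"      # lit but unoccluded background (dim region)

def contour_columns(row):
    """Return (m1, n1): column indices of the object/shadow and
    shadow/background transitions in one scanline, or None if absent."""
    labels = [classify(v) for v in row]
    m1 = n1 = None
    for i in range(1, len(labels)):
        if labels[i - 1] == "object" and labels[i] == "shadow":
            m1 = i
        if labels[i - 1] == "shadow" and labels[i] == "background":
            n1 = i
    return m1, n1

# object pixels | shadow pixels | lit-background pixels
row = [220, 210, 200, 10, 5, 8, 100, 110, 120]
```

On this assumed row, `contour_columns(row)` locates the m1 transition at column 3 and the n1 transition at column 6.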
S403: Calculate position information of the contour of the object corresponding to the contour shadow by using the correlation of the contour shadows among all the single-viewpoint object images.
It can be understood that, from a single viewpoint, each object image only yields, for the corresponding lighting angle, the object contour line and the object-shadow contour line corresponding to the high points on one contour line of the object. The object images therefore need to be further matched against one another, so that all the object-shadow contour lines corresponding to the high points on any contour line of the object are associated, thereby obtaining position information related to those high points.
Specifically, the contour acquisition apparatus performs pixel matching between the object images captured at the respective lighting angles. As shown in FIG. 3, a preset synthesis algorithm matches pixel P1 in the first object image T1 (lit by L1) with pixel P5 in the second object image T2 (lit by L5), both corresponding to the same spatial point P, finally associating P1 with P5 and thereby obtaining position information related to P. It can be understood that the pixel matching involved here uses existing techniques, for example grayscale-based template matching; it is not an inventive point of the present invention and is not limited here.
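As an illustration of the grayscale template matching mentioned above (the disclosure leaves the matching algorithm open, so this sum-of-squared-differences search is only one possible choice): a small patch around P1 in image T1 is slid over image T2, and the offset with the smallest SSD is taken as the matching position P5.

```python
# Illustrative SSD template matching over images stored as lists of rows.
# This is one possible instantiation of "grayscale-based template matching",
# not the algorithm of the disclosure.

def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2
               for ra, rb in zip(patch_a, patch_b)
               for a, b in zip(ra, rb))

def extract(img, top, left, h, w):
    """Crop an h x w sub-patch of img with its top-left corner at (top, left)."""
    return [r[left:left + w] for r in img[top:top + h]]

def match_patch(template, image):
    """Return (row, col) of the best SSD match of template inside image."""
    th, tw = len(template), len(template[0])
    ih, iw = len(image), len(image[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = ssd(template, extract(image, r, c, th, tw))
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos
```

In practice a normalized correlation measure is often preferred over raw SSD, since it tolerates the brightness differences that different lighting angles introduce.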
S404: Calculate depth information of the contour of the object according to the position information of the contour.
It can be understood that position information related to the object contour is obtained from step S403, so depth information of the contour can be further calculated from that position information, in the manner described above for S102, which is not repeated here. Finally, a contour depth image of the object can be formed from the obtained depth information of the object contour.
In this embodiment, a plurality of single-viewpoint object images of the object under different lighting angles are acquired from a same viewpoint; position information of the contour corresponding to the contour shadows is calculated from the contour shadows across all the single-viewpoint object images; and depth information of the contour is then calculated from the position information, yielding a contour depth image of the object. This enables a contour depth image of the object to be obtained quickly in the monocular case, with a short time cost.
Referring to FIG. 5, in yet another embodiment, the method for acquiring an object contour of the present invention includes the following steps:
S501: Acquire a plurality of object images captured of an object under different lighting angles; the plurality of object images include multiple groups of single-viewpoint object images corresponding to multiple viewpoints, where the group for each viewpoint includes a plurality of single-viewpoint object images captured from that viewpoint of the object under different lighting angles. Step S501 may be performed as described above for S401, which is not repeated here; the difference is that in this embodiment the object images are captured from multiple viewpoint positions.
S502: For each viewpoint, calculate depth information of the contour of the object corresponding to that viewpoint by using the positional relationship of the contour shadows of the object among the group of single-viewpoint object images for that viewpoint. It can be understood that step S502 may be performed as described above for S402 to S404, which is not repeated here.
S503: Integrate the contour depth information corresponding to all viewpoints to obtain final depth information of the contour of the object.
It can be understood that, depending on the angle of a viewpoint and the shape of the object contour, the contour depth information obtained at different viewpoints may be of poor accuracy for some parts of the contour, or unobtainable for others. Integrating the contour depth information from multiple viewpoints improves the accuracy of the final contour depth information.
As shown in FIG. 6, taking points PA and PB on the object contour as an example, the contour acquisition apparatus first uses image collectors 21 and 22 at viewpoints A and B to capture the object, obtaining a first object image T1 and a second object image T2 at viewpoint A, and a third object image T3 and a fourth object image T4 at viewpoint B. T1 and T3 are captured while light source L1 lights the object, and T2 and T4 while light source L5 does. Then, with reference to FIG. 3 and the related description above, a preset synthesis algorithm matches pixels PA1 and PA2, both corresponding to the same spatial point PA, in T1 (lit by L1) and T2 (lit by L5), finally associating PA1 with PA2, which yields the position information of PA at viewpoint A and, further, the depth information of PA at viewpoint A. It can be understood that the depth information of PB at viewpoint A can also be obtained from T1 and T2, but its accuracy will be poor. Similarly, matching pixels PB3 and PB4, both corresponding to the same spatial point PB, in T3 (lit by L1) and T4 (lit by L5) yields the depth information of PB at viewpoint B; it can be understood that the depth information of PA at viewpoint B will be of poor accuracy, or unobtainable. Thus, from T1 and T2 we obtain high-accuracy depth information for PA and low-accuracy depth information for PB at viewpoint A, and from T3 and T4 we obtain high-accuracy depth information for PB and low-accuracy depth information for PA at viewpoint B. Since the groups of single-viewpoint object images obtained at the multiple viewpoints are all captured of the same target space, the groups have at least overlapping parts. To obtain the final depth image of the contour, the overlapping parts of the object images from different viewpoints are compared, which quickly determines the matching relationship of the surrounding parts. In this way, the high-accuracy depth information of PA at viewpoint A and the high-accuracy depth information of PB at viewpoint B can be integrated to obtain the final depth information of the contour of the object and, further, the final contour depth image. It can be understood that comparing the overlapping parts of images from different viewpoints and determining the matching relationship of the surrounding parts use existing techniques; they are not inventive points of the present invention and are not limited here.
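The integration step can be sketched as a confidence-weighted fusion of per-viewpoint depth estimates (an illustrative assumption of my own; the disclosure does not fix a fusion rule). Each viewpoint contributes a depth and a confidence for every contour point, so that viewpoint A dominates for PA and viewpoint B for PB:

```python
# Illustrative confidence-weighted fusion of per-viewpoint contour depths.
# Dictionary layout, confidences, and depth values are assumed for the sketch.

def fuse_depths(per_view):
    """per_view: {view: {point: (depth, confidence)}} -> {point: fused depth}."""
    fused = {}
    for estimates in per_view.values():
        for point, (depth, conf) in estimates.items():
            acc = fused.setdefault(point, [0.0, 0.0])  # [sum(w*d), sum(w)]
            acc[0] += conf * depth
            acc[1] += conf
    return {p: s / w for p, (s, w) in fused.items() if w > 0}

per_view = {
    "A": {"PA": (1.30, 0.9), "PB": (2.60, 0.1)},  # A sees PA well, PB poorly
    "B": {"PA": (1.10, 0.1), "PB": (2.40, 0.9)},  # B sees PB well, PA poorly
}
```

With these assumed numbers, the fused depth of PA stays close to viewpoint A's high-confidence estimate and that of PB close to viewpoint B's.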
It can be understood that, when the positions of the multiple viewpoints are not parallel to one another, the depth information of the contour at every position on the object surface can be better acquired, and the resulting contour depth image is more complete, with higher oblique accuracy.
In this embodiment, single-viewpoint object images of the object under different lighting angles are acquired from multiple viewpoints; for each viewpoint, contour depth information is calculated from the positional relationship of the contour shadows among that viewpoint's group of single-viewpoint object images; and the contour depth information of all viewpoints is then integrated to obtain the final contour depth information and, further, the final contour depth image of the object. This enables a high-accuracy contour depth image of the object to be obtained in the multi-camera case.
In a further embodiment, referring to FIG. 7, the following steps follow the above step S503:
S701: Acquire BRDF data of the object according to the contour shadow of the object, the information on the different lighting angles, and the position information of the multiple viewpoints.
Directions in the three-dimensional world can be thought of as spanning a sphere: besides the 180° vertical range, light directions also diverge over 360° horizontally, giving corresponding incident rays, reflected rays, angles of incidence, and angles of reflection, whose relationships on the normal plane and tangent plane of the object surface are the key parameters of BRDF (Bidirectional Reflectance Distribution Function) data. As described above, the final contour depth image of the object can be obtained from its contour; and once the positions of the light sources in the array, the incident-light information of each light source, the positions of the viewpoints, and the groups of single-viewpoint object images corresponding to each viewpoint are obtained, the corresponding BRDF data is obtained accordingly.
S702: Perform normal computation on the surface of the object according to the BRDF data, and set characteristic parameters of the surface material of the object, to obtain a final image of the contour of the object.
It can be understood that the surface material of an object affects how it reflects and absorbs light. When the characteristic parameters of the surface material are set, for example by treating the whole surface as a single material, the final contour depth information obtained above can be further refined according to the obtained BRDF data, yielding a final image of the contour of the object with higher accuracy.
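As a simplified illustration of normal computation from reflectance data, consider the most basic special case of a BRDF: a Lambertian surface, where the observed intensity is I = ρ·(n·l). With three known unit light directions, this gives a 3×3 linear system for ρn, from which the albedo ρ and the unit normal n are recovered. This is my own sketch of the idea, not the (unspecified) procedure of the disclosure.

```python
# Illustrative photometric-stereo-style normal recovery for a Lambertian
# surface; the Lambertian model and the light directions are assumptions.

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]
        xs.append(det(Ai) / d)
    return xs

def lambertian_normal(light_dirs, intensities):
    """Recover (albedo, unit normal) of one surface point from three unit
    light directions and the three observed intensities."""
    g = solve3([list(l) for l in light_dirs], list(intensities))  # g = rho * n
    rho = sum(x * x for x in g) ** 0.5
    return rho, tuple(x / rho for x in g)
```

With more than three lights the same system is typically solved in the least-squares sense, which also averages out noise.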
Referring to FIG. 8, FIG. 8 is a schematic structural block diagram of an embodiment of an apparatus for acquiring an object contour according to the present invention. An embodiment of the present invention provides an apparatus 80 for acquiring an object contour, which may include a light source array 82, an image acquisition assembly 84, a controller 86, and an image processing device 88 coupled to one another.
Specifically, the light source array 82 is arranged around the object and includes a number of light sources (not shown) configured to light the object from different angles; the image acquisition assembly 84 includes at least one camera (not shown), each camera being configured to acquire, from a same viewpoint, a plurality of object images captured of the object under different lighting angles; the controller 86 is configured to control the light source array 82 to light the object from different angles and to control the image acquisition assembly 84 to acquire the object images captured of the object under different lighting angles; and the image processing device 88 is configured to calculate depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the object images acquired by the image acquisition assembly 84.
Referring to FIG. 2a, FIG. 2b, and the related description above, in one implementation, the controller 86 is specifically configured to control the light source groups in the light source array 82 around the object to light the object in turn, where each light source group includes a light source at at least one position, and different light source groups in the array 82 differ in the position of at least one light source.
In one implementation, the plurality of object images acquired by the image acquisition assembly 84 are all single-viewpoint object images captured by a same camera, from a same viewpoint, of the object under different lighting angles; the image processing device 88 is specifically configured to find the contour shadow of the object in each single-viewpoint object image by using image data features of the object's illumination shadow, calculate position information of the contour corresponding to the contour shadow from the contour shadows across all the single-viewpoint object images, and then calculate depth information of the contour from the position information, thereby obtaining a contour depth image of the object.
In one implementation, the plurality of object images acquired by the image acquisition assembly 84 include multiple groups of single-viewpoint object images captured by multiple cameras at corresponding multiple viewpoints, where the group for each viewpoint includes the single-viewpoint object images captured by one camera of the assembly 84 at the corresponding viewpoint, of the object under different lighting angles; the image processing device 88 is specifically configured to calculate, for each viewpoint, contour depth information of the object from the positional relationship of the contour shadows among that viewpoint's group of single-viewpoint object images, and then integrate the contour depth information of all viewpoints to obtain the final contour depth information of the object. It can be understood that the positions of the multiple viewpoints can be arranged in many ways; when they are not parallel to one another, better oblique accuracy of the surface contour can be achieved.
In one implementation, the image processing device 88 of this embodiment of the present invention may also serve as the controller 86, that is, the device 88 may also implement the functions of the controller 86. Accordingly, the division into functional parts in the embodiments of the present invention is merely a logical division; in actual implementation there may be other divisions, for example, multiple functional parts may be combined or integrated into several modules, or each functional part may exist physically on its own, and so on.
In one implementation, the image processing device 88 is further specifically configured to acquire BRDF data of the object according to the contour shadow of the object acquired by the image acquisition assembly 84, the information on the different lighting angles and light sources provided by the light source array 82, and the position information of the image acquisition assembly 84, and, after the characteristic parameters of the surface material of the object are set, to perform normal computation on the surface of the object according to the BRDF data, to obtain a final image of the contour of the object.
The apparatus 80 for acquiring an object contour provided by this embodiment of the present invention can perform the steps of the method described above. For related content, refer to the detailed description of the method above, which is not repeated here.
In this embodiment, the light source array 82 lights the object from different angles, the image acquisition assembly 84 acquires a plurality of object images captured of the object under different lighting angles, and the image processing device 88 calculates depth information of the contour of the object by using the positional relationship of the contour shadows among the images. This avoids capturing images of the object while in motion, reduces image acquisition time, improves the accuracy of the acquired contour depth information, and yields a high-accuracy contour image of the object.
Referring to FIG. 9, FIG. 9 is a schematic structural block diagram of an embodiment of an image processing apparatus according to the present invention. The image processing apparatus 90 of this embodiment is the image processing device in the apparatus for acquiring an object contour described above.
In this embodiment, the image processing apparatus 90 includes a memory 92 and a processor 94 coupled to each other.
The memory 92 is configured to store computer instructions and data; the processor 94 executes the computer instructions and can implement the functions of the image processing device in the apparatus for acquiring an object contour described above. For related content, refer to the detailed description of that apparatus, which is not repeated here.
Referring to FIG. 10, the present invention further provides a computer storage medium. The computer storage medium 100 of this embodiment stores processor-executable program data 102, and the program data 102 can be executed to implement the method described above. The computer storage medium 100 may specifically be the memory 92 described above, or another medium, including a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like. For related content, refer to the detailed descriptions of the method and apparatus above, which are not repeated here.
In the above solutions, by acquiring a plurality of object images captured of an object under different lighting angles and using the positional relationship of the contour shadows of the object among them, the depth information of the contour of the object can be calculated, avoiding other acquisition methods that must capture images of the object during motion and improving the accuracy of the acquired contour depth information. From a single viewpoint, the acquired single-viewpoint object images allow the contour depth information to be obtained quickly, enabling fast monocular acquisition with a short time cost. With multiple viewpoints, the contour depth information obtained monocularly can be integrated, improving the accuracy of the final contour depth information. And by further acquiring the BRDF data of the object, the contour depth information can be refined, yielding a final contour image of the object with higher accuracy.
It should be noted that the above embodiments all belong to the same inventive concept, and the description of each embodiment has its own emphasis; where the description of one embodiment is not exhaustive, reference may be made to the descriptions of the other embodiments.
The method for acquiring an object contour, the image processing apparatus, and the computer storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present invention. Meanwhile, a person of ordinary skill in the art may, following the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (15)

  1. A method for acquiring an object contour, wherein the method comprises:
    acquiring a plurality of object images captured of an object under different lighting angles;
    calculating depth information of a contour of the object by using a positional relationship of contour shadows of the object among the plurality of object images.
  2. The method according to claim 1, further comprising:
    controlling light source groups in a light source array around the object to light the object in turn, wherein each of the light source groups comprises a light source at at least one position, and different light source groups in the light source array differ in the position of at least one light source.
  3. The method according to claim 1, wherein the plurality of object images comprise a plurality of single-viewpoint object images captured, from a same viewpoint, of the object under different lighting angles;
    the calculating depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the plurality of object images comprises:
    finding the contour shadow of the object in each of the single-viewpoint object images by using image data features of the object's illumination shadow;
    calculating position information of the contour of the object corresponding to the contour shadow by using a correlation of the contour shadows among all the single-viewpoint object images;
    calculating the depth information of the contour of the object according to the position information of the contour.
  4. The method according to claim 3, wherein the plurality of object images comprise multiple groups of single-viewpoint object images corresponding to multiple viewpoints, wherein the group of single-viewpoint object images for each viewpoint comprises a plurality of single-viewpoint object images captured from that viewpoint of the object under different lighting angles;
    the calculating depth information of the contour of the object by using the positional relationship of the contour shadows of the object among the plurality of object images comprises:
    calculating, for each viewpoint, depth information of the contour of the object corresponding to that viewpoint by using the positional relationship of the contour shadows of the object among the group of single-viewpoint object images for that viewpoint;
    integrating the depth information of the contour of the object corresponding to all viewpoints to obtain final depth information of the contour of the object.
  5. The method according to claim 4, wherein
    the positions of the multiple viewpoints are not parallel to one another.
  6. The method according to claim 4, wherein after the integrating the depth information of the contour of the object corresponding to all viewpoints to obtain final depth information of the contour of the object, the method further comprises:
    acquiring BRDF data of the object according to the contour shadow of the object, the information on the different lighting angles, and the position information of the multiple viewpoints;
    setting characteristic parameters of a surface material of the object, and performing normal computation on the surface of the object according to the BRDF data, to obtain a final image of the contour of the object.
  7. An apparatus for acquiring an object contour, comprising:
    a light source array arranged around the object, the light source array comprising a number of light sources and being configured to light the object from different angles;
    an image acquisition assembly comprising at least one camera, each camera being configured to acquire, from a same viewpoint, a plurality of object images captured of the object under different lighting angles;
    a controller, coupled to the light source array and the image acquisition assembly respectively, and configured to control the light source array to light the object from different angles and to control the image acquisition assembly to acquire the plurality of object images captured of the object under different lighting angles;
    an image processing device, coupled to the light source array, the image acquisition assembly, and the controller respectively, and configured to calculate depth information of a contour of the object by using a positional relationship of contour shadows of the object among the plurality of object images.
  8. The apparatus according to claim 7, wherein
    the controller is specifically configured to control light source groups in the light source array around the object to light the object in turn, wherein each of the light source groups comprises a light source at at least one position, and different light source groups in the light source array differ in the position of at least one light source.
  9. The apparatus according to claim 7, wherein the plurality of object images comprise a plurality of single-viewpoint object images captured by the image acquisition assembly, from a same viewpoint, of the object under different lighting angles;
    the image processing device is specifically configured to:
    find the contour shadow of the object in each of the single-viewpoint object images by using image data features of the object's illumination shadow;
    calculate position information of the contour of the object corresponding to the contour shadow by using the contour shadows among all the single-viewpoint object images;
    calculate the depth information of the contour of the object according to the position information of the contour.
  10. The apparatus according to claim 9, wherein the plurality of object images comprise multiple groups of single-viewpoint object images corresponding to multiple viewpoints, wherein the group of single-viewpoint object images for each viewpoint comprises a plurality of single-viewpoint object images acquired by the image acquisition assembly at the corresponding viewpoint, captured of the object under different lighting angles;
    the image processing device is further configured to:
    calculate, for each viewpoint, depth information of the contour of the object corresponding to that viewpoint by using the positional relationship of the contour shadows of the object among the group of single-viewpoint object images for that viewpoint;
    integrate the depth information of the contour of the object corresponding to all viewpoints to obtain final depth information of the contour of the object.
  11. The apparatus according to claim 10, wherein
    the positions of the multiple viewpoints are not parallel to one another.
  12. The apparatus according to claim 10, wherein
    the image processing device is further specifically configured to:
    acquire BRDF data of the object according to the contour shadow of the object, the information on the different lighting angles, and the position information of the multiple viewpoints;
    set characteristic parameters of a surface material of the object, and perform normal computation on the surface of the object according to the BRDF data, to obtain a final image of the contour of the object.
  13. The apparatus according to claim 10, wherein
    the image processing device is further configured to serve as the controller.
  14. An image processing apparatus, comprising a memory and a processor coupled to each other;
    the memory being configured to store computer instructions and data;
    the processor executing the computer instructions to implement the functions of the image processing device in the apparatus for acquiring an object contour according to any one of claims 7 to 13, or to perform the method according to any one of claims 1 to 6.
  15. A computer storage medium, wherein the computer storage medium stores program data, and the program data can be executed to implement the method according to any one of claims 1 to 6.
PCT/CN2018/104893 2018-09-10 2018-09-10 Method for acquiring object contour, image processing apparatus, and computer storage medium WO2020051747A1 (zh)

Publication: WO2020051747A1 · Family ID: 69776943


