TWI772040B - Object depth information acquisition method, device, computer device and storage medium - Google Patents

Object depth information acquisition method, device, computer device and storage medium

Info

Publication number
TWI772040B
TWI772040B
Authority
TW
Taiwan
Prior art keywords
information
feature information
mathematical model
light source
feature
Prior art date
Application number
TW110119318A
Other languages
Chinese (zh)
Other versions
TW202247098A (en)
Inventor
陳明坤
林澤銘
Original Assignee
大陸商珠海凌煙閣芯片科技有限公司
Priority date
Filing date
Publication date
Application filed by 大陸商珠海凌煙閣芯片科技有限公司
Priority to TW110119318A
Application granted
Publication of TWI772040B
Publication of TW202247098A

Landscapes

  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present application provides an object depth information acquisition method, a device, a computer device and a storage medium. The method includes: obtaining a plurality of images of an object to be measured under different characteristic lights; dividing each image into a plurality of regions and extracting the luminance information of each corresponding region in the plurality of images, where the luminance information of the same region in all images under the different characteristic lights is represented by one light source feature matrix; obtaining the feature information of each light source feature matrix by inputting the light source feature matrices into a preset mathematical model; grouping the pieces of feature information based on their similarity; and setting a corresponding depth value for each group of feature information, the depth values corresponding to the multiple groups of feature information forming the depth information of the object to be measured.

Description

物體深度資訊獲取方法、裝置、電腦裝置及儲存介質 Object depth information acquisition method, device, computer device and storage medium

本發明涉及電腦技術領域,具體涉及一種物體深度資訊獲取方法、裝置、電腦裝置及存儲介質。 The present invention relates to the field of computer technology, in particular to a method, device, computer device and storage medium for acquiring depth information of an object.

目前獲取物體深度資訊的方法多採用深度相機或雙攝像頭相機，所述深度相機或雙攝像機的物體深度資訊獲取方法成本高，在一些製造領域，急需要一種成本低廉的方法獲取待測物體的深度資訊。 At present, most methods for obtaining the depth information of an object use a depth camera or a dual-camera device, and such depth-camera or dual-camera approaches are costly. In some manufacturing fields, a low-cost way to obtain the depth information of an object to be measured is urgently needed.

鑒於以上內容,有必要提出一種物體深度資訊獲取方法、裝置、電腦裝置和儲存介質,使得物體深度資訊的獲取方式以更加經濟的方式進行。 In view of the above, it is necessary to provide a method, device, computer device and storage medium for acquiring depth information of an object, so that the acquisition method of depth information of an object can be carried out in a more economical manner.

本申請的第一方面提供一種物體深度資訊獲取方法，所述方法包括：獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像，將每一所述圖像分割成多個區域，分別提取所述多個圖像中每一對應區域的亮度資訊，其中所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示；將所述光源特徵矩陣依次輸入到預設的數學模型中得到每個所述光源特徵矩陣的特徵資訊，其中所述特徵資訊包括所述光源特徵矩陣在任意方向上的梯度變化；基於特徵資訊相似度對所述多個特徵資訊分組，得到多組特徵資訊，對每一組特徵資訊設置一對應的深度值，根據所述多組特徵資訊對應的深度值組成所述待測物體的深度資訊。 A first aspect of the present application provides a method for acquiring depth information of an object, the method comprising: acquiring a plurality of images of an object to be measured under different characteristic lights captured by a camera device, dividing each of the images into a plurality of regions, and separately extracting the luminance information of each corresponding region in the plurality of images, wherein the luminance information of the same region in all of the images under the different characteristic lights is represented by one light source feature matrix; inputting the light source feature matrices in turn into a preset mathematical model to obtain the feature information of each light source feature matrix, wherein the feature information includes the gradient change of the light source feature matrix in an arbitrary direction; and grouping the pieces of feature information based on feature-information similarity to obtain multiple groups of feature information, setting a corresponding depth value for each group of feature information, and forming the depth information of the object to be measured from the depth values corresponding to the multiple groups of feature information.

可選地,所述不同特徵光包括不同發光強度和不同發光模式的組合特徵光,其中,所述發光模式包括:發光角度、發光位置、光譜、光場分佈中的一項或多項。 Optionally, the different characteristic lights include combined characteristic lights with different luminous intensities and different luminous modes, wherein the luminous modes include one or more of: luminous angle, luminous position, spectrum, and light field distribution.

可選地,所述攝像裝置為單攝像頭攝像裝置。 Optionally, the camera device is a single-camera camera device.

可選地,所述方法包括:將所述多個圖像按照相同的預設規則分割。 Optionally, the method includes: dividing the plurality of images according to the same preset rule.

可選地,所述預設的數學模型為基於分類演算法的數學模型,其中,所述分類演算法包括基於二分類演算法、決策樹演算法、最小二值演算法中的任意一種。 Optionally, the preset mathematical model is a mathematical model based on a classification algorithm, wherein the classification algorithm includes any one of a binary classification algorithm, a decision tree algorithm, and a least binary algorithm.

可選地，所述預設的數學模型的訓練方法包括：獲取多份已知特徵資訊的光源特徵矩陣，將所述光源特徵矩陣分為訓練集和驗證集；建立基於分類演算法的數學模型，並利用所述訓練集對所述基於分類演算法的數學模型的參數進行訓練；利用所述驗證集對訓練後的所述基於分類演算法的數學模型進行驗證，將驗證集中的光源特徵矩陣輸入到所述基於分類演算法的數學模型中，將模型輸出的特徵資訊與已知的特徵資訊相比對，並根據比對結果統計得到所述基於分類演算法的數學模型的預測準確率；判斷所述基於分類演算法的數學模型的預測準確率是否小於預設閾值；若所述基於分類演算法的數學模型預測準確率不小於所述預設閾值，則將訓練完成的所述基於分類演算法的數學模型作為所述預設的數學模型；若所述基於分類演算法的數學模型預測準確率小於所述預設閾值，則調整所述基於分類演算法的數學模型的參數和/或調整訓練集樣本的數量，並利用所述調整後的訓練集重新對調整後的基於分類演算法的數學模型進行訓練，直至透過所述驗證集驗證得到的模型預測準確率不小於所述預設閾值。 Optionally, the training method of the preset mathematical model includes: acquiring a plurality of light source feature matrices with known feature information, and dividing the light source feature matrices into a training set and a verification set; establishing a mathematical model based on a classification algorithm, and training the parameters of the classification-algorithm-based mathematical model with the training set; verifying the trained classification-algorithm-based mathematical model with the verification set, by inputting the light source feature matrices in the verification set into the classification-algorithm-based mathematical model, comparing the feature information output by the model with the known feature information, and obtaining the prediction accuracy of the classification-algorithm-based mathematical model from the comparison results; judging whether the prediction accuracy of the classification-algorithm-based mathematical model is less than a preset threshold; if the prediction accuracy is not less than the preset threshold, taking the trained classification-algorithm-based mathematical model as the preset mathematical model; and if the prediction accuracy is less than the preset threshold, adjusting the parameters of the classification-algorithm-based mathematical model and/or adjusting the number of training-set samples, and retraining the adjusted classification-algorithm-based mathematical model with the adjusted training set until the prediction accuracy obtained through verification with the verification set is not less than the preset threshold.

可選地，所述基於特徵資訊相似度對所述多個特徵資訊分組，得到多組特徵資訊的方法包括：按照相似度匹配的方法對所述多組特徵資訊進行分組，其中所述相似度匹配的方法包括根據所述特徵資訊的梯度變化，對所述梯度變化的變化量按照不同的閾值劃分為多個區間，將同一區間內的特徵資訊劃分為一組。 Optionally, the method of grouping the pieces of feature information based on feature-information similarity to obtain multiple groups of feature information includes: grouping the feature information by similarity matching, wherein the similarity matching includes dividing the amount of the gradient change of the feature information into a plurality of intervals according to different thresholds, and assigning the feature information within the same interval to one group.

本申請的第二方面提供一種物體深度資訊獲取裝置，所述裝置包括：獲取模組：用於獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像，將每一所述圖像分割成多個區域，分別提取所述多個圖像中每一對應區域的亮度資訊，其中所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示；提取模組，用於將所述光源特徵矩陣依次輸入到預設的數學模型中得到每個所述光源特徵矩陣的特徵資訊，其中所述特徵資訊包括所述光源特徵矩陣在任意方向上的梯度變化；執行模組：用於基於特徵資訊相似度對所述多個特徵資訊分組，得到多組特徵資訊，對每一組特徵資訊設置一對應的深度值，根據所述多組特徵資訊對應的深度值組成所述待測物體的深度資訊。 A second aspect of the present application provides an object depth information acquisition device, the device comprising: an acquisition module, configured to acquire a plurality of images of an object to be measured under different characteristic lights captured by a camera device, divide each of the images into a plurality of regions, and separately extract the luminance information of each corresponding region in the plurality of images, wherein the luminance information of the same region in all of the images under the different characteristic lights is represented by one light source feature matrix; an extraction module, configured to input the light source feature matrices in turn into a preset mathematical model to obtain the feature information of each light source feature matrix, wherein the feature information includes the gradient change of the light source feature matrix in an arbitrary direction; and an execution module, configured to group the pieces of feature information based on feature-information similarity to obtain multiple groups of feature information, set a corresponding depth value for each group of feature information, and form the depth information of the object to be measured from the depth values corresponding to the multiple groups of feature information.

本申請的第三方面提供一種電腦裝置,所述電腦裝置包括處理器,所述處理器用於執行儲存器中儲存的電腦程式時實現所述物體深度資訊獲取方法。 A third aspect of the present application provides a computer device, the computer device includes a processor, and the processor is configured to implement the method for acquiring depth information of an object when executing a computer program stored in a memory.

本申請的第四方面提供一種電腦可讀儲存介質,所述電腦可讀儲存介質儲存有至少一個電腦程式,所述至少一個電腦程式被處理器執行時實現所述物體深度資訊獲取方法。 A fourth aspect of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores at least one computer program, and when the at least one computer program is executed by a processor, implements the method for acquiring depth information of an object.

本申請的物體深度資訊獲取方法、裝置、電腦裝置及儲存介質可以透過單攝像頭採集待測物體的圖像，透過所述物體深度資訊獲取方法獲取所述圖像中不同區域對應的深度資訊，使得物體深度資訊的獲取方式以更加經濟的方式進行。 With the object depth information acquisition method, device, computer device and storage medium of the present application, images of an object to be measured can be captured with a single camera, and the depth information corresponding to different regions in the images can be obtained by the object depth information acquisition method, so that the depth information of the object is acquired in a more economical way.

1:電腦裝置 1: Computer device

2:攝像裝置 2: Camera device

3:可調節光源 3: Adjustable light source

10:物體深度資訊獲取裝置 10: Object depth information acquisition device

101:獲取模組 101: Acquisition module

102:提取模組 102: Extraction module

103:執行模組 103: Execution module

20:儲存器 20: Storage

30:處理器 30: Processor

40:電腦程式 40: Computer program

S1~S3:步驟 S1~S3: Steps

為了更清楚地說明本申請實施例或習知技術中的技術方案，下面將對實施例或習知技術描述中所需要使用的附圖作簡單地介紹，顯而易見地，下面描述中的附圖僅僅是本申請的實施例，對於本領域普通技術人員來講，在不付出創造性勞動的前提下，還可以根據提供的附圖獲得其他的附圖。 To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.

圖1是本申請實施例提供的物體深度資訊獲取方法的應用環境架構示意圖。 FIG. 1 is a schematic diagram of an application environment architecture of a method for acquiring depth information of an object provided by an embodiment of the present application.

圖2是本申請實施例提供的物體深度資訊獲取方法的流程圖。 FIG. 2 is a flowchart of a method for acquiring depth information of an object provided by an embodiment of the present application.

圖3是本申請實施例提供的物體深度資訊獲取裝置的結構示意圖。 FIG. 3 is a schematic structural diagram of an apparatus for acquiring depth information of an object provided by an embodiment of the present application.

圖4是本申請實施例提供的電腦裝置示意圖。 FIG. 4 is a schematic diagram of a computer device provided by an embodiment of the present application.

為了能夠更清楚地理解本申請的上述目的、特徵和優點,下面結合附圖和具體實施例對本申請進行詳細描述。需要說明的是,在不衝突的情況下,本申請的實施例及實施例中的特徵可以相互組合。 In order to more clearly understand the above objects, features and advantages of the present application, the present application will be described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present application and the features in the embodiments may be combined with each other in the case of no conflict.

在下面的描述中闡述了很多具體細節以便於充分理解本申請,所描述的實施例僅僅是本申請一部分實施例,而不是全部的實施例。基於本申請中的實施例,本領域普通技術人員在沒有做出創造性勞動前提下所獲得的所有其他實施例,都屬於本申請保護的範圍。 In the following description, many specific details are set forth to facilitate a full understanding of the present application, and the described embodiments are only a part of the embodiments of the present application, but not all of the embodiments. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.

除非另有定義,本文所使用的所有的技術和科學術語與屬於本申請的技術領域的技術人員通常理解的含義相同。本文中在本申請的說明書中所使用的術語只是為了描述具體的實施例的目的,不是旨在於限制本申請。 Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the technical field to which this application belongs. The terms used herein in the specification of the application are for the purpose of describing specific embodiments only, and are not intended to limit the application.

參閱圖1所示,為本申請實施例提供的物體深度資訊獲取方法的應用環境架構示意圖。 Referring to FIG. 1 , it is a schematic diagram of an application environment architecture of the method for acquiring depth information of an object provided by an embodiment of the present application.

本實施例中的物體深度資訊獲取方法應用在電腦裝置1中，所述電腦裝置1與攝像裝置2和可調節光源3透過網路建立通信連接。所述網路可以是有線網路，也可以是無線網路，例如無線電、無線保真(Wireless Fidelity,WIFI)、蜂窩、衛星、廣播等。所述可調節光源3用於發出具有不同特徵的光，所述不同特徵的光是指不同發光強度和不同發光模式的組合特徵光，其中，所述發光模式包括：發光角度、發光位置、光譜、光場分佈中的一項或多項。所述攝像裝置2用於採集物體在不同特徵光照射下的圖像資訊，並將所述圖像資訊發送至電腦裝置1。所述電腦裝置1用於切換可調節光源3的不同發光強度和發光模式從而發出所述不同特徵的光，還用於接收攝像裝置2採集的圖像資訊，獲取所述圖像資訊的亮度資訊，並提取所述亮度資訊的特徵資訊，對所述特徵資訊按照預設方法進行分組，並根據預設規則對每組特徵資訊設置對應的深度值，透過將所述特徵資訊對應的深度值進行組合從而獲得物體深度資訊。 The object depth information acquisition method in this embodiment is applied in a computer device 1, and the computer device 1 establishes communication connections with a camera device 2 and an adjustable light source 3 through a network. The network may be a wired network or a wireless network, such as radio, Wireless Fidelity (WIFI), cellular, satellite, broadcast, and the like. The adjustable light source 3 is used to emit light with different characteristics, where the light with different characteristics refers to combined characteristic light of different luminous intensities and different luminous modes, and the luminous modes include one or more of: luminous angle, luminous position, spectrum, and light field distribution. The camera device 2 is used to collect image information of an object illuminated by the different characteristic lights and to send the image information to the computer device 1. The computer device 1 is used to switch the adjustable light source 3 between the different luminous intensities and luminous modes so as to emit the light with different characteristics, and is also used to receive the image information collected by the camera device 2, obtain the luminance information of the image information, extract the feature information of the luminance information, group the feature information according to a preset method, set a corresponding depth value for each group of feature information according to a preset rule, and combine the depth values corresponding to the feature information to obtain the depth information of the object.

所述電腦裝置1可以為安裝有物體深度資訊獲取軟體的電子設備,例如個人電腦、伺服器等,其中,所述伺服器可以是單一的伺服器、伺服器集群或雲伺服器等。 The computer device 1 may be an electronic device installed with object depth information acquisition software, such as a personal computer, a server, and the like, wherein the server may be a single server, a server cluster, or a cloud server.

所述攝像裝置2是具有拍攝功能的單攝像頭攝像裝置,包括但不限於手機、照相機、平板電腦、監控器等。 The camera device 2 is a single-camera camera device with a shooting function, including but not limited to a mobile phone, a camera, a tablet computer, a monitor, and the like.

所述可調節光源3的發光強度、發光角度、發光位置、光譜、光場分佈中的一項或多項可以調節,從而實現不同模式的發光。在一個實施例中,上述可調節光源3可以包括但不限於可調節LED燈,可調節燈箱等。 One or more of the light-emitting intensity, light-emitting angle, light-emitting position, spectrum, and light-field distribution of the adjustable light source 3 can be adjusted, thereby realizing light-emitting in different modes. In one embodiment, the above-mentioned adjustable light sources 3 may include, but are not limited to, adjustable LED lights, adjustable light boxes, and the like.

在一個實施例中,所述電腦裝置1可以位於攝像裝置2中,所述電腦裝置用於接收攝像裝置2採集的圖像資訊,並根據物體深度資訊獲取方法獲取所述圖像的深度資訊。 In one embodiment, the computer device 1 may be located in the camera device 2, and the computer device is configured to receive image information collected by the camera device 2, and acquire depth information of the image according to a method for acquiring depth information of an object.

請參閱圖2所示,是本申請實施例提供的物體深度資訊獲取方法的流程圖。根據不同的需求,所述流程圖中步驟的順序可以改變,某些步驟可以省略。 Please refer to FIG. 2 , which is a flowchart of a method for acquiring depth information of an object provided by an embodiment of the present application. According to different requirements, the order of the steps in the flowchart can be changed, and some steps can be omitted.

步驟S1、獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像，將每一所述圖像分割成多個區域，分別提取所述多個圖像中每一對應區域的亮度資訊，其中所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示。 Step S1: Acquire a plurality of images of the object to be measured under different characteristic lights captured by the camera device, divide each of the images into a plurality of regions, and separately extract the luminance information of each corresponding region in the plurality of images, where the luminance information of the same region in all of the images under the different characteristic lights is represented by one light source feature matrix.

步驟S1的具體實施步驟如下所述: The specific implementation steps of step S1 are as follows:

(1)獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像。 (1) Acquiring multiple images of the object to be measured under different characteristic lights and collected by the camera device.

所述不同特徵光是指不同發光強度和不同發光模式的組合特徵光,其中,所述發光模式包括:發光角度、發光位置、光譜、光場分佈中的一項或多項。 The different characteristic lights refer to the combined characteristic lights of different luminous intensities and different luminous modes, wherein the luminous modes include one or more of: luminous angle, luminous position, spectrum, and light field distribution.

在同一發光模式下，將光源的發光強度設置為M階，對於每一階發光強度，攝像裝置2獲取一張所述物體的圖像。對於同一種發光模式，不同種發光強度的光源下，總共獲取的圖像有M張。若所述發光模式包括N種，則總共獲取的圖像數量為M×N張。例如將光源的發光強度分為100階，對於每一階發光強度，獲取在不同發光模式的圖像，即發光角度、發光位置、光譜、光場分佈4種發光模式下物體的圖像。所述物體在發光強度為100階，發光模式為4種的光源下拍攝的圖像總數為100×4張圖像。 Under the same luminous mode, the luminous intensity of the light source is set to M levels, and for each level of luminous intensity the camera device 2 acquires one image of the object. For one luminous mode, M images are therefore acquired across the different light-source intensities. If there are N luminous modes, the total number of acquired images is M×N. For example, if the luminous intensity of the light source is divided into 100 levels and, for each intensity level, images are acquired under the different luminous modes (luminous angle, luminous position, spectrum and light field distribution, i.e. 4 modes), then the total number of images of the object captured under the 100-level, 4-mode light source is 100×4.
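The M×N image-capture bookkeeping described above can be sketched as a simple capture loop. This is a minimal illustration only: set_light() and capture_image() are hypothetical placeholders for whatever light-source and single-camera API is actually used; only the counting logic follows the text.

    # Sketch of acquiring M x N images under combined characteristic lights.
    M = 100                                                    # intensity levels
    MODES = ["angle", "position", "spectrum", "light_field"]   # N = 4 luminous modes

    def set_light(intensity_level, mode):
        """Hypothetical placeholder: configure the adjustable light source."""
        ...

    def capture_image():
        """Hypothetical placeholder: grab one frame from the single-camera device."""
        ...

    images = {}                                   # keyed by (intensity level, mode)
    for mode in MODES:
        for level in range(M):
            set_light(level, mode)
            images[(level, mode)] = capture_image()

    assert len(images) == M * len(MODES)          # 100 x 4 = 400 images in total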

(2)電腦裝置1將每一所述圖像分割成多個區域。 (2) The computer device 1 divides each of the images into a plurality of regions.

在一個實施例中，所述多個圖像的分割方式相同。所述分割方式可以是對所述圖像的橫軸方向和縱軸方向進行等比例或預設比例的分割，將所述圖像分割成H×V個區域，其中所述H、V為正整數，H表示圖像橫軸方向分割的段數，V表示圖像縱軸方向分割的段數。所述分割方式還可以是按照所述圖像的一條邊為基準，與基準邊以預設角度的斜線將所述圖像分割成多個區域。 In one embodiment, the plurality of images are divided in the same manner. The division may split the image along its horizontal-axis and vertical-axis directions in equal or preset proportions into H×V regions, where H and V are positive integers, H is the number of segments along the horizontal axis of the image, and V is the number of segments along the vertical axis. Alternatively, the image may be divided into a plurality of regions by oblique lines at a preset angle to one edge of the image taken as a reference.
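A minimal sketch of the equal-proportion H×V split described above, using NumPy; the exact split rule in the application may differ (the oblique-line variant is not shown), and the divisibility assumption is for simplicity only.

    import numpy as np

    def split_into_regions(image: np.ndarray, H: int, V: int):
        """Split an image into H segments along the horizontal axis and V segments
        along the vertical axis, returning the H*V sub-arrays in row-major order.
        Assumes the image dimensions are divisible by H and V."""
        rows, cols = image.shape[:2]
        rh, cw = rows // V, cols // H
        regions = []
        for v in range(V):
            for h in range(H):
                regions.append(image[v * rh:(v + 1) * rh, h * cw:(h + 1) * cw])
        return regions

    # Example: a 480x640 image split into 8x6 = 48 regions
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    regions = split_into_regions(img, H=8, V=6)
    assert len(regions) == 48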

(3)電腦裝置1分別提取所述多個圖像中每一對應區域的亮度資訊。 (3) The computer device 1 separately extracts the luminance information of each corresponding region in the plurality of images.

所述調取圖像亮度的方法包括將所述圖像中的圖元點RGB的色度值透過色域轉換方法，將圖元點的RGB值轉換為Yuv值，其中Y表示圖元點的亮度資訊，uv表示圖元點的色度資訊，所述色域轉換方法包括基於神經網路的色域轉換方法、基於四面體的色域轉換方法、基於線性回歸的色域轉換方法中的任意一種。 The method of retrieving the image luminance includes converting the RGB values of the pixels in the image into Yuv values through a color gamut conversion method, where Y represents the luminance information of a pixel and uv represents the chrominance information of the pixel; the color gamut conversion method is any one of a neural-network-based color gamut conversion method, a tetrahedron-based color gamut conversion method, and a linear-regression-based color gamut conversion method.
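As one concrete way to obtain the per-region luminance, the sketch below uses a linear RGB-to-Y conversion; the BT.601 weights are an assumption made for illustration, and any of the gamut-conversion methods named in the text (neural network, tetrahedron, linear regression) could be used instead.

    import numpy as np

    def region_luminance(region_rgb: np.ndarray) -> float:
        """Convert an RGB region to Y (luma) with BT.601 weights and return its
        mean luminance. The weights are an illustrative assumption, not the
        application's prescribed gamut-conversion method."""
        rgb = region_rgb.astype(np.float32)
        y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        return float(y.mean())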

(4)電腦裝置1將所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示。 (4) The computer device 1 represents the luminance information of the same area in all images under different characteristic lights by a light source characteristic matrix.

所述矩陣的大小為M×N，其中M×N代表不同特徵光的個數。例如電腦裝置1將攝像裝置2採集的圖像按照預設方式分割為H×V個區域，對於任意一個區域，獲取所述區域在M×N種特徵光源照射下的亮度資訊，將獲取到的M×N個亮度資訊組成一個光源特徵矩陣，矩陣中的元素a01代表所述區域在光源的發光強度為0、發光模式為第一種模式下的亮度資訊。依次類推，獲取所述圖像中其他各個區域在不同特徵光源下的亮度資訊，並由不同區域的多組亮度資訊組成多個光源特徵矩陣，其中所述光源特徵矩陣的個數與所述物體圖像分割區域的個數相等。 The size of the matrix is M×N, where M×N is the number of different characteristic lights. For example, the computer device 1 divides an image collected by the camera device 2 into H×V regions in the preset manner; for any one region, the luminance information of that region under the M×N characteristic light sources is obtained, and the M×N luminance values form one light source feature matrix, in which the element a01 represents the luminance information of the region when the luminous intensity level of the light source is 0 and the luminous mode is the first mode. By analogy, the luminance information of every other region of the image under the different characteristic light sources is obtained, and the groups of luminance information of the different regions form a plurality of light source feature matrices, where the number of light source feature matrices is equal to the number of regions into which the object image is divided.
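The per-region M×N light source feature matrix can then be assembled as sketched below: one luminance value per (intensity level, luminous mode) pair, and one matrix per region. The names reuse the hypothetical placeholders from the earlier sketches and are assumptions, not the application's actual API.

    import numpy as np

    def build_light_source_matrix(images, region_index, M, modes, split_fn, luminance_fn):
        """Assemble the M x N light source feature matrix for one region: element
        [m, n] is the region's luminance under intensity level m and luminous mode n.
        `images` is keyed by (level, mode) as in the capture sketch; `split_fn`
        and `luminance_fn` stand in for the region split and luminance extraction."""
        mat = np.zeros((M, len(modes)), dtype=np.float32)
        for n, mode in enumerate(modes):
            for m in range(M):
                regions = split_fn(images[(m, mode)])
                mat[m, n] = luminance_fn(regions[region_index])
        return mat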

步驟S2、將所述光源特徵矩陣依次輸入到預設的數學模型中得到每個所述光源特徵矩陣的特徵資訊,其中所述特徵資訊包括所述光源特徵矩陣在任意方向上的梯度變化。 Step S2 , sequentially inputting the light source characteristic matrix into a preset mathematical model to obtain characteristic information of each light source characteristic matrix, wherein the characteristic information includes the gradient change of the light source characteristic matrix in any direction.

將所述光源特徵矩陣依次輸入到預設的數學模型中，提取所述光源特徵矩陣的特徵資訊，所述特徵資訊的個數與光源特徵矩陣的個數相等，在一個實施例中，待測試物體的圖像的分割區域為H×V個，所述H×V個區域對應H×V個光學特徵矩陣，將H×V個光學特徵矩陣輸入到預設的資料模型中，提取的特徵資訊的個數為H×V個。所述特徵資訊包括所述亮度資訊在任意方向上的梯度變化，所述預設的數學模型透過基於分類演算法的數學模型獲取所述光源特徵矩陣中的特徵資訊，所述分類演算法包括但不限於基於二分類演算法、決策樹演算法、最小二值演算法。 The light source feature matrices are input in turn into the preset mathematical model, and the feature information of each light source feature matrix is extracted; the number of pieces of feature information is equal to the number of light source feature matrices. In one embodiment, the image of the object to be tested is divided into H×V regions, the H×V regions correspond to H×V optical feature matrices, the H×V optical feature matrices are input into the preset data model, and H×V pieces of feature information are extracted. The feature information includes the gradient change of the luminance information in an arbitrary direction, and the preset mathematical model obtains the feature information in the light source feature matrices through a mathematical model based on a classification algorithm, including but not limited to a binary classification algorithm, a decision tree algorithm, or a least binary algorithm.
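Since the feature information is described as the matrix's gradient change in arbitrary directions, a plain numerical illustration of such gradients is sketched below. This is only an illustration of what a gradient-based feature could look like; the application's actual feature information is produced by the preset classification-algorithm-based model, not by this function.

    import numpy as np

    def gradient_features(light_matrix: np.ndarray) -> np.ndarray:
        """Illustrative only: compute row- and column-direction gradients of an
        M x N light source feature matrix and summarize their magnitude as a
        small feature vector (mean and maximum gradient magnitude)."""
        g_rows, g_cols = np.gradient(light_matrix.astype(np.float32))
        magnitude = np.hypot(g_rows, g_cols)
        return np.array([magnitude.mean(), magnitude.max()])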

在一個實施例中，所述預設的數學模型為基於分類演算法的數學模型，所述預設的數學模型的訓練方法包括：獲取多份已知特徵資訊的光源特徵矩陣，將所述光源特徵矩陣分為訓練集和驗證集；建立基於分類演算法的數學模型，並利用所述訓練集對所述基於分類演算法的數學模型的參數進行訓練；利用所述驗證集對訓練後的所述基於分類演算法的數學模型進行驗證，將驗證集中的光源特徵矩陣輸入到所述基於分類演算法的數學模型中，將模型輸出的特徵資訊與已知的特徵資訊相比對，並根據比對結果統計得到所述基於分類演算法的數學模型的預測準確率；判斷所述基於分類演算法的數學模型的預測準確率是否小於預設閾值；若所述基於分類演算法的數學模型預測準確率不小於所述預設閾值，則將訓練完成的所述基於分類演算法的數學模型作為所述預設的數學模型；若所述基於分類演算法的數學模型預測準確率小於所述預設閾值，則調整所述基於分類演算法的數學模型的參數和/或調整訓練集樣本的數量，並利用所述調整後的訓練集重新對調整後的基於分類演算法的數學模型進行訓練，直至透過所述驗證集驗證得到的模型預測準確率不小於所述預設閾值。 In one embodiment, the preset mathematical model is a mathematical model based on a classification algorithm, and its training method includes: acquiring a plurality of light source feature matrices with known feature information, and dividing the light source feature matrices into a training set and a verification set; establishing a mathematical model based on a classification algorithm, and training the parameters of the classification-algorithm-based mathematical model with the training set; verifying the trained classification-algorithm-based mathematical model with the verification set, by inputting the light source feature matrices in the verification set into the classification-algorithm-based mathematical model, comparing the feature information output by the model with the known feature information, and obtaining the prediction accuracy of the classification-algorithm-based mathematical model from the comparison results; judging whether the prediction accuracy of the classification-algorithm-based mathematical model is less than a preset threshold; if the prediction accuracy is not less than the preset threshold, taking the trained classification-algorithm-based mathematical model as the preset mathematical model; and if the prediction accuracy is less than the preset threshold, adjusting the parameters of the classification-algorithm-based mathematical model and/or adjusting the number of training-set samples, and retraining the adjusted classification-algorithm-based mathematical model with the adjusted training set until the prediction accuracy obtained through verification with the verification set is not less than the preset threshold.
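A minimal sketch of the train/verify loop described above, using a decision-tree classifier (one of the algorithm families named in the text) from scikit-learn. The library choice, the flattening of matrices into feature vectors, and the "deepen the tree" adjustment policy are assumptions; only the stopping criterion (validation accuracy not less than the preset threshold, otherwise adjust and retrain) follows the text.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def train_preset_model(matrices, labels, threshold=0.9, max_rounds=10):
        """Train a classification-based model on flattened light source feature
        matrices with known feature-information labels, adjusting the model
        (here: increasing tree depth) until validation accuracy reaches the
        preset threshold. Adjustment policy and library are assumptions."""
        X = np.array([m.ravel() for m in matrices])
        y = np.array(labels)
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2)
        depth = 3
        for _ in range(max_rounds):
            model = DecisionTreeClassifier(max_depth=depth)
            model.fit(X_tr, y_tr)
            acc = accuracy_score(y_val, model.predict(X_val))
            if acc >= threshold:       # accuracy not less than the preset threshold
                return model
            depth += 2                  # adjust model parameters and retrain
        return model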

步驟S3、基於特徵資訊相似度對所述多個特徵資訊分組，得到多組特徵資訊，對每一組特徵資訊設置一對應的深度值，根據所述多組特徵資訊對應的深度值組成所述待測物體的深度資訊。 Step S3: Group the pieces of feature information based on feature-information similarity to obtain multiple groups of feature information, set a corresponding depth value for each group of feature information, and form the depth information of the object to be measured from the depth values corresponding to the multiple groups of feature information.

在一個實施例中，按照相似度匹配的方法對所述多個，例如步驟S2中提到的H×V個特徵資訊進行分組，其中所述相似度匹配的方法包括：將所述多個特徵資訊按照預設規則排序；將排序後的特徵資訊按照預設間隔劃分為不同區間，將同一區間內的至少一個特徵資訊劃分為一組，對於同一組內的特徵資訊賦予對應的深度值，由多組深度值組成所述待測物體的深度資訊。 In one embodiment, the plurality of pieces of feature information, for example the H×V pieces mentioned in step S2, are grouped by a similarity matching method, which includes: sorting the pieces of feature information according to a preset rule; dividing the sorted feature information into different intervals at preset spacings; assigning the at least one piece of feature information within the same interval to one group; and giving the feature information within the same group a corresponding depth value, the multiple groups of depth values forming the depth information of the object to be measured.
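A sketch of the interval-based grouping and depth assignment: bucket the per-region feature values by preset thresholds, then give every region in the same bucket the same depth value. The threshold values and the mapping from interval index to depth value are illustrative assumptions; the text only requires one depth value per group.

    import numpy as np

    def assign_depths(feature_values, thresholds, group_depths):
        """Group per-region feature values (e.g. gradient-change magnitudes) into
        intervals defined by `thresholds`, then assign every region in the same
        interval the same depth value. `group_depths` maps each interval index
        to a depth value and is an assumed convention."""
        features = np.asarray(feature_values, dtype=np.float32)
        groups = np.digitize(features, thresholds)     # interval index per region
        return np.array([group_depths[g] for g in groups])

    # Example: 3 thresholds -> 4 intervals -> 4 depth levels
    features = [0.1, 0.4, 0.9, 2.5, 0.05, 1.7]
    depths = assign_depths(features, thresholds=[0.2, 0.8, 2.0],
                           group_depths=[10.0, 20.0, 30.0, 40.0])
    print(depths)   # one depth value per region, together forming the depth map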

在一個實施例中，對H×V個特徵資訊使用基於機器學習演算法的分類方法進行分組。所述機器學習演算法包括支援向量機演算法的分類方法、貝葉斯分類演算法等。透過所述機器學習演算法對H×V個特徵對於同一組內的特徵資訊賦予對應的深度值，由多組深度值組成所述待測物體的深度資訊。 In one embodiment, the H×V pieces of feature information are grouped using a classification method based on a machine learning algorithm, such as a support vector machine classifier or a Bayesian classifier. Through the machine learning algorithm, the feature information within the same group among the H×V features is given a corresponding depth value, and the multiple groups of depth values form the depth information of the object to be measured.

上述圖2詳細介紹了本申請的物體深度資訊獲取方法，下面結合圖3-4，對實現所述物體深度資訊獲取方法的軟體裝置的功能模組以及實現所述物體深度資訊獲取方法的硬體裝置架構進行介紹。 FIG. 2 above describes the object depth information acquisition method of the present application in detail. Below, with reference to FIGS. 3-4, the functional modules of the software device that implements the object depth information acquisition method and the architecture of the hardware device that implements the method are introduced.

應該瞭解,所述實施例僅為說明之用,在專利申請範圍上並不受此結構的限制。 It should be understood that the embodiments are only used for illustration, and are not limited by this structure in the scope of the patent application.

圖3為本申請物體深度資訊獲取裝置較佳實施例的結構圖。 FIG. 3 is a structural diagram of a preferred embodiment of an object depth information acquisition device according to the present application.

在一些實施例中,物體深度資訊獲取裝置10運行於電腦裝置中。所述電腦裝置透過網路連接了多個用戶終端。所述物體深度資訊獲取裝置10可以包括多個由程式碼段所組成的功能模組。所述物體深度資訊獲取裝置10中的各個程式段的程式碼可以儲存於電腦裝置的儲存器中,並由所述至少一個處理器所執行,以實現物體深度資訊獲取功能。 In some embodiments, the object depth information acquisition device 10 is executed in a computer device. The computer device is connected to a plurality of user terminals through a network. The object depth information acquisition device 10 may include a plurality of functional modules composed of program code segments. The code of each program segment in the object depth information acquisition device 10 can be stored in the memory of the computer device and executed by the at least one processor to realize the object depth information acquisition function.

本實施例中,所述物體深度資訊獲取裝置10根據其所執行的功能,可以被劃分為多個功能模組。參閱圖3所示,所述功能模組可以包括:獲取模組101、提取模組102、執行模組103。本申請所稱的模組是指一種能夠被至少一個處理器所執行並且能夠完成固定功能的一系列電腦程式段,其儲存在儲存器中。在本實施例中,關於各模組的功能將在後續的實施例中詳述。 In this embodiment, the object depth information acquiring apparatus 10 can be divided into a plurality of functional modules according to the functions performed by the apparatus 10 . Referring to FIG. 3 , the functional modules may include: an acquisition module 101 , an extraction module 102 , and an execution module 103 . The module referred to in this application refers to a series of computer program segments that can be executed by at least one processor and can perform fixed functions, and are stored in a memory. In this embodiment, the functions of each module will be described in detail in subsequent embodiments.

所述獲取模組101：用於獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像，將所述每一圖像分割成多個區域，分別提取所述多個圖像中每一對應區域的亮度資訊，其中所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示。 The acquisition module 101 is configured to acquire a plurality of images of the object to be measured under different characteristic lights captured by the camera device, divide each of the images into a plurality of regions, and separately extract the luminance information of each corresponding region in the plurality of images, where the luminance information of the same region in all of the images under the different characteristic lights is represented by one light source feature matrix.

所述獲取模組101的具體實施步驟如下所述: The specific implementation steps of the acquisition module 101 are as follows:

(1)所述獲取模組101獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像。 (1) The acquisition module 101 acquires a plurality of images of the object to be measured under different characteristic lights and collected by the camera device.

所述不同特徵光是指不同發光強度和不同發光模式的組合特徵光,其中,所述發光模式包括:發光角度、發光位置、光譜、光場分佈中的一項或多項。 The different characteristic lights refer to the combined characteristic lights of different luminous intensities and different luminous modes, wherein the luminous modes include one or more of: luminous angle, luminous position, spectrum, and light field distribution.

在同一發光模式下，將光源的發光強度設置為M階，對於每一階發光強度，攝像裝置2獲取一張所述物體的圖像。對於同一種發光模式，不同種發光強度的光源下，總共獲取的圖像有M張。若所述發光模式包括N種，則總共獲取的圖像數量為M×N張。例如將光源的發光強度分為100階，對於每一階發光強度，獲取在不同發光模式的圖像，即發光角度、發光位置、光譜、光場分佈4種發光模式下物體的圖像。所述物體在發光強度為100階，發光模式為4種的光源下拍攝的圖像總數為100×4張圖像。 Under the same luminous mode, the luminous intensity of the light source is set to M levels, and for each level of luminous intensity the camera device 2 acquires one image of the object. For one luminous mode, M images are therefore acquired across the different light-source intensities. If there are N luminous modes, the total number of acquired images is M×N. For example, if the luminous intensity of the light source is divided into 100 levels and, for each intensity level, images are acquired under the different luminous modes (luminous angle, luminous position, spectrum and light field distribution, i.e. 4 modes), then the total number of images of the object captured under the 100-level, 4-mode light source is 100×4.

(2)所述獲取模組101將每一所述圖像分割成多個區域。 (2) The acquisition module 101 divides each image into a plurality of regions.

在一個實施例中，所述多個圖像的分割方式相同。所述分割方式可以是對所述圖像的橫軸方向和縱軸方向進行等比例或預設比例的分割，將所述圖像分割成H×V個區域，其中所述H、V為正整數，H表示圖像橫軸方向分割的段數，V表示圖像縱軸方向分割的段數。所述分割方式還可以是按照所述圖像的一條邊為基準，與基準邊以預設角度的斜線將所述圖像分割成多個區域。 In one embodiment, the plurality of images are divided in the same manner. The division may split the image along its horizontal-axis and vertical-axis directions in equal or preset proportions into H×V regions, where H and V are positive integers, H is the number of segments along the horizontal axis of the image, and V is the number of segments along the vertical axis. Alternatively, the image may be divided into a plurality of regions by oblique lines at a preset angle to one edge of the image taken as a reference.

(3)所述獲取模組101分別提取所述多個圖像中每一對應區域的亮度資訊。 (3) The acquiring module 101 extracts the luminance information of each corresponding region in the plurality of images respectively.

所述調取圖像亮度的方法包括將所述圖像中的圖元點RGB的色度值透過色域轉換方法，將圖元點的RGB值轉換為Yuv值，其中Y表示圖元點的亮度資訊，uv表示圖元點的色度資訊，所述色域轉換方法包括基於神經網路的色域轉換方法、基於四面體的色域轉換方法、基於線性回歸的色域轉換方法中的任意一種。 The method of retrieving the image luminance includes converting the RGB values of the pixels in the image into Yuv values through a color gamut conversion method, where Y represents the luminance information of a pixel and uv represents the chrominance information of the pixel; the color gamut conversion method is any one of a neural-network-based color gamut conversion method, a tetrahedron-based color gamut conversion method, and a linear-regression-based color gamut conversion method.

(4)所述獲取模組101將所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示。 (4) The acquisition module 101 represents the luminance information of the same area in all images under different characteristic lights by a light source characteristic matrix.

所述矩陣的大小為M×N，其中M×N代表不同特徵光的個數。例如電腦裝置1將攝像裝置2採集的圖像按照預設方式分割為H×V個區域，對於任意一個區域，獲取所述區域在M×N種特徵光源照射下的亮度資訊，將獲取到的M×N個亮度資訊組成一個光源特徵矩陣，矩陣中的元素a01代表所述區域在光源的發光強度為0、發光模式為第一種模式下的亮度資訊。依次類推，獲取所述圖像中其他各個區域在不同特徵光源下的亮度資訊，並由不同區域的多組亮度資訊組成多個光源特徵矩陣，其中所述光源特徵矩陣的個數與所述物體圖像分割區域的個數相等。 The size of the matrix is M×N, where M×N is the number of different characteristic lights. For example, the computer device 1 divides an image collected by the camera device 2 into H×V regions in the preset manner; for any one region, the luminance information of that region under the M×N characteristic light sources is obtained, and the M×N luminance values form one light source feature matrix, in which the element a01 represents the luminance information of the region when the luminous intensity level of the light source is 0 and the luminous mode is the first mode. By analogy, the luminance information of every other region of the image under the different characteristic light sources is obtained, and the groups of luminance information of the different regions form a plurality of light source feature matrices, where the number of light source feature matrices is equal to the number of regions into which the object image is divided.

所述提取模組102，用於將所述光源特徵矩陣依次輸入到預設的數學模型中得到所述每個光源特徵矩陣的特徵資訊，其中所述特徵資訊包括所述光源特徵矩陣在任意方向上的梯度變化。 The extraction module 102 is configured to input the light source feature matrices in turn into the preset mathematical model to obtain the feature information of each light source feature matrix, where the feature information includes the gradient change of the light source feature matrix in an arbitrary direction.

將所述光源特徵矩陣依次輸入到預設的數學模型中，提取所述光源特徵矩陣的特徵資訊，所述特徵資訊的個數與光源特徵矩陣的個數相等，在一個實施例中，待測試物體的圖像的分割區域為H×V個，所述H×V個區域對應H×V個光學特徵矩陣，將H×V個光學特徵矩陣輸入到預設的資料模型中，提取的特徵資訊的個數為H×V個。所述特徵資訊包括所述亮度資訊在任意方向上的梯度變化，所述預設的數學模型透過基於分類演算法的數學模型獲取所述光源特徵矩陣中的特徵資訊，所述分類演算法包括但不限於基於二分類演算法、決策樹演算法、最小二值演算法。 The light source feature matrices are input in turn into the preset mathematical model, and the feature information of each light source feature matrix is extracted; the number of pieces of feature information is equal to the number of light source feature matrices. In one embodiment, the image of the object to be tested is divided into H×V regions, the H×V regions correspond to H×V optical feature matrices, the H×V optical feature matrices are input into the preset data model, and H×V pieces of feature information are extracted. The feature information includes the gradient change of the luminance information in an arbitrary direction, and the preset mathematical model obtains the feature information in the light source feature matrices through a mathematical model based on a classification algorithm, including but not limited to a binary classification algorithm, a decision tree algorithm, or a least binary algorithm.

在一個實施例中，所述預設的數學模型為基於分類演算法的數學模型，所述預設的數學模型的訓練方法包括：獲取多份已知特徵資訊的光源特徵矩陣，將所述光源特徵矩陣分為訓練集和驗證集；建立基於分類演算法的數學模型，並利用所述訓練集對所述基於分類演算法的數學模型的參數進行訓練；利用所述驗證集對訓練後的所述基於分類演算法的數學模型進行驗證，將驗證集中的光源特徵矩陣輸入到所述基於分類演算法的數學模型中，將模型輸出的特徵資訊與已知的特徵資訊相比對，並根據比對結果統計得到所述基於分類演算法的數學模型的預測準確率；判斷所述基於分類演算法的數學模型的預測準確率是否小於預設閾值；若所述基於分類演算法的數學模型預測準確率不小於所述預設閾值，則將訓練完成的所述基於分類演算法的數學模型作為所述預設的數學模型；若所述基於分類演算法的數學模型預測準確率小於所述預設閾值，則調整所述基於分類演算法的數學模型的參數和/或調整訓練集樣本的數量，並利用所述調整後的訓練集重新對調整後的基於分類演算法的數學模型進行訓練，直至透過所述驗證集驗證得到的模型預測準確率不小於所述預設閾值。 In one embodiment, the preset mathematical model is a mathematical model based on a classification algorithm, and its training method includes: acquiring a plurality of light source feature matrices with known feature information, and dividing the light source feature matrices into a training set and a verification set; establishing a mathematical model based on a classification algorithm, and training the parameters of the classification-algorithm-based mathematical model with the training set; verifying the trained classification-algorithm-based mathematical model with the verification set, by inputting the light source feature matrices in the verification set into the classification-algorithm-based mathematical model, comparing the feature information output by the model with the known feature information, and obtaining the prediction accuracy of the classification-algorithm-based mathematical model from the comparison results; judging whether the prediction accuracy of the classification-algorithm-based mathematical model is less than a preset threshold; if the prediction accuracy is not less than the preset threshold, taking the trained classification-algorithm-based mathematical model as the preset mathematical model; and if the prediction accuracy is less than the preset threshold, adjusting the parameters of the classification-algorithm-based mathematical model and/or adjusting the number of training-set samples, and retraining the adjusted classification-algorithm-based mathematical model with the adjusted training set until the prediction accuracy obtained through verification with the verification set is not less than the preset threshold.

所述執行模組103，用於基於特徵資訊相似度對所述多個特徵資訊分組，得到多組特徵資訊，對每一組特徵資訊設置一對應的深度值，根據所述多組特徵資訊對應的深度值組成所述待測物體的深度資訊。 The execution module 103 is configured to group the pieces of feature information based on feature-information similarity to obtain multiple groups of feature information, set a corresponding depth value for each group of feature information, and form the depth information of the object to be measured from the depth values corresponding to the multiple groups of feature information.

在一個實施例中，按照相似度匹配的方法對所述多個，例如所述提取模組102中提到的H×V個特徵資訊進行分組，其中所述相似度匹配的方法包括：將所述多個特徵資訊按照預設規則排序；將排序後的特徵資訊按照預設間隔劃分為不同區間，將同一區間內的至少一個特徵資訊劃分為一組，對於同一組內的特徵資訊賦予對應的深度值，由多組深度值組成所述待測物體的深度資訊。 In one embodiment, the plurality of pieces of feature information, for example the H×V pieces mentioned for the extraction module 102, are grouped by a similarity matching method, which includes: sorting the pieces of feature information according to a preset rule; dividing the sorted feature information into different intervals at preset spacings; assigning the at least one piece of feature information within the same interval to one group; and giving the feature information within the same group a corresponding depth value, the multiple groups of depth values forming the depth information of the object to be measured.

在一個實施例中，對H×V個特徵資訊使用基於機器學習演算法的分類方法進行分組。所述機器學習演算法包括支援向量機演算法的分類方法、貝葉斯分類演算法等。透過所述機器學習演算法對H×V個特徵對於同一組內的特徵資訊賦予對應的深度值，由多組深度值組成所述待測物體的深度資訊。 In one embodiment, the H×V pieces of feature information are grouped using a classification method based on a machine learning algorithm, such as a support vector machine classifier or a Bayesian classifier. Through the machine learning algorithm, the feature information within the same group among the H×V features is given a corresponding depth value, and the multiple groups of depth values form the depth information of the object to be measured.

圖4為本申請電腦裝置1較佳實施例的示意圖。 FIG. 4 is a schematic diagram of a preferred embodiment of the computer device 1 of the present application.

所述電腦裝置1包括儲存器20、處理器30以及儲存在所述儲存器20中並可在所述處理器30上運行的電腦程式40,例如物體深度資訊獲取程式。所述處理器30執行所述電腦程式40時實現上述物體深度資訊獲取方法實施例中的步驟,例如圖2所示的步驟S1~S3。或者,所述處理器30執行所述電腦程式40時實現上述物體深度資訊獲取裝置實施例中各模組/單元的功能,例如圖3中的單元101-103。 The computer device 1 includes a storage 20, a processor 30, and a computer program 40 stored in the storage 20 and executable on the processor 30, such as an object depth information acquisition program. When the processor 30 executes the computer program 40 , the steps in the above-mentioned embodiment of the method for acquiring depth information of an object are implemented, for example, steps S1 to S3 shown in FIG. 2 . Alternatively, when the processor 30 executes the computer program 40 , the functions of each module/unit in the above-mentioned embodiment of the apparatus for acquiring depth information of an object are realized, for example, the units 101 to 103 in FIG. 3 .

示例性的,所述電腦程式40可以被分割成一個或多個模組/單元,所述一個或者多個模組/單元被儲存在所述儲存器20中,並由所述處理器30執行,以完成本申請。所述一個或多個模組/單元可以是能夠完成特定功能的一系列電腦程式指令段,所述指令段用於描述所述電腦程式40在所述電腦裝置1中的執行過程。例如,所述電腦程式40可以被分割成圖3中的獲取模組101、提取模組102、執行模組103。 Exemplarily, the computer program 40 may be divided into one or more modules/units, and the one or more modules/units are stored in the storage 20 and executed by the processor 30 , to complete this application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 40 in the computer device 1 . For example, the computer program 40 can be divided into an acquisition module 101 , an extraction module 102 , and an execution module 103 in FIG. 3 .

所述電腦裝置1可以是桌上型電腦、筆記本、掌上型電腦及雲端伺服器等計算設備。本領域技術人員可以理解，所述示意圖僅僅是電腦裝置1的示例，並不構成對電腦裝置1的限定，可以包括比圖示更多或更少的部件，或者組合某些部件，或者不同的部件，例如所述電腦裝置1還可以包括輸入輸出設備、網路接入設備、匯流排等。 The computer device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. Those skilled in the art will understand that the schematic diagram is merely an example of the computer device 1 and does not limit it; the computer device 1 may include more or fewer components than shown, combine certain components, or have different components, and may for example further include input/output devices, network access devices, buses, and the like.

所稱處理器30可以是中央處理單元(Central Processing Unit, CPU)，還可以是其他通用處理器、數位訊號處理器(Digital Signal Processor,DSP)、專用積體電路(Application Specific Integrated Circuit,ASIC)、現成可程式設計閘陣列(Field-Programmable Gate Array,FPGA)或者其他可程式設計邏輯元件、分立門或者電晶體邏輯元件、分立硬體元件等。通用處理器可以是微處理器或者所述處理器30也可以是任何常規的處理器等，所述處理器30是所述電腦裝置1的控制中心，利用各種介面和線路連接整個電腦裝置1的各個部分。 The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic element, a discrete gate or transistor logic element, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor; the processor 30 is the control center of the computer device 1 and connects the various parts of the whole computer device 1 through various interfaces and lines.

所述儲存器20可用於儲存所述電腦程式40和/或模組/單元，所述處理器30透過運行或執行儲存在所述儲存器20內的電腦程式和/或模組/單元，以及調用儲存在儲存器20內的資料，實現所述電腦裝置1的各種功能。所述儲存器20可主要包括儲存程式區和儲存資料區，其中，儲存程式區可儲存作業系統、至少一個功能所需的應用程式(比如聲音播放功能、圖像播放功能等)等；儲存資料區可儲存根據電腦裝置1的使用所創建的資料(比如音訊資料、電話本等)等。此外，儲存器20可以包括高速隨機存取儲存器，還可以包括非易失性儲存器，例如硬碟、儲存器、插接式硬碟，智慧儲存卡(Smart Media Card,SMC)，安全數位(Secure Digital,SD)卡，快閃儲存器卡(Flash Card)、至少一個磁碟儲存元件、快閃儲存器元件、或其他易失性固態儲存元件。 The storage 20 may be used to store the computer program 40 and/or the modules/units; the processor 30 implements the various functions of the computer device 1 by running or executing the computer programs and/or modules/units stored in the storage 20 and by calling data stored in the storage 20. The storage 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application required for at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the computer device 1 (such as audio data and a phone book). In addition, the storage 20 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage element, a flash memory element, or another volatile solid-state storage element.

所述電腦裝置1集成的模組/單元如果以軟體功能單元的形式實現並作為獨立的產品銷售或使用時，可以儲存在一個電腦可讀儲存介質中。基於這樣的理解，本申請實現上述實施例方法中的全部或部分流程，也可以透過電腦程式來指令相關的硬體來完成，所述的電腦程式可儲存於一電腦可讀儲存介質中，所述電腦程式在被處理器執行時，可實現上述各個方法實施例的步驟。其中，所述電腦程式包括電腦程式代碼，所述電腦程式代碼可以為原始程式碼形式、物件代碼形式、可執行文檔或某些中間形式等。所述電腦可讀儲存介質可以包括：能夠攜帶所述電腦程式代碼的任何實體或裝置、記錄介質、隨身碟、移動硬碟、磁碟、光碟、電腦儲存器、唯讀記憶體(ROM,Read-Only Memory)、隨機存取儲存器(RAM,Random Access Memory)、電載波訊號、電信訊號以及軟體分發介質等。需要說明的是，所述電腦可讀儲存介質包含的內容可以根據司法管轄區內立法和專利實踐的要求進行適當的增減，例如在某些司法管轄區，根據立法和專利實踐，電腦可讀儲存介質不包括電載波訊號和電信訊號。 If the modules/units integrated in the computer device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes of the above method embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program may implement the steps of the above method embodiments. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, or some intermediate form. The computer-readable storage medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be appropriately added to or removed according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, a computer-readable storage medium does not include electrical carrier signals and telecommunication signals.

在本申請所提供的幾個實施例中,應該理解到,所揭露的電腦裝置和方法,可以透過其它的方式實現。例如,以上所描述的電腦裝置實施例僅僅是示意性的,例如,所述單元的劃分,僅僅為一種邏輯功能劃分,實際實現時可以有另外的劃分方式。 In the several embodiments provided in this application, it should be understood that the disclosed computer apparatus and method may be implemented in other manners. For example, the computer apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical function division, and other division methods may be used in actual implementation.

另外,在本申請各個實施例中的各功能單元可以集成在相同處理單元中,也可以是各個單元單獨物理存在,也可以兩個或兩個以上單元集成在相同單元中。上述集成的單元既可以採用硬體的形式實現,也可以採用硬體加軟體功能模組的形式實現。 In addition, each functional unit in each embodiment of the present application may be integrated in the same processing unit, or each unit may exist physically alone, or two or more units may be integrated in the same unit. The above-mentioned integrated units can be implemented in the form of hardware, or can be implemented in the form of hardware plus software function modules.

對於本領域技術人員而言，顯然本申請不限於上述示範性實施例的細節，而且在不背離本申請的精神或基本特徵的情況下，能夠以其他的具體形式實現本申請。因此，無論從哪一點來看，均應將實施例看作是示範性的，而且是非限制性的，本申請的範圍由所附請求項而不是上述說明限定，因此旨在將落在請求項的等同要件的含義和範圍內的所有變化涵括在本申請內。不應將請求項中的任何附圖標記視為限制所涉及的請求項。此外，顯然"包括"一詞不排除其他單元或步驟，單數不排除複數。電腦裝置請求項中陳述的多個單元或電腦裝置也可以由同一個單元或電腦裝置透過軟體或者硬體來實現。第一，第二等詞語用來表示名稱，而並不表示任何特定的順序。 It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, the embodiments are to be regarded in all respects as illustrative and not restrictive; the scope of the present application is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and scope of equivalents of the claims are therefore intended to be embraced by the present application. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. A plurality of units or computer devices stated in a computer device claim may also be implemented by the same unit or computer device through software or hardware. Words such as first and second are used to denote names and do not denote any particular order.

最後應說明的是，以上實施例僅用以說明本申請的技術方案而非限制，儘管參照較佳實施例對本申請進行了詳細說明，本領域的普通技術人員應當理解，可以對本申請的技術方案進行修改或等同替換，而不脫離本申請技術方案的精神和範圍。 Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present application.


Claims (9)

一種物體深度資訊獲取方法,其中,所述方法包括:獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像,將每一所述圖像分割成多個區域,分別提取所述多個圖像中每一對應區域的亮度資訊,其中所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示;將所述光源特徵矩陣依次輸入到預設的數學模型中得到每個所述光源特徵矩陣的特徵資訊,其中所述特徵資訊包括所述光源特徵矩陣在任意方向上的梯度變化;基於特徵資訊相似度對所述多個特徵資訊分組,得到多組特徵資訊,對每一組特徵資訊設置一對應的深度值,根據所述多組特徵資訊對應的深度值組成所述待測物體的深度資訊,其中,所述基於特徵資訊相似度對所述多個特徵資訊分組,得到多組特徵資訊的方法包括:按照相似度匹配的方法對所述多組特徵資訊進行分組,其中所述相似度匹配的方法包括根據所述特徵資訊的梯度變化,對所述梯度變化的變化量按照不同的閾值劃分為多個區間,將同一區間內的特徵資訊劃分為一組。 A method for acquiring depth information of an object, wherein the method comprises: acquiring a plurality of images of an object to be measured under different characteristic lights collected by a camera, dividing each of the images into a plurality of regions, and extracting the The brightness information of each corresponding area in the multiple images, wherein the brightness information of the same area in all the images under different characteristic lights is represented by a light source characteristic matrix; the light source characteristic matrix is sequentially input into the preset mathematical model Obtain feature information of each of the light source feature matrices, wherein the feature information includes the gradient change of the light source feature matrix in any direction; group the plurality of feature information based on the similarity of the feature information to obtain multiple sets of feature information , set a corresponding depth value for each set of feature information, and form the depth information of the object to be measured according to the depth values corresponding to the multiple sets of feature information, wherein the similarity based on the feature information is used for the multiple features. Information grouping, and the method for obtaining multiple sets of feature information includes: grouping the multiple sets of feature information according to a similarity matching method, wherein the similarity matching method includes changing the gradient according to the gradient change of the feature information. The change amount of the change is divided into multiple intervals according to different thresholds, and the feature information in the same interval is divided into a group. 如請求項1所述的物體深度資訊獲取方法,其中,所述不同特徵光包括不同發光強度和不同發光模式的組合特徵光,其中,所述發光模式包括:發光角度、發光位置、光譜、光場分佈中的一項或多項。 The method for acquiring depth information of an object according to claim 1, wherein the different characteristic lights comprise combined characteristic lights of different luminous intensities and different luminous modes, wherein the luminous modes include: luminous angle, luminous position, spectrum, light One or more of the field distributions. 如請求項1所述的物體深度資訊獲取方法,其中,所述攝像裝置為單攝像頭攝像裝置。 The object depth information acquisition method according to claim 1, wherein the camera device is a single-camera camera device. 如請求項1所述的物體深度資訊獲取方法,其中,所述方法包括:將所述多個圖像按照相同的預設規則分割。 The method for acquiring depth information of an object according to claim 1, wherein the method comprises: dividing the plurality of images according to the same preset rule. 如請求項1所述的物體深度資訊獲取方法,其中,所述預設的數學模型為基於分類演算法的數學模型,其中,所述分類演算法包括基於二分類演算法、決策樹演算法、最小二值演算法中的任意一種。 The object depth information acquisition method according to claim 1, wherein the preset mathematical model is a mathematical model based on a classification algorithm, wherein the classification algorithm includes a binary classification algorithm, a decision tree algorithm, Any of the least binary algorithms. 
6. 如請求項5所述的物體深度資訊獲取方法，其中，所述預設的數學模型的訓練方法包括：獲取多份已知特徵資訊的光源特徵矩陣，將所述光源特徵矩陣分為訓練集和驗證集；建立基於分類演算法的數學模型，並利用所述訓練集對所述基於分類演算法的數學模型的參數進行訓練；利用所述驗證集對訓練後的所述基於分類演算法的數學模型進行驗證，將驗證集中的光源特徵矩陣輸入到所述基於分類演算法的數學模型中，將模型輸出的特徵資訊與已知的特徵資訊相比對，並根據比對結果統計得到所述基於分類演算法的數學模型的預測準確率；判斷所述基於分類演算法的數學模型的預測準確率是否小於預設閾值；若所述基於分類演算法的數學模型預測準確率不小於所述預設閾值，則將訓練完成的所述基於分類演算法的數學模型作為所述預設的數學模型；若所述基於分類演算法的數學模型預測準確率小於所述預設閾值，則調整所述基於分類演算法的數學模型的參數和/或調整訓練集樣本的數量，並利用所述調整後的訓練集重新對調整後的基於分類演算法的數學模型進行訓練，直至透過所述驗證集驗證得到的模型預測準確率不小於所述預設閾值。 The method for acquiring depth information of an object according to claim 5, wherein the training method of the preset mathematical model comprises: acquiring a plurality of light source feature matrices with known feature information, and dividing the light source feature matrices into a training set and a verification set; establishing a mathematical model based on the classification algorithm, and training the parameters of the classification-algorithm-based mathematical model with the training set; verifying the trained classification-algorithm-based mathematical model with the verification set, by inputting the light source feature matrices of the verification set into the classification-algorithm-based mathematical model, comparing the feature information output by the model with the known feature information, and obtaining the prediction accuracy of the classification-algorithm-based mathematical model from the comparison results; determining whether the prediction accuracy of the classification-algorithm-based mathematical model is less than a preset threshold; if the prediction accuracy is not less than the preset threshold, taking the trained classification-algorithm-based mathematical model as the preset mathematical model; if the prediction accuracy is less than the preset threshold, adjusting the parameters of the classification-algorithm-based mathematical model and/or adjusting the number of training set samples, and retraining the adjusted model with the adjusted training set until the prediction accuracy obtained through verification with the verification set is not less than the preset threshold.
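Claim 6 describes a train-validate-retrain loop gated on prediction accuracy. Below is a hedged Python sketch of that loop using scikit-learn's DecisionTreeClassifier as a stand-in for the classification-algorithm-based mathematical model; the choice of classifier, the accuracy threshold, the train/verification split, and the parameter-adjustment step are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def train_preset_model(matrices, labels, threshold=0.9, max_rounds=10):
    """matrices: (N, K) array, one row per light source feature matrix;
    labels: known feature information (one class id per matrix)."""
    X_train, X_val, y_train, y_val = train_test_split(
        matrices, labels, test_size=0.2, random_state=0)
    depth = 3                                    # initial model parameter (assumed)
    model = None
    for _ in range(max_rounds):
        model = DecisionTreeClassifier(max_depth=depth)
        model.fit(X_train, y_train)
        # Verification step: compare predicted vs. known feature information.
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc >= threshold:                     # not less than the preset threshold
            return model                         # use as the preset mathematical model
        depth += 1                               # adjust model parameters and retrain
    return model
```

The claim also allows adjusting the number of training set samples instead of, or in addition to, the model parameters; the sketch adjusts only one parameter for brevity.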
7. 一種物體深度資訊獲取裝置，其中，所述裝置包括：獲取模組：用於獲取攝像裝置採集的待測物體位於不同特徵光下的多個圖像，將每一所述圖像分割成多個區域，分別提取所述多個圖像中每一對應區域的亮度資訊，其中所有圖像中同一區域在不同特徵光下的亮度資訊用一個光源特徵矩陣表示；提取模組，用於將所述光源特徵矩陣依次輸入到預設的數學模型中得到每個所述光源特徵矩陣的特徵資訊，其中所述特徵資訊包括所述光源特徵矩陣在任意方向上的梯度變化；執行模組：用於基於特徵資訊相似度對所述多個特徵資訊分組，得到多組特徵資訊，對每一組特徵資訊設置一對應的深度值，根據所述多組特徵資訊對應的深度值組成所述待測物體的深度資訊，其中，所述基於特徵資訊相似度對所述多個特徵資訊分組，得到多組特徵資訊的方法包括：按照相似度匹配的方法對所述多組特徵資訊進行分組，其中所述相似度匹配的方法包括根據所述特徵資訊的梯度變化，對所述梯度變化的變化量按照不同的閾值劃分為多個區間，將同一區間內的特徵資訊劃分為一組。 An apparatus for acquiring depth information of an object, wherein the apparatus comprises: an acquisition module, configured to acquire a plurality of images, collected by a camera device, of an object to be measured under different characteristic lights, divide each of the images into a plurality of regions, and respectively extract the brightness information of each corresponding region in the plurality of images, wherein the brightness information of the same region in all the images under the different characteristic lights is represented by one light source feature matrix; an extraction module, configured to input the light source feature matrices in turn into a preset mathematical model to obtain the feature information of each light source feature matrix, wherein the feature information includes the gradient change of the light source feature matrix in any direction; and an execution module, configured to group the plurality of pieces of feature information based on feature-information similarity to obtain multiple sets of feature information, set a corresponding depth value for each set of feature information, and form the depth information of the object to be measured from the depth values corresponding to the multiple sets of feature information, wherein the method of grouping the plurality of pieces of feature information based on feature-information similarity to obtain the multiple sets of feature information comprises: grouping the multiple sets of feature information by a similarity matching method, wherein the similarity matching method comprises dividing the amount of the gradient change into a plurality of intervals according to different thresholds based on the gradient change of the feature information, and grouping the feature information falling in the same interval into one set.

8. 一種電腦裝置，其中：所述電腦裝置包括處理器，所述處理器用於執行儲存器中儲存的電腦程式時實現如請求項1至6中任一項所述的物體深度資訊獲取方法。 A computer device, wherein the computer device comprises a processor, and the processor implements the method for acquiring depth information of an object according to any one of claims 1 to 6 when executing a computer program stored in a memory.

9. 一種電腦可讀儲存介質，其中，所述電腦可讀儲存介質儲存有至少一個電腦程式，所述至少一個電腦程式被處理器執行時實現如請求項1至6中任一項所述的物體深度資訊獲取方法。 A computer-readable storage medium, wherein the computer-readable storage medium stores at least one computer program, and when the at least one computer program is executed by a processor, the method for acquiring depth information of an object according to any one of claims 1 to 6 is implemented.
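Claim 7 restates the method as three cooperating modules: acquisition, extraction, and execution. A minimal Python sketch of how such modules might be wired together is shown below; the class name, method names, injected callables, and default thresholds are assumptions made only for illustration and are not the patent's API.

```python
import numpy as np

class DepthInfoDevice:
    """Illustrative wiring of the acquisition / extraction / execution modules."""

    def __init__(self, split_fn, model_fn, thresholds=(0.1, 0.3, 0.6)):
        self.split_fn = split_fn       # images -> {region: light source feature matrix}
        self.model_fn = model_fn       # matrix -> feature information (gradient-based)
        self.thresholds = thresholds   # interval bounds used for grouping (assumed)

    def acquisition(self, images):
        # Acquisition module: region-wise brightness under each characteristic light.
        return self.split_fn(images)

    def extraction(self, matrices):
        # Extraction module: run every matrix through the preset mathematical model.
        return {region: self.model_fn(m) for region, m in matrices.items()}

    def execution(self, features):
        # Execution module: same interval -> same group -> same depth value.
        return {region: int(np.digitize(f, self.thresholds))
                for region, f in features.items()}

    def depth_info(self, images):
        return self.execution(self.extraction(self.acquisition(images)))
```

The split and model callables are injected so the same wiring works whether the preset mathematical model is the placeholder from the earlier sketch or a trained classifier as in claim 6.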
TW110119318A 2021-05-27 2021-05-27 Object depth information acquistition method, device, computer device and storage media TWI772040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110119318A TWI772040B (en) 2021-05-27 2021-05-27 Object depth information acquistition method, device, computer device and storage media

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW110119318A TWI772040B (en) 2021-05-27 2021-05-27 Object depth information acquistition method, device, computer device and storage media

Publications (2)

Publication Number Publication Date
TWI772040B true TWI772040B (en) 2022-07-21
TW202247098A TW202247098A (en) 2022-12-01

Family

ID=83439720

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110119318A TWI772040B (en) 2021-05-27 2021-05-27 Object depth information acquistition method, device, computer device and storage media

Country Status (1)

Country Link
TW (1) TWI772040B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101720047A (en) * 2009-11-03 2010-06-02 上海大学 Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation
TWI375464B (en) * 2006-11-28 2012-10-21 Sony Corp Imaging device
CN103119516A (en) * 2011-09-20 2013-05-22 松下电器产业株式会社 Light field imaging device and image processing device
TWI485435B (en) * 2010-09-07 2015-05-21 Sony Corp A solid-state imaging device, a solid-state imaging device, a camera device, and a ampling element
CN106611158A (en) * 2016-11-14 2017-05-03 深圳奥比中光科技有限公司 Method and equipment for obtaining human body 3D characteristic information
TWI624177B (en) * 2014-10-28 2018-05-11 惠普發展公司有限責任合夥企業 Image data segmentation

Also Published As

Publication number Publication date
TW202247098A (en) 2022-12-01

Similar Documents

Publication Publication Date Title
WO2020221177A1 (en) Method and device for recognizing image, storage medium and electronic device
Li et al. No-reference image quality assessment with deep convolutional neural networks
Dev et al. Categorization of cloud image patches using an improved texton-based approach
US20220198634A1 (en) Method for selecting a light source for illuminating defects, electronic device, and non-transitory storage medium
CN110991506B (en) Vehicle brand identification method, device, equipment and storage medium
CN110399487B (en) Text classification method and device, electronic equipment and storage medium
CN112241714B (en) Method and device for identifying designated area in image, readable medium and electronic equipment
US10417772B2 (en) Process to isolate object of interest in image
CN106203461B (en) Image processing method and device
US20220222799A1 (en) Method for detecting defect in products and electronic device using method
CN110691226A (en) Image processing method, device, terminal and computer readable storage medium
US20240013453A1 (en) Image generation method and apparatus, and storage medium
TWI830815B (en) User terminal failure detection method, device, computer device and storage medium
CN111161281A (en) Face region identification method and device and storage medium
CN115082400A (en) Image processing method and device, computer equipment and readable storage medium
CN116188808A (en) Image feature extraction method and system, storage medium and electronic device
Reta et al. Color uniformity descriptor: An efficient contextual color representation for image indexing and retrieval
TWI772040B (en) Object depth information acquistition method, device, computer device and storage media
CN111079637A (en) Method, device and equipment for segmenting rape flowers in field image and storage medium
CN112304292B (en) Object detection method and detection system based on monochromatic light
CN111797694A (en) License plate detection method and device
CN115690578A (en) Image fusion method and target identification method and device
CN115482266A (en) Object depth information acquisition method and device, computer device and storage medium
TWI775084B (en) Image recognition method, device, computer device and storage media
TWI754241B (en) A method, a device for extracting features of fingerprint images and computer-readable storage medium