CN109961501A - Method and apparatus for establishing three-dimensional stereo model - Google Patents
- Publication number: CN109961501A
- Application number: CN201711337773.9A
- Authority: CN (China)
- Prior art keywords: target object; edge contour; image; edge; color image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
All classifications fall under G (Physics) > G06 (Computing; calculating or counting) > G06T (Image data processing or generation, in general):
- G06T5/20, G06T5/30 — Image enhancement or restoration by the use of local operators; erosion or dilatation, e.g. thinning
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06T5/70
- G06T7/10, G06T7/13 — Image analysis; segmentation; edge detection
- G06T2200/04 — Indexing scheme for image data processing or generation involving 3D image data
- G06T2207/10024 — Image acquisition modality: color image
- G06T2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
- G06T2207/20212, G06T2207/20221 — Special algorithmic details: image combination; image fusion; image merging
Abstract
The embodiments of the present application disclose a method and apparatus for establishing a three-dimensional stereo model. One specific embodiment of the method includes: obtaining, from a depth camera, a color image and a depth image of a region that includes a target object; determining the coplanar edge contour of the target object in the color image, and determining the non-coplanar edge contour of the target object in the depth image; fusing the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object; determining the two-dimensional position information of the target object in the color image; and establishing a three-dimensional stereo model of the target object based on its edge contour and two-dimensional position information. By combining the color image and the depth image of the region that includes the target object, this embodiment can quickly detect the edge contour of the target object and thus rapidly establish its three-dimensional stereo model.
Description
Technical field
The present application relates to the field of computer technology, in particular to the field of image processing technology, and more particularly to a method and apparatus for establishing a three-dimensional stereo model.
Background technique
Three-dimensional reconstruction refers to establishing a mathematical model of a three-dimensional object that is suitable for computer representation and processing. It is the basis for processing, operating on, and analyzing the properties of such objects in a computer environment, and a key technology for building virtual-reality representations of the objective world in a computer.

Existing three-dimensional reconstruction approaches generally include the following steps. Image acquisition: before image processing, two-dimensional images of the three-dimensional object are first captured with a camera. Camera calibration: an effective imaging model is established through calibration to solve for the camera's intrinsic and extrinsic parameters, and three-dimensional point coordinates in space are obtained in combination with the image-matching results. Feature extraction: features mainly include feature points, feature lines, and regions; in most cases feature points are used as the matching primitives. Stereo matching: based on the extracted features, imaging points of the same physical-space point in two different images are placed in one-to-one correspondence. Three-dimensional reconstruction: given sufficiently accurate matching results, the three-dimensional scene information is recovered in combination with the intrinsic and extrinsic parameters from camera calibration.
Summary of the invention
The embodiments of the present application propose a method and apparatus for establishing a three-dimensional stereo model.
In a first aspect, an embodiment of the present application provides a method for establishing a three-dimensional stereo model. The method comprises: obtaining, from a depth camera, a color image and a depth image of a region that includes a target object; determining the coplanar edge contour of the target object in the color image, and determining the non-coplanar edge contour of the target object in the depth image; fusing the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object; determining the two-dimensional position information of the target object in the color image; and establishing a three-dimensional stereo model of the target object based on its edge contour and two-dimensional position information.
In some embodiments, before determining the coplanar edge contour of the target object in the color image and the non-coplanar edge contour of the target object in the depth image, the method further includes: performing denoising processing on the color image and the depth image.
In some embodiments, performing denoising processing on the color image and the depth image comprises: converting the color image to a grayscale image, first performing an erosion operation on the grayscale image with a first preset radius and then performing a dilation operation on the eroded grayscale image with the first preset radius, to obtain the denoised color image; and first performing an erosion operation on the depth image with a second preset radius and then performing a dilation operation on the eroded depth image with the second preset radius, to obtain the denoised depth image.
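By way of a non-limiting illustration, the erosion-then-dilation sequence described above is the classical morphological opening operation. The sketch below is not the patent's implementation (in practice one would more likely use OpenCV's `cv2.morphologyEx` with `cv2.MORPH_OPEN`); it shows the principle on a binary image in plain NumPy with a square structuring element:

```python
import numpy as np

def erode(img, r):
    """Erosion: a pixel stays 1 only if every pixel in the
    (2r+1) x (2r+1) window around it is 1."""
    h, w = img.shape
    padded = np.pad(img, r, constant_values=0)
    out = np.ones_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

def dilate(img, r):
    """Dilation: a pixel becomes 1 if any pixel in the
    (2r+1) x (2r+1) window around it is 1."""
    h, w = img.shape
    padded = np.pad(img, r, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def denoise(img, r):
    """Erosion followed by dilation with the same radius (morphological
    opening) removes noise specks no larger than the radius."""
    return dilate(erode(img, r), r)

# A lone noise pixel is removed; a solid 3x3 block survives
# (its boundary is restored by the dilation step).
img = np.zeros((7, 7), dtype=np.uint8)
img[1, 1] = 1        # isolated speck (noise)
img[3:6, 3:6] = 1    # solid object
cleaned = denoise(img, 1)
```

The same opening applied to a grayscale or depth image replaces the logical AND/OR with windowed minimum and maximum, but the radius-versus-noise-size behavior is identical.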
In some embodiments, determining the coplanar edge contour of the target object in the color image and the non-coplanar edge contour of the target object in the depth image comprises: determining the coplanar edge contour and the non-coplanar edge contour of the target object using one of the following edge detection methods: the Canny operator edge detection method; the SUSAN operator edge detection method; or the Shen-Jun operator edge detection method.
In some embodiments, fusing the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object comprises: performing edge tracking on the coplanar edge contour of the target object; and if it is detected that the coplanar edge contour of the target object has edge discontinuity points, finding the edge points corresponding to the discontinuity points in the non-coplanar edge contour of the target object and using those edge points to connect the coplanar edge contour and the non-coplanar edge contour of the target object, to obtain the edge contour of the target object.
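As a non-limiting sketch of this fusion step (the helper below is hypothetical and greatly simplified, not the patent's actual algorithm), one can walk the coplanar contour as an ordered point list and, wherever consecutive points are farther apart than a tolerance, splice in the nearest point from the non-coplanar contour:

```python
import math

def fuse_contours(coplanar, non_coplanar, max_gap=1.5):
    """Walk the coplanar contour (ordered list of (x, y) points); at each
    discontinuity (neighboring points farther apart than max_gap), insert
    the non-coplanar point closest to the midpoint of the gap."""
    fused = []
    for p, q in zip(coplanar, coplanar[1:]):
        fused.append(p)
        if math.dist(p, q) > max_gap:
            mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
            bridge = min(non_coplanar, key=lambda r: math.dist(r, mid))
            fused.append(bridge)
    fused.append(coplanar[-1])
    return fused

# The gap between (2, 0) and (5, 0) is bridged by the nearest
# non-coplanar point, here (3, 0).
coplanar = [(0, 0), (1, 0), (2, 0), (5, 0), (6, 0)]
non_coplanar = [(3, 0), (9, 9)]
fused = fuse_contours(coplanar, non_coplanar)
```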
In some embodiments, the method further includes: performing denoising processing on the surface of the three-dimensional stereo model of the target object.
In some embodiments, the method further includes: determining the imaging size of the target object in the color image; determining, based on the imaging size, the relative position between the target object and the depth camera; and obtaining the position information of the target object based on the three-dimensional stereo model of the target object and the relative position between the target object and the depth camera.
In a second aspect, an embodiment of the present application provides an apparatus for establishing a three-dimensional stereo model. The apparatus comprises: an image acquisition unit, configured to obtain, from a depth camera, a color image and a depth image of a region that includes a target object; an edge contour determination unit, configured to determine the coplanar edge contour of the target object in the color image and the non-coplanar edge contour of the target object in the depth image; an edge contour fusion unit, configured to fuse the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object; a position information determination unit, configured to determine the two-dimensional position information of the target object in the color image; and a three-dimensional stereo model establishing unit, configured to establish a three-dimensional stereo model of the target object based on its edge contour and two-dimensional position information.
In some embodiments, the apparatus further includes: an image denoising unit, configured to perform denoising processing on the color image and the depth image.
In some embodiments, the image denoising unit includes: a color image filtering subunit, configured to convert the color image to a grayscale image, first perform an erosion operation on the grayscale image with a first preset radius, and then perform a dilation operation on the eroded grayscale image with the first preset radius, to obtain the denoised color image; and a depth image denoising subunit, configured to first perform an erosion operation on the depth image with a second preset radius, and then perform a dilation operation on the eroded depth image with the second preset radius, to obtain the denoised depth image.
In some embodiments, the edge contour determination unit is further configured to determine the coplanar edge contour and the non-coplanar edge contour of the target object using one of the following edge detection methods: the Canny operator edge detection method; the SUSAN operator edge detection method; or the Shen-Jun operator edge detection method.
In some embodiments, the edge contour fusion unit is further configured to: perform edge tracking on the coplanar edge contour of the target object; and if it is detected that the coplanar edge contour of the target object has edge discontinuity points, find the edge points corresponding to the discontinuity points in the non-coplanar edge contour of the target object and use those edge points to connect the coplanar edge contour and the non-coplanar edge contour of the target object, to obtain the edge contour of the target object.
In some embodiments, the apparatus further includes: a three-dimensional stereo model denoising unit, configured to perform denoising processing on the surface of the three-dimensional stereo model of the target object.
In some embodiments, the apparatus further includes: an imaging size determination unit, configured to determine the imaging size of the target object in the color image; a relative position determination unit, configured to determine the relative position between the target object and the depth camera based on the imaging size; and a position information obtaining unit, configured to obtain the position information of the target object based on the three-dimensional stereo model of the target object and the relative position between the target object and the depth camera.
In a third aspect, an embodiment of the present application provides an electronic device comprising: one or more processors; and a storage device for storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements the method described in any implementation of the first aspect.
In the method and apparatus for establishing a three-dimensional stereo model provided by the embodiments of the present application, the color image and the depth image of the region including the target object, as captured by a depth camera, are first obtained. The coplanar edge contour of the target object is then determined in the color image and the non-coplanar edge contour in the depth image, and the two contours are fused to obtain the edge contour of the target object. Finally, the two-dimensional position information of the target object in the color image is determined, and the three-dimensional stereo model of the target object is established based on its edge contour and two-dimensional position information. By combining the color image and the depth image of the region including the target object, the edge contour of the target object can be detected quickly, so that its three-dimensional stereo model can be established rapidly.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-restrictive embodiments, read in conjunction with the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which embodiments of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for establishing a three-dimensional stereo model according to the present application;
Fig. 3 is a flowchart of another embodiment of the method for establishing a three-dimensional stereo model according to the present application;
Fig. 4 is a structural schematic diagram of one embodiment of the apparatus for establishing a three-dimensional stereo model according to the present application;
Fig. 5 is a structural schematic diagram of a computer system adapted to implement the electronic device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method or apparatus for establishing a three-dimensional stereo model of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include a depth camera 101, a network 102, and a server 103. The network 102 provides the medium of a communication link between the depth camera 101 and the server 103. The network 102 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
The depth camera 101 may interact with the server 103 through the network 102 to receive or send messages. The depth camera 101, also called an RGB-D camera, may be used to capture RGB-D images of objects; an RGB-D image may include a color image (i.e., an RGB image) and a depth image.
The server 103 may provide various services. For example, the server 103 may analyze and otherwise process data such as the color image and the depth image of the region including the target object obtained from the depth camera 101, and obtain a processing result (for example, the three-dimensional stereo model of the target object).
It should be noted that the method for establishing a three-dimensional stereo model provided by the embodiments of the present application is generally executed by the server 103; correspondingly, the apparatus for establishing a three-dimensional stereo model is generally disposed in the server 103.
It should be understood that the numbers of depth cameras, networks, and servers in Fig. 1 are merely illustrative. There may be any number of depth cameras, networks, and servers, depending on implementation needs.
With continued reference to Fig. 2, a process 200 of one embodiment of the method for establishing a three-dimensional stereo model according to the present application is illustrated. The method for establishing a three-dimensional stereo model comprises the following steps:
Step 201: obtain, from a depth camera, the color image and the depth image of the region including the target object.
In the present embodiment, the electronic device on which the method for establishing a three-dimensional stereo model runs (such as the server 103 shown in Fig. 1) may obtain the color image and the depth image of the region including the target object from a depth camera (such as the depth camera 101 shown in Fig. 1) through a wired or wireless connection. The depth camera, also called an RGB-D camera, may be used to capture RGB-D images of objects; an RGB-D image may include a color image and a depth image. The pixel value of each pixel of the color image may be the color value of a point on the object surface. In general, all colors perceivable by human vision are obtained by varying and superimposing the three color channels red (R), green (G), and blue (B). The pixel value of each pixel of the depth image may be the distance between the depth camera and a point on the object surface. In general, the color image and the depth image are registered, so there is a one-to-one correspondence between the pixels of the color image and those of the depth image. Here, the target object may be the object for which a three-dimensional stereo model is to be established; the depth camera may capture an RGB-D image of the target object and send it to the electronic device.
Step 202: determine the coplanar edge contour of the target object in the color image, and determine the non-coplanar edge contour of the target object in the depth image.
In the present embodiment, based on the color image and the depth image of the region including the target object acquired in step 201, the electronic device may determine the coplanar edge contour of the target object in the color image and, at the same time, the non-coplanar edge contour of the target object in the depth image.
In the present embodiment, "coplanar" refers to geometric elements lying in the same plane in three-dimensional space. The color image is a two-dimensional image and can be used to extract the coplanar edge contour of the target object. Specifically, the pixel value of each pixel of the color image may be the color value of a point on the surface of the target object. For the coplanar edge contour of the target object, the pixels usually satisfy a predetermined relationship; by determining whether each pixel of the color image satisfies the predetermined relationship, the pixels that satisfy it can be quickly identified, and the contour formed by these pixels is the coplanar edge contour of the target object. As an example, the electronic device may first perform gradient extraction, that is, calculate the gradient magnitude and direction of each pixel of the color image. It may then perform non-maximum suppression: for each pixel, compare the pixel's gradient magnitude with the gradient magnitudes of its two neighboring pixels along the gradient direction; if the pixel's gradient magnitude is not less than those of the two neighbors, the pixel may be an edge pixel, and otherwise it cannot be one. Finally, it may perform dual-threshold detection and edge linking: traverse each possible edge pixel, performing edge detection with a high threshold and a low threshold to obtain strong edge points and weak edge points respectively; if the contour formed by the strong edge points has discontinuity points, find the weak edge points corresponding to those discontinuity points and use them to connect the discontinuities in the contour formed by the strong edge points, so that all edge discontinuities among the strong edge points are connected and the coplanar edge contour is obtained.
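The dual-threshold step described above (the hysteresis stage of Canny-style detection) can be sketched in one dimension for clarity: gradient magnitudes above the high threshold are strong edge points, and values between the two thresholds are kept only if they chain to a strong edge point. This is a simplified illustration, not the full two-dimensional algorithm:

```python
def hysteresis_1d(magnitudes, low, high):
    """Keep indices whose gradient magnitude reaches `high` (strong edges),
    plus weak indices (between `low` and `high`) that chain to a strong
    edge through adjacent weak neighbors."""
    strong = {i for i, m in enumerate(magnitudes) if m >= high}
    weak = {i for i, m in enumerate(magnitudes) if low <= m < high}
    kept = set(strong)
    frontier = list(strong)
    while frontier:          # grow edges outward through weak points
        i = frontier.pop()
        for j in (i - 1, i + 1):
            if j in weak and j not in kept:
                kept.add(j)
                frontier.append(j)
    return sorted(kept)

# Index 3 is a strong edge; its weak neighbors 2 and 4 are kept because
# they connect to it, while the isolated weak point at index 7 is dropped.
mags = [0.1, 0.2, 0.6, 0.9, 0.7, 0.1, 0.1, 0.6, 0.1]
edges = hysteresis_1d(mags, low=0.5, high=0.8)
```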
In the present embodiment, "non-coplanar" refers to geometric elements that do not lie in the same plane in three-dimensional space. The depth image is a three-dimensional image and can be used to extract the non-coplanar edge contour of the target object. Specifically, the pixel value of each pixel of the depth image may be the distance between the depth camera and a point on the object surface. For the non-coplanar edge contour of the target object, the pixels usually satisfy a predetermined relationship; by determining whether each pixel of the depth image satisfies the predetermined relationship, the pixels that satisfy it can be quickly identified, and the contour formed by these pixels is the non-coplanar edge contour of the target object. Here, the electronic device may likewise obtain the non-coplanar edge contour through the steps of gradient extraction, non-maximum suppression, and dual-threshold detection with edge linking, which are not described again here.
In some optional implementations of the present embodiment, the electronic device may determine the coplanar edge contour and the non-coplanar edge contour of the target object using one of the following edge detection methods: the Canny operator edge detection method, the SUSAN operator edge detection method, or the Shen-Jun operator edge detection method. The Canny operator is a multi-stage edge detection algorithm. It aims to identify as many of the actual edges in the image as possible; the identified edges should be as close as possible to the actual edges in the real image; each edge should be identified only once; and image noise that may be present should not be identified as edges. The SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is a grayscale-based feature point acquisition method suitable for detecting edges and corners in an image; it can remove noise from the image, and it is simple, effective, strongly noise-resistant, and fast to compute.
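The SUSAN principle can be sketched as follows: place a small mask at each pixel, count the neighbors whose brightness is similar to the center pixel (the "USAN area"), and flag the pixel as an edge candidate when that area falls below a geometric threshold. The non-limiting NumPy illustration below uses a square 3x3 mask instead of SUSAN's circular one, and the thresholds are illustrative values, not taken from the patent:

```python
import numpy as np

def usan_area(img, y, x, t):
    """Count pixels in the 3x3 window (interior pixels only) whose
    brightness differs from the nucleus (center pixel) by at most t."""
    nucleus = int(img[y, x])
    window = img[y - 1:y + 2, x - 1:x + 2].astype(int)
    return int(np.sum(np.abs(window - nucleus) <= t))

def is_edge(img, y, x, t=10, geometric_threshold=7):
    """A pixel is an edge candidate when its USAN area (the count of
    similar-brightness pixels, including itself) drops below the
    geometric threshold."""
    return usan_area(img, y, x, t) < geometric_threshold

# A vertical step edge: deep inside the dark half the USAN area is the
# full 9; on the boundary column it shrinks to 6, below the threshold.
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 200     # right half bright
```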
Step 203: fuse the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object.
In the present embodiment, based on the coplanar edge contour and the non-coplanar edge contour of the target object determined in step 202, the electronic device may fuse the two contours to obtain the edge contour of the target object. Specifically, the electronic device may perform edge tracking on the coplanar edge contour of the target object; if it detects that the coplanar edge contour has edge discontinuity points, it finds the edge points corresponding to those discontinuity points in the non-coplanar edge contour and uses them to connect the coplanar edge contour and the non-coplanar edge contour, obtaining the edge contour of the target object.
Step 204: determine the two-dimensional position information of the target object in the color image.
In the present embodiment, based on the color image acquired in step 201, the electronic device may determine the two-dimensional position information of the target object in the color image. The two-dimensional position information may be descriptive information of the region where the target object is located in the color image. For example, it may include the coordinates of the center point, the width, and the height of the region occupied by the target object in the color image.
In the present embodiment, the two-dimensional position information of the target object may be determined in various ways. As one example, the region where the target object is located may be manually circled in the color image, and the electronic device may obtain the two-dimensional position information of the circled region as the two-dimensional position information of the target object in the color image. As another example, the electronic device may input the color image into a pre-trained YOLO v2 model to obtain the two-dimensional position information of the target object in the color image. The YOLO v2 model can be used to identify objects and their positions in an image. Specifically, YOLO v2 first divides the color image into multiple grid cells; if the center of the target object falls into a grid cell, that cell is responsible for detecting the target object. The YOLO v2 model uses a convolutional neural network structure; the electronic device may train the YOLO v2 model using color images of regions containing reference objects as input and the two-dimensional position information of the reference objects as output.
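The grid-responsibility rule described above can be sketched as follows: the cell containing the object's center point is the one responsible for predicting it. The indexing below is an illustrative simplification (the 13x13 grid is a common YOLO v2 configuration, not a value stated in the patent):

```python
def responsible_cell(center_x, center_y, img_w, img_h, grid=13):
    """Return the (col, row) of the grid cell containing the object's
    center point when the image is divided into a grid x grid lattice."""
    col = int(center_x / img_w * grid)
    row = int(center_y / img_h * grid)
    return col, row

# An object centered at (208, 104) in a 416x416 image falls into
# grid cell (6, 3) of a 13x13 grid, so that cell detects it.
cell = responsible_cell(208, 104, 416, 416, grid=13)
```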
Step 205: establish the three-dimensional stereo model of the target object based on its edge contour and two-dimensional position information.
Based on the edge contour of the target object obtained in step 203 and the two-dimensional position information obtained in step 204, the electronic device may establish the three-dimensional stereo model of the target object. Specifically, the electronic device may use the two-dimensional position information of the target object to locate the region of each plane of the target object, and determine the edge contour of each plane within its region, so as to establish the three-dimensional stereo model of the target object.
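Because the color and depth images are registered pixel-for-pixel, each edge-contour pixel can be paired with its depth and lifted to a 3D vertex, giving the skeleton of the stereo model. This is a hedged sketch with illustrative pinhole intrinsics (the patent does not specify this construction or these values):

```python
def contour_to_3d(contour_px, depth_lookup, fx, fy, cx, cy):
    """Lift 2D contour pixels to 3D vertices: each (u, v) is paired with
    its registered depth Z and back-projected through the pinhole model
    as ((u - cx) * Z / fx, (v - cy) * Z / fy, Z)."""
    vertices = []
    for u, v in contour_px:
        z = depth_lookup[(u, v)]
        vertices.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return vertices

# Two contour pixels at 2 m depth; the one at the principal point
# (320, 240) maps straight onto the optical axis.
depth = {(320, 240): 2.0, (380, 240): 2.0}
verts = contour_to_3d([(320, 240), (380, 240)], depth,
                      600.0, 600.0, 320.0, 240.0)
```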
In the method for establishing a three-dimensional stereo model provided by the embodiments of the present application, the color image and the depth image of the region including the target object, as captured by the depth camera, are first obtained. The coplanar edge contour of the target object is then determined in the color image and the non-coplanar edge contour in the depth image, and the two contours are fused to obtain the edge contour of the target object. Finally, the two-dimensional position information of the target object in the color image is determined, and the three-dimensional stereo model of the target object is established based on its edge contour and two-dimensional position information. By combining the color image and the depth image of the region including the target object, the edge contour of the target object can be detected quickly, so that its three-dimensional stereo model can be established rapidly.
With further reference to Fig. 3, a process 300 of another embodiment of the method for establishing a three-dimensional stereo model according to the present application is illustrated. The process 300 comprises the following steps:
Step 301: obtain, from a depth camera, the color image and the depth image of the region including the target object.
In the present embodiment, the electronic device on which the method for establishing a three-dimensional stereo model runs (such as the server 103 shown in Fig. 1) may obtain the color image and the depth image of the region including the target object from a depth camera (such as the depth camera 101 shown in Fig. 1) through a wired or wireless connection. The depth camera, also called an RGB-D camera, may be used to capture RGB-D images of objects; an RGB-D image may include a color image and a depth image. The pixel value of each pixel of the color image may be the color value of a point on the object surface; the pixel value of each pixel of the depth image may be the distance between the depth camera and a point on the object surface. In general, the color image and the depth image are registered, so there is a one-to-one correspondence between their pixels.
Step 302: perform denoising processing on the color image and the depth image.
In the present embodiment, based on the color image and the depth image acquired in step 301, the electronic device may perform denoising processing on the color image and the depth image, so as to reduce the influence of noise on the extraction of the coplanar and non-coplanar edge contours of the target object.
In some optional implementations of the present embodiment, the electronic device may perform an erosion-then-dilation operation on the color image and the depth image to remove noise. Specifically, for the color image, the electronic device may convert the color image into a grayscale image, first perform an erosion operation on the grayscale image with a first preset radius, and then perform a dilation operation on the eroded grayscale image with the first preset radius, to obtain the denoised color image. For the depth image, the electronic device may first perform an erosion operation on the depth image with a second preset radius, and then perform a dilation operation on the eroded depth image with the second preset radius, to obtain the denoised depth image. Here, the erosion operation removes certain pixels at the edge of the object, while the dilation operation adds pixels to the edge of the target object. Performing erosion followed by dilation on an image can eliminate small objects, separate objects at thin connections, and smooth the boundaries of larger objects, without substantially changing the area of the objects. Performing erosion-then-dilation on an image with the same radius removes noise whose size does not exceed that radius.
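The erosion-then-dilation described above (a morphological opening) can be sketched as follows. This is a minimal illustration with a square structuring element, which is an assumption: the text specifies only a radius, not the element's shape.

```python
import numpy as np

def erode(img, r):
    """Erosion with a (2r+1)x(2r+1) square element: each pixel becomes
    the minimum of its neighborhood, shaving pixels off object edges."""
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].min()
    return out

def dilate(img, r):
    """Dilation: each pixel becomes the maximum of its neighborhood,
    adding pixels back onto object edges."""
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1].max()
    return out

def denoise_open(img, r):
    """Erosion followed by dilation with the same radius: removes bright
    specks no larger than the radius while preserving larger objects."""
    return dilate(erode(img, r), r)
```

With radius 1, an isolated bright pixel is removed entirely, while a 5×5 bright block comes back with its original area, matching the text's claim that the operation does not substantially change the area of the object.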
Step 303: the coplanar edge contour of the target object in the color image is determined, and the non-coplanar edge contour of the target object in the depth image is determined.
In the present embodiment, based on the denoised color image and depth image obtained in step 302, the electronic device may determine the coplanar edge contour of the target object in the color image, and at the same time determine the non-coplanar edge contour of the target object in the depth image. Here, coplanar refers to the relationship in which geometric shapes share a common plane in three-dimensional space. The color image is a two-dimensional image and can be used to extract the coplanar edge contour of the target object. Non-coplanar refers to the relationship in which geometric shapes do not share a common plane in three-dimensional space. The depth image is a three-dimensional image and can be used to extract the non-coplanar edge contour of the target object.
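As a rough illustration of extracting an edge contour from either image, the sketch below thresholds a Sobel gradient magnitude. The embodiment itself names the Canny, SUSAN, and Shen Jun operators; this simpler detector and its threshold are stand-in assumptions, not the patent's method.

```python
import numpy as np

def sobel_edges(img, threshold):
    """Mark pixels whose Sobel gradient magnitude exceeds a threshold.
    A simplified stand-in for the Canny/SUSAN/Shen-Jun detectors named
    in the text; the threshold value is an assumption."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()  # horizontal intensity change
            gy[i, j] = (win * ky).sum()  # vertical intensity change
    mag = np.hypot(gx, gy)
    return mag > threshold
```

Applied to the grayscale color image this yields candidate coplanar edge pixels; applied to the depth image it responds where depth jumps, i.e., candidate non-coplanar edges.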
Step 304: the coplanar edge contour and the non-coplanar edge contour of the target object are merged to obtain the edge contour of the target object.
In the present embodiment, based on the coplanar edge contour and the non-coplanar edge contour of the target object determined in step 303, the electronic device may merge the coplanar edge contour and the non-coplanar edge contour of the target object, thereby obtaining the edge contour of the target object. Specifically, the electronic device may perform edge tracking on the coplanar edge contour of the target object; if it detects that the coplanar edge contour of the target object has an edge discontinuity point, it finds the edge point corresponding to the discontinuity point in the non-coplanar edge contour of the target object, and uses that edge point to connect the coplanar edge contour and the non-coplanar edge contour of the target object, obtaining the edge contour of the target object.
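A minimal sketch of this merging step might look like the following. The endpoint test (a contour point with at most one 8-connected neighbour) and the gap tolerance are assumptions; the text only says that an edge point corresponding to the discontinuity is found in the non-coplanar contour and used to connect the two.

```python
import math

def merge_contours(coplanar_pts, noncoplanar_pts, gap_tol=3.0):
    """Wherever the tracked coplanar contour breaks off (a point with at
    most one 8-connected neighbour), borrow the nearest non-coplanar edge
    point within gap_tol pixels to bridge the gap."""
    pts = set(coplanar_pts)

    def neighbours(p):
        # count 8-connected neighbours of p within the coplanar contour
        x, y = p
        return sum((x + dx, y + dy) in pts
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))

    merged = set(pts)
    for p in pts:
        if neighbours(p) <= 1:  # edge discontinuity point
            best = min(noncoplanar_pts,
                       key=lambda q: math.dist(p, q), default=None)
            if best is not None and math.dist(p, best) <= gap_tol:
                merged.add(best)  # bridge with the depth-edge point
    return merged
```

For example, a coplanar contour broken at (2, 0)…(5, 0) is reconnected by pulling in the non-coplanar points (3, 0) and (4, 0) that fall inside the tolerance.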
Step 305: the two-dimensional position information of the target object in the color image is determined.
In the present embodiment, based on the denoised color image obtained in step 302, the electronic device may determine the two-dimensional position information of the target object in the color image. Here, the two-dimensional position information may be description information of the location region of the target object in the color image. For example, the two-dimensional position information of the target object may include the center point coordinates of the location region of the target object in the color image, the width of that location region in the color image, and the height of that location region in the color image.
Step 306: the three-dimensional stereo model of the target object is established based on the edge contour and the two-dimensional position information of the target object.
In the present embodiment, based on the edge contour of the target object obtained in step 304 and the two-dimensional position information obtained in step 305, the electronic device may establish the three-dimensional stereo model of the target object. Specifically, the electronic device may use the two-dimensional position information of the target object to locate the location region of each plane of the target object, and determine the edge contour of each plane of the target object within the location region of each plane, thereby establishing the three-dimensional stereo model of the target object.
Step 307: denoising is performed on the surface of the three-dimensional stereo model of the target object.
In the present embodiment, based on the three-dimensional stereo model of the target object established in step 306, the electronic device may denoise the surface of the three-dimensional stereo model of the target object. Specifically, the surface of the three-dimensional stereo model of the target object carries many textures and much noise, while for a given surface the depth information is continuous; therefore, the electronic device may use the gradient operator of the depth image to denoise the surface of the three-dimensional stereo model of the target object.
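One hedged reading of "using the gradient operator of the depth image" is to flag depth pixels that jump away from all of their neighbours, since within a single surface the depth varies continuously. The tolerance and the median fill-in below are assumptions, not the embodiment's stated procedure.

```python
import numpy as np

def suppress_texture(depth, grad_tol=2.0):
    """Within one surface the depth is continuous, so a pixel whose depth
    differs from every 4-connected neighbour by more than grad_tol is
    treated as texture/noise and replaced by the neighbourhood median."""
    d = depth.astype(float)
    out = d.copy()
    h, w = d.shape
    for i in range(h):
        for j in range(w):
            nbrs = [d[i + di, j + dj]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < h and 0 <= j + dj < w]
            # a pixel that jumps away from every neighbour is noise on the
            # surface rather than part of it; smooth it with the median
            if min(abs(d[i, j] - v) for v in nbrs) > grad_tol:
                out[i, j] = np.median(nbrs)
    return out
```

Note that a genuine step between two surfaces survives this filter, because each pixel on a step edge still has at least one neighbour at a similar depth on its own surface.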
Step 308: the imaging size of the target object in the color image is determined.
In the present embodiment, based on the color image acquired in step 301, the electronic device may perform recognition on the color image, thereby determining the imaging size of the target object in the color image. Here, the target object in the color image has features different from those of the background, and the feature gap is large; therefore, according to the features of the target object, the region of the target object in the color image can be quickly identified, and the imaging size of the target object can thereby be determined.
Step 309: the relative position between the target object and the depth camera is determined based on the imaging size.
In the present embodiment, based on the imaging size determined in step 308, the electronic device may determine the relative position between the target object and the depth camera. Here, the relative position may be information relevant to the position of the target object; for example, the relative position may include the distance between the target object and the depth camera, and the direction of the target object relative to the depth camera.
In the present embodiment, there is a correspondence between the imaging size and the distance between the target object and the depth camera; according to this correspondence, the electronic device can determine the distance between the target object and the depth camera. As an example, if the focal length of the depth camera is f, the actual size of the target object is n, and the imaging size is m, the electronic device may calculate the distance s between the target object and the depth camera using the following formula:

s = f × n / m
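The relation s = f·n/m follows from pinhole-camera similar triangles: an object of actual size n imaged at m pixels with focal length f lies at distance s. A direct sketch (expressing f and m in pixel units is an assumption):

```python
def object_distance(focal_px, actual_size, imaging_size_px):
    """Pinhole similar triangles: distance s = f * n / m, where f is the
    focal length, n the actual object size, and m the imaging size.
    Variable names follow the text; pixel units are an assumption."""
    if imaging_size_px <= 0:
        raise ValueError("imaging size must be positive")
    return focal_px * actual_size / imaging_size_px
```

For instance, with a 600-pixel focal length, a 2 m object imaged at 300 pixels lies 4 m from the camera.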
Step 310: the position information of the target object is obtained based on the three-dimensional stereo model of the target object and the relative position between the target object and the depth camera.
In the present embodiment, the position information of the target object is obtained based on the denoised three-dimensional stereo model of the target object obtained in step 307 and the relative position between the target object and the depth camera determined in step 309. In practice, the electronic device may first establish a three-dimensional coordinate system by taking the ground as the xOy plane, a preset point as the origin, a preset direction on the ground as the x-axis, the direction obtained by rotating the x-axis counterclockwise by 90° as the y-axis, and the direction passing through the origin and perpendicular to the ground as the z-axis; it may then determine the position information of the depth camera in the three-dimensional coordinate system, and determine the position information of the target object according to the relative position between the target object and the depth camera.
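The positioning step can be sketched in the ground (xOy) coordinate system described above. Parameterising the camera's heading as a yaw angle and the relative direction as a bearing is an assumption; the text only says that the relative position contains a distance and a direction.

```python
import math

def locate_object(camera_xy, camera_yaw_deg, distance, bearing_deg):
    """Given the camera's known ground position and heading, and the
    object's distance and bearing relative to the camera, return the
    object's ground (x, y) coordinates in the world frame."""
    # absolute direction of the object = camera heading + relative bearing
    theta = math.radians(camera_yaw_deg + bearing_deg)
    x = camera_xy[0] + distance * math.cos(theta)
    y = camera_xy[1] + distance * math.sin(theta)
    return (x, y)
```

For example, a camera at the origin facing along the x-axis that sees the object 5 m straight ahead places it at (5, 0).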
As can be seen from Fig. 3, compared with the embodiment corresponding to Fig. 2, the flow 300 of the method for establishing a three-dimensional stereo model in the present embodiment adds denoising steps and a target object positioning step. As a result, the scheme described in the present embodiment can quickly obtain the position information of the target object, reduce the influence of noise on positioning the target object, and improve the positioning precision.
With further reference to Fig. 4, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for establishing a three-dimensional stereo model. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 4, the apparatus 400 for establishing a three-dimensional stereo model of the present embodiment may include: an image acquisition unit 401, an edge contour determination unit 402, an edge contour fusion unit 403, a position information determination unit 404, and a three-dimensional stereo model establishment unit 405. The image acquisition unit 401 is configured to obtain, from a depth camera, a color image and a depth image of a region including a target object; the edge contour determination unit 402 is configured to determine the coplanar edge contour of the target object in the color image, and determine the non-coplanar edge contour of the target object in the depth image; the edge contour fusion unit 403 is configured to merge the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object; the position information determination unit 404 is configured to determine the two-dimensional position information of the target object in the color image; and the three-dimensional stereo model establishment unit 405 is configured to establish the three-dimensional stereo model of the target object based on the edge contour and the two-dimensional position information of the target object.
In the present embodiment, in the apparatus 400 for establishing a three-dimensional stereo model, the specific processing of the image acquisition unit 401, the edge contour determination unit 402, the edge contour fusion unit 403, the position information determination unit 404, and the three-dimensional stereo model establishment unit 405, and the technical effects brought thereby, may respectively refer to the related descriptions of step 201, step 202, step 203, step 204, and step 205 in the embodiment corresponding to Fig. 2, and are not repeated here.
In some optional implementations of the present embodiment, the apparatus 400 for establishing a three-dimensional stereo model may further include: an image denoising unit (not shown), configured to perform denoising on the color image and the depth image.
In some optional implementations of the present embodiment, the image denoising unit may include: a color image filtering subunit (not shown), configured to convert the color image into a grayscale image, first perform an erosion operation on the grayscale image with a first preset radius, and then perform a dilation operation on the eroded grayscale image with the first preset radius, to obtain the denoised color image; and a depth image denoising subunit (not shown), configured to first perform an erosion operation on the depth image with a second preset radius, and then perform a dilation operation on the eroded depth image with the second preset radius, to obtain the denoised depth image.
In some optional implementations of the present embodiment, the edge contour determination unit 402 may be further configured to determine the coplanar edge contour and the non-coplanar edge contour of the target object using one of the following edge detection methods: the Canny operator edge detection method; the SUSAN operator edge detection method; and the Shen Jun operator edge detection method.
In some optional implementations of the present embodiment, the edge contour fusion unit 403 may be further configured to: perform edge tracking on the coplanar edge contour of the target object; and if it is detected that the coplanar edge contour of the target object has an edge discontinuity point, find the edge point corresponding to the discontinuity point in the non-coplanar edge contour of the target object, and connect the coplanar edge contour and the non-coplanar edge contour of the target object using that edge point, to obtain the edge contour of the target object.
In some optional implementations of the present embodiment, the apparatus 400 for establishing a three-dimensional stereo model may further include: a three-dimensional stereo model denoising unit (not shown), configured to perform denoising on the surface of the three-dimensional stereo model of the target object.
In some optional implementations of the present embodiment, the apparatus 400 for establishing a three-dimensional stereo model may further include: an imaging size determination unit (not shown), configured to determine the imaging size of the target object in the color image; a relative position determination unit (not shown), configured to determine the relative position between the target object and the depth camera based on the imaging size; and a position information obtaining unit (not shown), configured to obtain the position information of the target object based on the three-dimensional stereo model of the target object and the relative position between the target object and the depth camera.
Referring now to Fig. 5, a structural schematic diagram of a computer system 500 of an electronic device suitable for implementing the embodiments of the present application is shown. The electronic device shown in Fig. 5 is merely an example, and should not impose any limitation on the function and scope of use of the embodiments of the present application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom can be installed into the storage section 508 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-mentioned functions defined in the method of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; that computer-readable medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user computer, partly on the user computer, as a stand-alone software package, partly on the user computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two successive boxes may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software or by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as including an image acquisition unit, an edge contour determination unit, an edge contour fusion unit, a position information determination unit, and a three-dimensional stereo model establishment unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the image acquisition unit may also be described as "a unit for obtaining, from a depth camera, a color image and a depth image of a region including a target object".
As another aspect, the present application also provides a computer-readable medium. The computer-readable medium may be included in the electronic device described in the above embodiments, or may exist alone without being assembled into the electronic device. The computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, they cause the electronic device to: obtain, from a depth camera, a color image and a depth image of a region including a target object; determine the coplanar edge contour of the target object in the color image, and determine the non-coplanar edge contour of the target object in the depth image; merge the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object; determine the two-dimensional position information of the target object in the color image; and establish the three-dimensional stereo model of the target object based on the edge contour and the two-dimensional position information of the target object.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.
Claims (16)
1. A method for establishing a three-dimensional stereo model, comprising:
obtaining, from a depth camera, a color image and a depth image of a region including a target object;
determining a coplanar edge contour of the target object in the color image, and determining a non-coplanar edge contour of the target object in the depth image;
merging the coplanar edge contour and the non-coplanar edge contour of the target object to obtain an edge contour of the target object;
determining two-dimensional position information of the target object in the color image; and
establishing a three-dimensional stereo model of the target object based on the edge contour and the two-dimensional position information of the target object.
2. The method according to claim 1, wherein before the determining the coplanar edge contour of the target object in the color image and the determining the non-coplanar edge contour of the target object in the depth image, the method further comprises:
performing denoising on the color image and the depth image.
3. The method according to claim 2, wherein the performing denoising on the color image and the depth image comprises:
converting the color image into a grayscale image, first performing an erosion operation on the grayscale image with a first preset radius, and then performing a dilation operation on the eroded grayscale image with the first preset radius, to obtain the denoised color image; and
first performing an erosion operation on the depth image with a second preset radius, and then performing a dilation operation on the eroded depth image with the second preset radius, to obtain the denoised depth image.
4. The method according to claim 1, wherein the determining the coplanar edge contour of the target object in the color image and determining the non-coplanar edge contour of the target object in the depth image comprises:
determining the coplanar edge contour and the non-coplanar edge contour of the target object using one of the following edge detection methods:
the Canny operator edge detection method;
the SUSAN operator edge detection method; and
the Shen Jun operator edge detection method.
5. The method according to claim 1, wherein the merging the coplanar edge contour and the non-coplanar edge contour of the target object to obtain the edge contour of the target object comprises:
performing edge tracking on the coplanar edge contour of the target object; and if it is detected that the coplanar edge contour of the target object has an edge discontinuity point, finding an edge point corresponding to the edge discontinuity point in the non-coplanar edge contour of the target object, and connecting the coplanar edge contour and the non-coplanar edge contour of the target object using the edge point, to obtain the edge contour of the target object.
6. The method according to claim 1, further comprising:
performing denoising on a surface of the three-dimensional stereo model of the target object.
7. The method according to claim 1, further comprising:
determining an imaging size of the target object in the color image;
determining a relative position between the target object and the depth camera based on the imaging size; and
obtaining position information of the target object based on the three-dimensional stereo model of the target object and the relative position between the target object and the depth camera.
8. An apparatus for establishing a three-dimensional stereo model, comprising:
an image acquisition unit, configured to obtain, from a depth camera, a color image and a depth image of a region including a target object;
an edge contour determination unit, configured to determine a coplanar edge contour of the target object in the color image, and determine a non-coplanar edge contour of the target object in the depth image;
an edge contour fusion unit, configured to merge the coplanar edge contour and the non-coplanar edge contour of the target object to obtain an edge contour of the target object;
a position information determination unit, configured to determine two-dimensional position information of the target object in the color image; and
a three-dimensional stereo model establishment unit, configured to establish a three-dimensional stereo model of the target object based on the edge contour and the two-dimensional position information of the target object.
9. The apparatus according to claim 8, further comprising:
an image denoising unit, configured to perform denoising on the color image and the depth image.
10. The apparatus according to claim 9, wherein the image denoising unit comprises:
a color image filtering subunit, configured to convert the color image into a grayscale image, first perform an erosion operation on the grayscale image with a first preset radius, and then perform a dilation operation on the eroded grayscale image with the first preset radius, to obtain the denoised color image; and
a depth image denoising subunit, configured to first perform an erosion operation on the depth image with a second preset radius, and then perform a dilation operation on the eroded depth image with the second preset radius, to obtain the denoised depth image.
11. The apparatus according to claim 8, wherein the edge contour determination unit is further configured to:
determine the coplanar edge contour and the non-coplanar edge contour of the target object using one of the following edge detection methods:
the Canny operator edge detection method;
the SUSAN operator edge detection method; and
the Shen Jun operator edge detection method.
12. The apparatus according to claim 8, wherein the edge contour fusion unit is further configured to:
perform edge tracking on the coplanar edge contour of the target object; and if it is detected that the coplanar edge contour of the target object has an edge discontinuity point, find an edge point corresponding to the edge discontinuity point in the non-coplanar edge contour of the target object, and connect the coplanar edge contour and the non-coplanar edge contour of the target object using the edge point, to obtain the edge contour of the target object.
13. The apparatus according to claim 8, further comprising:
a three-dimensional stereo model denoising unit, configured to perform denoising on a surface of the three-dimensional stereo model of the target object using a gradient operator of the depth image.
14. The apparatus according to claim 8, further comprising:
an imaging size determination unit, configured to determine an imaging size of the target object in the color image;
a relative position determination unit, configured to determine a relative position between the target object and the depth camera based on the imaging size; and
a position information obtaining unit, configured to obtain position information of the target object based on the three-dimensional stereo model of the target object and the relative position between the target object and the depth camera.
15. An electronic device, comprising:
one or more processors; and
a storage device, for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1 to 7.
16. A computer-readable storage medium, storing a computer program thereon, wherein the method according to any one of claims 1 to 7 is implemented when the computer program is executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711337773.9A CN109961501A (en) | 2017-12-14 | 2017-12-14 | Method and apparatus for establishing three-dimensional stereo model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711337773.9A CN109961501A (en) | 2017-12-14 | 2017-12-14 | Method and apparatus for establishing three-dimensional stereo model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109961501A true CN109961501A (en) | 2019-07-02 |
Family
ID=67017765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711337773.9A Pending CN109961501A (en) | 2017-12-14 | 2017-12-14 | Method and apparatus for establishing three-dimensional stereo model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109961501A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1617174A (en) * | 2004-12-09 | 2005-05-18 | 上海交通大学 | Human limb three-dimensional model building method based on image cutline |
US20080212871A1 (en) * | 2007-02-13 | 2008-09-04 | Lars Dohmen | Determining a three-dimensional model of a rim of an anatomical structure |
CN101499177A (en) * | 2008-01-28 | 2009-08-05 | 上海西门子医疗器械有限公司 | 3D model building method and system |
CN103559737A (en) * | 2013-11-12 | 2014-02-05 | 中国科学院自动化研究所 | Object panorama modeling method |
CN106327464A (en) * | 2015-06-18 | 2017-01-11 | 南京理工大学 | Edge detection method |
CN106826815A (en) * | 2016-12-21 | 2017-06-13 | 江苏物联网研究发展中心 | Target object method of the identification with positioning based on coloured image and depth image |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110087055A (en) * | 2018-01-25 | 2019-08-02 | 台湾东电化股份有限公司 | Vehicle and its three-dimension object Information Acquisition System and three-dimension object information acquisition method |
CN110087055B (en) * | 2018-01-25 | 2022-03-29 | 台湾东电化股份有限公司 | Vehicle, three-dimensional object information acquisition system thereof and three-dimensional object information acquisition method |
US11726292B2 (en) | 2018-01-25 | 2023-08-15 | Tdk Taiwan Corp. | Optical system |
CN114820772A (en) * | 2019-07-15 | 2022-07-29 | 牧今科技 | System and method for object detection based on image data |
CN114820772B (en) * | 2019-07-15 | 2023-04-07 | 牧今科技 | System and method for object detection based on image data |
CN111223111A (en) * | 2020-01-03 | 2020-06-02 | 歌尔股份有限公司 | Depth image contour generation method, device, equipment and storage medium |
CN111223111B (en) * | 2020-01-03 | 2023-04-25 | 歌尔光学科技有限公司 | Depth image contour generation method, device, equipment and storage medium |
CN111815761A (en) * | 2020-07-14 | 2020-10-23 | 杭州翔毅科技有限公司 | Three-dimensional display method, device, equipment and storage medium |
WO2022042304A1 (en) * | 2020-08-31 | 2022-03-03 | 腾讯科技(深圳)有限公司 | Method and apparatus for identifying scene contour, and computer-readable medium and electronic device |
CN113110178A (en) * | 2021-04-16 | 2021-07-13 | 深圳市艾赛克科技有限公司 | Construction site monitoring method and system based on Internet |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110427917B (en) | Method and device for detecting key points | |
CN109961501A (en) | Method and apparatus for establishing three-dimensional stereo model | |
CN106920279B (en) | Three-dimensional map construction method and device | |
CN111325796B (en) | Method and apparatus for determining pose of vision equipment | |
US9177381B2 (en) | Depth estimate determination, systems and methods | |
JP5699788B2 (en) | Screen area detection method and system | |
CN103582893B (en) | The two dimensional image represented for augmented reality is obtained | |
KR101595537B1 (en) | Networked capture and 3d display of localized, segmented images | |
CN109683699B (en) | Method and device for realizing augmented reality based on deep learning and mobile terminal | |
US9129435B2 (en) | Method for creating 3-D models by stitching multiple partial 3-D models | |
JP2018163654A (en) | System and method for telecom inventory management | |
US20170154204A1 (en) | Method and system of curved object recognition using image matching for image processing | |
CN109753928A (en) | The recognition methods of architecture against regulations object and device | |
CN111028358B (en) | Indoor environment augmented reality display method and device and terminal equipment | |
CN110866977B (en) | Augmented reality processing method, device, system, storage medium and electronic equipment | |
CN104246793A (en) | Three-dimensional face recognition for mobile devices | |
CN108174152A (en) | A kind of target monitoring method and target monitor system | |
CN113724368B (en) | Image acquisition system, three-dimensional reconstruction method, device, equipment and storage medium | |
CN108564082A (en) | Image processing method, device, server and medium | |
CN108182412A (en) | For the method and device of detection image type | |
WO2021136386A1 (en) | Data processing method, terminal, and server | |
CN110555879B (en) | Space positioning method, device, system and computer readable medium thereof | |
CN113313097B (en) | Face recognition method, terminal and computer readable storage medium | |
CN110472460A (en) | Face image processing process and device | |
CN109978753B (en) | Method and device for drawing panoramic thermodynamic diagram |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||