CN109242903A - Method, apparatus, device and storage medium for generating three-dimensional data - Google Patents

Method, apparatus, device and storage medium for generating three-dimensional data Download PDF

Info

Publication number
CN109242903A
Authority
CN
China
Prior art keywords
three-dimensional
target object
vehicle
two-dimensional image
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811045660.6A
Other languages
Chinese (zh)
Other versions
CN109242903B (en)
Inventor
王煜城
孙迅
夏添
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811045660.6A priority Critical patent/CN109242903B/en
Publication of CN109242903A publication Critical patent/CN109242903A/en
Priority to US16/562,129 priority patent/US11024045B2/en
Priority to JP2019163317A priority patent/JP6830139B2/en
Priority to EP19195798.4A priority patent/EP3621036A1/en
Application granted granted Critical
Publication of CN109242903B publication Critical patent/CN109242903B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Abstract

Embodiments of the present invention disclose a method, apparatus, device and storage medium for generating three-dimensional data. The method includes: obtaining a contour annotation result of a target object in a two-dimensional image; calculating a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object; determining a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field; and generating three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object. The technical solution of the embodiments of the present invention reduces the cost of three-dimensional annotation and improves its efficiency.

Description

Method, apparatus, device and storage medium for generating three-dimensional data
Technical field
Embodiments of the present invention relate to information processing technology, and in particular to a method, apparatus, device and storage medium for generating three-dimensional data.
Background technique
In autonomous driving, detection of the three-dimensional information of vehicles in the driving environment is essential for perception and planning.
In the prior art, three-dimensional detection relies on annotating the three-dimensional information of vehicles, and such annotation is limited to manual labeling. Annotating three-dimensional information is therefore costly and time-consuming, while existing low-cost and fast two-dimensional annotation methods cannot meet the requirement of obtaining three-dimensional information.
Summary of the invention
Embodiments of the present invention provide a method, apparatus, device and storage medium for generating three-dimensional data, so as to reduce the cost of three-dimensional annotation and improve its efficiency.
In a first aspect, an embodiment of the present invention provides a method for generating three-dimensional data, including:
obtaining a contour annotation result of a target object in a two-dimensional image;
calculating a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object;
determining a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field;
generating three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
In a second aspect, an embodiment of the present invention further provides an apparatus for generating three-dimensional data, including:
a result obtaining module, configured to obtain a contour annotation result of a target object in a two-dimensional image;
a three-dimensional calculation module, configured to calculate a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object;
a coordinate determining module, configured to determine a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field;
a data generating module, configured to generate three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when executing the program the processor implements the method for generating three-dimensional data according to the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for generating three-dimensional data according to the embodiments of the present invention.
Embodiments of the present invention provide a method, apparatus, device and storage medium for generating three-dimensional data. The depth of field and the three-dimensional orientation of a target object are calculated from the contour annotation result of the target object in a two-dimensional image and the standard three-dimensional size of the target object; the three-dimensional coordinate of a reference two-dimensional image point is determined from its two-dimensional coordinate in the image and the depth of field; and three-dimensional data matching the target object is then generated from that three-dimensional coordinate and the three-dimensional orientation. By predicting three-dimensional information from two-dimensional annotations, the embodiments solve the problem in the prior art that three-dimensional annotation is limited to manual labeling and is therefore costly and time-consuming, thereby reducing the cost of three-dimensional annotation and improving its efficiency.
Detailed description of the invention
Fig. 1a is a flowchart of a method for generating three-dimensional data according to Embodiment 1 of the present invention;
Fig. 1b is a schematic top view of vehicle shooting applicable to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a method for generating three-dimensional data according to Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of an apparatus for generating three-dimensional data according to Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of a computer device according to Embodiment 4 of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
It should further be mentioned that, before the exemplary embodiments are discussed in more detail, some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes operations (or steps) as a sequential process, many of these operations may be performed in parallel, concurrently, or simultaneously, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, and may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, or the like.
Embodiment 1
Fig. 1a is a flowchart of a method for generating three-dimensional data according to Embodiment 1 of the present invention. This embodiment is applicable to the case of annotating three-dimensional information of an object. The method may be performed by the apparatus for generating three-dimensional data provided by an embodiment of the present invention, which may be implemented in software and/or hardware and may generally be integrated in a computer device. As shown in Fig. 1a, the method of this embodiment specifically includes:
S110, obtaining a contour annotation result of a target object in a two-dimensional image.
In this embodiment, an image of the target object acquired from a preset angle may be used as the two-dimensional image of the target object, where the preset angle includes but is not limited to any orientation in which the acquisition device is on the same horizontal line as the target object. For example, an image of a target vehicle may be captured by the laser radar on an unmanned vehicle and used as the acquired two-dimensional image.
The contour annotation result is an annotation, in the acquired two-dimensional image, that characterizes the contour features of the target object. For example, when the target object is a vehicle, the contour annotation result may be the circumscribed rectangular frame of the vehicle in the two-dimensional image, namely the outer frame of the vehicle, and may further include the position, within the outer frame, of the dividing line between the head and the body of the vehicle in the two-dimensional image.
Specifically, the two-dimensional image may be processed by image recognition to identify the contour information of the target object in the image and annotate it, thereby obtaining the contour annotation result.
Optionally, after obtaining the contour annotation result of the target object in the two-dimensional image, the method further includes: cropping, from the two-dimensional image, a local image matching the target object according to the contour annotation result; and inputting the local image into a pre-trained standard three-dimensional recognition model to obtain the standard three-dimensional size of the target object.
The local image matching the target object may be the part of the two-dimensional image corresponding to the circumscribed rectangular frame of the target object. Specifically, the standard three-dimensional recognition model may be a model built on a deep machine learning algorithm for recognizing the image corresponding to an object and predicting the object's actual three-dimensional size, namely its standard three-dimensional size. The machine learning algorithm used by the standard three-dimensional recognition model includes but is not limited to a deep convolutional neural network. Exemplarily, a deep convolutional neural network model may first be trained on image samples of a large number of different objects, so as to obtain a model that can predict the standard three-dimensional size of an object from an image; the cropped local image matching the target object is then input into this standard three-dimensional recognition model to predict the target object's standard three-dimensional size, such as its length, width and height (L, W, H), from its image. In addition, a target object size template may be set, and the obtained standard three-dimensional size may be matched against the template to filter out unreasonable values and further improve prediction accuracy.
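A minimal sketch of this optional step, assuming a hypothetical pre-trained size regressor exposing a predict() method, an (x_min, y_min, x_max, y_max) frame annotation, and an optional (L, W, H) size template; the embodiment does not prescribe a particular model architecture or annotation format.

```python
import numpy as np

def crop_local_image(image: np.ndarray, frame: tuple) -> np.ndarray:
    """Crop the part of the 2D image inside the annotated outer frame."""
    x_min, y_min, x_max, y_max = frame
    return image[y_min:y_max, x_min:x_max]

def predict_standard_size(local_image: np.ndarray, size_model, size_template=None):
    """Predict the object's standard 3D size (L, W, H) from the cropped patch.

    size_model is assumed to be a pre-trained regressor (e.g. a deep CNN);
    size_template is an optional (L, W, H) prior used to reject unreasonable values.
    """
    length, width, height = size_model.predict(local_image)
    if size_template is not None:
        # Fall back to the template if any prediction deviates too far from it.
        for value, ref in zip((length, width, height), size_template):
            if not (0.5 * ref <= value <= 1.5 * ref):
                return tuple(size_template)
    return (length, width, height)
```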
S120, calculating the depth of field of the target object and the three-dimensional orientation of the target object according to the contour annotation result and the standard three-dimensional size of the target object.
Since the contour annotation result carries the key contour information of the target object obtained from the two-dimensional image, and the standard three-dimensional size is the actual size of the target object, the two can be combined to obtain three-dimensional information of the target object on the basis of the two-dimensional image. The three-dimensional information includes but is not limited to the depth of field of the target object and its three-dimensional orientation. The advantage of predicting three-dimensional information from two-dimensional annotations is that relatively cheap and fast two-dimensional annotation can achieve a detection effect equivalent to that of costly and time-consuming three-dimensional annotation.
Specifically, the depth of field of the target object may be the distance from the target object to the shooting focal point corresponding to the two-dimensional image, and the three-dimensional orientation of the target object may be the angle between the facing direction of the target object and the camera plane.
For example, in the vehicle shooting top view shown in Fig. 1b, the imaging range of the vehicle 1 on the two-dimensional image 2 is the projection, through the shooting focal point A, of the left end point E and the right end point G of the vehicle 1 onto the two-dimensional image 2, namely the intersection points B and C. The depth of field of the vehicle is the distance from the shooting focal point A to the end point of the vehicle 1 closest to the two-dimensional image 2, namely the distance Z; the three-dimensional orientation of the vehicle 1 is the angle γ.
S130, determining the three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field.
In this embodiment, the reference two-dimensional image point may be any point of the target object in the two-dimensional image, for example the geometric center point of the circumscribed rectangular frame of the target object in the two-dimensional image, namely the point K shown in Fig. 1b.
Exemplarily, a two-dimensional coordinate system is established with one corner point of the two-dimensional image as the origin; the geometric center point of the circumscribed rectangular frame of the target object is identified, and its coordinate in this coordinate system is obtained as the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image. Combined with the calculated depth of field of the target object, the three-dimensional coordinate of the reference two-dimensional image point is then determined according to a preset calculation formula.
Optionally, determining the three-dimensional coordinate of the reference two-dimensional image point according to the two-dimensional coordinate of the reference two-dimensional image point of the target object and the depth of field includes:
according to the formula:
P = (x·Z/f, y·Z/f, Z),
calculating the three-dimensional coordinate P of the reference two-dimensional image point, where x is the abscissa of the reference two-dimensional image point, y is the ordinate of the reference two-dimensional image point, Z is the depth of field, and f is the shooting focal length corresponding to the two-dimensional image.
Taking a concrete example, as shown in Fig. 1b, if the coordinate of the geometric center point of the circumscribed rectangular frame of the vehicle 1, namely the outer-frame center point K of the vehicle 1, is (x, y), the above formula can be used to calculate the three-dimensional coordinate corresponding to the outer-frame center point of the vehicle 1 as (x·Z/f, y·Z/f, Z), where Z is the depth of field of the vehicle 1 and f is the distance between the camera plane where the two-dimensional image lies and the shooting focal point A, namely the shooting focal length.
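The back-projection of S130 can be written as a one-line computation; the sketch below assumes the pixel coordinate (x, y) is already expressed relative to the camera's principal point and shares the same unit as the focal length f.

```python
def backproject_reference_point(x: float, y: float, depth_z: float, focal_f: float):
    """Lift a 2D reference point to 3D using the pinhole relations X = x*Z/f, Y = y*Z/f."""
    return (x * depth_z / focal_f, y * depth_z / focal_f, depth_z)

# Example: outer-frame center at (12.0, -3.5), depth 20 m, focal length 1000 (pixel units)
print(backproject_reference_point(12.0, -3.5, 20.0, 1000.0))  # (0.24, -0.07, 20.0)
```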
S140, according to the three-dimensional coordinate of reference two-dimensional image point and the three-dimensional direction of target object, generate and object The matched three-dimensional data of body.
It, can if being determined with reference to the three-dimensional coordinate of two-dimensional image point and the three-dimensional direction of target object in the present embodiment According to the distance between testing image point in two-dimensional image point to target object is referred on two dimensional image, according to default calculating public affairs Formula, determines the corresponding three-dimensional coordinate of testing image point, so according to the corresponding three-dimensional coordinate construction of each picture point generate with it is whole A matched three-dimensional data of target object realizes the two dimension mark using target object, obtains the three-dimensional labeling of target object, from And manual time required when direct progress three-dimensional labeling has been saved, three-dimensional labeling efficiency is improved, in addition, due to whole process All it is that computer is automatically performed, is participated in without artificial, therefore the cost of three-dimensional labeling can also be reduced.
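As an illustration of S140, a common way to represent the generated three-dimensional data is an oriented 3D box built from the reference point's 3D coordinate, the standard size, and the orientation γ; this parameterization (a yaw-only rotation about the vertical axis, with the reference point as box center) is an assumption, not a requirement of the embodiment.

```python
import math

def build_3d_box(center, size, gamma):
    """Return the 8 corners of an oriented 3D box.

    center: (X, Y, Z) 3D coordinate of the reference point (taken as the box center)
    size:   (L, W, H) standard three-dimensional size
    gamma:  orientation angle of the body long side w.r.t. the camera plane (radians)
    """
    cx, cy, cz = center
    length, width, height = size
    cos_g, sin_g = math.cos(gamma), math.sin(gamma)
    corners = []
    for dx in (-length / 2, length / 2):
        for dy in (-height / 2, height / 2):
            for dz in (-width / 2, width / 2):
                # rotate the (dx, dz) ground-plane offset by gamma, keep dy vertical
                rx = dx * cos_g - dz * sin_g
                rz = dx * sin_g + dz * cos_g
                corners.append((cx + rx, cy + dy, cz + rz))
    return corners
```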
This embodiment of the present invention provides a method for generating three-dimensional data. The depth of field and the three-dimensional orientation of a target object are calculated from the contour annotation result of the target object in a two-dimensional image and the standard three-dimensional size of the target object; the three-dimensional coordinate of a reference two-dimensional image point is determined from its two-dimensional coordinate in the image and the depth of field; and three-dimensional data matching the target object is generated from that three-dimensional coordinate and the three-dimensional orientation. By predicting three-dimensional information from two-dimensional annotations, the method solves the problem in the prior art that three-dimensional annotation is limited to manual labeling and is therefore costly and time-consuming, thereby reducing the cost of three-dimensional annotation and improving its efficiency.
Embodiment 2
Fig. 2 is a flowchart of a method for generating three-dimensional data according to Embodiment 2 of the present invention. This embodiment is a further refinement of the above embodiment. In this embodiment, the target object is further specified as a vehicle, and the contour annotation result is further specified to include: the outer frame of the vehicle and the position, within the outer frame, of the dividing line between the head and the body of the vehicle.
Accordingly, the method of this embodiment includes:
S210, obtaining the contour annotation result of the vehicle in the two-dimensional image, where the contour annotation result includes: the outer frame of the vehicle and the position, within the outer frame, of the dividing line between the head and the body of the vehicle.
In this embodiment, the outer frame of the vehicle may specifically be the circumscribed rectangular frame of the vehicle in the two-dimensional image, and the position of the dividing line between the head and the body of the vehicle within the outer frame may be the position, within this circumscribed rectangular frame, of the boundary between the head and the body of the vehicle in the two-dimensional image. For example, in the top view of Fig. 1b, the intersection point B is the leftmost point of the outer frame of the vehicle 1, the intersection point C is the rightmost point of the outer frame of the vehicle 1, and the intersection point D is the position, within the outer frame, of the dividing line between the head and the body of the vehicle 1.
As a concrete example shown in Fig. 1b, let the geometric center point K (x, y) of the outer frame of the vehicle 1 be the reference two-dimensional image point, and let the width and height of the outer frame of the vehicle 1 in the two-dimensional image 2 be (w, h), where the width w of the outer frame is the distance between the intersection points B and C. Then the abscissa of the leftmost point B of the outer frame of the vehicle 1 is x_left = x − w/2, the abscissa of the rightmost point C is x_right = x + w/2, and the abscissa of the point D where the dividing line between the head and the body is located is x_seg.
S220, calculating the depth of field of the target object according to the outer-frame height of the outer frame of the vehicle and the actual vehicle height in the standard three-dimensional size.
The outer-frame height of the outer frame of the vehicle may be the height of the circumscribed rectangular frame of the vehicle in the two-dimensional image, that is, the height of the vehicle in the two-dimensional image. Since the ratio of the outer-frame height to the actual vehicle height in the standard three-dimensional size equals the ratio of the shooting focal length of the two-dimensional image to the depth of field of the target object, the depth of field of the vehicle, namely the depth of field of the target object, can be calculated from the outer-frame height, the actual vehicle height in the standard three-dimensional size, and the preset shooting focal length of the two-dimensional image.
Optionally, calculating the depth of field of the target object according to the outer-frame height of the outer frame of the vehicle and the actual vehicle height in the standard three-dimensional size includes: according to the formula Z = H·f/h, calculating the depth of field Z of the target object, where H is the actual vehicle height, h is the outer-frame height, and f is the shooting focal length corresponding to the two-dimensional image.
As shown in Fig. 1b, the depth of field Z of the vehicle 1, namely the depth of field Z of the target object, can be calculated by the above formula from the actual height H of the vehicle 1, the outer-frame height h of the vehicle 1, and the shooting focal length f of the two-dimensional image 2.
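A minimal sketch of the depth computation in S220, directly implementing Z = H·f/h; it assumes h and f are expressed in the same (pixel) unit so that Z comes out in the unit of H.

```python
def depth_from_height(actual_height_m: float, frame_height_px: float, focal_px: float) -> float:
    """Depth of field Z = H * f / h from the similar-triangle relation h/H = f/Z."""
    return actual_height_m * focal_px / frame_height_px

# Example: a 1.5 m tall vehicle imaged 75 px tall with a 1000 px focal length -> 20 m
print(depth_from_height(1.5, 75.0, 1000.0))  # 20.0
```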
S230, calculating the three-dimensional orientation of the target object according to the position of the dividing line between the head and the body within the outer frame, the actual vehicle length in the standard three-dimensional size, and the depth of field of the target object.
In this embodiment, the three-dimensional orientation of the target object is specifically the angle of the long side of the vehicle body relative to the camera plane, for example the angle γ shown in Fig. 1b. Specifically, the lateral distance between the dividing line between the head and the body of the vehicle and the leftmost point of the outer frame may be calculated from the position of the dividing line within the outer frame; the projection length, in the camera plane of the two-dimensional image, of the long side of the vehicle body in the actual scene is then determined from this lateral distance and the ratio of the depth of field of the vehicle to the shooting focal length; finally, the three-dimensional orientation of the vehicle, namely of the target object, is calculated from this projection length and the actual vehicle length in the standard three-dimensional size according to a preset geometric formula.
Optionally, calculating the three-dimensional orientation of the target object according to the position of the dividing line between the head and the body within the outer frame, the actual vehicle length in the standard three-dimensional size, and the depth of field of the target object includes:
calculating the abscissa x_left of the leftmost point of the outer frame;
calculating the lateral distance w_left between the dividing line between the head and the body and the leftmost point according to x_left and the position of the dividing line within the outer frame;
calculating the three-dimensional orientation γ of the target object according to the following formulas:
N = w_left · Z / f;  (1)
tan∠ABK = f / |x_left|;  (2)
α = ∠ABK;  (3)
sin β = N · sin α / L;
γ = π − α − β;  (4)
where N is the projection, in the camera plane of the two-dimensional image, of the long side of the vehicle body in the actual scene, Z is the depth of field of the target object, α is the angle between the target light ray and the camera plane, β is the angle between the target light ray and the long side of the vehicle body, L is the actual vehicle length in the standard three-dimensional size, and the target light ray is the line between the leftmost point of the vehicle in the actual scene and its intersection point with the two-dimensional image.
As shown in Fig. 1b, in the two-dimensional coordinate system established with the coordinate of the geometric center point K of the outer frame of the vehicle 1 in the two-dimensional image 2 as the origin (0, 0), the abscissa of the leftmost point B of the outer frame of the vehicle 1 is x_left = −w/2, where w is the width of the outer frame of the vehicle 1 in the two-dimensional image 2. When the abscissa of the point D where the dividing line between the head and the body is located is x_seg, the distance between the intersection point B and the point D, namely the lateral distance between the dividing line and the leftmost point, is w_left = x_seg − x_left. Since the ratio of the projection length, in the camera plane of the two-dimensional image, of the long side of the vehicle body in the actual scene (namely the distance between the intersection point Q and the point F) to the distance between the intersection point B and the point D in the two-dimensional image 2 equals the ratio of the depth of field of the vehicle 1 to the shooting focal length of the two-dimensional image 2, the length of the projection N of the long side of the vehicle body in the camera plane can be calculated according to the above formula (1). Then, the value of ∠ABK in the triangle formed by the shooting focal point A, the intersection point B and the point K is calculated by the trigonometric formula (2), and the value of the angle α is determined by the theorem on parallel lines, namely formula (3). Combining the actual vehicle length L of the vehicle 1 with the projection N of the long side of the vehicle body in the camera plane, the value of the angle β is obtained by the law of sines. Finally, according to the triangle angle-sum theorem, namely formula (4), the angle γ, namely the three-dimensional orientation of the target object, is calculated.
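The geometric steps above can be collected into one routine; this sketch follows formulas (1)-(4) as reconstructed here (projection N, angle α from the triangle ABK, β from the law of sines), so the exact trigonometric form should be treated as an assumption.

```python
import math

def estimate_orientation(x_left: float, x_seg: float, depth_z: float,
                         focal_f: float, actual_length: float) -> float:
    """Orientation gamma of the body long side w.r.t. the camera plane (radians).

    x_left:        abscissa of the outer frame's leftmost point (frame center as origin)
    x_seg:         abscissa of the head/body dividing line
    depth_z:       depth of field Z of the vehicle
    focal_f:       shooting focal length f (same unit as the abscissas)
    actual_length: actual vehicle length L from the standard 3D size
    """
    w_left = x_seg - x_left                      # lateral distance in the image
    n_proj = w_left * depth_z / focal_f          # (1) projected body-long-side length N
    alpha = math.atan2(focal_f, abs(x_left))     # (2)(3) angle between target ray and camera plane
    beta = math.asin(min(1.0, n_proj * math.sin(alpha) / actual_length))  # law of sines
    return math.pi - alpha - beta                # (4)
```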
S240, determining the three-dimensional coordinate of the reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field.
S250, generating three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
In the technical solution of this embodiment of the present invention, the outer frame of the vehicle in the two-dimensional image and the position of the dividing line between the head and the body of the vehicle within the outer frame are obtained; the depth of field of the vehicle is calculated by combining the outer-frame height with the actual vehicle height in the standard three-dimensional size; the three-dimensional orientation of the vehicle is calculated from the position of the dividing line within the outer frame, the actual vehicle length in the standard three-dimensional size, and the depth of field of the vehicle; and finally, the three-dimensional coordinate of the reference two-dimensional image point is determined by combining its two-dimensional coordinate in the two-dimensional image with the depth of field, and the three-dimensional data matching the vehicle is generated from that three-dimensional coordinate and the three-dimensional orientation. The three-dimensional data of the vehicle is thus generated automatically from its two-dimensional annotation, without manual three-dimensional annotation of the vehicle, which reduces the cost of three-dimensional vehicle annotation and improves its efficiency.
Embodiment 3
Fig. 3 is a schematic structural diagram of an apparatus for generating three-dimensional data according to Embodiment 3 of the present invention. As shown in Fig. 3, the apparatus includes: a result obtaining module 310, a three-dimensional calculation module 320, a coordinate determining module 330, and a data generating module 340.
The result obtaining module 310 is configured to obtain a contour annotation result of a target object in a two-dimensional image.
The three-dimensional calculation module 320 is configured to calculate a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object.
The coordinate determining module 330 is configured to determine a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field.
The data generating module 340 is configured to generate three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
This embodiment of the present invention provides an apparatus for generating three-dimensional data. The depth of field and the three-dimensional orientation of a target object are calculated from the contour annotation result of the target object in a two-dimensional image and the standard three-dimensional size of the target object; the three-dimensional coordinate of a reference two-dimensional image point is determined from its two-dimensional coordinate in the image and the depth of field; and three-dimensional data matching the target object is generated from that three-dimensional coordinate and the three-dimensional orientation. By predicting three-dimensional information from two-dimensional annotations, the apparatus solves the problem in the prior art that three-dimensional annotation is limited to manual labeling and is therefore costly and time-consuming, thereby reducing the cost of three-dimensional annotation and improving its efficiency.
Further, the target object may include a vehicle, and the contour annotation result may include: the outer frame of the vehicle and the position, within the outer frame, of the dividing line between the head and the body of the vehicle.
Further, the three-dimensional calculation module 320 may include:
a depth-of-field calculation submodule, configured to calculate the depth of field of the target object according to the outer-frame height of the outer frame of the vehicle and the actual vehicle height in the standard three-dimensional size;
an orientation calculation submodule, configured to calculate the three-dimensional orientation of the target object according to the position of the dividing line between the head and the body within the outer frame, the actual vehicle length in the standard three-dimensional size, and the depth of field of the target object.
Further, the depth-of-field calculation submodule may be specifically configured to:
according to the formula Z = H·f/h, calculate the depth of field Z of the target object;
where H is the actual vehicle height, h is the outer-frame height, and f is the shooting focal length corresponding to the two-dimensional image.
Further, the orientation calculation submodule may be specifically configured to:
calculate the abscissa x_left of the leftmost point of the outer frame;
calculate the lateral distance w_left between the dividing line between the head and the body and the leftmost point according to x_left and the position of the dividing line within the outer frame;
calculate the three-dimensional orientation γ of the target object according to the following formulas:
N = w_left · Z / f;
α = arctan(f / |x_left|);
β = arcsin(N · sin α / L);
γ = π − α − β;
where N is the projection, in the camera plane of the two-dimensional image, of the long side of the vehicle body in the actual scene, Z is the depth of field of the target object, α is the angle between the target light ray and the camera plane, β is the angle between the target light ray and the long side of the vehicle body, L is the actual vehicle length in the standard three-dimensional size, and the target light ray is the line between the leftmost point of the vehicle in the actual scene and its intersection point with the two-dimensional image.
Further, the apparatus for generating three-dimensional data may also include:
an image cropping module, configured to crop, from the two-dimensional image, a local image matching the target object according to the contour annotation result, after the contour annotation result of the target object in the two-dimensional image is obtained;
a size obtaining module, configured to input the local image into a pre-trained standard three-dimensional recognition model to obtain the standard three-dimensional size of the target object.
Further, the coordinate determining module 330 may be specifically configured to:
according to the formula:
P = (x·Z/f, y·Z/f, Z),
calculate the three-dimensional coordinate P of the reference two-dimensional image point, where x is the abscissa of the reference two-dimensional image point, y is the ordinate of the reference two-dimensional image point, Z is the depth of field, and f is the shooting focal length corresponding to the two-dimensional image.
The above apparatus for generating three-dimensional data can execute the method for generating three-dimensional data provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing that method.
Embodiment 4
Fig. 4 is a schematic structural diagram of a computer device according to Embodiment 4 of the present invention. Fig. 4 shows a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present invention. The computer device 12 shown in Fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in Fig. 4, the computer device 12 takes the form of a general-purpose computing device. Components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 12 typically includes a variety of computer system readable media. These media may be any available media accessible by the computer device 12, including volatile and non-volatile media, and removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as a random access memory (RAM) 30 and/or a cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 34 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in Fig. 4, commonly referred to as a "hard disk drive"). Although not shown in Fig. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The memory 28 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 40 having a set of (at least one) program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 42 generally perform the functions and/or methods of the embodiments described in the present invention.
The computer device 12 may also communicate with one or more external devices 14 (such as a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any device (such as a network card, a modem, etc.) that enables the computer device 12 to communicate with one or more other computing devices. Such communication may be performed through an input/output (I/O) interface 22. Moreover, the computer device 12 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer device 12 through the bus 18. It should be understood that, although not shown in Fig. 4, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 16 runs the programs stored in the system memory 28 to execute various functional applications and data processing, for example to implement the method for generating three-dimensional data provided by the embodiments of the present invention. That is, when executing the program, the processing unit implements: obtaining a contour annotation result of a target object in a two-dimensional image; calculating a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object; determining a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field; and generating three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
Embodiment 5
Embodiment 5 of the present invention provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the program implements the method for generating three-dimensional data provided by all the inventive embodiments of the present application. That is, when executed by the processor, the program implements: obtaining a contour annotation result of a target object in a two-dimensional image; calculating a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object; determining a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field; and generating three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
Any combination of one or more computer-readable media may be used. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code contained on a computer-readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical cable, RF, or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments, and may also include more other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for generating three-dimensional data, comprising:
obtaining a contour annotation result of a target object in a two-dimensional image;
calculating a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object;
determining a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field;
generating three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
2. The method according to claim 1, wherein the target object comprises a vehicle, and the contour annotation result comprises: an outer frame of the vehicle and a position, within the outer frame, of a dividing line between a head and a body of the vehicle.
3. The method according to claim 2, wherein calculating the depth of field of the target object and the three-dimensional orientation of the target object according to the contour annotation result and the standard three-dimensional size of the target object comprises:
calculating the depth of field of the target object according to an outer-frame height of the outer frame of the vehicle and an actual vehicle height in the standard three-dimensional size;
calculating the three-dimensional orientation of the target object according to the position of the dividing line between the head and the body within the outer frame, an actual vehicle length in the standard three-dimensional size, and the depth of field of the target object.
4. The method according to claim 3, wherein calculating the depth of field of the target object according to the outer-frame height of the outer frame of the vehicle and the actual vehicle height in the standard three-dimensional size comprises:
according to the formula Z = H·f/h, calculating the depth of field Z of the target object;
where H is the actual vehicle height, h is the outer-frame height, and f is a shooting focal length corresponding to the two-dimensional image.
5. The method according to claim 4, wherein calculating the three-dimensional orientation of the target object according to the position of the dividing line between the head and the body within the outer frame, the actual vehicle length in the standard three-dimensional size, and the depth of field of the target object comprises:
calculating an abscissa x_left of a leftmost point of the outer frame;
calculating a lateral distance w_left between the dividing line between the head and the body and the leftmost point according to x_left and the position of the dividing line within the outer frame;
calculating the three-dimensional orientation γ of the target object according to the following formulas:
N = w_left · Z / f;
α = arctan(f / |x_left|);
β = arcsin(N · sin α / L);
γ = π − α − β;
where N is a projection, in a camera plane of the two-dimensional image, of a long side of the vehicle body in an actual scene, Z is the depth of field of the target object, α is an angle between a target light ray and the camera plane, β is an angle between the target light ray and the long side of the vehicle body, L is the actual vehicle length in the standard three-dimensional size, and the target light ray is a line between the leftmost point of the vehicle in the actual scene and its intersection point with the two-dimensional image.
6. The method according to any one of claims 1-5, wherein, after obtaining the contour annotation result of the target object in the two-dimensional image, the method further comprises:
cropping, from the two-dimensional image, a local image matching the target object according to the contour annotation result;
inputting the local image into a pre-trained standard three-dimensional recognition model to obtain the standard three-dimensional size of the target object.
7. The method according to any one of claims 1-5, wherein determining the three-dimensional coordinate of the reference two-dimensional image point according to the two-dimensional coordinate of the reference two-dimensional image point of the target object and the depth of field comprises:
according to the formula:
P = (x·Z/f, y·Z/f, Z),
calculating the three-dimensional coordinate P of the reference two-dimensional image point, where x is the abscissa of the reference two-dimensional image point, y is the ordinate of the reference two-dimensional image point, Z is the depth of field, and f is a shooting focal length corresponding to the two-dimensional image.
8. An apparatus for generating three-dimensional data, comprising:
a result obtaining module, configured to obtain a contour annotation result of a target object in a two-dimensional image;
a three-dimensional calculation module, configured to calculate a depth of field of the target object and a three-dimensional orientation of the target object according to the contour annotation result and a standard three-dimensional size of the target object;
a coordinate determining module, configured to determine a three-dimensional coordinate of a reference two-dimensional image point of the target object according to the two-dimensional coordinate of the reference two-dimensional image point in the two-dimensional image and the depth of field;
a data generating module, configured to generate three-dimensional data matching the target object according to the three-dimensional coordinate of the reference two-dimensional image point and the three-dimensional orientation of the target object.
9. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when executing the program the processor implements the method for generating three-dimensional data according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for generating three-dimensional data according to any one of claims 1-7.
CN201811045660.6A 2018-09-07 2018-09-07 Three-dimensional data generation method, device, equipment and storage medium Active CN109242903B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201811045660.6A CN109242903B (en) 2018-09-07 2018-09-07 Three-dimensional data generation method, device, equipment and storage medium
US16/562,129 US11024045B2 (en) 2018-09-07 2019-09-05 Method and apparatus for generating three-dimensional data, device, and storage medium
JP2019163317A JP6830139B2 (en) 2018-09-07 2019-09-06 3D data generation method, 3D data generation device, computer equipment and computer readable storage medium
EP19195798.4A EP3621036A1 (en) 2018-09-07 2019-09-06 Method and apparatus for generating three-dimensional data, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811045660.6A CN109242903B (en) 2018-09-07 2018-09-07 Three-dimensional data generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109242903A true CN109242903A (en) 2019-01-18
CN109242903B CN109242903B (en) 2020-08-07

Family

ID=65060153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811045660.6A Active CN109242903B (en) 2018-09-07 2018-09-07 Three-dimensional data generation method, device, equipment and storage medium

Country Status (4)

Country Link
US (1) US11024045B2 (en)
EP (1) EP3621036A1 (en)
JP (1) JP6830139B2 (en)
CN (1) CN109242903B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113392267B (en) * 2020-03-12 2024-01-16 平湖莱顿光学仪器制造有限公司 Method and device for generating two-dimensional microscopic video information of target object
CN111476798B (en) * 2020-03-20 2023-05-16 上海遨遥人工智能科技有限公司 Vehicle space morphology recognition method and system based on contour constraint
CN111429512B (en) * 2020-04-22 2023-08-25 北京小马慧行科技有限公司 Image processing method and device, storage medium and processor
CN111541907B (en) * 2020-04-23 2023-09-22 腾讯科技(深圳)有限公司 Article display method, apparatus, device and storage medium
CN113643226B (en) * 2020-04-27 2024-01-19 成都术通科技有限公司 Labeling method, labeling device, labeling equipment and labeling medium
CN111754622B (en) * 2020-07-13 2023-10-13 腾讯科技(深圳)有限公司 Face three-dimensional image generation method and related equipment
CN112097732A (en) * 2020-08-04 2020-12-18 北京中科慧眼科技有限公司 Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN112070782B (en) * 2020-08-31 2024-01-09 腾讯科技(深圳)有限公司 Method, device, computer readable medium and electronic equipment for identifying scene contour
CN112183241A (en) * 2020-09-11 2021-01-05 北京罗克维尔斯科技有限公司 Target detection method and device based on monocular image
CN112132113A (en) * 2020-10-20 2020-12-25 北京百度网讯科技有限公司 Vehicle re-identification method and device, training method and electronic equipment
CN112258640B (en) * 2020-10-30 2024-03-22 李艳 Skull model building method and device, storage medium and electronic equipment
CN113873220A (en) * 2020-12-03 2021-12-31 上海飞机制造有限公司 Deviation analysis method, device, system, equipment and storage medium
CN112634152A (en) * 2020-12-16 2021-04-09 中科海微(北京)科技有限公司 Face sample data enhancement method and system based on image depth information
CN112785492A (en) * 2021-01-20 2021-05-11 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112907550B (en) * 2021-03-01 2024-01-19 创新奇智(成都)科技有限公司 Building detection method and device, electronic equipment and storage medium
WO2022249232A1 (en) * 2021-05-24 2022-12-01 日本電信電話株式会社 Learning device, learned model generation method, data generation device, data generation method, and program
CN113806870A (en) * 2021-10-29 2021-12-17 中车青岛四方机车车辆股份有限公司 Method and device for three-dimensional modeling of vehicle and vehicle system
CN113902856B (en) * 2021-11-09 2023-08-25 浙江商汤科技开发有限公司 Semantic annotation method and device, electronic equipment and storage medium
CN114637875A (en) * 2022-04-01 2022-06-17 联影智能医疗科技(成都)有限公司 Medical image labeling method, system and device
CN115482269B (en) * 2022-09-22 2023-05-09 佳都科技集团股份有限公司 Method and device for calculating earthwork, terminal equipment and storage medium
CN116643648B (en) * 2023-04-13 2023-12-19 中国兵器装备集团自动化研究所有限公司 Three-dimensional scene matching interaction method, device, equipment and storage medium
CN116704129B (en) * 2023-06-14 2024-01-30 维坤智能科技(上海)有限公司 Panoramic view-based three-dimensional image generation method, device, equipment and storage medium
CN117744185A (en) * 2024-01-03 2024-03-22 西北工业大学太仓长三角研究院 Particle generation method and device for geometric model, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1055446A (en) 1996-08-09 1998-02-24 Omron Corp Object recognizing device
JP4328692B2 (en) 2004-08-11 2009-09-09 国立大学法人東京工業大学 Object detection device
US9865045B2 (en) 2014-05-18 2018-01-09 Edge 3 Technologies, Inc. Orthogonal and collaborative disparity decomposition
CN105241424B (en) 2015-09-25 2017-11-21 小米科技有限责任公司 Indoor orientation method and intelligent management apapratus
US10380439B2 (en) * 2016-09-06 2019-08-13 Magna Electronics Inc. Vehicle sensing system for detecting turn signal indicators
JP2018091656A (en) * 2016-11-30 2018-06-14 キヤノン株式会社 Information processing apparatus, measuring apparatus, system, calculating method, program, and article manufacturing method
KR102647351B1 (en) 2017-01-26 2024-03-13 삼성전자주식회사 Modeling method and modeling apparatus using 3d point cloud

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102374860A (en) * 2011-09-23 2012-03-14 奇瑞汽车股份有限公司 Three-dimensional visual positioning method and system
CN105333837A (en) * 2015-10-21 2016-02-17 上海集成电路研发中心有限公司 Three dimension scanning device
CN105447812A (en) * 2015-11-10 2016-03-30 南京大学 3D moving image displaying and information hiding method based on linear array
CN105704476A (en) * 2016-01-14 2016-06-22 东南大学 Virtual viewpoint image frequency domain rapid acquisition method based on edge completion
CN106251330A (en) * 2016-07-14 2016-12-21 浙江宇视科技有限公司 A kind of point position mark method and device
CN106803073A (en) * 2017-01-10 2017-06-06 江苏信息职业技术学院 DAS (Driver Assistant System) and method based on stereoscopic vision target

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIN-LONG CHEN et al.: "3D Free-Form Object Recognition Using Indexing by Contour Features", Computer Vision and Image Understanding *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816727B (en) * 2019-01-29 2023-05-02 江苏医像信息技术有限公司 Target identification method for three-dimensional atlas
CN109816727A (en) * 2019-01-29 2019-05-28 江苏医像信息技术有限公司 The target identification method of three-dimensional atlas
CN109829447A (en) * 2019-03-06 2019-05-31 百度在线网络技术(北京)有限公司 Method and apparatus for determining three-dimensional vehicle frame
CN110033515A (en) * 2019-04-15 2019-07-19 同济大学建筑设计研究院(集团)有限公司 Figure conversion method, device, computer equipment and storage medium
WO2020223940A1 (en) * 2019-05-06 2020-11-12 深圳大学 Posture prediction method, computer device and storage medium
US11348304B2 (en) 2019-05-06 2022-05-31 Shenzhen University Posture prediction method, computer device and storage medium
CN110390258A (en) * 2019-06-05 2019-10-29 东南大学 Image object three-dimensional information mask method
CN110826499A (en) * 2019-11-08 2020-02-21 上海眼控科技股份有限公司 Object space parameter detection method and device, electronic equipment and storage medium
CN111161129A (en) * 2019-11-25 2020-05-15 佛山欧神诺云商科技有限公司 Three-dimensional interaction design method and system for two-dimensional image
CN113065999A (en) * 2019-12-16 2021-07-02 杭州海康威视数字技术股份有限公司 Vehicle-mounted panorama generation method and device, image processing equipment and storage medium
CN111136648A (en) * 2019-12-27 2020-05-12 深圳市优必选科技股份有限公司 Mobile robot positioning method and device and mobile robot
CN111136648B (en) * 2019-12-27 2021-08-27 深圳市优必选科技股份有限公司 Mobile robot positioning method and device and mobile robot
CN113435224A (en) * 2020-03-06 2021-09-24 华为技术有限公司 Method and device for acquiring 3D information of vehicle
EP4105820A4 (en) * 2020-03-06 2023-07-26 Huawei Technologies Co., Ltd. Method and device for acquiring 3d information of vehicle
WO2021175119A1 (en) * 2020-03-06 2021-09-10 华为技术有限公司 Method and device for acquiring 3d information of vehicle
CN111401423B (en) * 2020-03-10 2023-05-26 北京百度网讯科技有限公司 Data processing method and device for automatic driving vehicle
CN111401423A (en) * 2020-03-10 2020-07-10 北京百度网讯科技有限公司 Data processing method and device for automatic driving vehicle
CN111581415A (en) * 2020-03-18 2020-08-25 时时同云科技(成都)有限责任公司 Method for determining similar objects, and method and equipment for determining object similarity
CN111882596A (en) * 2020-03-27 2020-11-03 浙江水晶光电科技股份有限公司 Structured light module three-dimensional imaging method and device, electronic equipment and storage medium
CN111882596B (en) * 2020-03-27 2024-03-22 东莞埃科思科技有限公司 Three-dimensional imaging method and device for structured light module, electronic equipment and storage medium
CN113591518A (en) * 2020-04-30 2021-11-02 华为技术有限公司 Image processing method, network training method and related equipment
WO2021218693A1 (en) * 2020-04-30 2021-11-04 华为技术有限公司 Image processing method, network training method, and related device
CN113591518B (en) * 2020-04-30 2023-11-03 华为技术有限公司 Image processing method, network training method and related equipment
CN112089422A (en) * 2020-07-02 2020-12-18 王兆英 Self-adaptive medical system and method based on wound area analysis
CN112164143A (en) * 2020-10-23 2021-01-01 广州小马慧行科技有限公司 Three-dimensional model construction method and device, processor and electronic equipment
CN112990217A (en) * 2021-03-24 2021-06-18 北京百度网讯科技有限公司 Image recognition method and device for vehicle, electronic equipment and medium
CN116611165A (en) * 2023-05-18 2023-08-18 中国船舶集团有限公司第七一九研究所 CATIA-based equipment base quick labeling method and system
CN116611165B (en) * 2023-05-18 2023-11-21 中国船舶集团有限公司第七一九研究所 CATIA-based equipment base quick labeling method and system

Also Published As

Publication number Publication date
JP6830139B2 (en) 2021-02-17
CN109242903B (en) 2020-08-07
EP3621036A1 (en) 2020-03-11
JP2020042818A (en) 2020-03-19
US20200082553A1 (en) 2020-03-12
US11024045B2 (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN109242903A (en) Generation method, device, equipment and the storage medium of three-dimensional data
CN110163930B (en) Lane line generation method, device, equipment, system and readable storage medium
KR102126724B1 (en) Method and apparatus for restoring point cloud data
CN109300159A (en) Method for detecting position, device, equipment, storage medium and vehicle
US10086955B2 (en) Pattern-based camera pose estimation system
CN112861653A (en) Detection method, system, equipment and storage medium for fusing image and point cloud information
CN109816704A (en) The 3 D information obtaining method and device of object
CN110386142A (en) Pitch angle calibration method for automatic driving vehicle
TW201947451A (en) Interactive processing method, apparatus and processing device for vehicle loss assessment and client terminal
CN110163903A (en) The acquisition of 3-D image and image position method, device, equipment and storage medium
US10451403B2 (en) Structure-based camera pose estimation system
EP2887315B1 (en) Camera calibration device, method for implementing calibration, program and camera for movable body
JP2014101075A (en) Image processing apparatus, image processing method and program
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN104484883A (en) Video-based three-dimensional virtual ship positioning and track simulation method
CN113011364B (en) Neural network training, target object detection and driving control method and device
CN111539484A (en) Method and device for training neural network
CN108876857A (en) Localization method, system, equipment and the storage medium of automatic driving vehicle
CN110361005A (en) Positioning method, positioning device, readable storage medium and electronic equipment
CN109703465A (en) The control method and device of vehicle-mounted imaging sensor
WO2023016182A1 (en) Pose determination method and apparatus, electronic device, and readable storage medium
JP2008506283A (en) Method and apparatus for determining camera pose
CN109829401A (en) Traffic sign recognition method and device based on double capture apparatus
CN110390295A (en) A kind of image information recognition methods, device and storage medium
WO2021189420A1 (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211014

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Patentee after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Patentee before: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) Co.,Ltd.
