CN106056616B - Deep-learning 2D-to-3D unit pixel block depth map modification method and device - Google Patents

Deep-learning 2D-to-3D unit pixel block depth map modification method and device

Info

Publication number
CN106056616B
CN106056616B (application CN201610397022.5A)
Authority
CN
China
Prior art keywords
depth
point
tracking
frame
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610397022.5A
Other languages
Chinese (zh)
Other versions
CN106056616A (en)
Inventor
赵天奇
渠源
李桂楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Twelve Dimension Beijing Technology Co ltd
Original Assignee
Twelve Dimensional (beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Twelve Dimensional (Beijing) Technology Co Ltd
Priority to CN201610397022.5A
Publication of CN106056616A
Application granted
Publication of CN106056616B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The present invention provides a deep-learning-based method and device for modifying the unit pixel block depth map in 2D-to-3D conversion, comprising: importing the depth-format data extracted from the deep-learning network's 2D-to-3D unit pixel block depth map and generating a point diagram; receiving a first user instruction to modify the depth information of depth points in the point diagram by smearing; receiving a second user instruction to modify the depth and position information of depth points in the point diagram by image-matching tracking; receiving a third user instruction to move the depth points that image-matching tracking cannot track; receiving a fourth user instruction to select depth points in the point diagram and delete any unselected point inside the convex hull formed by the selected points over all frames of their life cycles; receiving a fifth user instruction to select a position in the point diagram and add a point there; and exporting the modified point diagram in depth format for use by the unit pixel block depth map process during the deep-learning network's 2D-to-3D conversion. The accuracy of 3D depth images can be improved, meeting viewers' demands.

Description

Deep-learning 2D-to-3D unit pixel block depth map modification method and device
Technical field
The present invention relates to the 3D field, and more particularly to a method and device for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion.
Background technique
At present, deep learning is developing rapidly and has produced gratifying results in many fields. The existing patent "A method and system for converting 2D images into 3D images based on deep learning" applies deep learning to 2D-to-3D conversion: by fitting existing 3D films, 2D films are converted to 3D. That existing 2D-to-3D technique takes a 2D monocular disparity image (i.e. the original image to be converted to 3D) and, based on deep learning, outputs a unit pixel block depth map; the 3D image corresponding to the 2D monocular image is then obtained through a shader or the like. The effect is a significant improvement over traditional automatic 2D-to-3D conversion. However, the accuracy of the 3D depth images produced by this method is still insufficient for the results of all shots to satisfy viewers.
In view of this, how to improve the accuracy of 3D depth images, so that the converted shots satisfy viewers, has become a technical problem that urgently needs to be addressed.
Summary of the invention
To solve the above technical problem, the present invention provides a method and device for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion, which can effectively improve the accuracy of 3D depth images, thereby improving the effect of converted shots and meeting viewers' needs.
In a first aspect, the present invention provides a method for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion, comprising:
importing the depth-format data extracted from the deep-learning network's 2D-to-3D unit pixel block depth map and generating a point diagram, including storing the depth-format data in the depth points of the point diagram, generating the corresponding depth map, and marking the position and state of each position depth point with a color point;
receiving a first user instruction and, according to it, modifying the depth information of position depth points in the point diagram by smearing;
receiving a second user instruction and, according to it, modifying the depth and position information of position depth points in the point diagram by means of image-matching tracking;
receiving a third user instruction and, according to it, moving the position depth points in the point diagram that cannot be tracked by image-matching tracking;
receiving a fourth user instruction, selecting position depth points in the point diagram according to it, and deleting any unselected position depth point inside the convex hull formed by the selected points over all frames of their life cycles;
receiving a fifth user instruction, selecting a position in the point diagram according to it, and adding a position depth point at the selected position;
exporting the modified point diagram as depth-format data for use by the unit pixel block depth map process during the deep-learning network's 2D-to-3D conversion.
Optionally, the color points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
a yellow square point indicates an isolated position depth point in the point diagram;
a green square point indicates a good part of a continuous position depth point, called a good point; a good point has both position and depth;
a red square point indicates a bad part of a continuous position depth point, called a bad point; a bad point has position but no depth;
a red single arrow indicates that the preceding or following frame (according to the arrow's direction) is a bad point while the current frame is a good point; a red single arrow also marks a position depth point that failed to keep up and was rolled back, indicating that tracking did not keep up ahead or behind;
a green double arrow indicates a newly added position depth point whose life cycle covers only the current frame, i.e. an isolated position depth point;
a purple square point indicates a selected position depth point; within the point's life cycle, its position in the current frame is shown; beyond its life cycle, its position at the nearest position key frame is shown;
here, "position" means that the frame itself is a position key frame, or is not but has position key frames before and after it; "depth" means that the frame itself is a depth key frame, or is not but has depth key frames before and after it.
Optionally, when modifying the depth of position depth points in the point diagram by smearing according to the first instruction, the method further includes:
choosing, according to the first instruction, whether to display color points, the grayscale image, the original image, the alternating image, the contour map, or a superimposed semi-transparent depth map.
Optionally, the image-matching tracking modes include: global unidirectional isolated tracking, local unidirectional isolated tracking, global unidirectional continuous tracking, local unidirectional continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
global unidirectional isolated tracking comprises: tracking in one direction, assigning the depth value of the initial position depth point and the tracked position information, deleting the depth points of the tracked frames, and saving as isolated position depth points;
local unidirectional isolated tracking comprises: tracking in one direction, assigning the depth value of the initial position depth point and the tracked position information, deleting the depth points inside the convex hull formed by the points the tracked frame caught up with, and saving as isolated position depth points;
global unidirectional continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving as continuous position depth points; points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth information of the nearest depth point is found and assigned as the depth value;
local unidirectional continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the depth points inside the convex hull formed by the points the tracked frame caught up with, and saving as continuous position depth points; points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth of the nearest depth point is found and assigned as the depth value;
global cross isolated tracking comprises: performing global unidirectional isolated tracking twice, in opposite directions, but deleting depth points only on the first pass;
global cross continuous tracking comprises: performing global unidirectional continuous tracking twice, in opposite directions, but deleting depth points only on the first pass.
In a second aspect, the present invention provides a device for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion, comprising:
an import module for importing the depth-format data extracted from the deep-learning network's 2D-to-3D unit pixel block depth map and generating a point diagram, including storing the depth-format data in the depth points of the point diagram, generating the corresponding depth map, and marking the position and state of each position depth point with a color point;
a smearing module for receiving a first user instruction and, according to it, modifying the depth information of position depth points in the point diagram by smearing;
a tracking module for receiving a second user instruction and, according to it, modifying the depth and position information of position depth points in the point diagram by means of image-matching tracking;
a moving module for receiving a third user instruction and, according to it, moving the position depth points in the point diagram that cannot be tracked by image-matching tracking;
a deleting module for receiving a fourth user instruction, selecting position depth points in the point diagram according to it, and deleting any unselected position depth point inside the convex hull formed by the selected points over all frames of their life cycles;
a point-adding module for receiving a fifth user instruction, selecting a position in the point diagram according to it, and adding a position depth point at the selected position;
an export module for exporting the modified point diagram in depth format, for use by the unit pixel block depth map process during the deep-learning network's 2D-to-3D conversion.
Optionally, the color points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
a yellow square point indicates an isolated position depth point in the point diagram;
a green square point indicates a good part of a continuous position depth point, called a good point; a good point has both position and depth;
a red square point indicates a bad part of a continuous position depth point, called a bad point; a bad point has position but no depth;
a red single arrow indicates that the preceding or following frame (according to the arrow's direction) is a bad point while the current frame is a good point; a red single arrow also marks a position depth point that failed to keep up and was rolled back, indicating that tracking did not keep up ahead or behind;
a green double arrow indicates a newly added position depth point whose life cycle covers only the current frame, i.e. an isolated position depth point;
a purple square point indicates a selected position depth point; within the point's life cycle, its position in the current frame is shown; beyond its life cycle, its position at the nearest position key frame is shown;
here, "position" means that the frame itself is a position key frame, or is not but has position key frames before and after it; "depth" means that the frame itself is a depth key frame, or is not but has depth key frames before and after it.
Optionally, the smearing module is further configured to
choose, according to the first instruction, whether to display color points, the grayscale image, the original image, the alternating image, the contour map, or a superimposed semi-transparent depth map.
Optionally, the image-matching tracking modes include: global unidirectional isolated tracking, local unidirectional isolated tracking, global unidirectional continuous tracking, local unidirectional continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
global unidirectional isolated tracking comprises: tracking in one direction, assigning the depth value of the initial position depth point and the tracked position information, deleting the depth points of the tracked frames, and saving as isolated position depth points;
local unidirectional isolated tracking comprises: tracking in one direction, assigning the depth value of the initial position depth point and the tracked position information, deleting the depth points inside the convex hull formed by the points the tracked frame caught up with, and saving as isolated position depth points;
global unidirectional continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving as continuous position depth points; points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth information of the nearest depth point is found and assigned as the depth value;
local unidirectional continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the depth points inside the convex hull formed by the points the tracked frame caught up with, and saving as continuous position depth points; points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth of the nearest depth point is found and assigned as the depth value;
global cross isolated tracking comprises: performing global unidirectional isolated tracking twice, in opposite directions, but deleting depth points only on the first pass;
global cross continuous tracking comprises: performing global unidirectional continuous tracking twice, in opposite directions, but deleting depth points only on the first pass.
As can be seen from the above technical solution, the method and device of the present invention for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion can effectively improve the accuracy of 3D depth images, thereby improving the effect of converted shots and meeting viewers' needs.
Detailed description of the invention
Fig. 1 is a flow diagram of the method, provided by an embodiment of the invention, for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion;
Fig. 2 is an original image, provided in an embodiment of the invention, to be converted to stereo;
Fig. 3 is the depth map of the point diagram for Fig. 2 before modification by the method of the embodiment of Fig. 1;
Fig. 4 is the depth map ultimately generated in the prior art by the subsequent deep-learning network from the unit pixel block depth map of Fig. 2 before modification by the method of the embodiment of Fig. 1;
Fig. 5 shows the left/right figures generated from the depth map of Fig. 4;
Fig. 6 is the depth map of the point diagram for Fig. 2 after modification by the method of the embodiment of Fig. 1;
Fig. 7 is the depth map ultimately generated by the subsequent deep-learning network from the depth-format data exported from the point diagram for Fig. 2 after modification by the method of the embodiment of Fig. 1;
Fig. 8 shows the left/right figures generated from the depth map of Fig. 7;
Fig. 9 is a schematic diagram showing, with color points, the positions of the depth points of the point diagram in the method of the embodiment of Fig. 1;
Fig. 10 is a structural schematic diagram of the device, provided by an embodiment of the invention, for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion.
Specific embodiment
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative work fall within the protection scope of the present invention.
The following terms are used in the embodiments of the present invention:
1. Original image: the original picture to be converted into left/right figures.
2. Depth: describes the distance of a pixel from the observer.
3. Depth map: a picture describing the depth of image pixels, represented here as a black-and-white grayscale image.
4. Original image sequence frames: a series of original images in time order.
5. Depth point: records the abscissa, ordinate and depth value of the corresponding original image. Depth points are divided into isolated position depth points, which take effect only in a single frame, and continuous position depth points, which take effect over consecutive frames.
6. Color point: a dot with a distinct color and geometry, used to indicate the position and state of a depth point.
7. Depth-format data: recorded in the format abscissa, ordinate, depth value, with one group of records per frame of the original image.
8. Point diagram: stores and modifies the depth-format data extracted from the unit pixel depth map generated in "A method and system for converting 2D images into 3D images based on deep learning". It contains one group of depth points per frame, one depth map and one group of color points. The depth points record the position depth-format data. The depth map of the point diagram is built by finding, for each of its pixels, the nearest depth point and assigning that point's depth value to the pixel; it embodies the depth information of the depth points. The color points embody the position information and state of the depth points.
9. Left/right figures: pictures for viewing by the left eye and right eye, obtained by displacing the original image's pixels by distances corresponding to the depth map.
10. Life cycle: the frame range in which a depth point of the point diagram takes effect.
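To make glossary items 5, 8 and 10 concrete, the depth point and the point diagram's nearest-point depth map rendering can be sketched as below. This is an illustrative sketch only; the class and function names (`DepthPoint`, `render_depth_map`) and the brute-force nearest-neighbor search are assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class DepthPoint:
    # Abscissa, ordinate and depth value, as in glossary item 5.
    x: float
    y: float
    depth: float
    # Life cycle (glossary item 10): inclusive frame range in which the
    # point takes effect. An isolated point has start_frame == end_frame.
    start_frame: int = 0
    end_frame: int = 0

def render_depth_map(points, width, height):
    """Build the point diagram's depth map (glossary item 8): each pixel
    is assigned the depth of its nearest depth point (brute force)."""
    depth_map = [[0.0] * width for _ in range(height)]
    for row in range(height):
        for col in range(width):
            nearest = min(points, key=lambda p: (p.x - col) ** 2 + (p.y - row) ** 2)
            depth_map[row][col] = nearest.depth
    return depth_map
```

A spatial index would replace the brute-force search in practice; the sketch only shows the nearest-depth-point rule the glossary describes.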
Fig. 1 shows the flow of the method, provided by an embodiment of the invention, for modifying the depth-format data extracted from the unit pixel block depth map in deep-learning-based 2D-to-3D conversion. As shown in Fig. 1, the method of this embodiment includes steps 101-107:
101: Import the depth-format data extracted from the deep-learning network's 2D-to-3D unit pixel block depth map and generate the point diagram, including storing the depth-format data in the depth points of the point diagram, generating the corresponding depth map, and marking the position and state of each position depth point with a color point; see Fig. 9.
In a specific application, the color points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
a yellow square point indicates an isolated position depth point in the point diagram;
a green square point indicates a good part of a continuous position depth point, called a good point; a good point has both position and depth. Here, "position" means the frame itself is a position key frame, or is not but has position key frames before and after it; "depth" means the frame itself is a depth key frame, or is not but has depth key frames before and after it;
a red square point indicates a bad part of a continuous position depth point, called a bad point; a bad point has position but no depth;
a red single arrow indicates that the preceding or following frame (according to the arrow's direction) is a bad point while the current frame is a good point, serving as a prompt; a red single arrow also marks a position depth point that failed to keep up and was rolled back, indicating that tracking did not keep up ahead or behind;
a green double arrow indicates a newly added position depth point whose life cycle covers only the current frame, i.e. an isolated position depth point, serving as a prompt;
a purple square point indicates a selected position depth point; within the point's life cycle, its position in the current frame is shown; beyond the life cycle, its position at the nearest position key frame is shown. This allows position depth points to be moved outside their life cycle: for example, if a depth point whose life cycle covers only the first frame is selected in the first frame and the view is switched to the second frame, the second frame shows the point's position in the first frame.
It should be understood that, since the point diagram contains many points and setting the life cycle of each point manually is heavy labor, the life-cycle range can be indicated by the position key frames with the smallest and largest serial numbers.
102: Receive the first user instruction and, according to it, modify the depth information of position depth points in the point diagram by smearing.
It should be understood that assigning depth is essential to modifying depth. Since there are many position depth points in the point diagram and setting depth point by point is onerous toil, depth values can be assigned with a paintbrush according to the first instruction. A left-button smear assigns the currently selected depth value to the depth points covered by the brush. A right-button smear erases: it restores the depth each point had before the current frame's depth changes. When smearing, the first instruction can select whether to display color points, the grayscale image, the original image, the alternating image (the stereo effect is visible with stereo glasses), the contour map, or a superimposed semi-transparent depth map.
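The paintbrush behavior above (left button assigns, right button restores) can be sketched as below. The function name `smear`, the circular brush, and the backup dictionary are assumptions for illustration; the patent only specifies the assign/restore semantics.

```python
def smear(points, cx, cy, radius, new_depth, erase=False, backup=None):
    """points: list of dicts with 'x', 'y', 'depth' keys.
    Left-button smear (erase=False): every point covered by the brush
    circle gets the currently selected depth; its prior depth is
    remembered in `backup`. Right-button smear (erase=True): restores
    the remembered pre-edit depth."""
    if backup is None:
        backup = {}
    for i, p in enumerate(points):
        if (p['x'] - cx) ** 2 + (p['y'] - cy) ** 2 <= radius ** 2:
            if erase:
                if i in backup:
                    p['depth'] = backup.pop(i)
            else:
                backup.setdefault(i, p['depth'])  # remember for erasing
                p['depth'] = new_depth
    return backup
```

In a real tool the backup would be kept per frame, so that the right button restores only the current frame's changes, as the description requires.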
103: Receive the second user instruction and, according to it, modify the depth and position information of position depth points in the point diagram by image-matching tracking.
It should be understood that smearing can only modify a single frame; when there are many frames or many errors, smearing frame by frame consumes a great deal of labor. Labor cost can be reduced by image-matching tracking. For different forms of depth error, different tracking modes, all based on correlation matching, are proposed in turn. The image-matching tracking modes include: global unidirectional isolated tracking, local unidirectional isolated tracking, global unidirectional continuous tracking, local unidirectional continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
global unidirectional isolated tracking comprises: tracking in one direction, assigning the depth value of the initial position depth point and the tracked position information, deleting the depth points of the tracked frames, and saving as isolated position depth points. Global unidirectional isolated tracking is intended for shots that are stable but have severe inter-frame jitter or wrong depth values, where object depth in the shot hardly changes; this tracking cannot produce transitions;
local unidirectional isolated tracking comprises: tracking in one direction, assigning the depth value of the initial position depth point and the tracked position information, deleting the depth points inside the convex hull formed by the points the tracked frame caught up with, and saving as isolated position depth points. Local unidirectional isolated tracking is intended for shots with a stable scene but severe inter-frame jitter or wrong depth values, where object depth hardly changes; this tracking cannot produce transitions;
global unidirectional continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving as continuous position depth points; points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth information of the nearest depth point is found and assigned as the depth value. Global unidirectional continuous tracking is intended for stable shots with severe inter-frame jitter or wrong depth values, where object depth changes within the shot;
local unidirectional continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the depth points inside the convex hull formed by the points the tracked frame caught up with, and saving as continuous position depth points; points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth of the nearest depth point is found and assigned as the depth value. Local unidirectional continuous tracking is intended for shots with a stable scene but severe inter-frame jitter or wrong depth values, where object depth changes within the shot;
global cross isolated tracking comprises: performing global unidirectional isolated tracking twice, in opposite directions, but deleting depth points only on the first pass. Global cross isolated tracking is intended for shots that are stable or pan in parallel but have severe inter-frame jitter or wrong depth values, where object depth hardly changes; this tracking cannot produce transitions;
global cross continuous tracking comprises: performing global unidirectional continuous tracking twice, in opposite directions, but deleting depth points only on the first pass. Global cross continuous tracking is intended for shots that are stable or pan in parallel but have severe inter-frame jitter or wrong depth values, where object depth changes within the shot.
Related terms used in the image-matching tracking modes:
Unidirectional: tracking runs from the start frame to the target frame;
Cross: tracking runs from the start frame to the target frame, then back from the target frame to the start, with depth information from the two directions complementing each other;
Isolated: the life cycle of the depth point covers only one frame;
Continuous: the life cycle of the depth point spans multiple frames, i.e. the point exists in multiple frames — a continuous position depth point. Non-key frames are obtained by linear interpolation between the surrounding key frames;
Global: all depth points of the current frame are to be tracked;
Local: only the depth points selected in the current frame are to be tracked;
Convex hull: loosely speaking, given a point set in the 2D plane, the convex hull is the convex polygon formed by connecting the outermost points; it contains every point of the set. It is used to delete the depth points within its range, eliminating the influence of incorrect depth points, and may be slightly expanded beforehand;
Rollback: for a continuous position depth point, when the matching score of tracking falls below a preset value, all depth points after the tracking start frame in the tracking direction are deleted.
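The unidirectional tracking loop with rollback defined above can be sketched as below. The match function, the threshold value, and the return convention are assumptions; the patent specifies only that a match score below a preset value triggers rollback of the whole track after the start frame.

```python
MATCH_THRESHOLD = 0.6  # assumed; the patent only says "a preset value"

def track_point_unidirectional(match_fn, start_pos, start_frame, target_frame):
    """Track one depth point from start_frame toward target_frame.
    match_fn(frame, pos) -> (new_pos, score) stands in for the
    image/block correlation-matching step. On a low score, the track is
    rolled back: every position after the start frame is discarded."""
    positions = {start_frame: start_pos}
    pos = start_pos
    step = 1 if target_frame >= start_frame else -1
    for frame in range(start_frame + step, target_frame + step, step):
        pos, score = match_fn(frame, pos)
        if score < MATCH_THRESHOLD:
            # Rollback: keep only the start frame's position.
            return {start_frame: start_pos}, False
        positions[frame] = pos
    return positions, True
```

A "cross" mode would run this twice, once in each direction, merging the two position dictionaries; "global" versus "local" only changes which points the loop is applied to.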
104: Receive the third user instruction and, according to it, move the position depth points in the point diagram that image-matching tracking cannot track.
It should be understood that this step exists because some depth points cannot be caught up by tracking. If they are not moved, these depth points lack correct depth values, which can significantly affect the final result. If it is judged that a point not caught up within the tracked frame range would, given a correct position and depth, significantly affect the final result, the position depth point is moved and assigned; conversely, if the influence of an un-caught point is judged to be small, it need not be operated on.
105. Receive a fourth instruction from the user, select depth points in the point diagram according to the fourth instruction, and delete any non-selected depth point inside the convex hull formed by the selected position depth points over all their life cycles.
It will be appreciated that step 105 avoids the repetitive labor of deleting points frame by frame, improving efficiency.
106. Receive a fifth instruction from the user, select a position in the point diagram according to the fifth instruction, and add a depth point at the selected position.
It will be appreciated that, because the number of depth points in the point diagram is limited, some fine structures may contain no position depth point at all. If such a fine structure is not assigned a correct depth, the overall effect suffers considerably; the point-adding operation is therefore proposed to solve this problem.
107. Export the modified point diagram in the depth format, for use by the above unit pixel block depth map process during the deep-learning-network 2D-to-3D conversion.
An example is shown in Fig. 2 to Fig. 8. Fig. 2 is an original image to be converted to 3D. Fig. 3 is the depth map of the point diagram for Fig. 2 before modification by the method of this embodiment. Fig. 4 is the depth map ultimately generated by the subsequent deep learning network from the prior-art unit pixel block depth map of Fig. 2 before modification by the described method. Fig. 5 is the left-right image pair generated from the depth map of Fig. 4. Fig. 6 is the depth map of the point diagram for Fig. 2 after modification by the method of this embodiment. Fig. 7 is the depth map ultimately generated by the subsequent deep learning network from the depth-format data exported from the point diagram of Fig. 2 after modification by the method of this embodiment. Fig. 8 is the left-right image pair generated from the depth map of Fig. 7. As Fig. 2 to Fig. 8 show, the method described in this embodiment for modifying the depth-format data extracted from the deep-learning-based 2D-to-3D unit pixel block depth map can effectively improve the effect of a shot in 2D-to-3D conversion.
The method of this embodiment for modifying the depth-format data extracted from the deep-learning-based 2D-to-3D unit pixel block depth map can effectively improve the accuracy of the 3D depth image, thereby improving the effect of the shots after 2D-to-3D conversion and meeting the demands of viewers.
Fig. 10 shows a schematic structural diagram of a device, provided by one embodiment of the invention, for modifying the depth-format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map. As shown in Fig. 10, the device of this embodiment comprises: an import module 11, a smearing module 12, a tracking module 13, a moving module 14, a deleting module 15, a point-adding module 16, and an export module 17; wherein:
The import module 11 is configured to import the position depth format data extracted from the unit pixel block depth map of the deep-learning-network 2D-to-3D conversion and generate a point diagram, including storing the depth-format data in the depth points of the point diagram, generating the corresponding depth map, and marking the position and state of each position depth point with a colored point;
The smearing module 12 is configured to receive a first instruction from the user and modify the depth information of the depth points in the point diagram by smearing according to the first instruction;
The tracking module 13 is configured to receive a second instruction from the user and modify the depth and location information of the depth points in the point diagram by the image-matching tracking method according to the second instruction;
The moving module 14 is configured to receive a third instruction from the user and move the depth points in the point diagram that could not be tracked by the image-matching tracking method according to the third instruction;
The deleting module 15 is configured to receive a fourth instruction from the user, select depth points in the point diagram according to the fourth instruction, and delete any non-selected depth point inside the convex hull formed by the selected position depth points over all their life cycles;
The point-adding module 16 is configured to receive a fifth instruction from the user, select a position in the point diagram according to the fifth instruction, and add a depth point at the selected position;
The export module 17 is configured to export the modified point diagram in the depth format, for use by the above unit pixel block depth map process during the deep-learning-network 2D-to-3D conversion.
In a particular application, the colored points include: yellow square points, green square points, red square points, purple square points, red single arrows, and green double arrows;
A yellow square point indicates an isolated position depth point in the point diagram;
A green square point indicates a good part of a continuous position depth point, called a good point; a good point has both position and depth. Having position means that this frame itself is a position key frame, or that this frame is not a position key frame but has position key frames both before and after it; having depth means that this frame itself is a depth key frame, or that this frame is not a depth key frame but has depth key frames both before and after it;
A red square point indicates a bad part of a continuous position depth point, called a bad point; a bad point has only position and no depth;
A red single arrow indicates, according to its direction, that the preceding frame or the following frame is a bad point while the current frame is a good point, serving as a hint; a red single arrow also represents a depth point that was not caught up and was rolled back, indicating that it was not caught up in the preceding or following frames;
A green double arrow indicates a newly added depth point whose life cycle is only the current frame, i.e., an isolated position depth point, serving as a hint;
A purple square point indicates a selected depth point. Within the life cycle of the position depth point, its position in the current frame is displayed; beyond the life cycle, the position at the nearest position key frame is displayed. This makes it possible to move a position depth point outside its life cycle: for example, if a depth point whose life cycle is only the first frame is selected in the first frame and the view is switched to the second frame, the second frame displays the position of that depth point in the first frame.
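The good-point / bad-point rule above (a frame "has position" or "has depth" if it is itself a key frame, or is bracketed by key frames) can be expressed as a small predicate. This is a minimal sketch, assuming key frames are stored as sets of frame indices; the function names are illustrative, not from the patent.

```python
def has_attr(frame, key_frames):
    """A frame 'has' position/depth if it is itself a key frame for that
    attribute, or has such key frames both before and after it."""
    return (frame in key_frames
            or (any(k < frame for k in key_frames)
                and any(k > frame for k in key_frames)))

def point_color(frame, pos_keys, depth_keys):
    """Green square = position and depth (good point);
    red square = position only (bad point)."""
    if has_attr(frame, pos_keys):
        return "green" if has_attr(frame, depth_keys) else "red"
    return None  # no position: the continuous point does not cover this frame
```

For instance, frame 3 lying between position key frames {1, 5} and depth key frames {2, 4} has both attributes, so it is a good (green) point.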
It will be appreciated that, since there are many depth points in the point diagram, manually setting the life cycle of each depth point would be onerous labor, so the range of a life cycle can be indicated by the position key frames with the smallest and largest serial numbers.
In a particular application, the smearing module 12 may also be configured to:
choose, according to the first instruction, whether to display the colored points, and whether to display the grayscale image, the original image, the alternating image, or the contour map, and whether to superimpose a semi-transparent depth map.
It will be appreciated that smearing can only modify a single frame; when there are many frames or many errors, smearing frame by frame consumes considerable labor. Labor cost can be reduced by image-matching tracking. For different forms of depth error, different tracking modes based on correlation matching are proposed in turn. In a particular application, the image-matching tracking modes include: global unidirectional isolated tracking, local unidirectional isolated tracking, global unidirectional continuous tracking, local unidirectional continuous tracking, global intersect isolated tracking, and global intersect continuous tracking; wherein:
The global unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the initial position depth point together with the location information obtained by tracking, deleting the depth points of the tracked frames, and saving the result as isolated position depth points. Global unidirectional isolated tracking is suitable for shots with a stable camera but severe inter-frame jitter or wrong depth values, where the object depth in the shot hardly changes; this tracking cannot produce transitions;
The local unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the initial position depth point together with the location information obtained by tracking, deleting the depth points within the convex hull formed by the depth points caught up in the tracked frames, and saving the result as isolated position depth points. Local unidirectional isolated tracking is suitable for shots with a stable scene but severe inter-frame jitter or wrong depth values, where the object depth in the shot hardly changes; this tracking cannot produce transitions;
The global unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving the result as continuous position depth points; depth points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth information of the nearest depth point is found to assign the depth value. Global unidirectional continuous tracking is suitable for shots with a stable camera but severe inter-frame jitter or wrong depth values, where the object depth in the shot changes;
The local unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the depth points within the convex hull formed by the depth points caught up in the tracked frames, and saving the result as continuous position depth points; position depth points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth of the nearest depth point is found to assign the depth value. Local unidirectional continuous tracking is suitable for shots with a stable scene but severe inter-frame jitter or wrong depth values, where the object depth in the shot changes;
The global intersect isolated tracking comprises: performing global unidirectional isolated tracking twice, but deleting depth points only in the first pass. Global intersect isolated tracking is suitable for shots that are stable or pan in parallel but have severe inter-frame jitter or wrong depth values, where the object depth in the shot hardly changes; this tracking cannot produce transitions;
The global intersect continuous tracking comprises: performing global unidirectional continuous tracking twice, but deleting depth points only in the first pass. Global intersect continuous tracking is suitable for shots that are stable or pan in parallel but have severe inter-frame jitter or wrong depth values, where the object depth in the shot changes.
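The unidirectional, rollback, and intersect behaviours of these tracking modes can be sketched on toy data. The per-frame match scores and the fixed threshold below are stand-in assumptions for real image matching (the patent does not specify the matcher); the function names are illustrative only.

```python
THRESHOLD = 0.5  # assumed preset matching degree below which tracking fails

def track(scores, start, target, rollback=False):
    """Advance frame by frame from start towards target; stop at the first
    frame whose match score is below the threshold. With rollback=True, a
    failure discards every frame tracked after the start frame, which is
    the rollback rule described above."""
    step = 1 if target >= start else -1
    tracked = []
    for f in range(start + step, target + step, step):
        if scores[f] < THRESHOLD:
            return [] if rollback else tracked
        tracked.append(f)
    return tracked

def track_intersect(scores, start, target):
    """Intersect tracking: one pass start->target and one target->start;
    frames reached from either direction contribute depth information."""
    reached = set(track(scores, start, target)) | set(track(scores, target, start))
    return sorted(reached)
```

With scores `[0.9, 0.8, 0.3, 0.8, 0.9]` (frame 2 unreachable, e.g. an occlusion), a single forward pass from frame 0 reaches only frame 1, while intersect tracking also recovers frame 3 from the backward pass, illustrating how the two directions complement each other.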
The terms used in the image-matching tracking modes above are defined as follows:
Unidirectional: tracking runs from the start frame to the target frame;
Intersect: tracking runs from the start frame to the target frame, and then back from the target frame, so that depth information from the two directions complements each other;
Isolated: the life cycle of a depth point spans only a single frame;
Continuous: the life cycle of a depth point spans multiple frames; that is, the same point exists in several frames, forming a continuous position depth point. For non-key frames, the nearest key frames are found and a linear transition is made between them;
Global: all depth points of the current frame are the points to be tracked;
Local: only the depth points selected in the current frame are the points to be tracked;
Convex hull: loosely speaking, given a point set on a two-dimensional plane, the convex hull is the convex polygon formed by connecting the outermost points, and it encloses every point in the set. It is used to delete the depth points that fall within its range, eliminating the influence of incorrect depth points; the hull may be expanded slightly beforehand;
Rollback: for a continuous position depth point, when the matching degree of the tracking falls below a preset value, all depth points after the tracking start frame in the tracking direction are deleted.
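The "continuous" definition above says non-key frames take a linear transition between the key frames found around them. A minimal sketch of that interpolation, assuming key-frame depths are stored as a `{frame_index: depth}` mapping (an illustrative representation, not from the patent):

```python
def depth_at(frame, key_depths):
    """Linearly interpolate depth for a non-key frame from the nearest
    key frames before and after it; clamp at the ends of the range."""
    keys = sorted(key_depths)
    if frame <= keys[0]:
        return key_depths[keys[0]]
    if frame >= keys[-1]:
        return key_depths[keys[-1]]
    if frame in key_depths:
        return key_depths[frame]
    prev = max(k for k in keys if k < frame)
    nxt = min(k for k in keys if k > frame)
    t = (frame - prev) / (nxt - prev)
    return key_depths[prev] + t * (key_depths[nxt] - key_depths[prev])
```

For example, with key-frame depths `{0: 10.0, 10: 20.0}`, frame 5 interpolates to a depth of 15.0.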
The device of this embodiment for modifying the depth-format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, applied in a processor, can effectively improve the accuracy of the 3D depth image, thereby improving the effect of the shots after 2D-to-3D conversion and meeting the demands of viewers.
The device described in this embodiment can be used to execute the above method embodiments; its principle and technical effect are similar and are not repeated here.
It should be noted that, since the device embodiments are basically similar to the method embodiments, they are described relatively simply; for relevant parts, refer to the description of the method embodiments.
Those of ordinary skill in the art will appreciate that the instructions in the above embodiments may be combined arbitrarily, and that all or part of the steps of the above method embodiments may be implemented by hardware associated with program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features; such modifications or replacements do not take the essence of the corresponding technical solutions outside the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. A method for modifying depth-format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, characterized by comprising:
importing the depth-format data extracted from the unit pixel block depth map of the deep-learning-network 2D-to-3D conversion and generating a point diagram, including storing the depth-format data in the depth points of the point diagram, generating the corresponding depth map, and marking the position and state of each position depth point with a colored point;
receiving a first instruction from the user, and modifying the depth information of the depth points in the point diagram by smearing according to the first instruction, comprising: assigning depth values with a paintbrush according to the first instruction, wherein left-button smearing assigns the currently selected depth value to the depth points covered by the paintbrush, and right-button smearing acts as an eraser, restoring the depth of the depth points of the current frame to the values they had before the depth was changed;
receiving a second instruction from the user, and modifying the depth and location information of the position depth points in the point diagram by the image-matching tracking method according to the second instruction;
receiving a third instruction from the user, and moving the depth points in the point diagram that could not be tracked by the image-matching tracking method according to the third instruction;
receiving a fourth instruction from the user, selecting depth points in the point diagram according to the fourth instruction, and deleting any non-selected depth point inside the convex hull formed by the selected depth points over all their life cycles;
receiving a fifth instruction from the user, selecting a position in the point diagram according to the fifth instruction, and adding a position depth point at the selected position;
exporting the modified point diagram as depth-format data, for use by the above unit pixel block depth map process during the deep-learning-network 2D-to-3D conversion;
wherein the colored points include: yellow square points, green square points, red square points, purple square points, red single arrows, and green double arrows;
a yellow square point indicates an isolated position depth point in the point diagram;
a green square point indicates a good part of a continuous position depth point, called a good point; a good point has both position and depth;
a red square point indicates a bad part of a continuous position depth point, called a bad point; a bad point has only position and no depth;
a red single arrow indicates, according to its direction, that the preceding or following frame is a bad point while the current frame is a good point; a red single arrow also represents a depth point that was not caught up and was rolled back, indicating that it was not caught up in the preceding or following frames;
a green double arrow indicates a newly added depth point whose life cycle is only the current frame, i.e., an isolated position depth point;
a purple square point indicates a selected depth point; within the life cycle of the position depth point, its position in the current frame is displayed, and beyond the life cycle, the position at the nearest position key frame is displayed;
wherein having position means that this frame itself is a position key frame, or that this frame is not a position key frame but has position key frames both before and after it; having depth means that this frame itself is a depth key frame, or that this frame is not a depth key frame but has depth key frames both before and after it.
2. The method according to claim 1, characterized in that, when modifying the depth of the depth points in the point diagram by smearing according to the first instruction, the method further comprises:
choosing, according to the first instruction, whether to display the colored points, and whether to display the grayscale image, the original image, the alternating image, or the contour map, and whether to superimpose a semi-transparent depth map.
3. The method according to claim 1, characterized in that the image-matching tracking modes comprise: global unidirectional isolated tracking, local unidirectional isolated tracking, global unidirectional continuous tracking, local unidirectional continuous tracking, global intersect isolated tracking, and global intersect continuous tracking; wherein:
the global unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the initial position depth point together with the location information obtained by tracking, deleting the depth points of the tracked frames, and saving the result as isolated position depth points;
the local unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the initial position depth point together with the location information obtained by tracking, deleting the depth points within the convex hull formed by the depth points caught up in the tracked frames, and saving the result as isolated position depth points;
the global unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving the result as continuous position depth points, wherein depth points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth information of the nearest depth point is found to assign the depth value;
the local unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the depth points within the convex hull formed by the depth points caught up in the tracked frames, and saving the result as continuous position depth points, wherein position depth points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth of the nearest depth point is found to assign the depth value;
the global intersect isolated tracking comprises: performing global unidirectional isolated tracking twice, but deleting position depth points only in the first pass;
the global intersect continuous tracking comprises: performing global unidirectional continuous tracking twice, but deleting position depth points only in the first pass.
4. A device for modifying depth-format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, characterized by comprising:
an import module, configured to import the depth-format data extracted from the unit pixel block depth map of the deep-learning-network 2D-to-3D conversion and generate a point diagram, including storing the depth-format data in the depth points of the point diagram, generating the corresponding depth map, and marking the position and state of each position depth point with a colored point;
a smearing module, configured to receive a first instruction from the user and modify the depth information of the position depth points in the point diagram by smearing according to the first instruction, comprising: assigning depth values with a paintbrush according to the first instruction, wherein left-button smearing assigns the currently selected depth value to the depth points covered by the paintbrush, and right-button smearing acts as an eraser, restoring the depth of the depth points of the current frame to the values they had before the depth was changed;
a tracking module, configured to receive a second instruction from the user and modify the depth and location information of the depth points in the point diagram by the image-matching tracking method according to the second instruction;
a moving module, configured to receive a third instruction from the user and move the depth points in the point diagram that could not be tracked by the image-matching tracking method according to the third instruction;
a deleting module, configured to receive a fourth instruction from the user, select depth points in the point diagram according to the fourth instruction, and delete any non-selected position depth point inside the convex hull formed by the selected position depth points over all their life cycles;
a point-adding module, configured to receive a fifth instruction from the user, select a position in the point diagram according to the fifth instruction, and add a depth point at the selected position;
an export module, configured to export the modified point diagram in the depth format, for use by the above unit pixel block depth map process during the deep-learning-network 2D-to-3D conversion;
wherein the colored points include: yellow square points, green square points, red square points, purple square points, red single arrows, and green double arrows;
a yellow square point indicates an isolated position depth point in the point diagram;
a green square point indicates a good part of a continuous position depth point, called a good point; a good point has both position and depth;
a red square point indicates a bad part of a continuous position depth point, called a bad point; a bad point has only position and no depth;
a red single arrow indicates, according to its direction, that the preceding or following frame is a bad point while the current frame is a good point; a red single arrow also represents a depth point that was not caught up and was rolled back, indicating that it was not caught up in the preceding or following frames;
a green double arrow indicates a newly added depth point whose life cycle is only the current frame, i.e., an isolated position depth point;
a purple square point indicates a selected depth point; within the life cycle of the position depth point, its position in the current frame is displayed, and beyond the life cycle, the position at the nearest position key frame is displayed;
wherein having position means that this frame itself is a position key frame, or that this frame is not a position key frame but has position key frames both before and after it; having depth means that this frame itself is a depth key frame, or that this frame is not a depth key frame but has depth key frames both before and after it.
5. The device according to claim 4, characterized in that the smearing module is further configured to:
choose, according to the first instruction, whether to display the colored points, and whether to display the grayscale image, the original image, the alternating image, or the contour map, and whether to superimpose a semi-transparent depth map.
6. The device according to claim 4, characterized in that the image-matching tracking modes comprise: global unidirectional isolated tracking, local unidirectional isolated tracking, global unidirectional continuous tracking, local unidirectional continuous tracking, global intersect isolated tracking, and global intersect continuous tracking; wherein:
the global unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the initial position depth point together with the location information obtained by tracking, deleting the depth points of the tracked frames, and saving the result as isolated position depth points;
the local unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the initial position depth point together with the location information obtained by tracking, deleting the depth points within the convex hull formed by the depth points caught up in the tracked frames, and saving the result as isolated position depth points;
the global unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving the result as continuous position depth points, wherein depth points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth information of the nearest depth point is found to assign the depth value;
the local unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the depth points within the convex hull formed by the depth points caught up in the tracked frames, and saving the result as continuous position depth points, wherein position depth points that are not caught up are rolled back to the start, a red single arrow marks the tracking direction, and at the end of tracking the depth of the nearest depth point is found to assign the depth value;
the global intersect isolated tracking comprises: performing global unidirectional isolated tracking twice, but deleting position depth points only in the first pass;
the global intersect continuous tracking comprises: performing global unidirectional continuous tracking twice, but deleting position depth points only in the first pass.
CN201610397022.5A 2016-06-07 2016-06-07 Deep learning 2D turns 3D unit pixel block depth map amending method and device Active CN106056616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610397022.5A CN106056616B (en) 2016-06-07 2016-06-07 Deep learning 2D turns 3D unit pixel block depth map amending method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610397022.5A CN106056616B (en) 2016-06-07 2016-06-07 Deep learning 2D turns 3D unit pixel block depth map amending method and device

Publications (2)

Publication Number Publication Date
CN106056616A CN106056616A (en) 2016-10-26
CN106056616B true CN106056616B (en) 2019-02-26

Family

ID=57170452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610397022.5A Active CN106056616B (en) 2016-06-07 2016-06-07 Deep learning 2D turns 3D unit pixel block depth map amending method and device

Country Status (1)

Country Link
CN (1) CN106056616B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177440A (en) * 2012-12-20 2013-06-26 香港应用科技研究院有限公司 System and method of generating image depth map
CN104243948A (en) * 2013-12-20 2014-12-24 深圳深讯和科技有限公司 Depth adjusting method and device for converting 2D image to 3D image
CN104639930A (en) * 2013-11-13 2015-05-20 三星电子株式会社 Multi-view image display apparatus and multi-view image display method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8213711B2 (en) * 2007-04-03 2012-07-03 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada Method and graphical user interface for modifying depth maps
US8848038B2 (en) * 2010-07-09 2014-09-30 Lg Electronics Inc. Method and device for converting 3D images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177440A (en) * 2012-12-20 2013-06-26 香港应用科技研究院有限公司 System and method of generating image depth map
CN104639930A (en) * 2013-11-13 2015-05-20 三星电子株式会社 Multi-view image display apparatus and multi-view image display method thereof
CN104243948A (en) * 2013-12-20 2014-12-24 深圳深讯和科技有限公司 Depth adjusting method and device for converting 2D image to 3D image

Also Published As

Publication number Publication date
CN106056616A (en) 2016-10-26

Similar Documents

Publication Publication Date Title
CN101479765B (en) Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
CN104408701B (en) A kind of large scene video image joining method
US9237330B2 (en) Forming a stereoscopic video
CN100355272C (en) Synthesis method of virtual viewpoint in interactive multi-viewpoint video system
CN103703489B (en) Object digitized
US9544576B2 (en) 3D photo creation system and method
Radke Computer vision for visual effects
US20090219383A1 (en) Image depth augmentation system and method
CN101394573B (en) Panoramagram generation method and system based on characteristic matching
US10949700B2 (en) Depth based image searching
CN104159093B (en) The time domain consistence hole region method for repairing and mending of the static scene video of moving camera shooting
CN103856727A (en) Multichannel real-time video splicing processing system
CN103019643A (en) Method for automatic correction and tiled display of plug-and-play large screen projections
CN108154514A (en) Image processing method, device and equipment
CN111091151B (en) Construction method of generation countermeasure network for target detection data enhancement
CN108648264A (en) Underwater scene method for reconstructing based on exercise recovery and storage medium
Bleyer et al. A stereo approach that handles the matting problem via image warping
Russell et al. Dense non-rigid structure from motion
CN103443826A (en) Mesh animation
CN107689050A (en) A kind of depth image top sampling method based on Color Image Edge guiding
CN106296574A (en) 3-d photographs generates method and apparatus
Ramirez et al. Open challenges in deep stereo: the booster dataset
CN104159098B (en) Temporally consistent translucent edge extraction method for video
Wang et al. JAWS: just a wild shot for cinematic transfer in neural radiance fields
US20110149039A1 (en) Device and method for producing new 3-d video representation from 2-d video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20161215

Address after: 100024 Beijing City, Chaoyang District, Five Mile Bridge No. 1 Street, building 5, building 4, floor 1

Applicant after: BEIJING JULI DIMENSION TECHNOLOGY CO.,LTD.

Address before: 100024 Beijing City, Chaoyang District, Five Mile Bridge No. 1 Street, building 5, building 4, floor 1

Applicant before: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

TA01 Transfer of patent application right

Effective date of registration: 20190102

Address after: Room 408-409, F-4 R&D Center, 4th floor, No. 1 Hospital, Wuliqiao First Street, Chaoyang District, Beijing, 100024

Applicant after: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

Address before: 100024 Fourth Floor, Building 5, Courtyard 1, Wuliqiao First Street, Chaoyang District, Beijing

Applicant before: BEIJING JULI DIMENSION TECHNOLOGY CO.,LTD.

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Depth map modification method and device for 2D-to-3D unit pixel blocks in deep learning

Effective date of registration: 20201021

Granted publication date: 20190226

Pledgee: Hubble Technology Investment Ltd.

Pledgor: TWELVE DIMENSION (BEIJING) TECHNOLOGY Co.,Ltd.

Registration number: Y2020990001241

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230217

Granted publication date: 20190226

Pledgee: Hubble Technology Investment Ltd.

Pledgor: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

Registration number: Y2020990001241

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20161026

Assignee: BEIJING JULI DIMENSION TECHNOLOGY CO.,LTD.

Assignor: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

Contract record no.: X2023980051328

Denomination of invention: Method and device for modifying depth maps of 2D to 3D unit pixel blocks in deep learning

Granted publication date: 20190226

License type: Common License

Record date: 20231211
