CN106056616A - 2D-to-3D unit pixel block depth map modification method and device based on deep learning - Google Patents


Publication number
CN106056616A
Authority
CN (China)
Prior art keywords
depth, point, trail, followed, frame
Legal status
Granted
Application number
CN201610397022.5A
Other languages
Chinese (zh)
Other versions
CN106056616B (en)
Inventors
赵天奇, 渠源, 李桂楠
Current Assignee
Twelve Dimension Beijing Technology Co ltd
Original Assignee
Twelve Dimension Beijing Technology Co ltd
Application filed by Twelve Dimension Beijing Technology Co ltd
Priority to CN201610397022.5A (patent CN106056616B)
Publication of CN106056616A; application granted; publication of CN106056616B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a 2D-to-3D unit pixel block depth map modification method and device based on deep learning. The method comprises the following steps: importing position-depth format data extracted from a deep learning network 2D-to-3D unit pixel block depth map and generating a point diagram; receiving a first instruction of a user and modifying the depth information of position depth points in the point diagram by painting; receiving a second instruction of the user and modifying the depth and position information of the position depth points in the point diagram by image matching tracking; receiving a third instruction of the user and moving the position depth points that cannot be tracked by image matching tracking; receiving a fourth instruction of the user, selecting position depth points in the point diagram and deleting the non-selected points inside the convex hull formed by the selected points over all their life cycles; receiving a fifth instruction of the user, selecting a position in the point diagram and adding a point at that position; and exporting the modified point diagram in position-depth format to serve the stage of the deep learning network 2D-to-3D pipeline that uses the unit pixel block depth map. The method and device can improve the accuracy of the 3D depth image and thereby meet the demands of audiences.

Description

Deep-learning-based 2D-to-3D unit pixel block depth map modification method and device
Technical field
The present invention relates to the field of 3D, and in particular to a method and device for modifying position-depth format data extracted from a unit pixel block depth map in deep-learning-based 2D-to-3D conversion.
Background technology
At present, deep learning is developing rapidly and has produced gratifying results in many fields. The existing patent "Method and system for converting 2D images into 3D images based on deep learning" applies deep learning to the 2D-to-3D field: by learning from existing 3D films, 2D films are converted into 3D. This existing 2D-to-3D technique uses a deep learning network to output a unit pixel block depth map from a 2D single-view image (namely the original image to be converted into 3D), and then obtains the 3D image corresponding to the 2D single-view image through a shader or the like. Its effect is significantly better than that of traditional automatic 2D-to-3D conversion. However, the accuracy of the 3D depth image produced by this method still cannot make the effect of every shot meet the demands of audiences.
In view of this, how to improve the accuracy of the 3D depth image, so that the effect of shots in 2D-to-3D conversion satisfies audiences, is a technical problem that currently needs to be solved.
Summary of the invention
To solve the above technical problem, the present invention provides a method and device for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, which can effectively improve the accuracy of the 3D depth image, improve the effect of shots after 2D-to-3D conversion, and meet the demands of audiences.
In a first aspect, the present invention provides a method for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, comprising:
importing position-depth format data extracted from the unit pixel block depth map of a deep learning network 2D-to-3D conversion and generating a point diagram, including storing the position-depth format data in the position depth points of the point diagram, generating a corresponding depth map, and marking the positions and states of the position depth points with color points;
receiving a first instruction of a user and, according to the first instruction, modifying the depth information of the position depth points in the point diagram by painting;
receiving a second instruction of the user and, according to the second instruction, modifying the depth and position information of the position depth points in the point diagram by means of image matching tracking;
receiving a third instruction of the user and, according to the third instruction, moving position depth points in the point diagram that cannot be tracked by image matching tracking;
receiving a fourth instruction of the user and, according to the fourth instruction, selecting position depth points in the point diagram and deleting any non-selected position depth point inside the convex hull formed by the selected position depth points over all their life cycles;
receiving a fifth instruction of the user, selecting a position in the point diagram according to the fifth instruction, and adding a position depth point at the selected position;
exporting the modified point diagram as position-depth format data, for use by the stage of the deep learning network 2D-to-3D pipeline that consumes the above unit pixel block depth map.
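The position-depth format and point diagram handled by the import step above can be sketched as follows. The record layout and the per-frame mapping are illustrative assumptions, not the patent's actual file format.

```python
from dataclasses import dataclass

# Hypothetical record for one position depth point: abscissa, ordinate,
# depth value, plus the frame range (life cycle) over which the point works.
@dataclass
class PositionDepthPoint:
    x: float
    y: float
    depth: float
    first_frame: int = 0
    last_frame: int = 0  # equal to first_frame for an isolated point

    @property
    def is_isolated(self) -> bool:
        return self.first_frame == self.last_frame

# Import position-depth format data -- assumed here to be a mapping from
# frame index to (x, y, depth) triples -- into a per-frame point diagram.
def import_point_diagram(data: dict[int, list[tuple[float, float, float]]]):
    diagram: dict[int, list[PositionDepthPoint]] = {}
    for frame, triples in data.items():
        diagram[frame] = [
            PositionDepthPoint(x, y, d, first_frame=frame, last_frame=frame)
            for (x, y, d) in triples
        ]
    return diagram

raw = {0: [(10.0, 20.0, 0.5), (30.0, 40.0, 0.8)], 1: [(12.0, 22.0, 0.5)]}
diagram = import_point_diagram(raw)
```

Exporting would simply reverse this mapping, writing each point's abscissa, ordinate and depth value back out per frame.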
Optionally, the color points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
the yellow square point represents an isolated position depth point in the point diagram;
the green square point represents a good part of a continuous position depth point in the point diagram, called a good point; a good point has both a position and a depth;
the red square point represents a bad part of a continuous position depth point in the point diagram, called a bad point; a bad point has only a position and no depth;
the red single arrow indicates that, according to its direction, the preceding frame or the following frame is a bad point while the current frame is a good point; the red single arrow also represents a position depth point rolled back because tracking did not catch up, indicating that the preceding or following frame was not caught up with;
the green double arrow represents a newly added position depth point whose life cycle covers only the current frame, i.e., an isolated position depth point;
the purple square point represents a selected position depth point; if the current frame is within the life cycle of the position depth point, its position in the current frame is displayed; if the current frame is beyond the life cycle, its position at the nearest position key frame is displayed;
wherein having a position means that the frame itself is a position key frame, or that the frame is not a position key frame but there are position key frames before and after it; having a depth means that the frame itself is a depth key frame, or that the frame is not a depth key frame but there are depth key frames before and after it.
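The square-point part of the marker scheme above amounts to a small state-to-color mapping, sketched below. The boolean state flags are hypothetical; the patent does not specify how point state is stored, and the arrow markers (hints about neighboring frames) are left out.

```python
# Map a position depth point's state at the current frame to its square-point
# or double-arrow marker, per the scheme above. Flag names are assumptions.
def marker_for(is_isolated: bool, newly_added: bool, selected: bool,
               has_position: bool, has_depth: bool) -> str:
    if selected:
        return "purple square"
    if newly_added:
        return "green double arrow"  # life cycle covers only the current frame
    if is_isolated:
        return "yellow square"
    # Continuous point: good (position and depth) vs. bad (position only).
    if has_position and has_depth:
        return "green square"
    return "red square"
```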
Optionally, when modifying the depth information of the position depth points in the point diagram by painting according to the first instruction, the method further comprises:
choosing, according to the first instruction, whether to display the color points, whether to display the grayscale map, whether to display the original image, whether to display the interleaved image, whether to display the contour map, and whether to superimpose a semi-transparent depth map.
Optionally, the image matching tracking modes include: global one-way isolated tracking, local one-way isolated tracking, global one-way continuous tracking, local one-way continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
the global one-way isolated tracking comprises: tracking in one direction, assigning the depth value of the starting position depth point and the tracked position information, deleting the position depth points of the tracked frames, and saving the results as isolated position depth points;
the local one-way isolated tracking comprises: tracking in one direction, assigning the depth value of the starting position depth point and the tracked position information, deleting the position depth points within the convex hull formed by the position depth points caught up with in the tracked frames, and saving the results as isolated position depth points;
the global one-way continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving the results as continuous position depth points; position depth points that are not caught up with roll back to the start frame and are marked with a red single arrow in the tracking direction; at the end of tracking, the depth information of the nearest position depth point is found and assigned as the depth value;
the local one-way continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the position depth points within the convex hull formed by the position depth points caught up with in the tracked frames, and saving the results as continuous position depth points; position depth points that are not caught up with roll back to the start frame and are marked with a red single arrow in the tracking direction; at the end of tracking, the depth of the nearest position depth point is found and assigned as the depth value;
the global cross isolated tracking comprises: performing global one-way isolated tracking twice, but deleting position depth points only on the first pass;
the global cross continuous tracking comprises: performing global one-way continuous tracking twice, but deleting position depth points only on the first pass.
In a second aspect, the present invention provides a device for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, comprising:
an import module, for importing position-depth format data extracted from the unit pixel block depth map of a deep learning network 2D-to-3D conversion and generating a point diagram, including storing the position-depth format data in the position depth points of the point diagram, generating a corresponding depth map, and marking the positions and states of the position depth points with color points;
a painting module, for receiving a first instruction of a user and, according to the first instruction, modifying the depth information of the position depth points in the point diagram by painting;
a tracking module, for receiving a second instruction of the user and, according to the second instruction, modifying the depth and position information of the position depth points in the point diagram by means of image matching tracking;
a moving module, for receiving a third instruction of the user and, according to the third instruction, moving position depth points in the point diagram that cannot be tracked by image matching tracking;
a deletion module, for receiving a fourth instruction of the user and, according to the fourth instruction, selecting position depth points in the point diagram and deleting any non-selected position depth point inside the convex hull formed by the selected position depth points over all their life cycles;
a point-adding module, for receiving a fifth instruction of the user, selecting a position in the point diagram according to the fifth instruction, and adding a position depth point at the selected position;
an export module, for exporting the modified point diagram in position-depth format, for use by the stage of the deep learning network 2D-to-3D pipeline that consumes the above unit pixel block depth map.
Optionally, the color points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
the yellow square point represents an isolated position depth point in the point diagram;
the green square point represents a good part of a continuous position depth point in the point diagram, called a good point; a good point has both a position and a depth;
the red square point represents a bad part of a continuous position depth point in the point diagram, called a bad point; a bad point has only a position and no depth;
the red single arrow indicates that, according to its direction, the preceding frame or the following frame is a bad point while the current frame is a good point; the red single arrow also represents a position depth point rolled back because tracking did not catch up, indicating that the preceding or following frame was not caught up with;
the green double arrow represents a newly added position depth point whose life cycle covers only the current frame, i.e., an isolated position depth point;
the purple square point represents a selected position depth point; if the current frame is within the life cycle of the position depth point, its position in the current frame is displayed; if the current frame is beyond the life cycle, its position at the nearest position key frame is displayed;
wherein having a position means that the frame itself is a position key frame, or that the frame is not a position key frame but there are position key frames before and after it; having a depth means that the frame itself is a depth key frame, or that the frame is not a depth key frame but there are depth key frames before and after it.
Optionally, the painting module is further configured to
choose, according to the first instruction, whether to display the color points, whether to display the grayscale map, whether to display the original image, whether to display the interleaved image, whether to display the contour map, and whether to superimpose a semi-transparent depth map.
Optionally, the image matching tracking modes include: global one-way isolated tracking, local one-way isolated tracking, global one-way continuous tracking, local one-way continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
the global one-way isolated tracking comprises: tracking in one direction, assigning the depth value of the starting position depth point and the tracked position information, deleting the position depth points of the tracked frames, and saving the results as isolated position depth points;
the local one-way isolated tracking comprises: tracking in one direction, assigning the depth value of the starting position depth point and the tracked position information, deleting the position depth points within the convex hull formed by the position depth points caught up with in the tracked frames, and saving the results as isolated position depth points;
the global one-way continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving the results as continuous position depth points; position depth points that are not caught up with roll back to the start frame and are marked with a red single arrow in the tracking direction; at the end of tracking, the depth information of the nearest position depth point is found and assigned as the depth value;
the local one-way continuous tracking comprises: tracking in one direction, assigning the tracked positions, deleting the position depth points within the convex hull formed by the position depth points caught up with in the tracked frames, and saving the results as continuous position depth points; position depth points that are not caught up with roll back to the start frame and are marked with a red single arrow in the tracking direction; at the end of tracking, the depth of the nearest position depth point is found and assigned as the depth value;
the global cross isolated tracking comprises: performing global one-way isolated tracking twice, but deleting position depth points only on the first pass;
the global cross continuous tracking comprises: performing global one-way continuous tracking twice, but deleting position depth points only on the first pass.
It can be seen from the above technical solutions that the method and device of the present invention for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map can effectively improve the accuracy of the 3D depth image, improve the effect of shots after 2D-to-3D conversion, and meet the demands of audiences.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the method, provided by an embodiment of the present invention, for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map;
Fig. 2 is an original image to be converted into stereo, provided by an embodiment of the present invention;
Fig. 3 is the depth map of the point diagram of Fig. 2 before modification by the method of the embodiment shown in Fig. 1;
Fig. 4 is the depth map finally generated by the deep learning network from the prior-art unit pixel block depth map of Fig. 2 before modification by the method of the embodiment shown in Fig. 1;
Fig. 5 is the left-right image pair generated using the depth map of Fig. 4;
Fig. 6 is the depth map of the point diagram of Fig. 2 after modification by the method of the embodiment shown in Fig. 1;
Fig. 7 is the depth map finally generated by the deep learning network from the position-depth format data exported from the point diagram of Fig. 2 after modification by the method of the embodiment shown in Fig. 1;
Fig. 8 is the left-right image pair generated using the depth map of Fig. 7;
Fig. 9 is a schematic diagram of marking the positions of the position depth points of the point diagram with color points in the method of the embodiment shown in Fig. 1;
Fig. 10 is a schematic structural diagram of the device, provided by an embodiment of the present invention, for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map.
Detailed description of the invention
To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
The following related terms are used in the embodiments of the present invention:
1. Original image: the original picture that needs to be converted into a left-right image pair.
2. Depth: describes the distance of a pixel from the observer.
3. Depth map: a picture describing the depth of the image pixels, represented here as a black-and-white grayscale map.
4. Original image sequence frames: a series of original images in temporal order.
5. Position depth point: records the corresponding abscissa, ordinate and depth value of the original image. Divided into isolated position depth points and continuous position depth points. An isolated position depth point works only at a certain frame; a continuous position depth point works over multiple consecutive frames.
6. Color point: a point with a distinct color and geometric shape, used to indicate the position and state of a position depth point.
7. Position-depth format data: records, in the form of abscissa, ordinate and depth value, the group of abscissas, ordinates and depth values corresponding to each marked frame of the original image.
8. Point diagram: used for storing and modifying the position-depth format data extracted from the unit pixel depth map generated in "Method and system for converting 2D images into 3D images based on deep learning". It comprises a group of position depth points for each frame, a depth map and a group of color points. The position depth points of the point diagram record the position-depth format data. The depth map of the point diagram is obtained by finding, for each of its pixels, the nearest position depth point and assigning the depth value of that nearest point to the pixel; it embodies the depth information of the position depth points. The color points embody the position information and state of the position depth points.
9. Left-right image pair: pictures for viewing by the left eye and the right eye, obtained by displacing the original image pixels by the corresponding distance according to the depth map.
10. Life cycle: the range of frames over which a position depth point in the point diagram works.
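The point diagram's depth map, as defined in item 8 above, assigns each pixel the depth of its nearest position depth point. A minimal brute-force sketch of that fill (a spatial index such as a k-d tree would be the practical choice at image scale):

```python
import numpy as np

# Build the point diagram's depth map: each pixel takes the depth value of
# its nearest position depth point.
def fill_depth_map(height, width, points):
    """points: list of (x, y, depth) triples, with x = column, y = row."""
    ys, xs = np.mgrid[0:height, 0:width]
    best_d2 = np.full((height, width), np.inf)  # nearest squared distance so far
    depth = np.zeros((height, width))
    for (px, py, pd) in points:
        d2 = (xs - px) ** 2 + (ys - py) ** 2
        closer = d2 < best_d2          # pixels for which this point is nearest
        depth[closer] = pd
        best_d2[closer] = d2[closer]
    return depth

dm = fill_depth_map(4, 4, [(0.0, 0.0, 0.2), (3.0, 3.0, 0.9)])
```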
Fig. 1 shows a schematic flowchart of the method, provided by an embodiment of the present invention, for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map. As shown in Fig. 1, the method of the present embodiment comprises steps 101-107:
101. Import the position-depth format data extracted from the unit pixel block depth map of the deep learning network 2D-to-3D conversion and generate a point diagram, including storing the position-depth format data in the position depth points of the point diagram, generating a corresponding depth map, and marking the positions and states of the position depth points with color points; refer to Fig. 9.
In a particular application, the color points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
the yellow square point represents an isolated position depth point in the point diagram;
the green square point represents a good part of a continuous position depth point in the point diagram, called a good point; a good point has both a position and a depth; having a position means that the frame itself is a position key frame, or that the frame is not a position key frame but there are position key frames before and after it; having a depth means that the frame itself is a depth key frame, or that the frame is not a depth key frame but there are depth key frames before and after it;
the red square point represents a bad part of a continuous position depth point in the point diagram, called a bad point; a bad point has only a position and no depth;
the red single arrow indicates that, according to its direction, the preceding frame or the following frame is a bad point while the current frame is a good point, serving as a hint; the red single arrow also represents a position depth point rolled back because tracking did not catch up, indicating that the preceding or following frame was not caught up with;
the green double arrow represents a newly added position depth point whose life cycle covers only the current frame, i.e., an isolated position depth point, serving as a hint;
the purple square point represents a selected position depth point; if the current frame is within the life cycle of the position depth point, its position in the current frame is displayed; if the current frame is beyond the life cycle, its position at the nearest position key frame is displayed. For example, a position depth point whose life cycle covers only the first frame may be selected at the first frame; the point cannot be moved outside its life cycle, and when switching to the second frame, the second frame displays the position the point has at the first frame.
It will be appreciated that, since there are many points in the point diagram, manually setting the life cycle of each point would be very heavy labor, so the range of the life cycle can be represented by the position key frames with the smallest and largest sequence numbers.
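Representing the life cycle by the minimum- and maximum-numbered position key frames, as suggested above, can be sketched as follows (the key-frame list as input is an illustrative assumption):

```python
# Life cycle of a point = [smallest, largest] position key frame numbers.
def life_cycle(position_key_frames: list[int]) -> tuple[int, int]:
    return (min(position_key_frames), max(position_key_frames))

# A frame is within the life cycle if it lies inside that range.
def in_life_cycle(frame: int, position_key_frames: list[int]) -> bool:
    lo, hi = life_cycle(position_key_frames)
    return lo <= frame <= hi
```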
102. Receive a first instruction of the user and, according to the first instruction, modify the depth information of the position depth points in the point diagram by painting.
It will be appreciated that, to modify the depth, assigning a depth to a point is indispensable. Since there are many position depth points in the point diagram, setting the depth of each position depth point individually would be very heavy labor, so depth values can be assigned with a brush according to the first instruction. Painting with the left button assigns the currently selected depth value to the position depth points covered by the brush. Painting with the right button is the erase function, restoring the depth each position depth point had before the depth of the current frame was changed. While painting, one can choose according to the first instruction whether to display the color points, the grayscale map, the original image, the interleaved image (through which a stereo effect can be seen with stereo glasses), the contour map, and whether to superimpose a semi-transparent depth map.
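The left-button paint and right-button erase behaviour described above can be sketched as follows. The flat point list, circular brush, and per-frame undo store are illustrative assumptions, not the patent's data structures.

```python
# Brush-based depth painting over position depth points, with a per-frame
# undo store backing the right-button erase.
def paint(points, cx, cy, radius, new_depth, undo):
    """Left button: assign new_depth to points inside the brush circle."""
    for i, (x, y, d) in enumerate(points):
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            undo.setdefault(i, d)            # remember pre-change depth once
            points[i] = (x, y, new_depth)

def erase(points, cx, cy, radius, undo):
    """Right button: restore the depth each covered point had before."""
    for i, (x, y, d) in enumerate(points):
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 and i in undo:
            points[i] = (x, y, undo.pop(i))

pts = [(0.0, 0.0, 0.1), (5.0, 5.0, 0.2)]
undo = {}
paint(pts, 0.0, 0.0, 2.0, 0.9, undo)   # covers only the first point
```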
103. Receive a second instruction of the user and, according to the second instruction, modify the depth and position information of the position depth points in the point diagram by means of image matching tracking.
It will be appreciated that painting can only modify a single frame; when the total number of frames is large or errors are numerous, painting frame by frame consumes a great deal of labor. Image matching tracking can reduce this labor cost. For different forms of depth error, different tracking modes based on correlation matching are proposed. The image matching tracking modes include: global one-way isolated tracking, local one-way isolated tracking, global one-way continuous tracking, local one-way continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
The unidirectional isolated formula of the described overall situation is followed the trail of, including: unidirectional tracking, the depth value of assignment original position depth point and tracking The positional information obtained, deletes the depth point of the frame followed the trail of, preserves in the way of isolated positions depth point;The described overall situation is single Following the trail of for camera lens stable to isolated formula, but interframe shake is serious or the camera lens of depth value mistake, in camera lens, Object Depth is several Being not changed in, this tracking cannot transition;
Said local one-way isolated tracking includes: tracking in one direction, assigning each tracked point the depth value of its start-frame position-depth point together with the position obtained by tracking, deleting the position-depth points inside the convex hull formed by the successfully tracked points on each tracked frame, and saving the results as isolated position-depth points. Local one-way isolated tracking is intended for shots in which the scene is stable but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot is almost unchanged and the shot cannot be handled by transition (interpolation).
Said global one-way continuous tracking includes: tracking in one direction, assigning the tracked positions, deleting the position-depth points of the tracked frames, and saving the results as continuous position-depth points; a point that fails to track is rolled back to the start frame and marked with a red one-way arrow indicating the tracking direction, and at the end of tracking it is assigned the depth of the nearest position-depth point. Global one-way continuous tracking is intended for shots in which the camera is stable but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot changes.
Said local one-way continuous tracking includes: tracking in one direction, assigning the tracked positions, deleting the position-depth points inside the convex hull formed by the successfully tracked points on each tracked frame, and saving the results as continuous position-depth points; a point that fails to track is rolled back to the start frame and marked with a red one-way arrow indicating the tracking direction, and at the end of tracking it is assigned the depth of the nearest position-depth point. Local one-way continuous tracking is intended for shots in which the scene is stable but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot changes.
Said global cross isolated tracking includes: performing global one-way isolated tracking twice, but deleting position-depth points only on the first pass. Global cross isolated tracking is intended for shots in which the camera is stable or moves in parallel but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot is almost unchanged and the shot cannot be handled by transition (interpolation).
Said global cross continuous tracking includes: performing global one-way continuous tracking twice, but deleting position-depth points only on the first pass. Global cross continuous tracking is intended for shots in which the camera is stable or moves in parallel but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot changes.
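As a rough illustration only (no code appears in the patent), the global one-way continuous tracking pass described above — assign tracked positions frame by frame, roll a point back to the start frame when its match score drops below a preset value, and give rolled-back points the depth of the nearest surviving point — might be sketched as follows; `track_point` is a stand-in for whatever correlation matcher is actually used, and all names here are illustrative:

```python
MATCH_THRESHOLD = 0.8  # the "preset value" below which a point is rolled back

def track_point(frame, xy):
    # Placeholder matcher: shifts a point one pixel right and fails once it
    # leaves a pretend 5-pixel-wide frame. A real implementation would do
    # correlation-based template matching against `frame`.
    nx = xy[0] + 1
    return (nx, xy[1]), (1.0 if nx < 5 else 0.0)

def global_one_way_continuous(frames, points):
    """points: {pid: {"xy": start position, "depth": d, "track": {frame: xy}}}"""
    failed = []
    for pid, p in points.items():
        xy = p["xy"]
        for f in range(1, len(frames)):
            xy, score = track_point(frames[f], xy)
            if score < MATCH_THRESHOLD:
                p["track"] = {0: p["xy"]}   # rollback: keep only the start frame
                failed.append(pid)
                break
            p["track"][f] = xy              # assign the tracked position
    # a rolled-back point borrows the depth of the nearest surviving point
    survivors = [p for pid, p in points.items() if pid not in failed]
    for pid in failed:
        if survivors:
            sx, sy = points[pid]["xy"]
            nearest = min(survivors,
                          key=lambda q: (q["xy"][0] - sx) ** 2 + (q["xy"][1] - sy) ** 2)
            points[pid]["depth"] = nearest["depth"]
    return points
```

The isolated variants differ only in that each tracked frame receives a copy of the start-frame depth value and the result is saved per frame rather than as one continuous point.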
Terminology used in the image-matching tracking modes described below:
One-way: tracking from the start frame to the target frame;
Cross: tracking from the start frame to the target frame, then tracking again starting from the target frame, so that depth information from the two directions complements each other;
Isolated: the lifecycle of a position-depth point covers only one frame;
Continuous: the lifecycle of a position-depth point spans multiple frames, that is, the same point exists on multiple frames, i.e. a continuous position-depth point. On non-key frames, the nearest keyframes are found and linear transition (interpolation) is applied;
Global: all position-depth points of the current frame are the points to be tracked;
Local: the position-depth points selected on the current frame are the points to be tracked;
Convex hull: loosely speaking, given a point set on a two-dimensional plane, the convex hull is the convex polygon formed by connecting the outermost points, which contains every point of the set. It is used here to delete the position-depth points inside it, i.e. to remove the influence of incorrect position-depth points; it may be slightly enlarged before use;
Rollback: for a continuous position-depth point, when the matching score of tracking falls below a preset value, the tracking of this point in the tracking direction is stopped and all of its position-depth points on the frames after the start frame are deleted.
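The convex-hull deletion used by the local tracking modes and the batch-deletion step (build the hull of the good points, optionally enlarge it "by a small margin", then delete every other point inside it) can be sketched in pure Python. This is an illustrative monotone-chain implementation under assumptions of my own, not code from the patent:

```python
def convex_hull(pts):
    """Andrew's monotone chain; returns the hull counter-clockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return list(pts)
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def enlarge(hull, margin):
    # expand the hull "by a small margin" by pushing each vertex
    # away from the centroid
    cx = sum(p[0] for p in hull) / len(hull)
    cy = sum(p[1] for p in hull) / len(hull)
    return [(cx + (x-cx)*(1+margin), cy + (y-cy)*(1+margin)) for x, y in hull]

def inside(hull, p):
    # p is inside a counter-clockwise convex polygon iff it lies on the
    # left of (or on) every edge
    n = len(hull)
    for i in range(n):
        a, b = hull[i], hull[(i+1) % n]
        if (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) < 0:
            return False
    return True

def delete_inside(selected, others, margin=0.05):
    """Keep only the points of `others` that fall outside the enlarged hull."""
    hull = enlarge(convex_hull(selected), margin)
    return [p for p in others if not inside(hull, p)]
```

In practice a library routine (e.g. `scipy.spatial.ConvexHull`) would replace the hand-rolled hull; the sketch only shows the geometry the patent relies on.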
104. Receiving a third instruction of the user, and according to said third instruction moving the position-depth points in the point map that the image-matching tracking modes failed to track.
It will be appreciated that step 104 is performed because some position-depth points cannot be tracked; if they are not moved, they carry no correct depth value and can significantly degrade the final result. If it is judged that a point that failed to track, unless given a correct position and depth within the tracked frame range, would significantly affect the final result, the position-depth point is moved and assigned a value; conversely, if the influence of an untracked point is considered small, it may be left as it is.
105. Receiving a fourth instruction of the user, selecting position-depth points in the point map according to said fourth instruction, and deleting every non-selected position-depth point that lies inside the convex hull formed by the selected position-depth points over their entire lifecycle.
It will be appreciated that step 105 avoids the repetitive labour of deleting points frame by frame and improves efficiency.
106. Receiving a fifth instruction of the user, selecting a position in the point map according to said fifth instruction, and adding a position-depth point at the selected position.
It will be appreciated that, because the number of position-depth points in the point map is limited, some fine structures carry no position-depth point; where a fine structure that is not given the correct depth would significantly affect the whole structure, the point-adding mode is provided to solve this problem.
107. Exporting the modified point map in the position-depth format, for use in the flow that applies the above unit pixel block depth map during the 2D-to-3D conversion of the deep learning network.
As an example, referring to Fig. 2 to Fig. 8: Fig. 2 is the original image to be converted to stereo; Fig. 3 is the depth map of the point map of Fig. 2 before modification by the method of this embodiment; Fig. 4 is the depth map ultimately generated by the subsequent deep learning network from the prior-art unit pixel block depth map of Fig. 2 before modification by the method of this embodiment; Fig. 5 is the left-right stereo pair generated from the depth map of Fig. 4; Fig. 6 is the depth map of the point map of Fig. 2 after modification by the method of this embodiment; Fig. 7 is the depth map ultimately generated by the subsequent deep learning network from the position-depth format data exported from the modified point map of Fig. 2; and Fig. 8 is the left-right stereo pair generated from the depth map of Fig. 7. As can be seen from Fig. 2 to Fig. 8, the method of this embodiment for modifying the position-depth format data extracted from the deep-learning-based 2D-to-3D unit pixel block depth map effectively improves the quality of the shot in 2D-to-3D conversion.
The method of this embodiment for modifying the position-depth format data extracted from the deep-learning-based 2D-to-3D unit pixel block depth map can effectively improve the accuracy of the 3D depth image, improving the quality of the shot after 2D-to-3D conversion and meeting the demands of viewers.
Fig. 10 shows a schematic structural diagram of the device, provided by one embodiment of the present invention, for modifying the position-depth format data extracted from the deep-learning-based 2D-to-3D unit pixel block depth map. As shown in Fig. 10, the device of this embodiment includes: an import module 11, a painting module 12, a tracking module 13, a moving module 14, a deleting module 15, a point-adding module 16 and an export module 17; wherein:
the import module 11 is configured to import the position-depth format data extracted from the unit pixel block depth map of the deep learning network's 2D-to-3D conversion and to generate a point map, including storing the position-depth format data in the position-depth points of the point map, generating the corresponding depth map, and marking the positions and states of the position-depth points with coloured points;
the painting module 12 is configured to receive a first instruction of the user and, according to said first instruction, modify the depth information of the position-depth points in the point map by painting;
the tracking module 13 is configured to receive a second instruction of the user and, according to said second instruction, modify the depth and position information of the position-depth points in the point map using image-matching tracking modes;
the moving module 14 is configured to receive a third instruction of the user and, according to said third instruction, move the position-depth points in the point map that the image-matching tracking modes failed to track;
the deleting module 15 is configured to receive a fourth instruction of the user, select position-depth points in the point map according to said fourth instruction, and delete every non-selected position-depth point inside the convex hull formed by the selected position-depth points over their entire lifecycle;
the point-adding module 16 is configured to receive a fifth instruction of the user, select a position in the point map according to said fifth instruction, and add a position-depth point at the selected position;
the export module 17 is configured to export the modified point map in the position-depth format, for use in the flow that applies the above unit pixel block depth map during the 2D-to-3D conversion of the deep learning network.
In a particular application, said coloured points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
a yellow square point represents an isolated position-depth point in said point map;
a green square point represents the good part of a continuous position-depth point in said point map, called a good point; a good point has both position and depth. Having a position means that the frame itself is a position keyframe, or that it is not a position keyframe but position keyframes exist before and after it; having depth means that the frame itself is a depth keyframe, or that it is not a depth keyframe but depth keyframes exist before and after it;
a red square point represents the bad part of a continuous position-depth point in said point map, called a bad point; a bad point has a position but no depth;
a red single arrow indicates, according to its direction, that the preceding or following frame is a bad point while the current frame is a good point, serving as a hint; a red single arrow also marks a position-depth point that failed to track and was rolled back, indicating that tracking failed before or after the current frame;
a green double arrow represents a newly added position-depth point whose lifecycle covers only the current frame, i.e. an isolated position-depth point, serving as a hint;
a purple square point represents a selected position-depth point: within the lifecycle of the point it is displayed at the point's position on the current frame, and beyond the lifecycle it is displayed at the position of the nearest position keyframe. This is so that a position-depth point can still be located and moved outside its lifecycle; for example, a point whose lifecycle covers only the first frame may be selected on the first frame, and after switching to the second frame, the second frame displays the point at its first-frame position.
It will be appreciated that, since there are many position-depth points in the point map and manually setting the lifecycle of each one would be heavy labour, the range of a lifecycle can be represented by the position keyframes with the smallest and largest sequence numbers.
In a particular application, said painting module 12 may further be used to:
choose, according to said first instruction, whether to display the coloured points, whether to display the grey-scale image, the original image, the interlaced image or the contour map, and whether to superimpose a semi-transparent depth map.
It will be appreciated that painting can only correct the depth of a single frame; when the total number of frames is large or many frames are wrong, painting frame by frame is very costly in labour, whereas image-matching tracking modes reduce that cost. For different forms of depth error, different tracking modes based on correlation matching are proposed. In a particular application, said image-matching tracking modes include: global one-way isolated tracking, local one-way isolated tracking, global one-way continuous tracking, local one-way continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
Said global one-way isolated tracking includes: tracking in one direction, assigning each tracked point the depth value of its start-frame position-depth point together with the position obtained by tracking, deleting the position-depth points of the tracked frames, and saving the results as isolated position-depth points. Global one-way isolated tracking is intended for shots in which the camera is stable but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot is almost unchanged and the shot cannot be handled by transition (interpolation).
Said local one-way isolated tracking includes: tracking in one direction, assigning each tracked point the depth value of its start-frame position-depth point together with the position obtained by tracking, deleting the position-depth points inside the convex hull formed by the successfully tracked points on each tracked frame, and saving the results as isolated position-depth points. Local one-way isolated tracking is intended for shots in which the scene is stable but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot is almost unchanged and the shot cannot be handled by transition (interpolation).
Said global one-way continuous tracking includes: tracking in one direction, assigning the tracked positions, deleting the position-depth points of the tracked frames, and saving the results as continuous position-depth points; a point that fails to track is rolled back to the start frame and marked with a red one-way arrow indicating the tracking direction, and at the end of tracking it is assigned the depth of the nearest position-depth point. Global one-way continuous tracking is intended for shots in which the camera is stable but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot changes.
Said local one-way continuous tracking includes: tracking in one direction, assigning the tracked positions, deleting the position-depth points inside the convex hull formed by the successfully tracked points on each tracked frame, and saving the results as continuous position-depth points; a point that fails to track is rolled back to the start frame and marked with a red one-way arrow indicating the tracking direction, and at the end of tracking it is assigned the depth of the nearest position-depth point. Local one-way continuous tracking is intended for shots in which the scene is stable but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot changes.
Said global cross isolated tracking includes: performing global one-way isolated tracking twice, but deleting position-depth points only on the first pass. Global cross isolated tracking is intended for shots in which the camera is stable or moves in parallel but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot is almost unchanged and the shot cannot be handled by transition (interpolation).
Said global cross continuous tracking includes: performing global one-way continuous tracking twice, but deleting position-depth points only on the first pass. Global cross continuous tracking is intended for shots in which the camera is stable or moves in parallel but inter-frame jitter is severe or the depth values are wrong, where the object depth within the shot changes.
Terminology used in the image-matching tracking modes described above:
One-way: tracking from the start frame to the target frame;
Cross: tracking from the start frame to the target frame, then tracking again starting from the target frame, so that depth information from the two directions complements each other;
Isolated: the lifecycle of a position-depth point covers only one frame;
Continuous: the lifecycle of a position-depth point spans multiple frames, that is, the same point exists on multiple frames, i.e. a continuous position-depth point. On non-key frames, the nearest keyframes are found and linear transition (interpolation) is applied;
Global: all position-depth points of the current frame are tracking points;
Local: the position-depth points selected on the current frame are tracking points;
Convex hull: loosely speaking, given a point set on a two-dimensional plane, the convex hull is the convex polygon formed by connecting the outermost points, which contains every point of the set. It is used here to delete the position-depth points inside it, i.e. to remove the influence of incorrect position-depth points; it may be slightly enlarged before use;
Rollback: for a continuous position-depth point, when the matching score of tracking falls below a preset value, the tracking of this point in the tracking direction is stopped and all of its position-depth points on the frames after the start frame are deleted.
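For the continuous lifecycle above, the patent states only that non-key frames are obtained by finding keyframes and applying linear transition. A minimal sketch of that interpolation, under an assumed representation (not from the patent) of a point as a dict mapping keyframe index to an (x, y, depth) tuple:

```python
from bisect import bisect_right

def value_at(keyframes, frame):
    """Linear transition between the two nearest keyframes; clamped at the ends."""
    idx = sorted(keyframes)
    if frame <= idx[0]:
        return keyframes[idx[0]]
    if frame >= idx[-1]:
        return keyframes[idx[-1]]
    pos = bisect_right(idx, frame)
    lo, hi = idx[pos - 1], idx[pos]   # keyframes bracketing `frame`
    t = (frame - lo) / (hi - lo)
    a, b = keyframes[lo], keyframes[hi]
    # interpolate x, y and depth component-wise
    return tuple(av + t * (bv - av) for av, bv in zip(a, b))
```

Position and depth keyframes are tracked separately in the patent; the same one-dimensional interpolation would simply be run over each keyframe set.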
The device of this embodiment for modifying the position-depth format data extracted from the deep-learning-based 2D-to-3D unit pixel block depth map, applied in a processor, can effectively improve the accuracy of the 3D depth image, improving the quality of the shot after 2D-to-3D conversion and meeting the demands of viewers.
The device described in this embodiment may be used to perform the above method embodiment; its principle and technical effect are similar and are not repeated here.
It should be noted that, since the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding description of the method embodiment.
Those of ordinary skill in the art will appreciate that the instructions in the above embodiments may be combined arbitrarily, and that all or part of the steps of each method embodiment may be completed by hardware under the control of program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of each method embodiment above. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disk.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A method for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, characterised by including:
importing the position-depth format data extracted from the unit pixel block depth map of the deep learning network's 2D-to-3D conversion and generating a point map, including storing the position-depth format data in the position-depth points of the point map, generating the corresponding depth map, and marking the positions and states of the position-depth points with coloured points;
receiving a first instruction of a user and, according to said first instruction, modifying the depth information of the position-depth points in the point map by painting;
receiving a second instruction of the user and, according to said second instruction, modifying the depth and position information of the position-depth points in the point map using image-matching tracking modes;
receiving a third instruction of the user and, according to said third instruction, moving the position-depth points in the point map that the image-matching tracking modes failed to track;
receiving a fourth instruction of the user, selecting position-depth points in the point map according to said fourth instruction, and deleting every non-selected position-depth point inside the convex hull formed by the selected position-depth points over their entire lifecycle;
receiving a fifth instruction of the user, selecting a position in the point map according to said fifth instruction, and adding a position-depth point at the selected position;
exporting the modified point map as position-depth format data, for use in the flow that applies the above unit pixel block depth map during the 2D-to-3D conversion of the deep learning network.
2. The method according to claim 1, characterised in that said coloured points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
a yellow square point represents an isolated position-depth point in said point map;
a green square point represents the good part of a continuous position-depth point in said point map, called a good point; a good point has both position and depth;
a red square point represents the bad part of a continuous position-depth point in said point map, called a bad point; a bad point has a position but no depth;
a red single arrow indicates, according to its direction, that the preceding or following frame is a bad point while the current frame is a good point; a red single arrow also marks a position-depth point that failed to track and was rolled back, indicating that tracking failed before or after the current frame;
a green double arrow represents a newly added position-depth point whose lifecycle covers only the current frame, i.e. an isolated position-depth point;
a purple square point represents a selected position-depth point, which, within the lifecycle of the point, is displayed at the point's position on the current frame and, beyond the lifecycle, is displayed at the position of the nearest position keyframe;
wherein having a position means that the frame itself is a position keyframe, or that it is not a position keyframe but position keyframes exist before and after it; and having depth means that the frame itself is a depth keyframe, or that it is not a depth keyframe but depth keyframes exist before and after it.
3. The method according to claim 1, characterised in that, when modifying the depth of the position-depth points in the point map by painting according to said first instruction, the method further includes:
choosing, according to said first instruction, whether to display the coloured points, whether to display the grey-scale image, the original image, the interlaced image or the contour map, and whether to superimpose a semi-transparent depth map.
4. The method according to claim 1, characterised in that said image-matching tracking modes include: global one-way isolated tracking, local one-way isolated tracking, global one-way continuous tracking, local one-way continuous tracking, global cross isolated tracking and global cross continuous tracking; wherein:
said global one-way isolated tracking includes: tracking in one direction, assigning each tracked point the depth value of its start-frame position-depth point together with the position obtained by tracking, deleting the position-depth points of the tracked frames, and saving the results as isolated position-depth points;
said local one-way isolated tracking includes: tracking in one direction, assigning each tracked point the depth value of its start-frame position-depth point together with the position obtained by tracking, deleting the position-depth points inside the convex hull formed by the successfully tracked points on each tracked frame, and saving the results as isolated position-depth points;
said global one-way continuous tracking includes: tracking in one direction, assigning the tracked positions, deleting the position-depth points of the tracked frames, and saving the results as continuous position-depth points, wherein a point that fails to track is rolled back to the start frame, marked with a red one-way arrow indicating the tracking direction and, at the end of tracking, assigned the depth of the nearest position-depth point;
said local one-way continuous tracking includes: tracking in one direction, assigning the tracked positions, deleting the position-depth points inside the convex hull formed by the successfully tracked points on each tracked frame, and saving the results as continuous position-depth points, wherein a point that fails to track is rolled back to the start frame, marked with a red one-way arrow indicating the tracking direction and, at the end of tracking, assigned the depth of the nearest position-depth point;
said global cross isolated tracking includes: performing global one-way isolated tracking twice, but deleting position-depth points only on the first pass;
said global cross continuous tracking includes: performing global one-way continuous tracking twice, but deleting position-depth points only on the first pass.
5. A device for modifying position-depth format data extracted from a deep-learning-based 2D-to-3D unit pixel block depth map, characterised by including:
an import module, configured to import the position-depth format data extracted from the unit pixel block depth map of the deep learning network's 2D-to-3D conversion and to generate a point map, including storing the position-depth format data in the position-depth points of the point map, generating the corresponding depth map, and marking the positions and states of the position-depth points with coloured points;
a painting module, configured to receive a first instruction of a user and, according to said first instruction, modify the depth information of the position-depth points in the point map by painting;
a tracking module, configured to receive a second instruction of the user and, according to said second instruction, modify the depth and position information of the position-depth points in the point map using image-matching tracking modes;
a moving module, configured to receive a third instruction of the user and, according to said third instruction, move the position-depth points in the point map that the image-matching tracking modes failed to track;
a deleting module, configured to receive a fourth instruction of the user, select position-depth points in the point map according to said fourth instruction, and delete every non-selected position-depth point inside the convex hull formed by the selected position-depth points over their entire lifecycle;
a point-adding module, configured to receive a fifth instruction of the user, select a position in the point map according to said fifth instruction, and add a position-depth point at the selected position;
an export module, configured to export the modified point map in the position-depth format, for use in the flow that applies the above unit pixel block depth map during the 2D-to-3D conversion of the deep learning network.
6. The device according to claim 5, characterised in that said coloured points include: yellow square points, green square points, red square points, purple square points, red single arrows and green double arrows;
a yellow square point represents an isolated position-depth point in said point map;
a green square point represents the good part of a continuous position-depth point in said point map, called a good point; a good point has both position and depth;
a red square point represents the bad part of a continuous position-depth point in said point map, called a bad point; a bad point has a position but no depth;
a red single arrow indicates, according to its direction, that the preceding or following frame is a bad point while the current frame is a good point; a red single arrow also marks a position-depth point that failed to track and was rolled back, indicating that tracking failed before or after the current frame;
a green double arrow represents a newly added position-depth point whose lifecycle covers only the current frame, i.e. an isolated position-depth point;
a purple square point represents a selected position-depth point, which, within the lifecycle of the point, is displayed at the point's position on the current frame and, beyond the lifecycle, is displayed at the position of the nearest position keyframe;
wherein having a position means that the frame itself is a position keyframe, or that it is not a position keyframe but position keyframes exist before and after it; and having depth means that the frame itself is a depth keyframe, or that it is not a depth keyframe but depth keyframes exist before and after it.
7. The device according to claim 5, characterised in that said painting module is further configured to:
choose, according to said first instruction, whether to display the coloured points, whether to display the grey-scale image, the original image, the interlaced image or the contour map, and whether to superimpose a semi-transparent depth map.
The device according to claim 5, wherein the image-matching tracking modes include: global unidirectional isolated tracking, local unidirectional isolated tracking, global unidirectional continuous tracking, local unidirectional continuous tracking, global cross isolated tracking, and global cross continuous tracking; wherein:
The global unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the starting-position depth point to the position information obtained by tracking, deleting the depth points of the tracked frames, and saving the results as isolated-position depth points;
The local unidirectional isolated tracking comprises: tracking unidirectionally, assigning the depth value of the starting-position depth point to the position information obtained by tracking, deleting the depth points inside the convex hull formed by the depth points caught up with in the tracked frames, and saving the results as isolated-position depth points;
The global unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the position depth points of the tracked frames, and saving the results as continuous-position depth points; position depth points that are not caught up with roll back to the start frame and are marked with a red unidirectional arrow indicating the tracking direction; when tracking ends, the depth information of the nearest depth point is found and used to assign the depth value;
The local unidirectional continuous tracking comprises: tracking unidirectionally, assigning the tracked positions, deleting the depth points inside the convex hull formed by the depth points caught up with in the tracked frames, and saving the results as continuous-position depth points; position depth points that are not caught up with roll back to the start frame and are marked with a red unidirectional arrow indicating the tracking direction; when tracking ends, the depth of the nearest depth point is found and used to assign the depth value;
The global cross isolated tracking comprises: performing global unidirectional isolated tracking twice, but deleting position depth points only in the first pass;
The global cross continuous tracking comprises: performing global unidirectional continuous tracking twice, but deleting position depth points only in the first pass.
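The bookkeeping behind global unidirectional isolated tracking can be sketched in code. The following is a minimal illustration, not the patented implementation: the `track` function here is a stand-in stub (a fixed one-pixel shift) for the real image-matching tracker, and all names (`DepthPoint`, `global_unidirectional_isolated`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DepthPoint:
    x: float
    y: float
    depth: float

def track(point, frame_idx):
    # Stub motion model standing in for the real image-matching tracker:
    # shift each point one pixel to the right per frame. A real tracker
    # would match the point's pixel block in frame `frame_idx`.
    return DepthPoint(point.x + 1.0, point.y, point.depth)

def global_unidirectional_isolated(start_points, frames):
    """Sketch of 'global unidirectional isolated tracking': track every
    start-frame depth point forward, give each tracked position the depth
    value of its starting point, and replace (delete) the depth points
    previously stored in each tracked frame."""
    for idx in range(len(frames)):
        tracked = [track(p, idx) for p in start_points]
        frames[idx] = tracked   # delete old points, save isolated points
        start_points = tracked  # continue tracking from the new positions
    return frames

frames = [[] for _ in range(3)]           # three frames, no depth points yet
start = [DepthPoint(10.0, 20.0, 0.5)]     # one depth point in the start frame
result = global_unidirectional_isolated(start, frames)
# each frame now holds one depth point, shifted right, depth value preserved
```

The "cross" variants described above would run this forward pass twice (once in each direction), skipping the deletion step on the second pass.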
CN201610397022.5A 2016-06-07 2016-06-07 2D-to-3D unit pixel block depth map modification method and device based on deep learning Active CN106056616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610397022.5A CN106056616B (en) 2016-06-07 2016-06-07 2D-to-3D unit pixel block depth map modification method and device based on deep learning

Publications (2)

Publication Number Publication Date
CN106056616A true CN106056616A (en) 2016-10-26
CN106056616B CN106056616B (en) 2019-02-26

Family

ID=57170452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610397022.5A Active CN106056616B (en) 2016-06-07 2016-06-07 2D-to-3D unit pixel block depth map modification method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN106056616B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100080448A1 (en) * 2007-04-03 2010-04-01 Wa James Tam Method and graphical user interface for modifying depth maps
US20120007950A1 (en) * 2010-07-09 2012-01-12 Yang Jeonghyu Method and device for converting 3d images
CN103177440A (en) * 2012-12-20 2013-06-26 香港应用科技研究院有限公司 System and method of generating image depth map
CN104243948A (en) * 2013-12-20 2014-12-24 深圳深讯和科技有限公司 Depth adjusting method and device for converting 2D image to 3D image
CN104639930A (en) * 2013-11-13 2015-05-20 三星电子株式会社 Multi-view image display apparatus and multi-view image display method thereof

Also Published As

Publication number Publication date
CN106056616B (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN101394573B (en) Panoramagram generation method and system based on characteristic matching
CN100355272C (en) Synthesis method of virtual viewpoint in interactive multi-viewpoint video system
CN104159093B (en) The time domain consistence hole region method for repairing and mending of the static scene video of moving camera shooting
CN106651766A (en) Image style migration method based on deep convolutional neural network
CN102685533A (en) Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
CN100448271C (en) Video editing method based on panorama sketch split joint
CN111724439A (en) Visual positioning method and device in dynamic scene
CN101558404A (en) Image segmentation
CN111091151B (en) Construction method of generation countermeasure network for target detection data enhancement
CN104751466B (en) A kind of changing object tracking and its system based on conspicuousness
CN110136174B (en) Target object tracking method and device
CN103443826A (en) Mesh animation
CN107689050A (en) A kind of depth image top sampling method based on Color Image Edge guiding
CN106952276A (en) A kind of image matting method and device
CN104272377A (en) Motion picture project management system
CN103051915A (en) Manufacture method and manufacture device for interactive three-dimensional video key frame
CN108171249A (en) A kind of local description learning method based on RGBD data
CN105488771A (en) Light-field image editing method and device
US20110149039A1 (en) Device and method for producing new 3-d video representation from 2-d video
Zeng et al. Hallucinating stereoscopy from a single image
CN105957068A (en) Method and system of constructing three-dimensional reconstruction model surface
CN108122249A (en) A kind of light stream method of estimation based on GAN network depth learning models
CN114627163A (en) Global image target tracking method and system based on rapid scene splicing
Wang et al. JAWS: just a wild shot for cinematic transfer in neural radiance fields
CN102724530B (en) Three-dimensional method for plane videos based on feedback control

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20161215

Address after: 100024 Beijing City, Chaoyang District, Five Mile Bridge No. 1 Street, building 5, building 4, floor 1

Applicant after: BEIJING JULI DIMENSION TECHNOLOGY CO.,LTD.

Address before: 100024 Beijing City, Chaoyang District, Five Mile Bridge No. 1 Street, building 5, building 4, floor 1

Applicant before: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

TA01 Transfer of patent application right

Effective date of registration: 20190102

Address after: Room 408-409, F-4 R&D Center, 4th floor, No. 1 Hospital, Wuliqiao First Street, Chaoyang District, Beijing, 100024

Applicant after: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

Address before: 100024 Fourth Floor, Building 5, Courtyard 1, Wuliqiao First Street, Chaoyang District, Beijing

Applicant before: BEIJING JULI DIMENSION TECHNOLOGY CO.,LTD.

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: 2D-to-3D unit pixel block depth map modification method and device based on deep learning

Effective date of registration: 20201021

Granted publication date: 20190226

Pledgee: Hubble Technology Investment Ltd.

Pledgor: TWELVE DIMENSION (BEIJING) TECHNOLOGY Co.,Ltd.

Registration number: Y2020990001241

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230217

Granted publication date: 20190226

Pledgee: Hubble Technology Investment Ltd.

Pledgor: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

Registration number: Y2020990001241

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20161026

Assignee: BEIJING JULI DIMENSION TECHNOLOGY CO.,LTD.

Assignor: TWELVE DIMENSION (BEIJING) TECHNOLOGY CO.,LTD.

Contract record no.: X2023980051328

Denomination of invention: Method and device for modifying depth maps of 2D to 3D unit pixel blocks in deep learning

Granted publication date: 20190226

License type: Common License

Record date: 20231211