CN109389674A - Data processing method and device, MEC server and storage medium
- Publication number: CN109389674A
- Application number: CN201811161434.4A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
Abstract
Embodiments of the present invention provide a data processing method and device, an MEC server, and a storage medium. The data processing method includes: matching a first two-dimensional (2D) image with a second 2D image to obtain a first object contained in both the first 2D image and at least two frames of the second 2D image; determining a third depth value of the first object by combining a first depth value corresponding to the first 2D image with a second depth value corresponding to the second 2D image; and establishing a three-dimensional video of the first object by combining the first 2D image with the third depth value.
Description
Technical field
The present invention relates to the field of information technology, and in particular, but not exclusively, to a data processing method and device, an MEC server, and a storage medium.
Background
During three-dimensional (3D) video modeling, it may be necessary to simultaneously acquire a two-dimensional (2D) image and a depth image. However, in the related art, when 3D modeling is performed based on such a 2D image and depth image, modeling failures or abnormalities are prone to occur.
Summary of the invention
Embodiments of the present invention provide a data processing method and device, an MEC server, and a storage medium.
A data processing method, applied to a mobile edge computing (MEC) server, includes:
matching a first two-dimensional (2D) image with a second 2D image to obtain a first object contained in both the first 2D image and at least two frames of the second 2D image;
determining a third depth value of the first object by combining a first depth value corresponding to the first 2D image with a second depth value corresponding to the second 2D image;
establishing a three-dimensional video of the first object by combining the first 2D image with the third depth value.
Based on the above scheme, the method includes:
determining whether three-dimensional video data meets a depth value calibration condition, wherein the three-dimensional video data includes the first 2D image and the second 2D image;
and the matching of the first 2D image with the second 2D image includes:
matching the first 2D image with the second 2D image if the three-dimensional video data meets the depth value calibration condition.
Based on the above scheme, the determining whether the three-dimensional video data meets the depth value calibration condition includes:
determining that the three-dimensional video data meets the depth value calibration condition if the depth value of the first object is missing from the first depth value corresponding to the first 2D image.
Based on the above scheme, the determining whether the three-dimensional video data meets the depth value calibration condition includes:
determining that the three-dimensional video data meets the depth value calibration condition if the first object in the first 2D image is occluded by a second object.
Based on the above scheme, the determining the third depth value of the first object by combining the first depth value corresponding to the first 2D image with the second depth value corresponding to the second 2D image includes:
comparing the first depth value with the second depth values of M frames of depth images respectively, wherein the M frames of depth images correspond to M frames of the second 2D image, and M is a positive integer not less than 2;
determining that the first depth value is the third depth value if the first depth value and N of the second depth values meet a preset similarity condition, wherein N is a positive integer not greater than M.
Based on the above scheme, N is greater than M-N.
Based on the above scheme, the determining the third depth value of the first object by combining the first depth value corresponding to the first 2D image with the second depth value corresponding to the second 2D image includes:
determining the third depth value according to the N second depth values if the first depth value and the N second depth values do not meet the preset similarity condition.
Based on the above scheme, the method further includes:
obtaining at least two depth images corresponding to the first 2D image, wherein the two depth images include the first depth value, and the two depth images are acquired using different coded light.
A data processing device, applied to a mobile edge computing (MEC) server, includes:
a matching module, configured to match a first two-dimensional (2D) image with a second 2D image to obtain a first object contained in both the first 2D image and at least two frames of the second 2D image;
a first determining module, configured to determine a third depth value of the first object by combining a first depth value corresponding to the first 2D image with a second depth value corresponding to the second 2D image;
a modeling module, configured to establish a three-dimensional video of the first object by combining the first 2D image with the third depth value.
Based on the above scheme, the device further includes:
a second determining module, configured to determine whether three-dimensional video data meets a depth value calibration condition;
wherein the matching module is specifically configured to match the first 2D image with the second 2D image if the three-dimensional video data meets the depth value calibration condition.
Based on the above scheme, the second determining module is specifically configured to determine that the three-dimensional video data meets the depth value calibration condition if the depth value of the first object is missing from the first depth value corresponding to the first 2D image.
Based on the above scheme, the second determining module is further configured to determine that the three-dimensional video data meets the depth value calibration condition if the first object in the first 2D image is occluded by a second object.
Based on the above scheme, the first determining module includes:
a comparison submodule, configured to compare the first depth value with the second depth values of M frames of depth images respectively, wherein the M frames of depth images correspond to M frames of the second 2D image, and M is a positive integer not less than 2;
a determining submodule, configured to determine that the first depth value is the third depth value if the first depth value and N of the second depth values meet a preset similarity condition, wherein N is a positive integer not greater than M.
Based on the above scheme, N is greater than M-N.
Based on the above scheme, the determining submodule is further configured to determine the third depth value according to the N second depth values if the first depth value and the N second depth values do not meet the preset similarity condition.
Based on the above scheme, the device further includes:
an obtaining module, configured to obtain at least two depth images corresponding to the first 2D image, wherein the two depth images include the first depth value, and the two depth images are acquired using different coded light.
A computer storage medium stores computer instructions which, when executed by a processor, implement the steps of the data processing method provided by one or more of the foregoing technical solutions.
An MEC server includes a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor, when executing the instructions, performs the steps of the data processing method provided by one or more of the foregoing technical solutions.
When performing 3D video modeling, the technical solutions provided by the embodiments of the present invention can match the image content of at least two frames of 2D images to determine a first object that both frames of 2D images contain, and then determine the third depth value of the first object by combining the depth values of the first object in the depth images corresponding to the at least two frames of 2D images. In this way, when the depth value of the first object is missing, abnormal, or of low precision because of acquisition failure, transmission abnormality, or other reasons, the third depth value can be determined in combination with previously acquired depth values. Even if a depth value is missing in the current frame of three-dimensional video data, 3D video modeling can still be performed successfully based on previously acquired depth images; when a depth value in the current frame of three-dimensional video data is abnormal or of low precision, it can be calibrated in combination with previously acquired depth values to improve its precision, thereby achieving accurate 3D video modeling.
Brief description of the drawings
Fig. 1 is a schematic diagram of a system architecture to which a data transmission method provided by an embodiment of the present invention is applied;
Fig. 2 is a schematic flowchart of a data processing method provided by an embodiment of the present invention;
Fig. 3A to Fig. 3C are schematic diagrams of the effect of three frames of 2D images provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of another data processing method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a data processing device provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an MEC server provided by an embodiment of the present invention.
Detailed description
Before describing the technical solutions of the embodiments of the present invention in detail, the system architecture to which the data processing method of the embodiments is applied is briefly described. The data processing method of the embodiments of the present invention is applied to services related to three-dimensional video data, for example a three-dimensional video data sharing service or a live broadcast service based on three-dimensional video data. In such cases, because the data volume of three-dimensional video data is large, the depth data and two-dimensional video data transmitted separately require strong technical support during data transmission; the mobile communications network therefore needs a higher data transmission rate and a more stable data transmission environment.
Fig. 1 is a schematic diagram of the system architecture to which the data transmission method of an embodiment of the present invention is applied. As shown in Fig. 1, the system may include terminals, base stations, MEC servers, a service processing server, core networks, and the Internet; a high-speed channel is established between an MEC server and the service processing server through the core network to synchronize data.
Taking the application scenario of interaction between the two terminals shown in Fig. 1 as an example, MEC server A is an MEC server deployed close to terminal A (the sending end), and core network A is the core network of the region where terminal A is located; correspondingly, MEC server B is an MEC server deployed close to terminal B (the receiving end), and core network B is the core network of the region where terminal B is located. MEC server A and MEC server B can each establish a high-speed channel with the service processing server, through core network A and core network B respectively, to synchronize data.
After the three-dimensional video data sent by terminal A is transferred to MEC server A, MEC server A synchronizes the data to the service processing server through core network A; MEC server B then obtains the three-dimensional video data sent by terminal A from the service processing server and sends it to terminal B for presentation.
If terminal B and terminal A transmit through the same MEC server, terminal B and terminal A directly complete the transmission of the three-dimensional video data through one MEC server, without the participation of the service processing server; this mode is called the local backhaul mode. Specifically, assuming that terminal B and terminal A transmit three-dimensional video data through MEC server A, after the three-dimensional video data sent by terminal A is transferred to MEC server A, MEC server A sends the three-dimensional video data to terminal B for presentation.
A terminal may, based on the network situation, its own configuration, or its own configured algorithm, select to access an evolved base station (eNB) of a 4G network or a next-generation evolved base station (gNB) of a 5G network, so that the eNB connects to the MEC server through a Long Term Evolution (LTE) access network, or the gNB connects to the MEC server through a next-generation radio access network (NG-RAN).
The MEC server is deployed at the network edge close to the terminal or the data source; being close to the terminal or the data source means being close not only in logical position but also in geographical position. Unlike the main service processing servers in existing mobile communications networks, which are deployed in a few large cities, multiple MEC servers may be deployed in one city. For example, in an office building with many users, an MEC server may be deployed near that office building.
As an edge computing gateway with converged network, computing, storage, and application capabilities, the MEC server provides a platform for edge computing that includes a device domain, a network domain, a data domain, and an application domain. It connects all kinds of smart devices and sensors, provides intelligent connection and data processing services nearby, and allows different types of applications and data to be processed in the MEC server, realizing key intelligent services such as real-time business, business intelligence, data aggregation and interoperability, and security and privacy protection, thereby effectively improving the efficiency of intelligent business decisions.
As shown in Fig. 2, this embodiment provides a data processing method applied to a mobile edge computing (MEC) server, including:
Step 201: matching a first two-dimensional (2D) image with a second 2D image to obtain a first object contained in both the first 2D image and at least two frames of the second 2D image;
Step 202: determining a third depth value of the first object by combining a first depth value corresponding to the first 2D image with a second depth value corresponding to the second 2D image;
Step 203: establishing a three-dimensional video of the first object by combining the first 2D image with the third depth value.
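Step 201 can be outlined as follows. This is only an illustrative sketch, not the claimed implementation: the claim matches image content, while here each frame's detected objects are reduced to hypothetical string labels, and `match_first_object` is an invented helper name.

```python
def match_first_object(first_frame_objects, second_frame_sequences):
    """Step 201, simplified: return the labels of objects that appear both in
    the first 2D image and in every one of the at least two second 2D images."""
    common = set(first_frame_objects)
    for frame_objects in second_frame_sequences:
        common &= set(frame_objects)  # keep only objects present in this frame too
    return common
```

An object surviving this intersection plays the role of the "first object" whose depth is then resolved in step 202.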
In some embodiments, the three-dimensional video data includes a two-dimensional image and a depth image. The two-dimensional image contains color pixels, whose pixel values are color values, for example red-green-blue (RGB) values or luminance/chrominance (YUV) values.
The depth image contains depth pixels, whose pixel values are depth values. Based on the two-dimensional image and the depth image of the three-dimensional video data, a three-dimensional image can be constructed in a three-dimensional image space. The two-dimensional image and the depth image are images acquired at the same moment.
In some embodiments, the picture sizes of the two-dimensional image and the depth image are consistent. For example, both the two-dimensional image and the depth image contain W*H pixels, where W is the number of pixels in a first direction, H is the number of pixels in a second direction, and W and H are positive integers.
In some embodiments, the two-dimensional image and the depth image may be images acquired at the same moment. To reduce the data volume, the picture sizes of the two-dimensional image and the depth image may satisfy a preset relationship. For example, the two-dimensional image contains W*H pixels while the depth image contains (W/a)*(H/b) pixels, so that one depth pixel corresponds to a*b color pixels. When the three-dimensional video is built, the pixel value of one depth pixel can be applied to the a*b adjacent color pixels. For example, if (W/a)*(H/b) equals (W/2)*(H/2), one depth pixel corresponds to 4 color pixels, and when the three-dimensional video is built, the pixel value of one depth pixel can be applied to the 4 adjacent color pixels; in this way, the image data volume of the depth image is reduced. Since the concave-convex profile within a small neighboring region of an object is usually substantially consistent, a depth image whose picture size is smaller than that of the two-dimensional image can still support a relatively high-precision reconstruction of the three-dimensional video, while reducing the data volume that the terminal and the MEC server need to exchange and/or that the MEC server needs to process.
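Applying one depth pixel to its a*b adjacent color pixels can be illustrated with the following sketch; `expand_depth` is a hypothetical helper, and plain nested lists stand in for whatever image representation an implementation would actually use.

```python
def expand_depth(depth, a, b):
    """Expand a (H/b) x (W/a) depth map to the H x W color grid by applying
    each depth pixel to the a*b adjacent color pixels it covers."""
    out = []
    for row in depth:
        expanded_row = [d for d in row for _ in range(a)]   # widen each row by factor a
        out.extend([list(expanded_row) for _ in range(b)])  # repeat each row b times
    return out
```

For a = b = 2, a single depth pixel covers the 2x2 block of color pixels described in the example above.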
In some embodiments, when a depth image whose picture size is smaller than that of the two-dimensional image is generated, at least one of the following manners may be used: acquiring the depth image directly at the picture size of the depth image; or acquiring an original depth image at the picture size of the two-dimensional image, and then generating the depth image at the picture size of the depth image from the pixel values of a*b adjacent pixels, for example from their mean or median.
In embodiments of the present invention, at each acquisition moment the terminal acquires a 2D image and a corresponding depth image. The color pixels of the 2D image and the depth pixel values of the depth image have a correspondence, so that the depth value corresponding to each color pixel can be determined from this correspondence.
In this embodiment, the first depth value is a depth value in the depth image corresponding to the first 2D image, and the second depth value is a depth value in the depth image corresponding to the second 2D image.
The first depth value and the second depth value each refer to multiple depth values, not just one. The third depth value is the depth value of the first object; it is likewise a general term for all depth values of the first object rather than any particular one. For example, the surface of the first object may have certain unevenness, so that different parts of the first object are at different distances from the terminal; the third depth value then characterizes these distance differences through multiple depth values.
In the current acquisition scene, at least one acquisition target may be in relative motion, which may interfere with the acquisition of the depth information of other acquisition targets. For example, the motion of an acquisition target may interfere with the projection or reflection of structured light onto some acquisition targets, or with the image acquisition based on structured light, so that the depth values of some objects in the depth image are missing or abnormal.
To at least partly solve the above problem, the first 2D image and the second 2D image are images acquired at different acquisition moments. In some embodiments, the second 2D image is an image acquired before the first 2D image. For example, the first 2D image is an image acquired at a first moment, and the second 2D image may be an image acquired at a second moment earlier than the first moment. In this way, the 2D images of the two acquisition moments can be matched in step 201. In some embodiments, the first moment and the second moment may be adjacent acquisition moments; in other embodiments, they may be two acquisition moments separated by a small finite interval, such as 1 or 2 acquisition moments. For example, with the two moments being the n-th and the (n+x)-th acquisition moments, the value of x may be a small value such as 1 or 2; if the two acquisition moments are far apart, matching of the identical parts in the 2D images acquired at the two moments may fail.
In a moving scene, based on the continuity of motion, the 2D images acquired at the first moment and the second moment contain imaging of the same object. Take as an example a moving user who holds a terminal and, while walking, captures the user and the landscape behind the user. The user's motion is continuous; as long as the motion speed is not too fast, the landscape the camera is aimed at during two adjacent acquisition moments cannot be entirely different, so at least part of the landscape overlaps in the two adjacent acquired images.
In some embodiments, the acquisition interval of the 2D image may be smaller than that of the depth image, or the 2D image may be acquired with better effect in a moving scene. Thus, in a moving scene, a clear 2D image may be acquired while the depth image is acquired inaccurately or possibly not acquired at all. In this embodiment, in order to obtain depth information, or to obtain more accurate depth information, the 2D images acquired at at least two moments can be matched. If the same object is matched, this same object can be called the first object in this embodiment; the distance of the first object relative to the camera can then be determined by combining the depth values in the depth images corresponding to the at least two frames of 2D images.
As shown in 3A to Fig. 3 C, camera and collected people keep moving synchronously, in this way, between camera and human body
Distance is to maintain constant, and size of the size of human body in the 2D image shown in Fig. 3 A to Fig. 3 C is also to remain unchanged.?
In the case where defaulting parallel motion, the background of substantial people behind is also to keep equidistant with terminal.But under moving scene,
Human motion can block background object, this wirelessly involving back wave and lifted according to structure light collection or based on transmitting
The acquisition of example depth value, then may result in some objects and appear in 2D image, since blocking for foreground object leads to its depth
Angle value missing is abnormal.So when the third depth of same object is determined in conjunction with the depth image of multiframe 3 d video images
Value, to construct 3 D video.
In step 203, with the introduction of the third depth value and in combination with the two-dimensional image, the stereoscopic imaging of the first object in three-dimensional space can be obtained by means such as 3D modeling; in this way, a three-dimensional image can be established from the 2D image and the depth values, presenting a three-dimensional image effect. The three-dimensional video is established as follows: based on the 2D images continuously acquired in the time domain and their corresponding depth values, three-dimensional images are continuously constructed, and the video effect is presented through the persistence-of-vision effect. In some embodiments, as shown in Fig. 4, the method includes:
Step 200: determining whether three-dimensional video data meets a depth value calibration condition;
and step 201 includes:
matching the first 2D image with the second 2D image if the three-dimensional video data meets the depth value calibration condition.
In some embodiments, the MEC server may first judge whether the three-dimensional video data meets the depth value calibration condition, and execute step 201 only if the condition is met; otherwise step 201 is not executed, reducing unnecessary operations such as matching and depth value extraction.
For example, step 200 may include: determining whether the first 2D image meets the depth value calibration condition, and/or determining whether the depth image acquired synchronously with the first 2D image meets the depth value calibration condition. Here, the depth image acquired synchronously with the first 2D image is the depth image corresponding to the first 2D image.
For example, step 200 may include: determining whether an abnormal depth value appears in the depth image corresponding to the first 2D image; if an abnormal depth value appears, the three-dimensional video data can be considered to meet the depth value calibration condition. The abnormal depth value may include a depth value clearly located in an abnormal value range. For example, the acquisition camera of the depth image has a maximum depth value acquisition range; if the depth image contains a depth value exceeding this maximum depth value, the depth image can be considered abnormal. As another example, if the depth image contains a negative depth value, the depth image can likewise be considered abnormal. In these cases, the depth values of one or more objects in the first 2D image need to be determined in combination with previously acquired depth images.
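A minimal sketch of this abnormal-value test follows, under the assumption that missing depth values are represented as `None` and that the camera's maximum acquisition range is known; the function name and data representation are illustrative only.

```python
def needs_calibration(depth_values, max_depth):
    """Depth value calibration condition, simplified: a frame needs calibration
    when any depth value is missing (None), negative, or exceeds the depth
    camera's maximum depth value acquisition range (max_depth)."""
    return any(d is None or d < 0 or d > max_depth for d in depth_values)
```

Only frames for which this test returns true would proceed to the matching of step 201, matching the gating described above.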
In some embodiments, step 200 may include:
determining that the three-dimensional video data meets the depth value calibration condition if the depth value of the first object is missing from the first depth value corresponding to the first 2D image.
For example, the first depth value of the first object being missing includes all depth values of the first object being missing from the first depth value corresponding to the first 2D image, and may also include part of the depth values of the first object being missing from that first depth value. The reason for the at least partial missing may be an acquisition failure or a loss during transmission. In order to obtain the first depth value of the first object, it may be necessary to obtain the depth value of the first object from depth images acquired at past acquisition moments.
In some embodiments, step 200 may include:
determining that the three-dimensional video data meets the depth value calibration condition if the first object in the first 2D image is occluded by a second object.
The occlusion here includes the first object in the first 2D image being partially occluded or completely occluded by another object (i.e., the second object).
In some embodiments, if the second object is a foreground object of the first object and the first object is a background object of the second object, the first object may be occluded by the second object in a moving scene. If the first object is occluded by the second object in the first 2D image, the occlusion may cause the acquisition of at least part or all of the first object's depth values in the depth image to fail, so that obtaining the depth value of the first object may require combination with previously acquired depth images.
In short, in this embodiment it is not necessary to calibrate the depth values corresponding to each 2D image frame by frame; only when the depth values need further calibration are the three-dimensional video data of two or more frames combined for calibration.
In some embodiments, step 202 may include:
comparing the first depth value with the second depth values of M frames of depth images respectively, wherein the M frames of depth images correspond to M frames of the second 2D image, and M is a positive integer not less than 2;
determining that the first depth value is the third depth value if the first depth value and N of the second depth values meet a preset similarity condition, wherein N is a positive integer not greater than M.
If the first depth value and the N second depth values do not meet the preset similarity condition, the first depth value evidently cannot be used as the third depth value, i.e., as the depth value of the first object.
If the first depth value and at least one second depth value meet the preset similarity condition, the depth value of the first object in the first depth value can be used directly as the third depth value; alternatively, it can be further determined whether the depth value of the first object in the first depth value can be used as the third depth value or used to generate the third depth value. For example, N is greater than M-N; in this way, for the first depth value to be used directly as the third depth value, N must be greater than M-N. Otherwise, the third depth value can be generated based on the M second depth values, or based on the first depth value and the M second depth values.
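The comparison described above can be sketched as follows. The tolerance `tol`, standing in for the unspecified preset similarity condition, and the fallback to the mean of the second depth values are both assumptions for illustration; the embodiment leaves the similarity condition and the derivation from the second depth values open.

```python
def calibrate_depth(first_depth, second_depths, tol=0.1):
    """Compare the first depth value with the second depth values of M frames.
    If it is similar (within tol) to N of them and N > M - N, keep it as the
    third depth value; otherwise derive the third depth value from the second
    depth values (here, as one possible choice, their mean)."""
    m = len(second_depths)
    n = sum(1 for d in second_depths if abs(d - first_depth) <= tol)
    if n > m - n:          # the N > M - N condition from the scheme above
        return first_depth
    return sum(second_depths) / m
```

With this rule, a first depth value confirmed by a majority of the previous frames survives unchanged, while an outlier is replaced by a value drawn from the history.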
In further embodiments, the combination the first depth value corresponding with the first 2D image, and with described
Corresponding second depth value of two 2D images, determines the third depth value of first object, comprising:
If first depth value and N number of second depth value are unsatisfactory for the default condition of similarity, according to N number of institute
It states the second depth value and determines the third depth value.
In this embodiment, the depth image corresponding to the first 2D image has captured the depth value of the first object. To improve the precision of that depth value, however, the first 2D image can be matched against at least two frames of second 2D images; if N frames of second 2D images contain the first object, the third depth value can be obtained from the first depth value and the depth images corresponding to those N frames of second 2D images. For example, the first 2D image is matched with 2 second 2D images, and all 3 images are found to contain object A. The depth value of object A is then extracted from each of the 3 corresponding depth images, and the 3 depth values are compared. If 2 of the depth values are identical and the remaining one differs, then, since the acquisition instants are closely spaced, the 2 identical depth values can be taken as the third depth value of object A.
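As a hypothetical illustration (not part of the patent), the majority-style check described above, in which the first depth value is accepted when N of the M second depth values agree with it (N > M-N) and the second depth values are used otherwise, could be sketched as:

```python
# Illustrative sketch only: the similarity tolerance, the mean fallback,
# and all names are assumptions, not the patent's implementation.
def select_third_depth(first_depth, second_depths, tol=0.01):
    """Return the third depth value for one object across M matched frames."""
    m = len(second_depths)
    # N = number of second depth values similar to the first depth value.
    n = sum(1 for d in second_depths if abs(d - first_depth) <= tol)
    if n > m - n:                 # majority of the M frames agree: N > M - N
        return first_depth        # use the first depth value directly
    # Otherwise derive the value from the second depth values (here: mean).
    return sum(second_depths) / m

# Object A: two of three matched frames agree with 1.20 m, one differs,
# so the agreed value is kept.
print(select_third_depth(1.20, [1.20, 1.20, 1.55]))  # -> 1.2
```

The tolerance `tol` stands in for the patent's unspecified "preset similarity condition"; any comparable agreement test could be substituted.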
In some embodiments, if the depth value of the first object corresponding to the first 2D image is missing, the third depth value can be determined from the depth image acquired at the acquisition instant preceding the first 2D image. In this case, the third depth value can be the depth value extracted directly from the second 2D image of the previous acquisition instant, or it can be calculated from the depth value of the first object in the depth image acquired at the previous instant together with the movement of the terminal device.
In other embodiments, if the depth value of the first object corresponding to the first 2D image is missing, the third depth value can be calculated from the second depth values of the first object in the depth images acquired at the two preceding acquisition instants. For example, the depth-change trend of the first object is determined from the second depth values of those two acquisition instants, and the third depth value is calculated from that trend and the time interval between adjacent acquisition instants.
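A minimal sketch of the extrapolation just described, assuming uniform acquisition intervals and a linear depth-change trend (the function and parameter names are illustrative, not from the patent):

```python
# Illustrative sketch only: assumes the depth-change trend over the two
# preceding instants continues linearly over the next interval.
def extrapolate_depth(d_prev2, d_prev1, dt_prev, dt_next):
    """Linearly extrapolate the missing third depth value.

    d_prev2, d_prev1 -- second depth values at the two preceding instants
    dt_prev          -- interval between those two instants (seconds)
    dt_next          -- interval to the current (missing) instant (seconds)
    """
    rate = (d_prev1 - d_prev2) / dt_prev   # depth change per second
    return d_prev1 + rate * dt_next        # project the trend forward

# Object receding from 1.00 m to 1.05 m across 1/30 s frames:
print(extrapolate_depth(1.00, 1.05, 1 / 30, 1 / 30))  # -> approximately 1.10
```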
In some embodiments, the method further includes: obtaining at least two depth images corresponding to the first 2D image, where the two depth images both contain the first depth value and are acquired with different coded light. To reduce at least some of the missing depth values in the depth image corresponding to the first 2D image, the terminal may acquire depth images with different coded light.
In one approach, the different coded light can be structured light of different shapes and/or textures projected by the depth camera onto the acquisition target. The structured light can be invisible light, so as not to interfere with the acquisition of 2D images based on visible-light imaging. After structured light of different shapes and/or textures is projected onto the acquisition target, the unevenness of the target's surface and its varying distance from the depth camera cause the depth images acquired from the structured light to differ. By comparing the shape and/or texture of the projected structured light with the shape and/or texture presented in the captured structured-light image, the terminal can determine the depth value of each structured-light projection point and construct a depth image from those depth values. Since at least two frames of depth images are acquired with different structured light, the terminal can combine the at least two frames of depth images to obtain at least two depth values for the same position on the acquisition target. In this way, the MEC server can obtain from the terminal at least two frames of depth images corresponding to the first 2D image, and by combining the depth values in the two frames of depth images can obtain a more accurate depth value, or reduce the occurrence of missing depth values for some objects of the first 2D image in one frame of the three-dimensional video data. In this embodiment, the different coded light can be coded light of different wavelengths; the at least two frames of depth images corresponding to the first 2D image can then be acquired simultaneously or at different times.
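As a hedged sketch of the combination described above, two depth images of the same scene acquired with different coded light might be merged so that missing values in one are filled from the other and positions valid in both are averaged (missing values encoded as 0; all names illustrative, not from the patent):

```python
# Illustrative sketch only: depth images as nested lists of metres,
# 0.0 marking a missing depth value.
def merge_depth_images(depth_a, depth_b):
    """Merge two depth images pixel by pixel."""
    merged = []
    for row_a, row_b in zip(depth_a, depth_b):
        row = []
        for a, b in zip(row_a, row_b):
            if a and b:
                row.append((a + b) / 2)   # both valid: average for accuracy
            else:
                row.append(a or b)        # one missing: take the other
        merged.append(row)
    return merged

a = [[1.0, 0.0], [2.0, 3.0]]   # depth image from coded light 1
b = [[1.2, 4.0], [0.0, 3.0]]   # depth image from coded light 2
print(merge_depth_images(a, b))
```

The averaging rule is one possible choice; the patent only requires that combining the two frames yields a more accurate value or fewer missing values.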
In other embodiments, the different coded light can be coded light of the same wavelength. In that case, the at least two frames of depth images corresponding to the first 2D image can be acquired at different moments, by time multiplexing, within the acquisition of the same frame of three-dimensional video data, thereby realizing time-division multiplexing of the depth images.
In another embodiment, coded light of different emissions is used. The coded light is reflected after striking the acquisition target, forming reflected light, and the depth value can be solved from the emission time of the emitted light, the reception time of the reflected light, and the propagation speed of light. Here, coded light of different shapes and/or textures can follow different propagation paths, so that where one coded light is blocked, another coded light may still, after emission, form reflected light that returns, allowing a depth value to be collected. Alternatively, both frames of coded light are reflected and reach the receiver of the depth camera, in which case two frames of depth images can be generated respectively. With coded light of different wavelengths, the two frames of depth images can be acquired synchronously; with coded light of the same wavelength, time-division multiplexing can be used.
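The time-of-flight relation mentioned above, where the depth follows from the emission and reception times of the coded light and the propagation speed of light, can be illustrated as follows (a sketch under stated assumptions, not the patent's implementation):

```python
# Illustrative sketch only: round-trip time times the speed of light
# gives twice the depth, so halve the product.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(t_emit, t_receive):
    """Depth (metres) from emission and reception timestamps (seconds)."""
    return (t_receive - t_emit) * C / 2

# A reflection received 10 ns after emission corresponds to about 1.5 m.
print(round(tof_depth(0.0, 10e-9), 3))  # -> 1.499
```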
As shown in FIG. 5, this embodiment provides a data processing apparatus applied in a mobile edge computing (MEC) server, including:

a matching module 501, configured to match a first two-dimensional (2D) image with a second 2D image, to obtain a first object contained in both the first 2D image and at least two frames of the second 2D image;

a first determining module 502, configured to determine a third depth value of the first object by combining a first depth value corresponding to the first 2D image and a second depth value corresponding to the second 2D image;

a modeling module 503, configured to establish a three-dimensional video of the first object by combining the first 2D image and the third depth value.
In some embodiments, the matching module 501, the first determining module 502, and the modeling module 503 can be program modules corresponding to computer-executable code; after the code is executed, the aforementioned transmission of pixel-coded data and three-dimensional video data can be realized.

In other embodiments, the matching module 501, the first determining module 502, and the modeling module 503 can also be combinations of hardware modules and program modules, for example a complex programmable logic device or a field-programmable gate array.

In still other embodiments, the matching module 501, the first determining module 502, and the modeling module 503 can correspond to hardware modules; for example, they can be application-specific integrated circuits.
In some embodiments, the apparatus further includes:

a second determining module, configured to determine whether three-dimensional video data satisfies a depth-value calibration condition, where the three-dimensional video data includes the first 2D image and the second 2D image;

the matching module 501 being specifically configured to match the first 2D image with the second 2D image if the three-dimensional video data satisfies the depth-value calibration condition.

In some embodiments, the second determining module is specifically configured to determine that the three-dimensional video satisfies the depth-value calibration condition if the depth value of the first object in the first depth value corresponding to the first 2D image is missing.

In some embodiments, the second determining module is further configured to determine that the three-dimensional video data satisfies the depth-value calibration condition if the first object in the first 2D image is occluded by a second object.
In some embodiments, the first determining module 502 includes:

a comparison submodule, configured to compare the first depth value with the second depth values of M frames of depth images respectively, where the M frames of depth images correspond to M frames of the second 2D image, and M is a positive integer not less than 2;

a determining submodule, configured to determine that the first depth value is the third depth value if the first depth value and N of the second depth values satisfy a preset similarity condition, where N is a positive integer not greater than M.

In some embodiments, N is greater than M-N.

In some embodiments, the determining submodule is further configured to determine the third depth value according to the N second depth values if the first depth value and the N second depth values do not satisfy the preset similarity condition.
This embodiment provides a computer storage medium storing computer instructions which, when executed by a processor, implement the steps of the data processing method applied in a terminal or an MEC server, for example one or more of the methods shown in FIG. 2 or FIG. 4.

As shown in FIG. 6, this embodiment provides an MEC server including a memory, a processor, and computer instructions stored in the memory and executable on the processor; when the processor executes the instructions, the steps of the data processing method applied in a terminal or an MEC server are implemented, for example one or more of the methods shown in FIG. 2 or FIG. 4.
In some embodiments, the MEC server further includes a communication interface, which can be used for information exchange with other devices. For example, if the device is a terminal, the communication interface can at least exchange information with an MEC server; if the device is an MEC server, the communication interface can at least exchange information with a terminal.
A specific example is provided below in conjunction with any of the above embodiments.

This example provides a depth-value information processing method based on spatial multiplexing coding, including:

confirmation of depth information (the depth information as described above), which can specifically include: combining the identical parts in different RGB information, and obtaining the final depth information based on the depth images of multiple frames of three-dimensional video data; the finally obtained depth information is then matched with the RGB information acquired at the corresponding moment. The identical part here can correspond to the aforementioned first object.
In some embodiments, depth information acquired under different coded lights is transmitted to the MEC server in a time-division-multiplexed manner, so that the MEC server obtains accurate depth information. By comparing the RGB information at different times, the position of the moving target object is determined, so that accurate depth information is obtained and an accurate model is built.

With this method, the requirement on the structured-light camera collecting the depth information is low, depth information of moving objects can be acquired, and accurate depth information is obtained, further improving the modeling accuracy based on depth information.
In the several embodiments provided by the present invention, it should be understood that the disclosed method and smart device may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined, or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods of the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
It should be understood that the technical solutions described in the embodiments of the present invention may be combined arbitrarily in the absence of conflict.
The above is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.
Claims (18)

1. A data processing method, applied in a mobile edge computing (MEC) server, comprising:

matching a first two-dimensional (2D) image with a second 2D image, to obtain a first object contained in both the first 2D image and at least two frames of the second 2D image;

determining a third depth value of the first object by combining a first depth value corresponding to the first 2D image and a second depth value corresponding to the second 2D image;

establishing a three-dimensional video of the first object by combining the first 2D image and the third depth value.
2. The method according to claim 1, wherein

the method comprises:

determining whether three-dimensional video data satisfies a depth-value calibration condition, wherein the three-dimensional video data comprises the first 2D image and the second 2D image;

and the matching a first two-dimensional (2D) image with a second 2D image comprises:

matching the first 2D image with the second 2D image if the three-dimensional video data satisfies the depth-value calibration condition.
3. The method according to claim 2, wherein

the determining whether three-dimensional video data satisfies a depth-value calibration condition comprises:

determining that the three-dimensional video satisfies the depth-value calibration condition if the depth value of the first object in the first depth value corresponding to the first 2D image is missing.

4. The method according to claim 2, wherein

the determining whether three-dimensional video data satisfies a depth-value calibration condition comprises:

determining that the three-dimensional video data satisfies the depth-value calibration condition if the first object in the first 2D image is occluded by a second object.
5. The method according to claim 1, wherein

the determining a third depth value of the first object by combining a first depth value corresponding to the first 2D image and a second depth value corresponding to the second 2D image comprises:

comparing the first depth value with the second depth values of M frames of depth images respectively, wherein the M frames of depth images correspond to M frames of the second 2D image, and M is a positive integer not less than 2;

determining that the first depth value is the third depth value if the first depth value and N of the second depth values satisfy a preset similarity condition, wherein N is a positive integer not greater than M.
6. The method according to claim 5, wherein N is greater than M-N.
7. The method according to claim 5, wherein

the determining a third depth value of the first object by combining a first depth value corresponding to the first 2D image and a second depth value corresponding to the second 2D image further comprises:

determining the third depth value according to the N second depth values if the first depth value and the N second depth values do not satisfy the preset similarity condition.
8. The method according to any one of claims 1 to 6, wherein

the method further comprises:

obtaining at least two depth images corresponding to the first 2D image, wherein the two depth images contain the first depth value, and the two depth images are acquired with different coded light.
9. A data processing apparatus, applied in a mobile edge computing (MEC) server, comprising:

a matching module, configured to match a first two-dimensional (2D) image with a second 2D image, to obtain a first object contained in both the first 2D image and at least two frames of the second 2D image;

a first determining module, configured to determine a third depth value of the first object by combining a first depth value corresponding to the first 2D image and a second depth value corresponding to the second 2D image;

a modeling module, configured to establish a three-dimensional video of the first object by combining the first 2D image and the third depth value.
10. The apparatus according to claim 9, wherein

the apparatus further comprises:

a second determining module, configured to determine whether three-dimensional video data satisfies a depth-value calibration condition, wherein the three-dimensional video data comprises the first 2D image and the second 2D image;

and the matching module is specifically configured to match the first 2D image with the second 2D image if the three-dimensional video data satisfies the depth-value calibration condition.
11. The apparatus according to claim 10, wherein

the second determining module is specifically configured to determine that the three-dimensional video satisfies the depth-value calibration condition if the depth value of the first object in the first depth value corresponding to the first 2D image is missing.

12. The apparatus according to claim 11, wherein

the second determining module is further configured to determine that the three-dimensional video data satisfies the depth-value calibration condition if the first object in the first 2D image is occluded by a second object.
13. The apparatus according to claim 11, wherein

the first determining module comprises:

a comparison submodule, configured to compare the first depth value with the second depth values of M frames of depth images respectively, wherein the M frames of depth images correspond to M frames of the second 2D image, and M is a positive integer not less than 2;

a determining submodule, configured to determine that the first depth value is the third depth value if the first depth value and N of the second depth values satisfy a preset similarity condition, wherein N is a positive integer not greater than M.
14. The apparatus according to claim 13, wherein N is greater than M-N.
15. The apparatus according to claim 14, wherein the determining submodule is further configured to determine the third depth value according to the N second depth values if the first depth value and the N second depth values do not satisfy the preset similarity condition.
16. The apparatus according to any one of claims 9 to 15, wherein

the apparatus further comprises:

an obtaining module, configured to obtain at least two depth images corresponding to the first 2D image, wherein the two depth images contain the first depth value, and the two depth images are acquired with different coded light.
17. A computer storage medium storing computer instructions, wherein the instructions, when executed by a processor, implement the steps of the data processing method according to any one of claims 1 to 8.
18. An MEC server, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the data processing method according to any one of claims 1 to 8.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811161434.4A (CN109389674B) | 2018-09-30 | 2018-09-30 | Data processing method and device, MEC server and storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN109389674A | 2019-02-26 |
| CN109389674B | 2021-08-13 |
Family

ID=65419292

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811161434.4A | CN109389674B (Active) | 2018-09-30 | 2018-09-30 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN109389674B (en) |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110336973A (granted as CN110336973B, 2021-04-13) | 2019-07-29 | 2019-10-15 | 联想(北京)有限公司 | Information processing method and device, electronic device and medium |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102385237A | 2010-09-08 | 2012-03-21 | 微软公司 | Depth camera based on structured light and stereo vision |
| US20160267710A1 | 2013-11-19 | 2016-09-15 | Huawei Technologies Co., Ltd. | Image Rendering Method and Apparatus |
| CN108090877A | 2017-11-29 | 2018-05-29 | 深圳慎始科技有限公司 | RGB-D camera depth image repair method based on image sequence |
| CN108495112A | 2018-05-10 | 2018-09-04 | Oppo广东移动通信有限公司 | Data transmission method and terminal, computer storage media |
| CN108564614A | 2018-04-03 | 2018-09-21 | Oppo广东移动通信有限公司 | Depth acquisition method and device, computer readable storage medium and computer equipment |
2018-09-30: Application CN201811161434.4A filed; granted as CN109389674B (Active).
Also Published As

| Publication number | Publication date |
|---|---|
| CN109389674B | 2021-08-13 |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |