CN103247027A - Image processing method and electronic terminal


Info

Publication number
CN103247027A
Authority
CN
China
Legal status
Granted
Application number
CN2012100317506A
Other languages
Chinese (zh)
Other versions
CN103247027B (en)
Inventor
张磊
王哲鹏
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210031750.6A priority Critical patent/CN103247027B/en
Publication of CN103247027A publication Critical patent/CN103247027A/en
Application granted granted Critical
Publication of CN103247027B publication Critical patent/CN103247027B/en
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an image processing method and an electronic terminal. The method comprises: obtaining the depth value of the foreground object region of a to-be-processed image; determining the translation amount of the foreground object region based on that depth value; determining the expansion amount of the foreground object region based on that depth value; and processing the foreground object region based on the translation amount and the expansion amount. The image processing method and electronic terminal solve the "Hole" problem that occurs in the prior-art technique of converting a two-dimensional image into a three-dimensional image.

Description

Image processing method and electronic terminal
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and an electronic terminal.
Background art
The 3D science-fiction blockbuster "Avatar", directed by James Cameron, ignited a 3D craze that swept the country. As a picture display technique, 3D technology gives television images an unprecedentedly vivid effect. With a 2D picture, the television viewer is merely a spectator; subconsciously, what the television displays makes the viewer at most an eyewitness or onlooker. With a 3D picture, however, a psychological displacement takes place: the stereoscopic picture presented on the television gives the viewer the direct sensation of being brought into the scene of the image. This realistic visual experience has enriched viewers' experience and made the demand for stereoscopic video content grow by the day.
To match stereoscopic devices, the market demands a large amount of stereoscopic video content, but video suitable for stereo display is still scarce, which hinders market development. At present, the quickest and most effective way to relieve the shortage of stereo content is the technique of converting 2D images into 3D images, which makes maximum use of the huge existing content libraries. Image synthesis, which produces the left-eye and right-eye images, is an important part of 2D-to-3D conversion. The pixel-shift method currently used brings a major problem, namely the "Hole" problem: because regions at different depths have different translation amounts, and the pixel values of occluded regions are unknown, "Holes" appear after the pixels are shifted, making the image discontinuous. A "Hole", that is, a cavity, arises because the pixel values of an occluded region of the image are unknown: after the graphic objects in the image are translated by different distances, the formerly occluded region shows up as an empty area.
Summary of the invention
The invention provides an image processing method and an electronic terminal for solving the "Hole" problem that occurs in the prior-art technique of converting 2D images into 3D images.
One aspect of the present invention provides an image processing method for processing a to-be-processed image, the method comprising: obtaining the depth value of the foreground object region of the to-be-processed image; determining the translation amount of the foreground object region based on the depth value of the foreground object region; determining the expansion amount of the foreground object region based on the depth value of the foreground object region; and processing the foreground object region based on the translation amount and the expansion amount.
Preferably, before the expansion amount of the foreground object region is determined, the method further comprises: obtaining the depth value of the background object region of the to-be-processed image; and determining the translation amount of the background object region based on the depth value of the background object region.
Preferably, determining the expansion amount of the foreground object region based on its depth value specifically comprises: determining the size of the to-be-processed region of the image based on the translation amount of the background object region and the translation amount of the foreground object region; and determining the expansion amount of the foreground object region based on the size of the to-be-processed region.
Preferably, determining the size of the to-be-processed region based on the two translation amounts is specifically: determining the size of the to-be-processed region as the absolute value of the difference between the translation amount of the background object region and the translation amount of the foreground object region.
Preferably, determining the translation amount of the foreground object region based on its depth value is specifically: S1 = D1*c/(2*(D1+b)), where S1 is the translation amount of the foreground object region, D1 is the depth value of the foreground object region, c is the distance between the user's two eyes, and b is the shortest distance from the user's eyes to the screen displaying the to-be-processed image.
Preferably, determining the translation amount of the foreground object region based on its depth value is specifically: S1 = K*D1, where S1 is the translation amount of the foreground object region, D1 is the depth value of the foreground object region, and K is a scale factor.
Preferably, determining the expansion amount of the foreground object region based on its depth value is specifically: determining the expansion amount of the foreground object region based on a mapping table of depth values to expansion ratios and on the depth value of the foreground object region.
Preferably, after the foreground object region is processed based on the translation amount and the expansion amount, the method further comprises: judging whether a to-be-processed region exists between the processed foreground object region and the background object region of the image; and if so, performing interpolation processing on the to-be-processed region.
Another aspect of the present invention provides an electronic terminal comprising: a display for showing the to-be-processed image; a mainboard electrically connected to the display; and a processing unit arranged on the mainboard for obtaining the depth value of the foreground object region of the to-be-processed image, determining the translation amount of the foreground object region based on that depth value, determining the expansion amount of the foreground object region based on that depth value, and processing the foreground object region based on the translation amount and the expansion amount.
Preferably, the processing unit further comprises a first processing chip for, before the expansion amount of the foreground object region is determined, obtaining the depth value of the background object region of the to-be-processed image and determining the translation amount of the background object region based on that depth value.
Preferably, the processing unit further comprises a second processing chip for determining the expansion amount of the foreground object region based on its depth value, the second processing chip specifically comprising:
a first confirmation unit for determining the size of the to-be-processed region of the image based on the translation amount of the background object region and the translation amount of the foreground object region; and
a second confirmation unit for determining the expansion amount of the foreground object region based on the size of the to-be-processed region.
Preferably, the processing unit further comprises a third processing chip for determining the size of the to-be-processed region of the image based on the two translation amounts, the third processing chip specifically comprising:
a first calculation unit for determining the size of the to-be-processed region as the absolute value of the difference between the translation amount of the background object region and the translation amount of the foreground object region.
Preferably, the processing unit further comprises a fourth processing chip for determining the expansion amount of the foreground object region based on its depth value, the fourth processing chip specifically comprising: a third confirmation unit for determining the expansion amount of the foreground object region based on a mapping table of depth values to expansion ratios and on the depth value of the foreground object region.
Preferably, the processing unit further comprises a fifth processing chip used after the foreground object region has been processed based on the translation amount and the expansion amount, the fifth processing chip comprising:
a judging unit for judging whether a to-be-processed region exists between the processed foreground object region and the background object region of the image; and
a processing subunit that, if the result of the judging unit is yes, performs interpolation processing on the to-be-processed region.
The beneficial effects of the present invention are as follows:
In one embodiment of the invention, the depth value of the foreground object region of the to-be-processed image is obtained; the translation amount of the foreground object region is determined based on that depth value; the expansion amount of the foreground object region is determined based on that depth value; and the foreground object region is processed based on the translation amount and the expansion amount. By scaling the foreground object region, the invention makes the foreground object region occlude the "Hole" region, thereby effectively solving the "Hole" problem.
In addition, in one embodiment of the invention, the depth values of the foreground object region and the background object region are obtained, and the translation amounts of the two regions are determined from those depth values; the size of the to-be-processed region of the image is then determined from the two translation amounts, the expansion amount of the foreground object region is determined from that size, and the to-be-processed image is processed using the expansion amount. Because the expansion amount is determined from the actual size of the to-be-processed region, the "Hole" region can be covered exactly, improving reliability.
Further, in one embodiment of the invention, the expansion amount is determined from a mapping table of depth values to expansion ratios and the depth value of the foreground object region, so the expansion amount can be obtained quickly, improving image processing efficiency.
Brief description of the drawings
Fig. 1 shows the to-be-processed image in one embodiment of the invention;
Fig. 2 is a flow chart of the image processing method in one embodiment of the invention;
Fig. 3 illustrates the display dimensions for the to-be-processed image in one embodiment of the invention;
Fig. 4 shows the image after translation in one embodiment of the invention;
Fig. 5 shows the image after scaling in one embodiment of the invention;
Fig. 6 is the mapping table of depth values to expansion ratios in one embodiment of the invention;
Fig. 7 shows the image after translation in another embodiment of the invention;
Fig. 8 shows the image after scaling in another embodiment of the invention;
Fig. 9 is an architecture diagram of the electronic terminal in one embodiment of the invention;
Fig. 10 is an architecture diagram of the processing unit in Fig. 9.
Detailed description of the embodiments
One embodiment of the invention provides an image processing method applied to a to-be-processed image. The to-be-processed image is a 2D image that serves as the source material for 2D-to-3D image conversion.
To give those skilled in the art a more detailed understanding of the present invention, it is described below with reference to the accompanying drawings.
As shown in Fig. 1, the to-be-processed image 10 comprises a foreground object region 101 and a background object region 102 that have an occlusion relationship. To realize 2D-to-3D conversion, the left-eye and right-eye images are synthesized, currently by the pixel-shift method. Because the foreground object region 101 and the background object region 102 have different depth values, their translation amounts also differ; after the pixels are shifted, the pixel values of the occluded area are unknown, and a "Hole" region 403 as shown in Fig. 4 appears.
To solve the "Hole" problem occurring in 2D-to-3D image conversion, one embodiment provides the image processing method whose flow chart is shown in Fig. 2; the method comprises:
Step 210: obtain the depth value of the foreground object region of the to-be-processed image;
Step 212: determine the translation amount of the foreground object region based on that depth value;
Step 214: determine the expansion amount of the foreground object region based on that depth value;
Step 216: process the foreground object region based on the translation amount and the expansion amount.
Wherein, in step 210, the depth value of the foreground object region of the to-be-processed image is obtained, specifically by the SfM method (structure from motion), the DfC method (depth from cues) or an MLA method (machine learning algorithm). The SfM method estimates the depth values of an image from physical relations such as the movement of an object within the image, the camera motion, and the object's motion in three-dimensional space. The SfM, DfC and MLA methods are familiar to those skilled in the art and are not repeated here; of course, in other embodiments the depth value may be obtained by other means, and the present invention is not limited in this respect. For convenience of description, the depth value of the foreground object region is denoted D1. In the present embodiment, D1 is an absolute depth value in metres (m); for example, in the to-be-processed image 10 shown in Fig. 1, the SfM method gives a depth value D1 of 3 m for the foreground object region 101. In another embodiment, D1 may be a relative depth value, represented by an 8-bit grayscale map with range 0-255, which can be converted into an absolute depth value with unified units. For convenience of description, the following embodiments all use absolute depth values.
In step 212, the translation amount of the foreground object region is determined from its depth value. Specifically, based on the depth value D1 of the foreground object region, the formula S1 = D1*c/(2*(D1+b)) gives the translation amount S1 of the foreground object region, where c is the distance between the user's two eyes and b is the shortest distance from the user's eyes to the screen displaying the to-be-processed image; in the present embodiment S1 is in metres (m). For example, as shown in Fig. 3, the foreground object region 101 and the background object region 102 are displayed on the screen 30; with c = 0.05 m, b = 5 m and D1 = 3 m, substituting into S1 = D1*c/(2*(D1+b)) gives a translation amount S1 of approximately 0.0094 m for the foreground object region 101. In another embodiment, a relative translation amount S1' expressed in pixels may be used instead, with S1' = S1/p, where p is the horizontal size of one display pixel. For convenience of description, the following embodiments all use absolute translation amounts.
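As a hedged illustration (not part of the patent text), the translation-amount formula can be sketched in Python. The function name and default arguments are assumptions; the worked values c = 0.05 m, b = 5 m and D1 = 3 m come from the embodiment:

```python
def translation_amount(depth_m, eye_distance_m=0.05, screen_distance_m=5.0):
    """Translation amount S = D*c/(2*(D+b)); all quantities in metres."""
    return depth_m * eye_distance_m / (2 * (depth_m + screen_distance_m))

# Foreground object region of the embodiment: D1 = 3 m
print(round(translation_amount(3.0), 6))  # 0.009375
```

The same function applies to any region; only the depth value changes.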
Before step 214, the method further comprises obtaining the depth value of the background object region of the to-be-processed image and determining the translation amount of the background object region from that depth value. The depth value of the background object region, denoted D2 for convenience of description, is obtained with the SfM, DfC or MLA method described above; these three methods are familiar to those skilled in the art and are not repeated here, and of course in other embodiments the depth value may be obtained by other means, the present invention not being limited in this respect. In the present embodiment, D2 is an absolute depth value in metres (m); for example, in the to-be-processed image 10 shown in Fig. 1, the SfM method gives a depth value D2 of 5 m for the background object region 102. In another embodiment, D2 may be a relative depth value, represented by an 8-bit grayscale map with range 0-255, which can be converted into an absolute depth value with unified units; the following embodiments all use absolute depth values for convenience.
The translation amount of the background object region is determined from its depth value D2 by the formula S2 = D2*c/(2*(D2+b)), where, as before, c is the distance between the user's two eyes and b is the shortest distance from the user's eyes to the screen displaying the image; in the present embodiment S2 is an absolute translation amount in metres (m). For example, as shown in Fig. 3, with c = 0.05 m, b = 5 m and D2 = 5 m, substituting into S2 = D2*c/(2*(D2+b)) gives a translation amount S2 of 0.0125 m for the background object region 102. In another embodiment, a relative translation amount S2' = S2/p expressed in pixels may be used, where p is the horizontal size of one display pixel; the following embodiments all use absolute translation amounts for convenience.
In step 214, the expansion amount of the foreground object region is determined from its depth value. Specifically, the size of the to-be-processed region of the image is determined from the translation amount of the background object region and the translation amount of the foreground object region, and the expansion amount is determined from that size. The to-be-processed region, i.e. the "Hole" region, has size |S1-S2|, the absolute value of the difference between the two translation amounts, and the expansion amount of the foreground object region is Z = 2*|S1-S2|. For example, when the 2D image is converted into the left-eye image of the 3D image, the foreground object region 101 and the background object region 102 of Fig. 1 are translated to the left by the previously calculated S1 and S2 respectively, yielding the foreground object region 401 and background object region 402 shown in Fig. 4. Because S1 and S2 differ, i.e. the two regions are translated by different amounts, a "Hole" region 403 appears; by Hole = |S1-S2|, the size of the "Hole" region 403 is approximately 0.0031 m. In Fig. 4, by Z = 2*|S1-S2|, the expansion amount of the foreground object region 401 is approximately 0.0063 m, i.e. the foreground object region 401 is widened by about 0.0031 m on each of its left and right sides.
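The hole size and expansion amount of this step can be sketched as follows. This is an illustrative, non-authoritative snippet: the variable names are assumptions, while the formulas Hole = |S1-S2| and Z = 2*|S1-S2| and the inputs c = 0.05 m, b = 5 m, D1 = 3 m, D2 = 5 m are from the text:

```python
def shift(depth_m, c=0.05, b=5.0):
    """Translation amount S = D*c/(2*(D+b)) in metres."""
    return depth_m * c / (2 * (depth_m + b))

s1, s2 = shift(3.0), shift(5.0)      # foreground and background translation amounts
hole = abs(s1 - s2)                  # size of the "Hole" region, |S1 - S2|
z = 2 * hole                         # expansion amount Z = 2*|S1 - S2|
print(round(hole, 6), round(z, 6))   # 0.003125 0.00625
```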
In step 216, the foreground object region is processed based on the translation amount and the expansion amount. Human visual perception shows that the same object appears to differ in size at different distances: a near object looks larger than a distant one. Exploiting this principle, the foreground object region is enlarged so that it automatically occludes the "Hole" region, thereby solving the "Hole"-filling problem. For example, the foreground object region 401 in Fig. 4 is enlarged according to the expansion amount of approximately 0.0063 m. Fig. 5 shows the processed image, comprising the enlarged foreground object region 501, the background object region 502 and the "Hole" region 503; after the enlargement, the foreground object region 501 occludes the "Hole" region 503.
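On a single scanline, the shift-then-widen idea of step 216 might look like the following sketch. It is purely illustrative: the 1-D simplification, the names and the pixel quantities are assumptions, not from the patent:

```python
import numpy as np

def shift_and_expand(fg_start, fg_len, shift_px, expand_px, width):
    """Return a boolean scanline mask of the foreground region after it is
    shifted left by shift_px and widened by expand_px (half on each side)."""
    mask = np.zeros(width, dtype=bool)
    start = fg_start - shift_px - expand_px // 2
    stop = fg_start - shift_px + fg_len + expand_px - expand_px // 2
    mask[max(start, 0):min(stop, width)] = True
    return mask

# A 20-pixel foreground run shifted 3 px and widened by 4 px covers 24 px,
# so the 20-pixel run now overlaps the exposed hole on both sides.
m = shift_and_expand(fg_start=40, fg_len=20, shift_px=3, expand_px=4, width=100)
print(int(m.sum()))  # 24
```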
Further, the processing also includes anti-aliasing of the region edges to reduce the sense of discontinuity between regions and make the image look natural.
In another embodiment, in step 212 the translation amount of the foreground object region is determined from its depth value by the formula S1 = K*D1, where K is a scale factor. A large number of sample experiments with the formula S1 = D1*c/(2*(D1+b)) show that the translation amount of the foreground object region is approximately proportional to its depth value, which is expressed as S1 = K*D1. For example, in the to-be-processed image 70 shown in Fig. 7, the foreground object region 701 and the background object region 702 are displayed on the screen such that the viewing effect places them in front of the screen; in this case K ranges over 0.05-0.1 in the formula S1 = K*D1. With the depth value D1 of the foreground object region 701 known to be 3 m and K taken as 0.05, substituting into S1 = K*D1 gives a translation amount S1 of 0.15 m for the foreground object region 701.
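The proportional variant reduces to a single multiplication; a minimal sketch, using the embodiment's K = 0.05 and D1 = 3 m (the function name is an assumption):

```python
def translation_linear(depth_m, k=0.05):
    """Approximate translation amount S1 = K*D1, K an empirical scale factor."""
    return k * depth_m

print(round(translation_linear(3.0), 6))  # 0.15
```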
In step 214, the expansion amount of the foreground object region is determined from its depth value. Specifically, the expansion amount is determined from a mapping table of depth values to expansion ratios and from the depth value of the foreground object region. For example, in the to-be-processed image shown in Fig. 1, the depth value D1 of the foreground object region 101 is 3 m; consulting the mapping table of depth values and expansion ratios in Fig. 6 shows that the expansion ratio corresponding to D1 = 3 m is 0.1. The expansion amount is the product of the expansion ratio and the size of the foreground object region 101; assuming the foreground object region 101 of Fig. 1 has a size of one unit, the corresponding expansion amount is 0.1 m.
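A sketch of the table lookup follows. Only the pair D1 = 3 m to expansion ratio 0.1 is stated in the text (Fig. 6); the other table entries below are invented placeholders for illustration:

```python
# Hypothetical depth (m) -> expansion ratio table; only the 3.0 entry is from Fig. 6.
EXPANSION_RATIO = {1.0: 0.3, 2.0: 0.2, 3.0: 0.1, 4.0: 0.05}

def expansion_amount(depth_m, region_size):
    """Expansion amount = looked-up expansion ratio x foreground region size."""
    return EXPANSION_RATIO[depth_m] * region_size

print(expansion_amount(3.0, 1.0))  # 0.1
```

The lookup avoids recomputing the parallax formulas, which is the efficiency gain the embodiment claims.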
In step 216, the foreground object region is processed based on the translation amount and the expansion amount. Specifically, the to-be-processed region of the image is determined from the translation amount, and the foreground object region is stretched according to the expansion amount. Human visual perception shows that the same object appears to differ in size at different distances: a near object looks larger than a distant one. Exploiting this principle, the foreground object region is enlarged so that it automatically occludes the "Hole" region, thereby solving the "Hole"-filling problem. For example, when the 2D image is converted into the right-eye image of the 3D image, the foreground object region 101 of Fig. 1 is translated to the right by its translation amount S1, yielding the foreground object region 701 shown in Fig. 7; a "Hole" region 703 appears after the translation. The foreground object region 701 in Fig. 7 is then enlarged in equal proportion according to the expansion ratio of 0.1. Fig. 8 shows the processed image, comprising the foreground object region 801, the background object region 802 and the "Hole" region 803; after the enlargement, the foreground object region 801 occludes the "Hole" region 803.
After step 216, it is further judged whether a "Hole" region still exists between the processed foreground object region and the background object region of the to-be-processed image; if so, interpolation processing is performed on the "Hole" region, obtaining its pixels by interpolating from neighbouring pixels. Because the "Hole" region remaining after step 216 is very small, or absent altogether, very few pixels need interpolation, so the effect is better than prior-art methods that predict all the missing pixels.
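The residual clean-up can be sketched with neighbour interpolation on one scanline. The patent only says the pixels are obtained by interpolating from neighbouring pixels, so the linear scheme, the sentinel value and the names below are assumptions:

```python
import numpy as np

def fill_holes(row, hole_value=-1):
    """Fill hole pixels by linearly interpolating between valid neighbours."""
    row = row.astype(float)            # work on a float copy
    valid = row != hole_value          # pixels with known values
    idx = np.arange(row.size)
    row[~valid] = np.interp(idx[~valid], idx[valid], row[valid])
    return row

print(fill_holes(np.array([10, -1, -1, 40])))  # [10. 20. 30. 40.]
```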
Further, after step 216, the processing also includes anti-aliasing of the region edges to reduce the sense of discontinuity between regions and make the image look natural.
One embodiment of the invention also provides an electronic terminal; as shown in Fig. 9, the electronic terminal comprises:
a display 80 for showing the to-be-processed image; a mainboard 90 electrically connected to the display; and a processing unit 901 arranged on the mainboard 90 for obtaining the depth value of the foreground object region of the to-be-processed image, determining the translation amount of the foreground object region based on that depth value, determining the expansion amount of the foreground object region based on that depth value, and processing the foreground object region based on the translation amount and the expansion amount.
As shown in Fig. 10, the processing unit 901 further comprises a first processing chip 9011 for, before the expansion amount of the foreground object region is determined, obtaining the depth value of the background object region of the to-be-processed image and determining the translation amount of the background object region based on that depth value.
The processing unit 901 further comprises a second processing chip 9012 for determining the expansion amount of the foreground object region based on its depth value, the second processing chip 9012 specifically comprising:
a first confirmation unit for determining the size of the to-be-processed region of the image based on the translation amount of the background object region and the translation amount of the foreground object region; and
a second confirmation unit for determining the expansion amount of the foreground object region based on the size of the to-be-processed region.
The processing unit 901 further comprises a third processing chip 9013 for determining the size of the to-be-processed region of the image based on the two translation amounts, the third processing chip 9013 specifically comprising:
a first calculation unit for determining the size of the to-be-processed region as the absolute value of the difference between the translation amount of the background object region and the translation amount of the foreground object region.
The processing unit 901 further comprises a fourth processing chip 9014 for determining the expansion amount of the foreground object region based on its depth value, the fourth processing chip 9014 specifically comprising: a third confirmation unit for determining the expansion amount of the foreground object region based on a mapping table of depth values to expansion ratios and on the depth value of the foreground object region.
The processing unit 901 further comprises a fifth processing chip 9015 used after the foreground object region has been processed based on the translation amount and the expansion amount, the fifth processing chip 9015 comprising:
a judging unit for judging whether a to-be-processed region exists between the processed foreground object region and the background object region of the image; and
a processing subunit that, if the result of the judging unit is yes, performs interpolation processing on the to-be-processed region.
Having read the operating process of the image processing method according to the embodiments of the invention described above, the implementation of each unit of Fig. 9 and of the electronic terminal shown in Fig. 10 becomes perfectly clear; for conciseness of the specification, it is not described in detail here.
Through the embodiments of the invention described above, at least the following technical effects can be achieved:
In one embodiment of the invention, the depth value of the foreground object region of the to-be-processed image is obtained; the translation amount of the foreground object region is determined based on that depth value; the expansion amount of the foreground object region is likewise determined based on that depth value; and the foreground object region is then processed based on the translation amount and the expansion amount. The invention thus applies a scale-stretching process to the foreground object region so that it occludes the "Hole" region, thereby effectively solving the "Hole" problem.
In addition, in one embodiment of the invention, the depth values of the foreground object region and of the background object region are obtained, and the translation amounts of the two regions are determined from those depth values; the size of the to-be-processed region of the to-be-processed image is then determined based on the translation amounts of the foreground object region and of the background object region; the expansion amount of the foreground object region is determined based on the size of the to-be-processed region; and the to-be-processed image is processed using that expansion amount. Because the expansion amount of the foreground object region is derived from the size of the to-be-processed region, the invention can cover the "Hole" region exactly, improving reliability.
Further, in one embodiment of the invention, the expansion amount of the foreground object region is determined based on a mapping table between depth values and expansion ratios and on the depth value of the foreground object region. The expansion amount can therefore be obtained quickly, improving image-processing efficiency.
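Numerically, the quantities above fit together as follows: the translation amount of each region follows the formula of claim 5, S1 = D1*c/(2*(D1+b)), and the size of the to-be-processed ("Hole") region is the absolute difference of the two translation amounts, per claim 4. The sketch below uses illustrative values only; the interocular distance, viewing distance, and depth values are not from the patent, and the convention that the foreground has the larger depth value is an assumption.

```python
def translation(depth, eye_distance, screen_distance):
    # S = D*c / (2*(D + b)), the translation formula of claim 5:
    # c = distance between the user's eyes, b = shortest eye-to-screen distance.
    return depth * eye_distance / (2 * (depth + screen_distance))

c, b = 6.5, 50.0                 # illustrative values (centimetres)
s_fg = translation(100.0, c, b)  # foreground region (assumed larger depth)
s_bg = translation(20.0, c, b)   # background region (assumed smaller depth)
hole_width = abs(s_bg - s_fg)    # claim 4: size of the to-be-processed region
print(round(s_fg, 3), round(s_bg, 3), round(hole_width, 3))  # → 2.167 0.929 1.238
```

The foreground region is then stretched by `hole_width` pixels so that, after translation, it covers the gap that would otherwise open up between the two regions.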
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to encompass them as well.

Claims (14)

1. An image processing method for processing a to-be-processed image, characterized in that the method comprises:
obtaining the depth value of a foreground object region of the to-be-processed image;
determining a translation amount of the foreground object region based on the depth value of the foreground object region;
determining an expansion amount of the foreground object region based on the depth value of the foreground object region; and
processing the foreground object region based on the translation amount and the expansion amount.
2. The method of claim 1, characterized in that, before determining the expansion amount of the foreground object region, the method further comprises:
obtaining the depth value of a background object region of the to-be-processed image; and
determining a translation amount of the background object region based on the depth value of the background object region.
3. The method of claim 2, characterized in that determining the expansion amount of the foreground object region based on the depth value of the foreground object region specifically comprises:
determining the size of a to-be-processed region of the to-be-processed image based on the translation amount of the background object region and the translation amount of the foreground object region; and
determining the expansion amount of the foreground object region based on the size of the to-be-processed region.
4. The method of claim 3, characterized in that determining the size of the to-be-processed region of the to-be-processed image based on the translation amount of the background object region and the translation amount of the foreground object region specifically comprises:
determining the size of the to-be-processed region as the absolute value of the difference between the translation amount of the background object region and the translation amount of the foreground object region.
5. The method of claim 1, characterized in that the translation amount of the foreground object region is determined from the depth value of the foreground object region as:
S1=D1*c/(2*(D1+b));
wherein S1 denotes the translation amount of the foreground object region;
D1 denotes the depth value of the foreground object region;
c denotes the distance between the user's two eyes; and
b denotes the shortest distance from the user's two eyes to the screen displaying the to-be-processed image.
6. The method of claim 1, characterized in that the translation amount of the foreground object region is determined from the depth value of the foreground object region as:
S1=K*D1;
wherein S1 denotes the translation amount of the foreground object region;
D1 denotes the depth value of the foreground object region; and
K denotes a scale factor.
7. the method for claim 1, it is characterized in that, described depth value based on described foreground object zone, determine the stroke in described foreground object zone, be specially: based on the mapping table of depth value and expansion and contraction, reach the depth value based on described foreground object zone, determine the stroke in described foreground object zone.
8. The method of claim 7, characterized in that, after the foreground object region has been processed based on the translation amount and the expansion amount, the method further comprises:
judging whether a to-be-processed region exists between the processed foreground object region and the background object region of the to-be-processed image; and
if so, performing interpolation on the to-be-processed region.
9. An electronic terminal, characterized in that the electronic terminal comprises:
a display for displaying a to-be-processed image;
a mainboard electrically connected with the display; and
a processing unit, arranged on the mainboard, configured to obtain the depth value of a foreground object region of the to-be-processed image, determine a translation amount of the foreground object region based on the depth value of the foreground object region, determine an expansion amount of the foreground object region based on the depth value of the foreground object region, and process the foreground object region based on the translation amount and the expansion amount.
10. The electronic terminal of claim 9, characterized in that the processing unit further comprises a first process chip configured, before the expansion amount of the foreground object region is determined, to obtain the depth value of a background object region of the to-be-processed image and to determine a translation amount of the background object region based on the depth value of the background object region.
11. The electronic terminal of claim 10, characterized in that the processing unit further comprises a second process chip configured to determine the expansion amount of the foreground object region based on the depth value of the foreground object region, the second process chip specifically comprising:
a first determination unit, configured to determine the size of the to-be-processed region of the to-be-processed image based on the translation amount of the background object region and the translation amount of the foreground object region; and
a second determination unit, configured to determine the expansion amount of the foreground object region based on the size of the to-be-processed region.
12. The electronic terminal of claim 11, characterized in that the processing unit further comprises a third process chip configured to determine the size of the to-be-processed region of the to-be-processed image based on the translation amount of the background object region and the translation amount of the foreground object region, the third process chip specifically comprising:
a first computing unit, configured to determine the size of the to-be-processed region as the absolute value of the difference between the translation amount of the background object region and the translation amount of the foreground object region.
13. The electronic terminal of claim 9, characterized in that the processing unit further comprises a fourth process chip configured to determine the expansion amount of the foreground object region based on the depth value of the foreground object region, the fourth process chip specifically comprising: a third determination unit, configured to determine the expansion amount of the foreground object region based on a mapping table between depth values and expansion ratios and on the depth value of the foreground object region.
14. The electronic terminal of claim 13, characterized in that the processing unit further comprises a fifth process chip used after the foreground object region has been processed based on the translation amount and the expansion amount, the fifth process chip comprising:
a judging unit, configured to judge whether a to-be-processed region exists between the processed foreground object region and the background object region of the to-be-processed image; and
a processing unit, configured to perform interpolation on the to-be-processed region if the judgment result of the judging unit is affirmative.
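As an illustration of the claimed steps on a single 1-D scanline — translate the foreground span, stretch it by the expansion amount so it covers the hole, and leave any residual gap for interpolation — the following sketch may help. It is not the patent's implementation: the nearest-neighbour resampling, the pixel values, and the convention that 0 denotes already-shifted background are all assumptions.

```python
def process_scanline(row, fg_start, fg_end, shift, expand):
    """Translate and stretch the foreground span of a 1-D scanline.

    row              -- list of pixel values (background assumed already shifted)
    fg_start, fg_end -- slice bounds of the foreground object region
    shift            -- translation amount of the foreground region, in pixels
    expand           -- expansion amount: extra pixels the region is stretched by
    """
    fg = row[fg_start:fg_end]
    new_len = len(fg) + expand
    # Nearest-neighbour resampling stretches the foreground span to new_len.
    stretched = [fg[i * len(fg) // new_len] for i in range(new_len)]
    out = list(row)
    # Vacate the old span; a real renderer would fill it with shifted background.
    out[fg_start:fg_end] = [0] * (fg_end - fg_start)
    start = fg_start + shift
    out[start:start + new_len] = stretched
    return out

row = [0] * 10
row[3:6] = [7, 8, 9]  # foreground object region occupies [3, 6)
print(process_scanline(row, 3, 6, 2, 1))  # → [0, 0, 0, 0, 0, 7, 7, 8, 9, 0]
```

With `expand` chosen as the absolute difference of the two translation amounts (claim 4), the stretched span covers the hole exactly; any remaining gap between the stretched foreground and the background would be filled by the interpolation step of claims 8 and 14.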
CN201210031750.6A 2012-02-13 2012-02-13 Image processing method and electronic terminal Active CN103247027B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210031750.6A CN103247027B (en) Image processing method and electronic terminal


Publications (2)

Publication Number Publication Date
CN103247027A true CN103247027A (en) 2013-08-14
CN103247027B CN103247027B (en) 2016-03-30

Family

ID=48926532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210031750.6A Active CN103247027B (en) Image processing method and electronic terminal

Country Status (1)

Country Link
CN (1) CN103247027B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282492A (en) * 2008-05-23 2008-10-08 清华大学 Method for regulating display depth of three-dimensional image
US20090103616A1 (en) * 2007-10-19 2009-04-23 Gwangju Institute Of Science And Technology Method and device for generating depth image using reference image, method for encoding/decoding depth image, encoder or decoder for the same, and recording medium recording image generated using the method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
薛玖飞: "Research on reconstruction display algorithms in natural 3D television systems", China Dissertations Full-text Database, 3 August 2011 (2011-08-03), pages 1-79 *
马士超: "2D-to-3D conversion of digital film content", Modern Film Technology (《现代电影技术》), 30 September 2011 (2011-09-30), pages 3-11 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104980796A (en) * 2014-04-10 2015-10-14 Tcl集团股份有限公司 Video playing method and device for a television system
CN104980796B (en) * 2014-04-10 2018-05-08 Tcl集团股份有限公司 Video playing method and device for a television system
CN104983511A (en) * 2015-05-18 2015-10-21 上海交通大学 Voice-assisted smart glasses system for totally blind visually impaired users

Also Published As

Publication number Publication date
CN103247027B (en) 2016-03-30

Similar Documents

Publication Publication Date Title
CN103900583B (en) For positioning the apparatus and method with map structuring immediately
CN101909219B (en) Stereoscopic display method, tracking type stereoscopic display
TWI488470B (en) Dimensional image processing device and stereo image processing method
CN103581650B (en) Binocular 3D video turns the method for many orders 3D video
CN101984670A (en) Stereoscopic displaying method, tracking stereoscopic display and image processing device
CN105516579B (en) A kind of image processing method, device and electronic equipment
CN108833877B (en) Image processing method and device, computer device and readable storage medium
CN101742348A (en) Rendering method and system
CN108076208B (en) Display processing method and device and terminal
CN110740309B (en) Image display method and device, electronic equipment and storage medium
CN102692806A (en) Methods for acquiring and forming free viewpoint four-dimensional space video sequence
US9007404B2 (en) Tilt-based look around effect image enhancement method
KR20080000149A (en) Method and apparatus for processing multi-view images
CN104216533B (en) A kind of wear-type virtual reality display based on DirectX9
Suenaga et al. A practical implementation of free viewpoint video system for soccer games
Knorr et al. An image-based rendering (ibr) approach for realistic stereo view synthesis of tv broadcast based on structure from motion
CN115965672A (en) Three-dimensional object display method, device, equipment and medium
CN103247027B (en) Image processing method and electronic terminal
Knorr et al. From 2D-to stereo-to multi-view video
Xie et al. Depth-tunable three-dimensional display with interactive light field control
Ramachandran et al. Multiview synthesis from stereo views
CN101566784B (en) Method for establishing depth of field data for three-dimensional image and system thereof
TWI489151B (en) Method, apparatus and cell for displaying three dimensional object
Zhu et al. Virtual view synthesis using stereo vision based on the sum of absolute difference
Yuan et al. 18.2: Depth sensing and augmented reality technologies for mobile 3D platforms

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant