CN106034233A - Information processing method and electronic device - Google Patents
- Publication number
- CN106034233A CN106034233A CN201510114547.9A CN201510114547A CN106034233A CN 106034233 A CN106034233 A CN 106034233A CN 201510114547 A CN201510114547 A CN 201510114547A CN 106034233 A CN106034233 A CN 106034233A
- Authority
- CN
- China
- Prior art keywords
- image
- level
- depth
- frame data
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses an information processing method and an electronic device. The method comprises: extracting the first M frames of N frames of data; determining a first image corresponding to each of the first M frames; and blurring the first image corresponding to each of the first M frames. In the embodiments, by applying a corresponding blurring process to each first image in the first M frames, the method and the electronic device can reduce or alleviate the sharpness of image edges, thereby relieving the eye discomfort caused by sharp edges, improving the user experience and highlighting the functional diversity of the electronic device.
Description
Technical field
The present invention relates to information processing technology, and in particular to an information processing method and an electronic device.
Background technology
When watching a video shot by a stereoscopic 3D device (a two-channel video), the human eyes adjust the focal length and viewing angle of both eyeballs according to the parallax in order to view the image properly. In particular, when watching a short video, achieving a good viewing effect requires the eyes to adjust focal length and viewing angle within a short time so that the dominant features of the two frame images (the images presented simultaneously by the two video channels) are registered, which can cause discomfort to the eyes. Here, the dominant features include image texture information such as edges, corner points and color, and the discomfort is mainly caused by image edges that are too sharp. Users commonly watch 3D video on electronic devices such as 3D smart glasses or mobile phones, so how such devices can resolve the eye discomfort produced by overly sharp image edges has become a research focus.
Summary of the invention
To solve the existing technical problem, embodiments of the present invention provide an information processing method and an electronic device, so as at least to solve the problem of uncomfortable viewing caused by sharp image edges, improve the user experience, and highlight the functional diversity of the electronic device.
The technical solution of the embodiments of the present invention is implemented as follows:
An embodiment of the present invention provides an information processing method applied to an electronic device capable of playing a first stereoscopic 3D video, the first 3D video comprising N frames of data, N being a positive integer. The method comprises:
extracting the first M frames of the N frames of data, M being a positive integer less than or equal to N;
determining a first image corresponding to each of the first M frames;
blurring the first image corresponding to each of the first M frames, to obtain display data.
In the above solution, after determining the first image corresponding to each of the first M frames, the method further comprises:
obtaining the depth information corresponding to each pixel in the first image;
dividing the first image into layers according to the depth information;
obtaining an attribute of each layer of the first image;
applying a corresponding edge-blurring process to each layer of the first image according to the attribute of that layer, to obtain video data.
In the above solution, after obtaining the depth information corresponding to each pixel in the first image, the method further comprises:
obtaining, according to the depth information, a depth image corresponding to the first image;
obtaining a first depth value and a second depth value in the depth image;
determining a layer interval of the depth image according to the first depth value, the second depth value and a first predetermined value P;
dividing the depth image into P layers according to the layer interval;
determining the image information in each layer of the depth image to be the image information in the corresponding layer of the first image.
In the above solution, obtaining the attribute of each layer of the first image comprises:
obtaining the depth information of each layer of the depth image;
determining the depth information of each layer of the depth image to be the depth information of the corresponding layer of the first image;
determining the attribute of each layer according to the depth information of the corresponding layer of the first image.
In the above solution, applying a corresponding edge-blurring process to each layer according to its attribute comprises:
determining a first parameter of each layer according to the attribute of that layer;
applying Gaussian filtering, using the first parameter, to the image information in the corresponding layer of the first image.
An embodiment of the present invention further provides an electronic device capable of playing a first stereoscopic 3D video, the first 3D video comprising N frames of data, N being a positive integer. The electronic device comprises:
a first extraction unit, configured to extract the first M frames of the N frames of data, M being a positive integer less than or equal to N;
a first determination unit, configured to determine a first image corresponding to each of the first M frames;
a first processing unit, configured to blur the first image corresponding to each of the first M frames, to obtain video data.
In the above solution, the electronic device further comprises a second processing unit, configured to: obtain the depth information corresponding to each pixel in the first image; divide the first image into layers according to the depth information; and obtain an attribute of each layer of the first image.
Correspondingly, the first processing unit is configured to apply a corresponding edge-blurring process to each layer of the first image according to the attribute of that layer, to obtain video data.
In the above solution, the second processing unit is further configured to:
obtain, according to the depth information, a depth image corresponding to the first image;
obtain a first depth value and a second depth value in the depth image;
determine a layer interval of the depth image according to the first depth value, the second depth value and a first predetermined value P;
divide the depth image into P layers according to the layer interval;
determine the image information in each layer of the depth image to be the image information in the corresponding layer of the first image.
In the above solution, the second processing unit is further configured to:
obtain the depth information of each layer of the depth image;
determine the depth information of each layer of the depth image to be the depth information of the corresponding layer of the first image;
determine the attribute of each layer according to the depth information of the corresponding layer of the first image.
In the above solution, the first processing unit is further configured to:
determine a first parameter of each layer according to the attribute of that layer;
apply Gaussian filtering, using the first parameter, to the image information in the corresponding layer of the first image.
In the information processing method and electronic device provided by the embodiments of the present invention, the method comprises: extracting the first M frames of N frames of data; determining a first image corresponding to each of the first M frames; and blurring the first image corresponding to each of the first M frames to obtain display data. In the present embodiments, by applying a corresponding blurring process to each first image in the first M frames, the sharpness of image edges can be reduced or alleviated, thereby solving the problem of uncomfortable viewing caused by sharp image edges, improving the user experience and highlighting the functional diversity of the electronic device.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the first embodiment of the information processing method provided by the present invention;
Fig. 2 is a schematic flowchart of the second embodiment of the information processing method provided by the present invention;
Fig. 3 is a schematic structural diagram of the first embodiment of the electronic device provided by the present invention;
Fig. 4 is a schematic structural diagram of the second embodiment of the electronic device provided by the present invention.
Detailed description of the invention
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described below are only intended to illustrate and explain the present invention, not to limit it.
In the following embodiments of the information processing method and electronic device provided by the present invention, the electronic device involved includes, but is not limited to: smart glasses, mobile phones, tablet computers, all-in-one computers, industrial control computers, personal computers and other types of computer, electronic readers, and the like. The preferred electronic device in the embodiments of the present invention is smart glasses or a mobile phone.
Method embodiment one
The first embodiment of the information processing method provided by the present invention is applied to an electronic device, which may be smart glasses or a mobile phone. The electronic device can play a first 3D video comprising N frames of data, N being a positive integer. The first 3D video may be a short video of, for example, 10 s, or a long video of, for example, 120 min; this is not specifically limited here. In the embodiments of the present invention, the first 3D video is preferably a short video.
Fig. 1 is a schematic flowchart of the first embodiment of the information processing method provided by the present invention; as shown in Fig. 1, the method includes:
Step 101: extract the first M frames of the N frames of data, M being a positive integer less than or equal to N.
Here, when the electronic device plays the first 3D video, extracting the first M frames of the 3D video means extracting the initial frames of the video. The value M may be a predetermined value, or may have a predetermined relationship with the total number of frames N of the 3D video. For example, for a 10 s video played at 10 frames per second, N = 10 × 10 = 100, and one may take M = N/10 = 10.
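The M = N/10 relation in the example can be sketched as follows; the `fraction` parameter and the floor of one frame are illustrative assumptions, not values fixed by the patent:

```python
def leading_frames_to_blur(n_frames, fraction=0.1):
    """Number of initial frames M to blur.

    Follows the example relation M = N/10 when fraction = 0.1;
    both the fraction and the minimum of one frame are assumptions.
    """
    m = round(n_frames * fraction)
    return max(1, min(n_frames, m))

# For a 10 s clip at 10 fps: N = 100, so M = 10, matching the example.
```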
Step 102: determine a first image corresponding to each of the first M frames.
Here, when a 3D video is played, each frame of data (the data played at a given moment) is actually presented to the user as an image; the first image corresponding to each of the M frames is determined.
Step 103: blur the first image corresponding to each of the first M frames, to obtain video data.
Here, in a 3D video, and especially a short 3D video, the discomfort of the human eyes is caused by the sharp image edges of each of the first M frames. In this embodiment, each of the first M first images of the short video is given a corresponding blurring process, specifically an edge-blurring process, so that the edge sharpness of the images is reduced and the picture becomes more comfortable to view. Note that the edge blurring in this solution must not be understood as making the image itself blurry: the clarity of the first image does not change after the blurring process; rather, the sharpness of the image edges of the first image is alleviated or reduced, so that the image texture is better suited to human viewing. In addition, this solution selects the first M first images in the video stream and blurs them one by one, from the 1st image and the 2nd image up to the M-th image; with each blurred image, the edge sharpness is lower than before, and the picture gradually becomes suitable for human viewing. Naturally, once the M-th image has been blurred, the 3D effect stands out fully. In this way, the sharpness of the image edges can be alleviated or reduced, thereby solving the problem of uncomfortable viewing caused by sharp image edges, improving the user experience and highlighting the functional diversity of the electronic device.
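The frame-by-frame change in blur strength over the first M images might be sketched as a ramp. The linear schedule and the endpoint values are assumptions made for illustration; the patent only states that each successive image has lower edge sharpness than the one before, with the full 3D effect appearing after the M-th frame:

```python
def blur_sigma(k, m, sigma_min=0.5, sigma_max=3.0):
    """Blur strength for the k-th of the first M frames (1-indexed).

    Ramps linearly from sigma_min at frame 1 to sigma_max at frame M,
    so each successive image is blurred more strongly, i.e. its edge
    sharpness is lower than the previous frame's. The linear ramp and
    the two endpoint sigmas are hypothetical choices.
    """
    if m <= 1:
        return sigma_max
    return sigma_min + (sigma_max - sigma_min) * (k - 1) / (m - 1)
```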
Method embodiment two
The second embodiment of the information processing method provided by the present invention is applied to an electronic device, which may be smart glasses or a mobile phone. The electronic device can play a first 3D video comprising N frames of data, N being a positive integer. The first 3D video may be a short video of, for example, 10 s, or a long video of, for example, 120 min; this is not specifically limited here. In the embodiments of the present invention, the first 3D video is preferably a short video.
Fig. 2 is a schematic flowchart of the second embodiment of the information processing method provided by the present invention; as shown in Fig. 2, the method includes:
Step 201: extract the first M frames of the N frames of data, M being a positive integer less than or equal to N.
Here, when the electronic device plays the first 3D video, extracting the first M frames of the 3D video means extracting the initial frames of the video. The value M may be a predetermined value, or may have a predetermined relationship with the total number of frames N of the 3D video. For example, for a 10 s video played at 10 frames per second, N = 10 × 10 = 100, and one may take M = N/10 = 10.
Step 202: determine a first image corresponding to each of the first M frames.
Here, when a 3D video is played, each frame of data (the data played at a given moment) is actually presented to the user as an image; the first image corresponding to each of the M frames is determined.
Step 203: obtain the depth information corresponding to each pixel in the first image.
Here, the depth information is the physical distance between the object (the photographed object) corresponding to each pixel in the first image and the camera of the 3D camera device that shot the first image; this physical distance can be calculated by the principle of binocular vision. For example, suppose a 3D camera device shoots an image A (a first image) containing a pen and a television set, where the pen lies in the left-edge region of image A and the television set lies in the right-edge region of image A. The physical distance between the camera and the pen corresponding to the pixels of the left-edge region, and likewise between the camera and the television set corresponding to the pixels of the right-edge region, can be calculated by the binocular vision principle.
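The binocular-vision distance calculation invoked here is the standard two-view triangulation Z = f·B/d for a rectified stereo pair. The focal length, baseline and disparity figures below are made-up numbers chosen only to reproduce the 20 cm pen and 50 cm television of the example:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from a rectified stereo pair: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two cameras in meters; disparity_px: horizontal shift of the point
    between the left and right images. All figures are illustrative.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# With f = 700 px and B = 6 cm: a 210 px disparity puts the pen at
# 0.2 m, and an 84 px disparity puts the television at 0.5 m.
```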
Step 204: divide the first image into layers according to the depth information.
Further, this step may be implemented as follows: according to the depth information, obtain the depth image corresponding to the first image; obtain a first depth value and a second depth value in the depth image; determine the layer interval of the depth image according to the first depth value, the second depth value and a first predetermined value P; divide the depth image into P layers according to the layer interval; and determine the image information in each layer of the depth image to be the image information in the corresponding layer of the first image. Here, the first depth value may be the maximum depth in the depth image and the second depth value the minimum depth, or vice versa.
Specifically, since a 3D video is a two-channel video, each channel presents its own first image at the same playback moment, i.e. two first images: one shot by one camera of the 3D camera device and the other shot by the other camera. Thus, during playback, the first image played by the first channel at, say, the 1 s mark can be extracted, namely the aforementioned image A; at the same time, the other first image played by the second channel at that mark, say image AA, can be extracted. Since image A and image AA were captured by the 3D camera device for the same scene at the same moment, the objects in image AA also include the pen and the television set. After the depth information corresponding to each pixel in image A and in image AA is obtained, the depth information at the corresponding pixels of the two images is synthesized to obtain a depth image C. The maximum and minimum depths are then obtained in depth image C. Suppose, for example, that the depth information of the pen in images A and AA, i.e. the physical distance from the pen to one camera of the 3D camera device, is 20 cm, and the depth information of the television set, i.e. its physical distance from that camera, is 50 cm; then the maximum depth is 50 cm and the minimum depth is 20 cm. With P = 3 preset, the electronic device can divide depth image C into 3 layers with a layer interval of (50 cm − 20 cm)/3 = 10 cm between adjacent layers; the 3 layers of depth image C are then: 20-30 cm as the first layer (containing the pen), 30-40 cm as the second layer, and 40-50 cm as the third layer (containing the television set). Since images A and AA are plane images, a layer of image A or image AA can be regarded as a region of the image. Meanwhile, since depth image C is a stereoscopic image synthesized from the depth information at the corresponding pixels of images A and AA, when the electronic device divides depth image C into P = 3 layers, images A and AA are each divided into P = 3 regions, and the image information in each layer of depth image C is the image information of the corresponding region of image A or image AA. Taking image A as an example: the pen is located in the first layer of depth image C, so the pixels characterizing the pen in image A lie in the first of the three regions of image A. For the definition and properties of a depth image, refer to the existing literature, which is not repeated here. The first predetermined value P is a positive integer whose value can generally be set flexibly according to the shooting environment of the 3D camera device; preferably, P is any positive integer from 3 to 5.
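The layer division can be sketched directly from the example's numbers: interval = (max − min)/P, each pixel assigned to a layer by its depth. Clamping the maximum depth into the last layer is a boundary convention the patent does not spell out:

```python
def divide_into_layers(depth_map, p=3):
    """Assign each depth pixel a layer index 0..p-1 of equal depth span.

    depth_map is a 2-D list of per-pixel depths (any unit). Returns the
    per-pixel layer indices and the layer interval.
    """
    flat = [d for row in depth_map for d in row]
    lo, hi = min(flat), max(flat)
    interval = (hi - lo) / p

    def layer(d):
        # clamp so the maximum depth falls into the last layer
        return min(p - 1, int((d - lo) / interval))

    return [[layer(d) for d in row] for row in depth_map], interval

# Depths 20-50 cm with P = 3 give a 10 cm interval and layers
# [20,30) -> 0 (pen), [30,40) -> 1, [40,50] -> 2 (television).
```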
Step 205: obtain the attribute of each layer of the first image.
Further, this step may be implemented as follows: obtain the depth information of each layer of the depth image; determine the depth information of each layer of the depth image to be the depth information of the corresponding layer of the first image; and determine the attribute of each layer according to the depth information of the corresponding layer of the first image. Here, the depth information of a layer is the physical distance between the objects corresponding to the image pixels in that layer and the camera of the 3D camera device; the attribute of a layer (region) of the first image is the distance information of the objects, corresponding to the pixels of that region of the image, from the camera.
Specifically, continuing the example of depth image C and the first image, image A: since depth image C is divided into 3 layers, 20-30 cm as the first layer (containing the pen), 30-40 cm as the second layer and 40-50 cm as the third layer (containing the television set), image A is likewise divided into three layers, i.e. three regions: the left region of image A (containing the pen pattern) is the first region, the middle region of image A is the second region (containing neither the pen pattern nor the television pattern), and the right region of image A is the third region (containing the television pattern). From the layer division of depth image C: the first layer of depth image C is the layer nearest the camera, so correspondingly the pen in the first region of image A is physically nearest to the camera; the third layer of the depth image is the layer farthest from the camera, so correspondingly the television set in the third region of image A is physically farthest from the camera; and the image in the second region of image A lies at an intermediate distance from the camera. In this way the attribute of each of the P = 3 regions divided in image A is determined. The attribute of each of the P = 3 regions in image AA can of course be determined by the same process applied above to image A, which is not repeated.
Step 206: apply a corresponding edge-blurring process to each layer of the first image according to the attribute of that layer, to obtain video data.
Further, this step may be implemented as follows: determine the first parameter of each layer according to the attribute of that layer; then apply Gaussian filtering, using the first parameter, to the image information in the corresponding layer of the first image. Here, the first parameter is the variance of the image data, and applying Gaussian filtering to the image information in a layer of the first image can be regarded as applying a corresponding edge-blurring process to that layer with a Gaussian blur kernel of a certain variance.
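The relation between the "first parameter" (the variance) and the blur can be seen in the kernel itself: a larger standard deviation yields a wider, flatter Gaussian kernel and hence a stronger edge blur. A minimal 1-D kernel construction, where the 3-sigma truncation radius is a common convention rather than anything the patent specifies:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel; a larger sigma gives a wider kernel."""
    if radius is None:
        radius = max(1, int(3 * sigma))  # common 3-sigma truncation
    weights = [math.exp(-(x * x) / (2.0 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

In practice the kernel would be applied separably along rows and columns of the pixels in each region; a library routine such as a 2-D Gaussian filter would typically be used instead.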
Specifically, this step is explained with the first image being the aforementioned image A, divided into the first region containing the pen pattern, the second region containing neither the pen pattern nor the television pattern, and the third region containing the television pattern. If the 3D camera device shooting the 3D video is regarded as a pair of human eyes, then the eyes most want to see the object nearest to them and least want to see the object farthest from them; thus the eyes most want to see the pen pattern in image A and least want to see the television pattern. In image A, since the first region containing the pen pattern is the region the eyes most want to see, the first parameter of the first region, i.e. the first variance, is set to the minimum; the third region containing the television pattern is the region the eyes least want to see, so the first parameter of the third region, i.e. the third variance, is set to the maximum; and the first parameter of the second region, i.e. the second variance, is set to an intermediate value. The three regions of image A are then given different edge-blurring processes: the first region of image A is Gaussian-filtered with a Gaussian blur kernel having the first variance, so that its edges are blurred; the third region of image A is Gaussian-filtered with a Gaussian blur kernel having the third variance; and the second region of image A is Gaussian-filtered with a Gaussian blur kernel having the second variance. Since the size of a Gaussian blur kernel is proportional to the magnitude of the variance, a large Gaussian blur kernel blurs the image strongly while a small one blurs it lightly. Therefore, among the three regions of image A, the third region containing the television pattern, the region the eyes mostly do not want to see, is blurred most deeply; the second region comes next; and the first region containing the pen pattern, the region the eyes most want to see, is blurred most lightly. In other words, the above processing reduces or alleviates the edge sharpness of image A. For the definitions of the Gaussian blur kernel, Gaussian filtering and image edges, and/or the processing procedure, refer to the existing literature, which is not repeated here. The corresponding blurring of each layer, i.e. each region, of image AA can follow the above processing of image A and is not described again. After both image A and image AA, played by the electronic device such as a mobile phone at the same moment, have been blurred as above, the blurred image A and image AA are fused in the brain, owing to the visual difference between the user's left and right eyes, into one 3D image.
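The near-gets-least, far-gets-most assignment of variances described for image A can be sketched as a map from a region's representative depth to its sigma. The linear interpolation and the endpoint values are assumptions; the patent only fixes the ordering (first variance minimal, third maximal, second in between):

```python
def sigma_per_region(region_depths, sigma_min=0.5, sigma_max=3.0):
    """Map each region's representative depth to a Gaussian blur sigma.

    The nearest region (the one the eye most wants to see) gets
    sigma_min, the farthest gets sigma_max, intermediate regions lie
    in between. The linear map and endpoints are hypothetical.
    """
    lo, hi = min(region_depths), max(region_depths)
    if hi == lo:
        return [sigma_min for _ in region_depths]
    return [sigma_min + (d - lo) / (hi - lo) * (sigma_max - sigma_min)
            for d in region_depths]

# Pen region (25 cm), middle (35 cm), television (45 cm) -> 0.5, 1.75, 3.0:
# the television region is blurred the most, the pen region the least.
```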
It can be seen that, in this embodiment, the first image corresponding to each of the first M frames of the 3D video is divided into layers (regions), and then, according to the distance information of the photographed objects corresponding to each region from the camera of the 3D camera device, the image information in each region of the first image is given a corresponding edge-blurring process so as to reduce or alleviate the edge sharpness of the first image, thereby solving the problem of uncomfortable viewing caused by sharp image edges, improving the user experience and highlighting the functional diversity of the electronic device.
Electronic device embodiment one
In the first embodiment of the electronic device provided by the present invention, the electronic device may be smart glasses or a mobile phone. The electronic device can play a first 3D video comprising N frames of data, N being a positive integer. The first 3D video may be a short video of, for example, 10 s, or a long video of, for example, 120 min; this is not specifically limited here. In this embodiment, the first 3D video is preferably a short video.
Fig. 3 is a schematic structural diagram of the first embodiment of the electronic device provided by the present invention; as shown in Fig. 3, the electronic device includes a first extraction unit 301, a first determination unit 302 and a first processing unit 303, wherein:
the first extraction unit 301 is configured to extract the first M frames of the N frames of data, M being a positive integer less than or equal to N.
Here, when the electronic device plays the first 3D video, the first extraction unit 301 extracting the first M frames of the 3D video means extracting the initial frames of the video. The value M may be a predetermined value, or may have a predetermined relationship with the total number of frames N of the 3D video. For example, for a 10 s video played at 10 frames per second, one may take M = N/10 = 10.
The first determination unit 302 is configured to determine the first image corresponding to each of the first M frames.
Here, when a 3D video is played, each frame of data (the data played at a given moment) is actually presented to the user as an image; the first determination unit 302 determines the first image corresponding to each of the M frames.
The first processing unit 303 is configured to blur the first image corresponding to each of the first M frames, to obtain video data.
Here, in a 3D video, and especially a short 3D video, the discomfort of the human eyes is caused by the sharp image edges of each of the first M frames. In this embodiment, each of the first M first images of the short video is given a corresponding blurring process, specifically an edge-blurring process, so that the edge sharpness of the images is reduced and the picture becomes more comfortable to view. Note that the edge blurring in this solution must not be understood as making the image itself blurry: the clarity of the first image does not change after the blurring process; rather, the sharpness of the image edges of the first image is alleviated or reduced, so that the image texture is better suited to human viewing. In addition, this solution selects the first M first images in the video stream and blurs them one by one, from the 1st image and the 2nd image up to the M-th image; with each blurred image, the edge sharpness is lower than before, and the picture gradually becomes suitable for human viewing. Naturally, once the M-th image has been blurred, the 3D effect stands out fully. In this way, the sharpness of the image edges can be alleviated or reduced, thereby solving the problem of uncomfortable viewing caused by sharp image edges, improving the user experience and highlighting the functional diversity of the electronic device.
Electronic device embodiment two
In the second embodiment of the electronic device provided by the present invention, the electronic device may be smart glasses or a mobile phone. The electronic device can play a first 3D video comprising N frames of data, N being a positive integer. The first 3D video may be a short video of, for example, 10 s, or a long video of, for example, 120 min; this is not specifically limited here. In this embodiment, the first 3D video is preferably a short video.
Fig. 4 is a schematic structural diagram of the second embodiment of the electronic device provided by the present invention; as shown in Fig. 4, the electronic device includes a first extraction unit 401, a first determination unit 402, a second processing unit 403 and a first processing unit 404, wherein:
the first extraction unit 401 is configured to extract the first M frames of data from the N frames, M being a positive integer less than or equal to N.
Here, when the electronic device plays the first 3D video, the first extraction unit 401 extracts the first M frames of the 3D video, i.e. the initial frames of the video. The value M may be a predetermined value, or may bear a predetermined relationship to the total frame count N of the 3D video. For example, for a 10 s video played at 10 frames of data per second, one may take M = N/10 = (10*10)/10 = 10.
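As a minimal sketch of the example relationship above (the function name and the cap at N are assumptions; the patent only requires that M be predetermined or bear a predetermined relationship to N, and M = N/10 is just the 10 s / 10 fps example):

```python
def choose_m(total_frames: int, divisor: int = 10) -> int:
    """Pick M, the number of initial frames to blur.

    Hypothetical helper using the example rule M = N / 10.
    M must be a positive integer no greater than N.
    """
    m = max(1, total_frames // divisor)
    return min(m, total_frames)

# Example from the text: a 10 s video at 10 fps has N = 10 * 10 = 100 frames.
print(choose_m(10 * 10))  # 10
```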
The first determining unit 402 is configured to determine the first image corresponding to each frame of data in the first M frames.
Here, when the 3D video is played, each frame of data (the data played at a given moment) is presented to the user as one image; the first determining unit 402 determines the first image corresponding to each of the M frames of data.
The second processing unit 403 is configured to obtain the depth information corresponding to each pixel of the first image; divide the first image into levels according to the depth information; and obtain the attribute of each level of the first image.
Correspondingly, the first processing unit 404 is configured to perform corresponding edge-blurring processing on each level of the first image according to the attribute of that level, to obtain the video data.
The second processing unit 403 obtains, according to the depth information, the depth image corresponding to the first image; obtains a first depth value and a second depth value in the depth image; determines the level interval of the depth image according to the first depth value, the second depth value and a first predetermined value P; divides the depth image into P levels according to the level interval; and determines that the image information in each level of the depth image is the image information in the corresponding level of the first image.
In the above scheme, specifically, the depth information is the physical distance between the object (the photographed object) corresponding to each pixel of the first image and the camera of the 3D capture device that shot the first image; the second processing unit 403 may compute this physical distance using the binocular vision principle. For example, suppose the 3D capture device captures an image A (a first image) containing a pen and a television, where the pen is located in the left-edge region of image A and the television in the right-edge region. The second processing unit 403 can compute, by binocular vision, the physical distance between the camera and the pen corresponding to the pixels of the left-edge region of image A, and likewise the physical distance between the camera and the television corresponding to the pixels of the right-edge region. Since a 3D video is a two-channel video, at each playback moment each channel contributes one first image, i.e. two first images per moment: one captured by one camera of the 3D capture device, the other by the other camera. Thus, during playback, the first extraction unit 401 can extract the first image played by the first channel at the 1 s time point, namely the aforementioned image A, and simultaneously extract the other first image played by the second channel at the same time point, e.g. image AA. Because image A and image AA were captured by the 3D capture device from the same scene at the same moment, the objects in image AA also include the pen and the television. After obtaining the depth information corresponding to each pixel in image A and in image AA, the second processing unit 403 synthesizes the depth information at corresponding pixels of the two images to obtain a depth image C, and obtains the maximum and minimum depth values in depth image C. For instance, if the depth information of the pen, i.e. the physical distance from the pen to one camera of the 3D capture device, is 20 cm, and the depth information of the television, i.e. its physical distance from that camera, is 50 cm, then the maximum depth = 50 cm and the minimum depth = 20 cm. With P preset to 3, the second processing unit 403 can divide depth image C into 3 levels, with an interval between adjacent levels of (50 cm − 20 cm)/3 = 10 cm; the three levels of depth image C are thus: 20 cm–30 cm as the first level (containing the pen), 30 cm–40 cm as the second level, and 40 cm–50 cm as the third level (containing the television). Since image A and image AA are planar images, a level of image A or image AA can be regarded as a corresponding region of that image. Meanwhile, since depth image C is a stereo image synthesized from the depth information at corresponding pixels of image A and image AA, when the second processing unit 403 divides depth image C into P = 3 levels, it also divides image A and image AA into P = 3 regions, and the image information in each level of depth image C is the image information in the corresponding regions of image A and image AA. Taking image A as an example: the pen lies in the first level of depth image C, so the pixels characterizing the pen in image A lie in the first of the three regions of image A. For the definition and characteristics of depth images, refer to the existing related descriptions, which are not repeated here. The first predetermined value P is a positive integer; in general its value can be set flexibly according to the shooting environment of the 3D capture device, and P is preferably any positive integer from 3 to 5.
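The uniform-interval level division just described can be sketched as follows (a minimal NumPy illustration; the interval rule (max − min)/P and the pen/TV depths come from the example above, while the function name and label encoding are assumptions):

```python
import numpy as np

def divide_into_levels(depth: np.ndarray, p: int) -> np.ndarray:
    """Assign each pixel of a depth image to one of p levels.

    The level interval is (max_depth - min_depth) / p, as in the
    example: (50 cm - 20 cm) / 3 = 10 cm per level.
    Returns an integer label map with values in 0 .. p-1
    (0 = nearest level, p-1 = farthest level).
    """
    d_min, d_max = depth.min(), depth.max()
    interval = (d_max - d_min) / p
    labels = ((depth - d_min) / interval).astype(int)
    # The maximum-depth pixel would land in bin p; clamp it into the last level.
    return np.clip(labels, 0, p - 1)

# Toy depth image C: pen at ~20 cm (left), background at ~35 cm, TV at ~50 cm (right).
c = np.array([[20.0, 35.0, 50.0],
              [22.0, 36.0, 48.0]])
print(divide_into_levels(c, 3))
# level 0 (20-30 cm) holds the pen, level 2 (40-50 cm) holds the TV
```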
The second processing unit 403 is further configured to obtain the depth information of each level of the depth image; determine that the depth information of each level of the depth image is the depth information of the corresponding level of the first image; and determine the attribute of each level according to the depth information of that level of the first image. Correspondingly, the first processing unit 404 determines a first parameter of each level according to that level's attribute, and uses the first parameter to perform Gaussian filtering on the image information in the corresponding level of the first image. Here, the depth information of a level is the physical distance between the objects corresponding to the image pixels in that level and the camera of the 3D capture device; the attribute of a level (region) of the first image is the near/far information of the objects corresponding to the pixels of that region relative to the camera. The first parameter is the variance of the image data; performing Gaussian filtering on the image information in a level of the first image can be regarded as performing corresponding edge-blurring processing on that information with a Gaussian blur kernel having a given variance value.
In the above scheme, specifically, continuing with the aforementioned depth image C and first image A as an example: since the second processing unit 403 divides depth image C into 3 levels, namely 20 cm–30 cm as the first level (containing the pen), 30 cm–40 cm as the second level, and 40 cm–50 cm as the third level (containing the television), it also divides image A into three levels, i.e. three regions. These three regions are: the left region of image A (containing the pen pattern) as the first region, the middle region of image A as the second region (containing neither the pen pattern nor the television pattern), and the right region of image A as the third region (containing the television pattern). From the level division of depth image C, the first level of the depth image is the level closest to the camera; correspondingly, the physical distance of the pen in the first region of image A from the camera is the smallest. The third level of the depth image is the level farthest from the camera; correspondingly, the physical distance of the television in the third region of image A from the camera is the largest. The image in the second region of image A lies at an intermediate physical distance from the camera. The second processing unit 403 can thereby determine the attribute of each of the P = 3 regions into which image A is divided. The attributes of each of the P = 3 regions of image AA can of course be determined by the same procedure applied above to image A, which is not repeated.
If the 3D capture device shooting the 3D video is regarded as the human eye, then what the eye most wants to see is the object nearest the eye, and what it least wants to see is the object farthest from the eye; so in image A, the eye most wants to see the pen pattern and least wants to see the television pattern. In image A, since the first region containing the pen pattern is the region the eye most wants to see, the first processing unit 404 sets the first parameter of the first region, i.e. the first variance, to the minimum value; since the third region containing the television pattern is the region the eye least wants to see, the first processing unit 404 sets the first parameter of the third region, i.e. the third variance, to the maximum value; and the first parameter of the second region, i.e. the second variance, takes an intermediate value. The first processing unit 404 then applies different edge-blurring processing to the three regions of image A: it edge-blurs the first region by Gaussian-filtering it with a Gaussian blur kernel having the first variance, edge-blurs the third region by Gaussian-filtering it with a Gaussian blur kernel having the third variance, and edge-blurs the second region by Gaussian-filtering it with a Gaussian blur kernel having the second variance. Since the size of a Gaussian blur kernel is proportional to its variance, and a large Gaussian blur kernel blurs the image strongly while a small one blurs it lightly, among the three regions of image A the third region containing the television pattern is the most strongly blurred region and also the region the eye least wants to see; the second region is next; and the first region containing the pen pattern is the most lightly blurred region and also the region the eye most wants to see. In other words, the processing above reduces or alleviates the sharpening degree of the image edges of image A. For the definitions of Gaussian blur kernels, Gaussian filtering and image edges and/or the corresponding processing, refer to the existing related descriptions, which are not repeated here. The corresponding blurring of each region, i.e. each level, of image AA can follow the procedure applied above to image A and is not described again. After image A and image AA, played by the electronic device (e.g. a mobile phone) at the same moment, have both been blurred as above, the visual disparity between the user's left and right eyes fuses the blurred image A and image AA in the brain into a single 3D image.
As can be seen, in this embodiment the first image corresponding to each of the first M frames of the 3D video is divided into levels (regions), and then, according to the near/far information of the photographed objects of each region relative to the camera of the 3D capture device, the image of each region is edge-blurred correspondingly, so as to reduce or alleviate the sharpening degree of the edges of the first image, thereby solving the viewing discomfort caused by sharp image edges, improving the user experience, and highlighting the functional diversity of the electronic device.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the couplings, direct couplings or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network elements; some or all of the units may be selected according to actual needs to achieve the purpose of the present embodiments.
In addition, the functional units in the embodiments of the present invention may all be integrated in one processing unit, or each unit may serve as a unit on its own, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
Claims (10)
1. An information processing method applied to an electronic device, the electronic device being capable of playing a first three-dimensional (3D) video, the first 3D video comprising N frames of data, N being a positive integer; the method comprising:
extracting the first M frames of data from the N frames, M being a positive integer less than or equal to N;
determining the first image corresponding to each frame of data in the first M frames;
performing blurring processing on the first image corresponding to each frame of data in the first M frames, to obtain display data.
2. The method according to claim 1, wherein, after determining the first image corresponding to each frame of data in the first M frames, the method further comprises:
obtaining the depth information corresponding to each pixel of the first image;
dividing the first image into levels according to the depth information;
obtaining the attribute of each level of the first image;
performing corresponding edge-blurring processing on each level of the first image according to the attribute of that level, to obtain video data.
3. The method according to claim 2, wherein, after obtaining the depth information corresponding to each pixel of the first image, the method further comprises:
obtaining, according to the depth information, the depth image corresponding to the first image;
obtaining a first depth value and a second depth value in the depth image;
determining the level interval of the depth image according to the first depth value, the second depth value and a first predetermined value P;
dividing the depth image into P levels according to the level interval;
determining that the image information in each level of the depth image is the image information in the corresponding level of the first image.
4. The method according to claim 2 or 3, wherein obtaining the attribute of each level of the first image comprises:
obtaining the depth information of each level of the depth image;
determining that the depth information of each level of the depth image is the depth information of the corresponding level of the first image;
determining the attribute of each level according to the depth information of that level of the first image.
5. The method according to claim 4, wherein performing corresponding edge-blurring processing on each level according to the attribute of that level comprises:
determining a first parameter of each level according to the attribute of that level;
performing Gaussian filtering on the image information in the corresponding level of the first image using the first parameter.
6. An electronic device capable of playing a first three-dimensional (3D) video, the first 3D video comprising N frames of data, N being a positive integer; the electronic device comprising:
a first extraction unit, configured to extract the first M frames of data from the N frames, M being a positive integer less than or equal to N;
a first determining unit, configured to determine the first image corresponding to each frame of data in the first M frames;
a first processing unit, configured to perform blurring processing on the first image corresponding to each frame of data in the first M frames, to obtain video data.
7. The electronic device according to claim 6, wherein the electronic device further comprises:
a second processing unit, configured to: obtain the depth information corresponding to each pixel of the first image; divide the first image into levels according to the depth information; and obtain the attribute of each level of the first image;
correspondingly, the first processing unit is configured to perform corresponding edge-blurring processing on each level of the first image according to the attribute of that level, to obtain video data.
8. The electronic device according to claim 7, wherein the second processing unit is further configured to:
obtain, according to the depth information, the depth image corresponding to the first image;
obtain a first depth value and a second depth value in the depth image;
determine the level interval of the depth image according to the first depth value, the second depth value and a first predetermined value P;
divide the depth image into P levels according to the level interval;
determine that the image information in each level of the depth image is the image information in the corresponding level of the first image.
9. The electronic device according to claim 7 or 8, wherein the second processing unit is further configured to:
obtain the depth information of each level of the depth image;
determine that the depth information of each level of the depth image is the depth information of the corresponding level of the first image;
determine the attribute of each level according to the depth information of that level of the first image.
10. The electronic device according to claim 9, wherein the first processing unit is further configured to:
determine a first parameter of each level according to the attribute of that level;
perform Gaussian filtering on the image information in the corresponding level of the first image using the first parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510114547.9A CN106034233B (en) | 2015-03-16 | 2015-03-16 | Information processing method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510114547.9A CN106034233B (en) | 2015-03-16 | 2015-03-16 | Information processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106034233A true CN106034233A (en) | 2016-10-19 |
CN106034233B CN106034233B (en) | 2018-08-10 |
Family
ID=57150948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510114547.9A Active CN106034233B (en) | 2015-03-16 | 2015-03-16 | Information processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106034233B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101562754A (en) * | 2009-05-19 | 2009-10-21 | 无锡景象数字技术有限公司 | Method for improving visual effect of plane image transformed into 3D image |
CN103164868A (en) * | 2011-12-09 | 2013-06-19 | 金耀有限公司 | Method and device for generating image with depth-of-field (DOF) effect |
CN103181173A (en) * | 2010-10-27 | 2013-06-26 | 松下电器产业株式会社 | 3D image processing device, 3d imaging device, and 3d image processing method |
CN104349153A (en) * | 2013-08-06 | 2015-02-11 | 宏达国际电子股份有限公司 | Image processing methods and systems in accordance with depth information |
US20150054926A1 (en) * | 2012-05-09 | 2015-02-26 | Fujifilm Corporation | Image processing device and method, and image capturing device |
-
2015
- 2015-03-16 CN CN201510114547.9A patent/CN106034233B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101562754A (en) * | 2009-05-19 | 2009-10-21 | 无锡景象数字技术有限公司 | Method for improving visual effect of plane image transformed into 3D image |
CN103181173A (en) * | 2010-10-27 | 2013-06-26 | 松下电器产业株式会社 | 3D image processing device, 3d imaging device, and 3d image processing method |
CN103164868A (en) * | 2011-12-09 | 2013-06-19 | 金耀有限公司 | Method and device for generating image with depth-of-field (DOF) effect |
US20150054926A1 (en) * | 2012-05-09 | 2015-02-26 | Fujifilm Corporation | Image processing device and method, and image capturing device |
CN104349153A (en) * | 2013-08-06 | 2015-02-11 | 宏达国际电子股份有限公司 | Image processing methods and systems in accordance with depth information |
Also Published As
Publication number | Publication date |
---|---|
CN106034233B (en) | 2018-08-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8767048B2 (en) | Image processing method and apparatus therefor | |
CN106484116B (en) | The treating method and apparatus of media file | |
CN102638696A (en) | Display device and display method | |
EP3547672A1 (en) | Data processing method, device, and apparatus | |
CN108596106B (en) | Visual fatigue recognition method and device based on VR equipment and VR equipment | |
CN103415882B (en) | Video display devices | |
Jung et al. | Visual comfort improvement in stereoscopic 3D displays using perceptually plausible assessment metric of visual comfort | |
CN104967837A (en) | Device and method for adjusting three-dimensional display effect | |
CN102271262A (en) | Multithread-based video processing method for 3D (Three-Dimensional) display | |
CN102835117A (en) | Method and system of using floating window in three-dimensional (3d) presentation | |
US8872902B2 (en) | Stereoscopic video processing device and method for modifying a parallax value, and program | |
CN103248910B (en) | Three-dimensional imaging system and image reproducing method thereof | |
KR101797035B1 (en) | Method for converting overlaying area into 3D image and apparatus thereof | |
CN111164542A (en) | Method of modifying an image on a computing device | |
KR101121979B1 (en) | Method and device for stereoscopic image conversion | |
CN106358006A (en) | Video correction method and video correction device | |
CN106680996A (en) | Display method and display control system of head-mounted virtual reality display | |
CN102780900B (en) | Image display method of multi-person multi-view stereoscopic display | |
CN106034233A (en) | Information processing method and electronic device | |
CN106504063B (en) | A kind of virtual hair tries video frequency showing system on | |
JP5664356B2 (en) | Generation apparatus and generation method | |
US8817081B2 (en) | Image processing apparatus, image processing method, and program | |
CN107229340B (en) | Information processing method and electronic equipment | |
CN112634346A (en) | AR (augmented reality) glasses-based real object size acquisition method and system | |
Wang et al. | Study of center-bias in the viewing of stereoscopic image and a framework for extending 2D visual attention models to 3D |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |