CN103686122A - Image processing device and image processing method - Google Patents


Info

Publication number: CN103686122A
Application number: CN201310026387.3A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 泷本崇博
Applicant/Assignee: Toshiba Corp
Prior art keywords: viewer, search range, viewing zone, tan, camera
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Publication of CN103686122A: publication, Critical status, Pending


Abstract

The present invention provides an image processing device and an image processing method capable of detecting a viewer with a small amount of processing. According to one embodiment, the image processing device comprises a search range calculator and a viewer search unit. The search range calculator calculates, as the search range, a part of the image captured by a camera, according to the viewing distance, i.e. the distance between a display unit and the viewer. The viewer search unit searches for the viewer within the search range.

Description

Image processing device and image processing method
Technical field
Embodiments of the present invention relate to an image processing device and an image processing method.
Background art
In recent years, stereoscopic image display devices that let the viewer watch stereoscopic images with the naked eye, without special glasses (so-called glasses-free 3D televisions), have gradually come into widespread use. Such a display device shows a plurality of images with different viewpoints. The output directions of the light rays of these images are controlled by a parallax barrier, a lenticular lens, or the like, so that the rays are guided to the viewer's eyes. If the viewer is in a suitable position, the left eye and the right eye see different parallax images, and the viewer can perceive the image stereoscopically.
However, glasses-free 3D televisions have the problem that, depending on the viewer's position, the stereoscopic image may not be visible.
A tracking technique is therefore known in which the area in front of the stereoscopic image display device is captured with a camera to detect the viewer, and the viewing zone is controlled so that the stereoscopic image can be seen at the viewer's position. However, if the processing load of detecting the viewer's position is large, the viewing zone may not be controlled smoothly.
Summary of the invention
The present invention provides an image processing device and an image processing method capable of detecting a viewer with a small amount of processing.
According to one embodiment of the invention, the image processing device comprises a search range calculator and a viewer search unit. The search range calculator calculates, as the search range, a part of the image captured by a camera, according to the viewing distance, i.e. the distance between a display unit and the viewer. The viewer search unit searches for the viewer within the search range.
Brief description of the drawings
Fig. 1 is an external view of the image display device 100 according to the first embodiment.
Fig. 2 is a block diagram showing the schematic configuration of the image display device 100 according to the first embodiment.
Fig. 3(a) to (c) are views of part of the liquid crystal panel 1 and the lenticular lens 2, seen from above.
Fig. 4 is a schematic diagram of the viewing zones.
Fig. 5 is a diagram showing an example of the calculated search range.
Fig. 6 is a diagram explaining how the search range in the vertical direction is calculated.
Fig. 7 is a diagram explaining how the search range in the horizontal direction is calculated.
Fig. 8 is a flow chart showing an example of the processing operation of the controller 10 according to the first embodiment.
Fig. 9 is a block diagram showing the schematic configuration of the image display device 100 according to the second embodiment.
Fig. 10 is a diagram showing an example of the calculated viewing zones.
Fig. 11 is a flow chart showing an example of the processing operation of the controller 10' according to the second embodiment.
Fig. 12(a) to (d) are diagrams explaining the processing operation of the viewer search unit 13 in the third embodiment.
Fig. 13 is a block diagram showing the schematic configuration of an image display device 100', a modified example of Fig. 2.
Embodiments
Embodiments are described in detail below with reference to the drawings.
First embodiment
Fig. 1 is an external view of the image display device 100 according to the first embodiment, and Fig. 2 is a block diagram showing its schematic configuration. The image display device 100 comprises a liquid crystal panel 1, a lenticular lens 2, a camera 3, a light receiving unit 4 and a controller 10.
The liquid crystal panel (display unit) 1 displays a plurality of parallax images that a viewer located in a viewing zone can observe as a stereoscopic image. The liquid crystal panel 1 is, for example, a 55-inch panel with 4K2K (3840*2160) pixels. By techniques such as mounting the lenticular lens obliquely, the panel can be given the equivalent of 11520 (=1280*9) pixels horizontally and 720 pixels vertically for stereoscopic display. The description below uses this model with the expanded horizontal pixel count. Each pixel consists of three sub-pixels arranged in the vertical direction: an R sub-pixel, a G sub-pixel and a B sub-pixel. The liquid crystal panel 1 is illuminated by a backlight device (not shown) arranged at its rear, and each pixel transmits light with a luminance corresponding to the image signal supplied by the controller 10.
The lenticular lens (aperture controller) 2 outputs the plurality of parallax images displayed on the liquid crystal panel (display unit) 1 in prescribed directions. The lenticular lens 2 has a number of convex ridges arranged along the horizontal direction, the number being 1/9 of the horizontal pixel count of the liquid crystal panel 1. The lens is attached to the surface of the liquid crystal panel 1 so that one ridge corresponds to every nine horizontally adjacent pixels. The light passing through each pixel is output with directivity in a specific direction from near the apex of a ridge.
The following description assumes a multi-parallax scheme with nine parallaxes, in which nine pixels are assigned to each ridge of the lenticular lens 2. In the multi-parallax scheme, the first to ninth parallax images are displayed on the nine pixels corresponding to each ridge. The first to ninth parallax images are images of a subject seen from nine viewpoints arranged along the horizontal direction of the liquid crystal panel 1. Through the lenticular lens 2, the viewer watches one of the first to ninth parallax images with the left eye and another with the right eye, and can thereby see the image stereoscopically. In the multi-parallax scheme, the viewing zone can be widened by increasing the number of parallaxes. The viewing zone is the region from which an image on the liquid crystal panel 1 can be seen stereoscopically when the panel is watched from the front.
The liquid crystal panel 1 can also display a two-dimensional image, by showing the same color on the nine pixels corresponding to each ridge.
In the present embodiment, the viewing zone can be controlled variably according to the relative positional relationship between the ridges of the lenticular lens 2 and the displayed parallax images, that is, according to how the parallax images are displayed on the nine pixels corresponding to each ridge. Viewing zone control is described below.
Fig. 3 shows part of the liquid crystal panel 1 and the lenticular lens 2 seen from above. The shaded areas represent the viewing zones; an image on the liquid crystal panel 1 can be seen stereoscopically from within them. The other regions produce pseudoscopy (depth reversal) or crosstalk, and are regions where stereoscopic viewing is difficult. Moreover, the closer the viewer is to the centre of a viewing zone, the stronger the stereoscopic effect; even inside a viewing zone, a viewer at its edge may perceive no depth, or may experience pseudoscopy.
Fig. 3 shows how the viewing zones change with the relative positional relationship between the liquid crystal panel 1 and the lenticular lens 2, more specifically with the distance between them or with their horizontal offset.
In practice, the lenticular lens 2 is attached to the liquid crystal panel 1 after precise alignment, so it is difficult to physically change the relative positions of the two.
In the present embodiment, therefore, the display positions of the first to ninth parallax images on the pixels of the liquid crystal panel 1 are shifted instead, changing the apparent relative position of the liquid crystal panel 1 and the lenticular lens 2 and thereby adjusting the viewing zones.
For example, compared with the case where the first to ninth parallax images are displayed in order on the nine pixels corresponding to each ridge (Fig. 3(a)), when the parallax images as a whole are shifted to the right (Fig. 3(b)), the viewing zones move to the left. Conversely, when the parallax images as a whole are shifted to the left, the viewing zones move to the right.
When the parallax images near the centre are not moved horizontally while those closer to the outer edge of the liquid crystal panel 1 are shifted outward by larger amounts (Fig. 3(c)), the viewing zones move closer to the liquid crystal panel 1. The pixels between a shifted parallax image and an unshifted one, or between parallax images with different shift amounts, can be interpolated appropriately from the surrounding pixels. Conversely to Fig. 3(c), when the parallax images near the centre are not moved while those closer to the outer edge of the panel are shifted toward the centre by larger amounts, the viewing zones move away from the liquid crystal panel 1.
In this way, by shifting all or part of the parallax images, the viewing zones can be moved leftward, rightward, or in the front-rear direction relative to the liquid crystal panel 1. Fig. 3 shows only one viewing zone for simplicity, but as shown in Fig. 4, there are actually several viewing zones in the viewing area P, and they move in concert. The viewing zones are controlled by the controller 10 of Fig. 2, described later.
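As an illustration of this shift, the following sketch maps each panel column to the parallax number (1 to 9) that it displays for a given horizontal shift. The function name and the modular wrap-around at the panel edge are assumptions made for illustration, not details taken from the patent:

```python
def parallax_assignment(num_columns: int, shift: int, parallaxes: int = 9):
    """Return, for each panel column, which parallax image (1..parallaxes)
    it displays.

    With shift == 0 this reproduces the baseline layout of Fig. 3(a);
    a positive shift moves every parallax image to the right, which in
    turn moves the viewing zones to the left (Fig. 3(b)).
    """
    return [((col - shift) % parallaxes) + 1 for col in range(num_columns)]
```

For example, shifting by one pixel makes column 1 display the first parallax image that column 0 displayed before, so the whole pattern (and hence each viewing zone) moves sideways.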
Returning to Fig. 1, the camera 3 is mounted near the bottom centre of the liquid crystal panel 1 at a prescribed elevation angle, and captures a prescribed range in front of the panel. The captured image is supplied to the controller 10 and used to detect the viewer's position, the viewer's face, and so on. The camera 3 may capture either moving images or still images. The mounting position and angle of the camera 3 are not restricted, as long as it can capture a viewer watching the front of the liquid crystal panel 1.
The light receiving unit 4 is arranged, for example, at the lower left of the liquid crystal panel 1, and receives infrared signals transmitted from a remote controller used by the viewer. The infrared signals include signals indicating, for example, whether to display a stereoscopic image or a two-dimensional image, and whether to display a menu.
Next, the components of the controller 10 are described in detail. As shown in Fig. 2, the controller 10 comprises a tuner/decoder 11, a parallax image converter 12, a viewer search unit 13, a viewer position estimator 14, a viewing zone parameter calculator 15, an image adjuster 16 and a search range calculator 17. The controller 10 is implemented, for example, as a single IC (Integrated Circuit) and arranged behind the liquid crystal panel 1. Part of the controller 10 may of course be implemented in software.
The tuner/decoder (receiver) 11 receives the incoming broadcast wave, selects a channel, and decodes the encoded input video signal. When a digital broadcast signal such as an electronic program guide (EPG) is superimposed on the broadcast wave, the tuner/decoder 11 extracts it. Alternatively, the tuner/decoder 11 receives, instead of a broadcast wave, an encoded video signal from an image output device such as an optical disc player or a personal computer, and decodes it. The decoded signal, also called the baseband video signal, is supplied to the parallax image converter 12. When the image display device 100 does not receive broadcast waves and only displays video signals received from image output devices, a decoder having only the decoding function may be provided as the receiver in place of the tuner/decoder 11.
The input video signal received by the tuner/decoder 11 may be a two-dimensional video signal, or a three-dimensional video signal containing left-eye and right-eye images in, for example, frame packing (FP), side-by-side (SBS) or top-and-bottom (TAB) format. The video signal may also be a three-dimensional video signal containing images with three or more parallaxes.
To display images stereoscopically, the parallax image converter 12 converts the baseband video signal into a plurality of parallax image signals. The processing of the parallax image converter 12 differs depending on whether the baseband video signal is a two-dimensional or a three-dimensional video signal.
When a two-dimensional video signal, or a three-dimensional video signal containing eight or fewer parallax images, is input, the parallax image converter 12 generates the first to ninth parallax image signals from the depth value of each pixel in the video signal. The depth value indicates how far in front of or behind the liquid crystal panel 1 each pixel appears to be displayed. The depth value may be attached to the input video signal in advance, or may be generated from features of the input video signal by motion detection, composition analysis, human face detection and the like. When a three-dimensional video signal containing nine parallax images is input, the parallax image converter 12 generates the first to ninth parallax image signals from that video signal.
The parallax image signals generated from the input video signal as described above are supplied to the image adjuster 16.
The viewer search unit 13 searches for the viewer within a search range that is a part of the image captured by the camera 3. More specifically, the viewer search unit 13 searches for the viewer by detecting the viewer's face using a face dictionary stored internally. The face dictionary is information representing human facial features such as the eyes, nose and mouth. The processing of the viewer search unit 13 yields the position (x, y) of the viewer's face in the image and the face width w, a parameter corresponding to the viewing distance.
The search range of the viewer search unit 13 is set by the search range calculator 17, described later, so that only a part of the image captured by the camera 3, rather than the whole of it, is used as the search range. This reduces the amount of processing needed to search for the viewer.
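As a minimal sketch of this restriction, the frame can be cropped to the calculated search range before any face detection runs, so the detector only sees the smaller region. The image is assumed here to be a simple row-major 2D array, and the function name is hypothetical:

```python
def crop_to_search_range(image, pvt, pvb, phl, phr):
    """Crop a camera frame to the search range.

    image: 2D list of rows (height I x width K).
    pvt/pvb: pixel rows omitted at the top/bottom (search omission
    ranges Pvt, Pvb); phl/phr: columns omitted at the left/right
    (Phl, Phr). Running the face detector on the crop reduces the
    processing amount roughly in proportion to the omitted area.
    """
    rows = image[pvt:len(image) - pvb]
    return [row[phl:len(row) - phr] for row in rows]
```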
The viewer position estimator 14 estimates the viewer's position in real space from the result of the viewer search unit 13. The viewer's position is expressed, for example, as a position on an X axis (horizontal direction), Y axis (vertical direction) and Z axis (direction orthogonal to the liquid crystal panel 1) whose origin is the centre of the liquid crystal panel 1. For example, the viewer position estimator 14 estimates the position on the Z axis, corresponding to the viewing distance, from the calculated face width w and the like, and estimates the positions on the X and Y axes from the face position (x, y) in the image and the (known) capture range of the camera 3.
The methods by which the viewer search unit 13 and the viewer position estimator 14 detect the viewer's position are not particularly limited; the camera 3 may be an infrared camera, and the viewer's position may also be detected using sound waves.
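The patent only states that the face width w corresponds to the viewing distance and leaves the concrete model open. Under the common pinhole-camera assumption, with a hypothetical average real face width, the estimation could be sketched as below; the focal-length derivation from the field of view and the 0.16 m face width are assumptions made for illustration:

```python
import math

def estimate_viewer_position(x, y, w, image_w, image_h, horizontal_fov_deg,
                             assumed_face_width_m=0.16):
    """Rough pinhole-camera estimate of the viewer position (camera frame).

    (x, y): detected face centre in pixels; w: face width in pixels.
    Returns (x_m, y_m, z) in metres, z being the distance from the camera,
    i.e. the quantity the estimator maps to the Z axis / viewing distance B.
    """
    # focal length in pixels, from the horizontal field of view
    f_px = (image_w / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)
    z = f_px * assumed_face_width_m / w        # similar triangles
    x_m = (x - image_w / 2) * z / f_px         # horizontal offset from axis
    y_m = (image_h / 2 - y) * z / f_px         # vertical offset from axis
    return x_m, y_m, z
```

A face that appears twice as wide in the image is estimated to be half as far away, which matches the qualitative use of w in the text.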
Using the viewer position supplied by the viewer position estimator 14, the viewing zone parameter calculator 15 calculates viewing zone parameters for setting a viewing zone that accommodates the detected viewer. A viewing zone parameter is, for example, the shift amount of the parallax images described with Fig. 3, and may be a single parameter or a combination of parameters. The viewing zone parameter calculator 15 supplies the calculated parameters to the image adjuster 16.
To control the viewing zone, the image adjuster (viewing zone controller) 16 adjusts the parallax image signals according to the calculated viewing zone parameters, by shifting or interpolating them, when a stereoscopic image is to be displayed, and then supplies them to the liquid crystal panel 1 for display.
The search range calculator 17 calculates the search range of the viewer search unit 13 in consideration of the viewing distance, i.e. the distance between the viewer and the liquid crystal panel 1. Fig. 5 shows an example of the calculated search range. As shown there, the search range calculator 17 sets a part of the image captured by the camera 3 as the search range. More specifically, the search range calculator 17 has a vertical search range calculator 17a and a horizontal search range calculator 17b. As ranges of the captured image that need not be searched, the vertical search range calculator 17a calculates a search omission range Pvt at the top and a search omission range Pvb at the bottom, and the horizontal search range calculator 17b calculates a search omission range Phl on the left and a search omission range Phr on the right.
As a result, if the resolution of the camera 3 is I pixels vertically and K pixels horizontally, the search range is what remains after removing Pvt pixels at the top and Pvb pixels at the bottom of the I vertical pixels, and Phl pixels on the left and Phr pixels on the right of the K horizontal pixels.
Fig. 6 is a diagram explaining how the search range in the vertical direction is calculated. The parameters in the figure are defined as follows:
α: vertical mounting angle of the camera 3 (elevation angle relative to the horizontal)
β: vertical angle of view of the camera 3
γ: assumed vertical viewing angle of the liquid crystal panel 1
A: distance from the centre of the liquid crystal panel 1 to the camera 3
B: viewing distance
Of these parameters, α, β, γ and A are predetermined constants. Before the viewer's position has been estimated by the viewer position estimator 14, B is set in advance to an assumed viewing distance (for example, three times the vertical length of the liquid crystal panel 1); after the estimation, B can be set to a distance corresponding to the estimated position, for example the viewer's position on the Z axis.
As shown in Fig. 6, the vertical length Lv captured by the camera 3 is given by the following equation (1):
Lv=B*tan(β/2+α)+B*tan(β/2-α)...(1)
Considering the viewing angle γ, the range in which the viewer can exist is limited to the existence range Ev in the figure. In other words, above the existence range Ev the camera is likely to capture the ceiling or positions above the viewer's height, so search can be omitted there. Likewise, below the existence range Ev the camera is likely to capture the floor or the area at the viewer's feet, so search can also be omitted there. The length Lvt of the upper search omission range and the length Lvb of the lower search omission range are given by equations (2) and (3), respectively:
Lvt=|B*tan(β/2+α)-{2*B*tan(γ/2)-tan(γ/2)*(B-A/tan(γ/2))}|...(2)
Lvb=|B*tan(β/2-α)-tan(γ/2)*(B-A/tan(γ/2))|...(3)
The pixel count Pvt of the upper search omission range and the pixel count Pvb of the lower search omission range are then obtained by substituting the values from equations (1) to (3) into the following equations (4) and (5):
Pvt=I*Lvt/Lv...(4)
Pvb=I*Lvb/Lv...(5)
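Under the parameter definitions above, equations (1) to (5) can be sketched in Python as follows. The brace expression in equation (2) simplifies algebraically, since 2*B*tan(γ/2) - tan(γ/2)*(B - A/tan(γ/2)) = B*tan(γ/2) + A, so the code uses the simplified bounds of the existence range Ev (B*tan(γ/2)+A above the camera, B*tan(γ/2)-A below it); this is one consistent reading of the printed formulas, not a definitive reconstruction:

```python
import math

def vertical_omission_pixels(I, alpha, beta, gamma, A, B):
    """Pixel counts (Pvt, Pvb) to omit at the top and bottom of the
    search, per equations (1)-(5). Angles are in radians; A and B are
    in the same length unit; I is the camera's vertical resolution."""
    lv = B * math.tan(beta / 2 + alpha) + B * math.tan(beta / 2 - alpha)  # (1)
    ev_top = B * math.tan(gamma / 2) + A      # top of existence range Ev
    ev_bottom = B * math.tan(gamma / 2) - A   # bottom of existence range Ev
    lvt = abs(B * math.tan(beta / 2 + alpha) - ev_top)     # (2), simplified
    lvb = abs(B * math.tan(beta / 2 - alpha) - ev_bottom)  # (3), simplified
    pvt = I * lvt / lv                                     # (4)
    pvb = I * lvb / lv                                     # (5)
    return pvt, pvb
```

For instance, with the camera level at the panel centre (α=0, A=0), the omitted top and bottom strips are symmetric, as expected from the geometry of Fig. 6.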
Fig. 7 is a diagram explaining how the search range in the horizontal direction is calculated. The parameters in the figure are defined as follows:
ζ: horizontal mounting angle of the camera 3
θ: horizontal angle of view of the camera 3
δ: assumed horizontal viewing angle of the liquid crystal panel 1
The parameters ζ, θ and δ are predetermined constants.
As shown in Fig. 7, the horizontal length Lh captured by the camera 3 is given by the following equation (6):
Lh=2*B*tan(θ/2)...(6)
Considering the viewing angle δ, the range in which the viewer can exist is limited to the existence range Eh in the figure. In other words, the areas to the left and right of the existence range Eh are ranges where search can be omitted. The length Lhl of the left search omission range and the length Lhr of the right search omission range are given by equations (7) and (8), respectively:
Lhl=|B*tan(θ/2)-B*tan(δ/2+ζ)|...(7)
Lhr=|B*tan(θ/2)-B*tan(δ/2-ζ)|...(8)
The pixel count Phl of the left search omission range and the pixel count Phr of the right search omission range are then given by equations (9) and (10), respectively:
Phl=K*Lhl/Lh=K*|tan(θ/2)-tan(δ/2+ζ)|/(2*tan(θ/2))...(9)
Phr=K*Lhr/Lh=K*|tan(θ/2)-tan(δ/2-ζ)|/(2*tan(θ/2))...(10)
With equations (4), (5), (9) and (10) above, the pixel counts Pvt, Pvb, Phl and Phr of the search omission ranges of Fig. 5 can be obtained.
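Equations (6) to (10) can be sketched similarly (helper name mine). Note that B cancels out of equations (9) and (10), so the horizontal omission pixel counts do not depend on the viewing distance:

```python
import math

def horizontal_omission_pixels(K, zeta, theta, delta, B=1.0):
    """Pixel counts (Phl, Phr) to omit on the left and right of the
    search, per equations (6)-(10). Angles are in radians; K is the
    camera's horizontal resolution. B cancels in the ratios, so any
    positive value gives the same result."""
    lh = 2 * B * math.tan(theta / 2)                                     # (6)
    lhl = abs(B * math.tan(theta / 2) - B * math.tan(delta / 2 + zeta))  # (7)
    lhr = abs(B * math.tan(theta / 2) - B * math.tan(delta / 2 - zeta))  # (8)
    return K * lhl / lh, K * lhr / lh                                    # (9), (10)
```

With a centred camera (ζ=0) the omitted left and right strips are symmetric, matching the geometry of Fig. 7.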
Fig. 8 is a flow chart showing an example of the processing operation of the controller 10 according to the first embodiment.
When an image captured by the camera 3 is input to the controller 10 ("Yes" in step S1), the search range calculator 17 calculates the search range of the viewer search unit 13. More specifically, the vertical search range calculator 17a calculates the vertical search omission ranges Pvt and Pvb according to equations (4) and (5) (step S2). The horizontal search range calculator 17b then calculates the horizontal search omission ranges Phl and Phr according to equations (9) and (10) (step S3). Since the viewer's position has not yet been estimated at first, a preset value can be used as the viewing distance B.
The viewer search unit 13 then detects the viewer by performing face detection within the search range corresponding to the calculated search omission ranges (step S4). If a viewer is detected ("Yes" in step S5), the viewer position estimator 14 estimates the viewer's position in real space (step S6).
The distance between the estimated viewer position and the liquid crystal panel 1 is input to the search range calculator 17 as the viewing distance B, and is used in the subsequent processing of the search range calculator 17 (steps S2 and S3).
Meanwhile, the viewing zone parameter calculator 15 calculates viewing zone parameters so as to set a viewing zone at the estimated viewer position (step S7). The image adjuster 16 then adjusts the parallax images generated by the parallax image converter 12 so that the image can be seen stereoscopically from the estimated viewer position (step S8). The adjusted parallax images are displayed on the liquid crystal panel 1, and the viewer can see the stereoscopic image by watching them through the lenticular lens 2.
The above steps are repeated until the final frame has been processed (step S9). When the camera 3 inputs no image to the controller 10 ("No" in step S1), viewer detection is not performed until an image is input.
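The control flow of steps S1 to S9 can be sketched as the loop below. The callable parameters stand in for the search range calculator 17, the viewer search unit 13, the viewer position estimator 14 and the viewing zone units; the preset viewing distance is a placeholder for the assumed value the text mentions (e.g. three times the panel's vertical length):

```python
DEFAULT_VIEWING_DISTANCE = 3.0  # placeholder preset for B before estimation

def process_frames(frames, search, estimate_position, compute_range,
                   set_viewing_zone):
    """Per-frame loop of Fig. 8 (steps S1-S9).

    compute_range(B) -> search range for viewing distance B (S2, S3);
    search(frame, rng) -> face hit within the range, or None (S4, S5);
    estimate_position(hit) -> viewing distance B for the next frame (S6);
    set_viewing_zone(hit) -> zone parameter calculation + display (S7, S8).
    Returns the (range, hit) pair of each frame for inspection.
    """
    b = DEFAULT_VIEWING_DISTANCE
    results = []
    for frame in frames:                 # S1 ... until final frame (S9)
        rng = compute_range(b)           # S2 + S3
        hit = search(frame, rng)         # S4
        if hit is not None:              # S5
            b = estimate_position(hit)   # S6: feeds back into S2/S3
            set_viewing_zone(hit)        # S7 + S8
        results.append((rng, hit))
    return results
```

The feedback from step S6 into the next iteration's steps S2 and S3 is what lets the search range track the viewer once detected.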
Thus, in the first embodiment, the search range calculator 17 sets a part of the image captured by the camera 3 as the search range according to the viewing distance B. Compared with using the whole captured image as the search range, this reduces the amount of processing for the viewer search.
Second embodiment
In the first embodiment described above, the search range is determined from the viewing distance B. In the second embodiment described below, the search range is determined by additionally considering the positions of the viewing zones.
Fig. 9 is a block diagram showing the schematic configuration of the image display device 100 according to the second embodiment. In Fig. 9, components identical to those of Fig. 2 are given the same reference symbols, and the description below focuses on the differences.
The controller 10' of Fig. 9 further has a viewing zone calculator 18. The viewing zone calculator 18 calculates the positions of the viewing zones set by the viewing zone parameters calculated by the viewing zone parameter calculator 15. Besides the viewing zone parameters, the viewing zones also depend on the design of the image display device 100, such as the distance between the liquid crystal panel 1 and the lenticular lens 2. Although the viewing zone parameter calculator 15 calculates the parameters so as to set a viewing zone at the viewer's position, several other viewing zones are in fact set in addition to the one at the viewer's position. The viewing zone calculator 18 therefore calculates the position of each of these viewing zones. As an example, the calculated viewing zones may be obtained as a plurality of quadrangles as shown in Fig. 10.
Until the viewing zones have been calculated, the horizontal search range calculator 17b determines the horizontal search range according to equations (9) and (10) above. Once the viewing zones have been calculated, an angle δ' containing a predetermined number of viewing zones is used in place of the fixed viewing angle δ of Fig. 7. Fig. 10 shows an example using an angle δ' containing three viewing zones. The horizontal search range calculator 17b calculates this angle δ' from the calculated viewing zone positions, and then determines the horizontal search range using equations (9) and (10) with the viewing angle δ replaced by the angle δ' corresponding to the viewing zones.
Fig. 11 is a flow chart showing an example of the processing operation of the controller 10' according to the second embodiment. The main difference from Fig. 8 is that after the viewing zone parameters are calculated, the viewing zone calculator 18 calculates the viewing zones (step S11). That is, the calculated viewing zone positions are input to the search range calculator 17 and used in the subsequent processing of the horizontal search range calculator 17b (step S3).
In this way, the second embodiment determines the search range so as to contain a predetermined number of viewing zones. The search range is thereby narrowed further, which further reduces the amount of processing for the viewer search.
Third embodiment
The embodiment described below further reduces the amount of processing for the viewer search by changing the search range from frame to frame.
The controller of the present embodiment can be the controller 10 of Fig. 2 or the controller 10' of Fig. 9, so its illustration is omitted.
Fig. 12 illustrates the processing operation of the viewer search unit 13 in the third embodiment. Fig. 12(a) shows the search range 13a calculated by the search range calculator 17 of the controller 10 (or 10'). The viewer search unit 13 does not search the whole calculated search range 13a, but searches a different part of it in each frame. In the illustrated example, for frame 3N (N being a positive integer) of the images input from the camera 3, the viewer search unit 13 searches the left region 13b of the search range 13a (Fig. 12(b)). For frame (3N+1), it searches the middle region 13c (Fig. 12(c)). And for frame (3N+2), it searches the right region 13d (Fig. 12(d)).
By searching in this way, the search range per frame is narrowed, which further reduces the amount of processing for the viewer search. When the frame rate, for example 30 fps (frames per second), is sufficiently fast relative to the viewer's movement, the viewer detection accuracy hardly drops even though several frames are needed to search the whole range.
Since the viewer may be located on a boundary between regions, the regions preferably overlap. For example, the right part of region 13b in Fig. 12(b) overlaps the left part of the middle region 13c in Fig. 12(c).
The regions 13b to 13d need not be searched equally often. For example, since the viewer is likely to be in front of the liquid crystal panel 1, the search rate of region 13c may be raised and the search rates of regions 13b and 13d correspondingly lowered. The search range 13a may of course be divided into two regions, or into four.
Thus, the third embodiment searches only a part of the search range in each frame, which further reduces the amount of processing for the viewer search.
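A sketch of this per-frame region selection, with a fixed pixel overlap at the region boundaries; the overlap width and the round-robin order are assumptions for illustration (the patent only requires that the regions overlap and says the rates may differ):

```python
def subregion_for_frame(frame_index, range_width, num_regions=3, overlap_px=10):
    """Column span (start, end) of the search range to search this frame.

    The search range is split into num_regions vertical strips searched
    round-robin (frame 3N -> left 13b, 3N+1 -> middle 13c, 3N+2 ->
    right 13d), each strip widened by overlap_px on both sides so a
    viewer sitting on a boundary is not missed.
    """
    strip = range_width // num_regions
    i = frame_index % num_regions
    start = max(0, i * strip - overlap_px)
    end = min(range_width, (i + 1) * strip + overlap_px)
    if i == num_regions - 1:
        end = range_width  # last strip absorbs any division remainder
    return start, end
```

Each frame therefore searches roughly one-third of the range, and after three frames the whole range (with overlapping seams) has been covered.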
Except the described above first to the 3rd execution mode, also can consider various modified examples.For example, also can determine hunting zone according to the load factor of memory and/or CPU (following, referred to as load factor).For example can, if load factor is below setting, in the whole image of taking at video camera 3, search for beholder, infer in advance beholder position simultaneously.Then, when load factor surpasses setting, according to viewing distance corresponding to the beholder position with inferring and using a part for image as hunting zone.
Alternatively, the whole image captured by the camera 3 may be searched once every N frames, with the viewer's position estimated in advance; in the other frames, a part of the image is used as the search range, according to the viewing distance corresponding to the estimated viewer position.
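The load-factor modification above can be sketched as a simple switch (names and threshold are illustrative assumptions, not from the patent):

```python
# Sketch of the load-factor variant: search the whole camera image
# while the load factor is at or below a threshold (also refreshing
# the viewer-position estimate), and fall back to the narrowed search
# range near the estimated viewer once the load factor exceeds it.

def choose_search_range(load_factor, threshold, full_range, narrowed_range):
    """Pick the region (x0, y0, x1, y1) to search this frame."""
    if load_factor <= threshold:
        return full_range       # whole image; viewer position re-estimated
    return narrowed_range       # part of the image near the estimated viewer
```

For example, with `full = (0, 0, 1920, 1080)` and a narrowed range around the estimated viewer, a load factor of 0.35 against a threshold of 0.8 selects the full image, while 0.92 selects the narrowed range.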
The search range calculating section 17 may also include only one of the vertical search range calculating section 17a and the horizontal search range calculating section 17b.
In each embodiment, an example was described in which the viewing zone is controlled by shifting the parallax images via the lenticular lens 2, but other methods of controlling the viewing zone may be adopted. For example, a parallax barrier may be provided as an aperture control section 2' in place of the lenticular lens 2. Figure 13 is a block diagram showing the schematic configuration of an image display device 100' as a modification of Fig. 2. As shown in the figure, the controller 10' of the image display device 100' includes a viewing zone control section 16' in place of the image adjusting section 16.
The viewing zone control section 16' controls the aperture control section 2' according to the viewing zone parameters calculated by the viewing zone parameter calculating section 15. In this modification, the control parameters include the distance between the liquid crystal panel 1 and the aperture control section 2', the horizontal offset between the liquid crystal panel 1 and the aperture control section 2', and so on.
In this modification, the viewing zone is controlled by using the aperture control section 2' to control the output direction of the parallax images displayed on the liquid crystal panel 1. In this way, the viewing zone control section 16' controls the aperture control section 2' without performing processing to shift the parallax images.
At least a part of the image display system described in the above embodiments may be implemented in hardware or in software. When implemented in software, a program realizing at least a part of the functions of the image display system may be stored on a recording medium such as a floppy disk or CD-ROM, and read and executed by a computer. The recording medium is not limited to removable media such as magnetic or optical disks; it may also be a fixed recording medium such as a hard disk device or a memory.
The program realizing at least a part of the functions of the image display system may also be distributed via a communication line (including wireless communication) such as the Internet. Further, the program may be distributed in encrypted, modulated, or compressed form, via a wired line such as the Internet or via a wireless link, or stored on a recording medium for distribution.
According to the device or method of at least one of the embodiments described above, a viewer can be detected with a smaller amount of processing.
While certain embodiments of the present invention have been described, these embodiments have been presented by way of example only and are not intended to limit the scope of the invention. These embodiments may be embodied in various other forms, and various omissions, substitutions, and changes may be made without departing from the spirit of the invention. The appended claims and their equivalents are intended to cover such forms or modifications as fall within the scope and spirit of the invention.

Claims (9)

1. An image processing device, characterized by comprising:
a search range calculating section that calculates, according to a viewing distance that is a distance between a display section and a viewer, a search range that is a part of an image captured by a camera; and
a viewer search section that searches for the viewer within the search range.
2. The image processing device according to claim 1, characterized in that
the image processing device further comprises a viewer position estimating section that estimates a position of the viewer found by the search, and
the search range calculating section calculates the search range according to a preset assumed viewing distance or according to the viewing distance corresponding to the estimated position of the viewer.
3. The image processing device according to claim 1, characterized in that the image processing device further comprises:
a viewer position estimating section that estimates a position of the viewer found by the search;
a viewing zone parameter calculating section that calculates viewing zone parameters for setting a viewing zone at the estimated position of the viewer; and
a viewing zone calculating section that calculates respective positions of a plurality of viewing zones that can be set according to the viewing zone parameters,
wherein the search range calculating section calculates the search range so as to include a predetermined viewing zone among the plurality of viewing zones.
4. The image processing device according to claim 1, characterized in that
for a first frame of the image captured by the camera, the viewer search section searches for the viewer in a first region that is a part of the search range,
for a second frame of the image captured by the camera, the viewer search section searches for the viewer in a second region that is a part of the search range, and
a part of the second region overlaps a part of the first region.
5. The image processing device according to claim 1, characterized in that
the search range calculating section includes at least one of a vertical search range calculating section and a horizontal search range calculating section, wherein
the vertical search range calculating section calculates the search range in the vertical direction of the image captured by the camera, and
the horizontal search range calculating section calculates the search range in the horizontal direction of the image captured by the camera.
6. The image processing device according to claim 5, characterized in that
the vertical search range calculating section takes, as the search range, the part of the image captured by the camera excluding the top Pvt pixels and the bottom Pvb pixels, where
Pvt=I*Lvt/Lv...(1)
Pvb=I*Lvb/Lv...(2)
Lvt=|B*tan(β/2+α)-{2*B*tan(γ/2)-tan(γ/2)*(B-A/tan(γ/2))}|...(3)
Lvb=|B*tan(β/2-α)-tan(γ/2)*(B-A/tan(γ/2))|...(4)
Lv=B*tan(β/2+α)+B*tan(β/2-α)...(5)
and I is the number of pixels in the vertical direction of the image captured by the camera, α is the installation angle of the camera in the vertical direction, β is the view angle of the camera in the vertical direction, γ is the assumed viewing angle of the display section in the vertical direction, and B is the viewing distance.
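As a numerical illustration (not part of the claims), equations (1)-(5) can be evaluated as follows. The parameter values in the test are arbitrary examples, and A is an installation-geometry term defined in the description rather than in this claim:

```python
# Evaluate the vertical pixel margins Pvt (top) and Pvb (bottom) that
# equations (1)-(5) exclude from the vertical search range.
import math

def vertical_margins(I, alpha, beta, gamma, A, B):
    """I: vertical pixel count; alpha: camera installation angle;
    beta: camera vertical view angle; gamma: assumed display viewing
    angle; A: installation-geometry term; B: viewing distance.
    All angles in radians."""
    t = math.tan
    Lvt = abs(B * t(beta/2 + alpha)
              - (2*B*t(gamma/2) - t(gamma/2) * (B - A/t(gamma/2))))   # (3)
    Lvb = abs(B * t(beta/2 - alpha) - t(gamma/2) * (B - A/t(gamma/2)))  # (4)
    Lv = B * t(beta/2 + alpha) + B * t(beta/2 - alpha)                  # (5)
    return I * Lvt / Lv, I * Lvb / Lv                                   # (1), (2)
```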
7. The image processing device according to claim 5, characterized in that
the horizontal search range calculating section takes, as the search range, the part of the image captured by the camera excluding the left Phl pixels and the right Phr pixels, where
Phl=K*|tan(θ/2)-tan(δ/2+ζ)|/2*tan(θ/2)...(6)
Phr=K*|tan(θ/2)-tan(δ/2-ζ)|/2*tan(θ/2)...(7)
and K is the number of pixels in the horizontal direction of the image captured by the camera, ζ is the installation angle of the camera in the horizontal direction, θ is the view angle of the camera in the horizontal direction, and δ is the assumed viewing angle of the display section in the horizontal direction.
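Similarly, equations (6) and (7) can be evaluated numerically (an illustration, not part of the claims; the `/2*tan(θ/2)` in the flattened formulas is read here as division by 2·tan(θ/2), and the test values are arbitrary):

```python
# Evaluate the horizontal pixel margins Phl (left) and Phr (right)
# that equations (6)-(7) exclude from the horizontal search range.
import math

def horizontal_margins(K, zeta, theta, delta):
    """K: horizontal pixel count; zeta: camera installation angle;
    theta: camera horizontal view angle; delta: assumed display
    viewing angle. All angles in radians."""
    t = math.tan
    Phl = K * abs(t(theta/2) - t(delta/2 + zeta)) / (2 * t(theta/2))  # (6)
    Phr = K * abs(t(theta/2) - t(delta/2 - zeta)) / (2 * t(theta/2))  # (7)
    return Phl, Phr
```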
8. The image processing device according to claim 1, characterized by comprising:
a display section that displays a plurality of parallax images;
a viewer position estimating section that estimates a position of the viewer found by the search; and
a viewing zone control section that sets a viewing zone at the position of the viewer found by the search.
9. An image processing method, characterized by comprising:
calculating, according to a viewing distance that is a distance between a display section and a viewer, a search range that is a part of an image captured by a camera; and
searching for the viewer within the search range.
CN201310026387.3A 2012-08-31 2013-01-24 Image processing device and image processing method Pending CN103686122A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012191702A JP5395934B1 (en) 2012-08-31 2012-08-31 Video processing apparatus and video processing method
JP2012-191702 2012-08-31

Publications (1)

Publication Number Publication Date
CN103686122A true CN103686122A (en) 2014-03-26

Family

ID=50112326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310026387.3A Pending CN103686122A (en) 2012-08-31 2013-01-24 Image processing device and image processing method

Country Status (2)

Country Link
JP (1) JP5395934B1 (en)
CN (1) CN103686122A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105263014A (en) * 2015-10-12 2016-01-20 四川长虹电器股份有限公司 Implementation method of naked eye 3D UI (user interface) controls

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1791230A (en) * 2004-12-13 2006-06-21 三星电子株式会社 Three dimensional image display apparatus
KR20080040931A (en) * 2006-11-06 2008-05-09 삼성전자주식회사 Display apparatus and control method thereof
CN101491108A (en) * 2006-09-26 2009-07-22 株式会社东芝 Apparatus, method and computer program product for three-dimensional image processing
US20100118118A1 (en) * 2005-10-21 2010-05-13 Apple Inc. Three-dimensional display system
US20100182409A1 (en) * 2009-01-21 2010-07-22 Sony Corporation Signal processing device, image display device, signal processing method, and computer program
CN101799584A (en) * 2009-02-11 2010-08-11 乐金显示有限公司 Method of controlling view of stereoscopic image and stereoscopic image display using the same
JP2011071898A (en) * 2009-09-28 2011-04-07 Panasonic Corp Stereoscopic video display device and stereoscopic video display method
CN102300111A (en) * 2010-06-24 2011-12-28 索尼公司 Stereoscopic display device and control method of stereoscopic display device
WO2012002018A1 (en) * 2010-06-30 2012-01-05 富士フイルム株式会社 Stereoscopic image playback device, parallax adjustment method of same, parallax adjustment program, and image capture device
WO2012060182A1 (en) * 2010-11-05 2012-05-10 富士フイルム株式会社 Image processing device, image processing program, image processing method, and storage medium
WO2012073336A1 (en) * 2010-11-30 2012-06-07 株式会社 東芝 Apparatus and method for displaying stereoscopic images


Also Published As

Publication number Publication date
JP2014049951A (en) 2014-03-17
JP5395934B1 (en) 2014-01-22


Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140326