WO1997000575A1 - Object recognition device and image pickup device - Google Patents


Info

Publication number
WO1997000575A1
Authority
WO
WIPO (PCT)
Prior art keywords
evaluation value
circuit
signal
value
window
Prior art date
Application number
PCT/JP1996/001700
Other languages
English (en)
Japanese (ja)
Inventor
Yujiro Ito
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO1997000575A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782Systems for determining direction or deviation from predetermined direction
    • G01S3/785Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
    • G01S3/7864T.V. type tracking systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/673Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • The present invention relates to a subject recognition device for automatically recognizing the position of a subject photographed by a video camera or the like, and to an image pickup apparatus that uses the recognition device to automatically track a subject and adjust the focus onto it. Background art
  • In general, an image is in focus when its contrast is high and out of focus when its contrast is low.
  • This integrated data indicates how much high-frequency component exists within the set window, and is generally called an evaluation value. Autofocus can therefore be realized by driving the focus lens so that the evaluation value is maximized (that is, so that the contrast is maximized).
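The evaluation value described above can be sketched in software as follows. This is an illustrative model only; the patent implements it with hardware filter and integration circuits, and the simple first difference below merely stands in for the high-pass filter:

```python
# Illustrative sketch: the evaluation value is the accumulated absolute
# high-pass-filtered luminance inside an evaluation window. A sharper
# (higher-contrast) image yields a larger value.

def evaluation_value(luma, window):
    """luma: 2-D list of luminance samples; window: (top, left, height, width)."""
    top, left, h, w = window
    total = 0
    for row in luma[top:top + h]:
        segment = row[left:left + w]
        # First difference approximates a high-pass filter: it responds
        # to luminance transitions (edges) and ignores flat regions.
        total += sum(abs(b - a) for a, b in zip(segment, segment[1:]))
    return total

sharp   = [[0, 0, 255, 255, 0, 0]] * 4    # hard edges -> high contrast
blurred = [[0, 64, 128, 128, 64, 0]] * 4  # soft ramp  -> low contrast

win = (0, 0, 4, 6)
assert evaluation_value(sharp, win) > evaluation_value(blurred, win)
```

Driving the lens to maximize this quantity is the basic autofocus loop the passage describes.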
  • However, the evaluation value extracted in this way is not always an accurate evaluation value representing the contrast of the subject. There are several possible reasons for this.
  • One factor is that the evaluation window must be set on the target subject in order to detect the evaluation value, but when the target subject is moving, the evaluation window cannot be set accurately on the subject. For this reason, an accurate evaluation value could not be obtained, and it took a very long time to adjust the focus.
  • In broadcasting, a photographed video may be transmitted to homes through a live broadcast. If an accurate evaluation value cannot be obtained during such a live broadcast, the autofocus operation takes a long time, and as a result a blurred image signal is transmitted to homes. Therefore, video cameras for broadcasting stations and business use cannot rely on the simple, inexpensive, and compact autofocus devices found in consumer video cameras; they require high-precision and high-speed focus control.
  • An imaging apparatus having a function of automatically tracking a specific subject photographed by a video camera, a so-called automatic tracking function, is also known.
  • In these conventional automatic tracking methods, the color components of the subject to be automatically tracked are first stored, the image data obtained for each field or each frame is analyzed, and the position in the captured screen where the color of the subject to be tracked appears is searched for. At this time, all captured pixel data must be analyzed in order to find where in the captured screen the color of the subject to be automatically tracked exists.
  • An object of the present invention is to grasp the position of a target subject accurately and in real time, and to obtain an evaluation value corresponding to the position of the subject. Disclosure of the invention
  • A feature of the present invention is that an object recognition device for recognizing the position of a target object comprises: imaging means for outputting an electrical imaging signal; area searching means for selecting, from a plurality of areas into which the captured screen formed from the imaging signal is divided, an area in which pixel data having the same color component as the target object exists; storage means for storing pixel data corresponding to the imaging signal from the imaging means; and processing means for reading from the storage means the pixel data corresponding to the area selected by the area searching means and calculating the position of the target object based on the read pixel data.
  • A further feature of the present invention is that an imaging apparatus for imaging a target object comprises: an imaging device that outputs an electrical imaging signal; focus control means for setting a detection window on the imaging screen formed from the imaging signal, detecting an evaluation value of the imaging signal within the set detection window, and controlling the focus based on the evaluation value; storage means for storing all pixel data corresponding to the imaging signal from the imaging means; and processing means for calculating the position of the target subject based on the pixel data read from the storage means and controlling the focus control means so that the position of the detection window coincides with the calculated position of the target object.
  • FIG. 1 is a drawing showing the overall configuration of an image pickup apparatus composed of a video camera.
  • FIG. 2 is a drawing showing a specific configuration of the autofocus control circuit 34.
  • FIG. 3 is a drawing showing a specific configuration of the horizontal direction evaluation value generation circuit 62.
  • FIG. 4 is a diagram showing a specific configuration of the vertical direction evaluation value generation circuit 63.
  • FIG. 5 is a diagram showing the filter coefficients and window sizes set in each of the horizontal direction evaluation value generation circuit 62 and the vertical direction evaluation value generation circuit 63.
  • FIG. 6 is a drawing for explaining each window size.
  • FIG. 7 is a drawing showing the weight data W set for each evaluation value E.
  • FIG. 8 is a drawing showing a divided area divided by the area search circuit 38.
  • FIG. 9 is a drawing showing a specific circuit configuration of the area search circuit 38.
  • FIG. 10 to FIG. 15 are drawings for explaining the processing operation for focusing.
  • FIG. 16 is a diagram for explaining a processing operation for determining a target subject.
  • FIG. 17 is a drawing showing the movement of the lens when determining the direction in which the lens should be moved to adjust the focus.
  • FIG. 18A and FIG. 18B are drawings showing a state when a non-target object enters the window.
  • FIG. 19 is a diagram for illustrating a change in the evaluation value stored in the RAM 66 when the moving direction of the lens is determined.
  • FIG. 20 is a diagram showing data stored in the RAM 66 during an autofocus operation.
  • FIG. 21 is a drawing for showing changes in the evaluation value obtained by the autofocus operation.
  • FIG. 22 is a drawing showing an imaging state of a target object and an object B having the same color as the target object.
  • FIG. 23 is a diagram showing an object information table.
  • FIG. 24 is a diagram showing a target object history table. Best mode for carrying out the invention
  • The video camera device has a lens block 1 for optically condensing incident light onto the front of the imaging device, an imaging block 2 for converting the incident light from the lens block into RGB electrical imaging signals, a signal processing block 3 for performing predetermined signal processing on the imaging signals, and a CPU 4 for controlling the lens block 1, the imaging block 2, and the signal processing block 3.
  • The lens block 1 is provided so that it can be attached to and detached from the video camera body.
  • This lens block 1 has a zoom lens 11, an optical element that moves along the optical axis to continuously change the focal length without changing the position of the image point, thereby zooming the image of the subject.
  • The lens block 1 further includes a position detection sensor 11a for detecting the lens position of the zoom lens 11 in the optical axis direction, a drive motor 11b for moving the zoom lens 11 in the optical axis direction, a zoom lens drive circuit 11c for providing a drive control signal to the drive motor 11b, and a position detection sensor 12a for detecting the lens position of the focus lens 12 in the optical axis direction.
  • The detection signals of the position detection sensors 11a, 12a, and 13a are always sent to the CPU 4.
  • Conversely, control signals from the CPU 4 are supplied to the zoom lens drive circuit 11c, the focus lens drive circuit 12c, and the iris mechanism drive circuit 13c, which are electrically connected to the CPU 4 for this purpose.
  • The lens block 1 has an EEPROM 15 for storing the focal length data and aperture ratio data of the zoom lens 11, the focal length data and aperture ratio data of the focus lens 12, and the manufacturer's name and serial number of the lens. Each item of data stored in the EEPROM 15 is read based on a read command from the CPU 4, to which the EEPROM 15 is connected.
  • The direction of the optical axis of the lens is controlled by a pan/tilt drive mechanism 16.
  • The control signal from the CPU 4 is supplied to the pan/tilt drive mechanism 16.
  • The pan/tilt drive mechanism 16 is not limited to one provided in the lens block as described above; it may instead be provided as a mechanism for driving the entire video camera.
  • The imaging block 2 has a color separation prism 21 for separating the incident light from the lens block into the three primary colors of red (R), green (G), and blue (B).
  • The light of the R, G, and B components separated by the color separation prism 21 is imaged on the imaging surfaces of image sensors 22R, 22G, and 22B, which convert the imaged light of each color component into imaging signals (R), (G), and (B), respectively, and output them.
  • The image sensors 22R, 22G, and 22B are CCDs (Charge Coupled Devices).
  • The imaging block 2 has preamplifiers 23R, 23G, and 23B for amplifying the levels of the imaging signals (R), (G), and (B) output from the image sensors 22R, 22G, and 22B, respectively, and for performing correlated double sampling to remove reset noise.
  • The imaging block 2 has a timing signal generation circuit 24 that uses the reference clock from an internally provided reference clock generation circuit to generate the VD signal, the HD signal, and the CLK signal, which are the basic clocks on which each circuit in the video camera operates, and a CCD drive circuit 25 that gives drive clocks to the image sensors 22R, 22G, and 22B based on the VD, HD, and CLK signals supplied from the timing signal generation circuit 24.
  • the VD signal is a clock signal representing one vertical period
  • the HD signal is a clock signal representing one horizontal period
  • the CLK signal is a clock signal representing one pixel clock.
  • Although not shown, a timing clock composed of the VD signal, the HD signal, and the CLK signal is supplied to each circuit of the video camera device via the CPU 4.
  • The signal processing block 3 is provided inside the main unit of the video camera and performs predetermined signal processing on the imaging signals (R), (G), and (B) supplied from the imaging block 2.
  • The signal processing block 3 has A/D conversion circuits 31R, 31G, and 31B that convert the analog imaging signals (R), (G), and (B) into digital video signals (R), (G), and (B), respectively, and gain control circuits 32R, 32G, and 32B that control the gain of the digital video signals (R), (G), and (B) based on a gain control signal from the CPU 4.
  • The signal processing circuits 33R, 33G, and 33B each include, for example, a knee circuit 331R, 331G, 331B that compresses the video signal above a certain level, and a gamma correction circuit 332R, 332G, 332B that corrects the level according to a set gamma curve.
  • These signal processing circuits 33R, 33G, and 33B may include a known black gamma correction circuit, contour enhancement circuit, linear matrix circuit, and the like, in addition to the knee circuit, the gamma correction circuit, and the B/W clip circuit.
  • The signal processing block 3 has an encoder 37 that receives the video signals (R), (G), and (B) output from the signal processing circuits 33R, 33G, and 33B and generates from them a luminance signal (Y) and color difference signals (R-Y) and (B-Y).
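As a rough sketch of what the encoder 37 computes, the luminance signal is a weighted sum of R, G, and B, and the color difference signals are formed by subtracting it from R and B. The Rec. 601 weights below are an assumption for illustration; the passage does not state the exact coefficients used:

```python
# Sketch of the encoder's outputs (Y, R-Y, B-Y), assuming standard
# Rec. 601 luminance weights; the patent text leaves the coefficients
# unstated, so treat these values as illustrative.

def encode(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance signal (Y)
    return y, r - y, b - y                 # (Y, R-Y, B-Y)

# White has full luminance and zero color difference:
y, ry, by = encode(255, 255, 255)
assert abs(y - 255.0) < 1e-6
assert abs(ry) < 1e-6 and abs(by) < 1e-6
```

These three signals are exactly what the area search circuit 38 and the frame memory 39 receive in the following paragraphs.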
  • The signal processing block 3 further has a focus control circuit 34 that receives the video signals (R), (G), and (B) output from the gain control circuits 32R, 32G, and 32B and generates evaluation data E and direction data Dr for controlling the focus based on these signals; an iris control circuit 35 that receives the video signals (R), (G), and (B) output from the signal processing circuits 33R, 33G, and 33B and controls the iris based on their signal levels so that the amount of light incident on the image sensors 22R, 22G, and 22B is appropriate; and a white balance control circuit 36 that receives the video signals (R), (G), and (B) and performs white balance control based on their signal levels.
  • The iris control circuit 35 includes a NAM circuit for selecting the signal with the highest level among the supplied video signals (R), (G), and (B), divides the screen of the selected signal into a plurality of areas, and fully integrates the video signal within each area.
  • The iris control circuit 35 judges the lighting condition of the subject, such as backlighting, normal lighting, flat lighting, or spot lighting, based on the integrated data for each area, generates an iris control signal for controlling the iris, and sends this iris control signal to the CPU 4.
  • The CPU 4 sends a control signal to the iris drive circuit 13c based on the iris control signal.
  • the CPU 4 supplies a gain control signal to the gain control circuits 32R, 32G, and 32B based on the white balance control signal.
  • The signal processing block 3 includes an area search circuit 38 and a frame memory 39.
  • The area search circuit 38 receives the luminance signal (Y) and the chrominance signals (R-Y) and (B-Y) from the encoder 37 and, based on these signals, selects from among the areas set over the entire screen an area having pixel data that matches the color of the subject specified as the target object. Details will be described later.
  • the frame memory 39 receives the luminance signal (Y) and the chrominance signals (R-Y) and (B-Y) from the encoder 37, and temporarily stores the luminance signal and the chrominance signal.
  • The luminance signal and color signals stored in each frame memory are read out based on read address data supplied from the CPU 4, and the read luminance signal and color signals are supplied to the CPU 4.
  • The focus control circuit 34 consists of a luminance signal generation circuit 61, a horizontal direction evaluation value generation circuit 62, a vertical direction evaluation value generation circuit 63, and a microcomputer 64.
  • The luminance signal generation circuit 61 is a circuit that generates a luminance signal from the supplied video signals R, G, and B. To determine whether the image is in focus, it is necessary to determine whether the contrast is high or low. Since a change in contrast is unrelated to changes in the level of the color signals, whether the contrast is high or low can be judged by detecting only changes in the level of the luminance signal.
  • The luminance signal generation circuit 61 converts the supplied video signals R, G, and B into the luminance signal Y, for example by a weighted sum of the three signals.
  • The horizontal direction evaluation value generation circuit 62 is a circuit for generating horizontal direction evaluation values.
  • A horizontal evaluation value is data indicating how much the level of the luminance signal changes when the luminance signal is sampled in the horizontal direction; in other words, it indicates how much contrast exists in the horizontal direction.
  • The horizontal evaluation value generation circuit 62 consists of a first horizontal evaluation value generation circuit 62a for generating a first horizontal evaluation value E1, a second horizontal evaluation value generation circuit 62b for generating a second horizontal evaluation value E2, and so on, through a twelfth horizontal evaluation value generation circuit 62l for generating a twelfth horizontal evaluation value E12.
  • The first horizontal evaluation value generation circuit 62a of the horizontal evaluation value generation circuit 62 has a high-pass filter 621 that extracts the high-frequency component of the luminance signal, an absolute value conversion circuit 622 that takes the absolute value of the extracted high-frequency component so that all data have positive values, and a horizontal integration circuit 623 that cumulatively adds the horizontal high-frequency component data by integrating the absolute value data in the horizontal direction.
  • The high-pass filter 621 consists of a one-dimensional finite impulse response filter that filters out the high-frequency component of the luminance signal in response to the sample clock CLK from the window pulse generation circuit 625.
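The one-dimensional FIR high-pass filtering performed by circuit 621 can be modeled as a short convolution along a scan line. The tap values (-1, 2, -1) below are an illustrative assumption; the patent leaves the actual coefficients to each circuit:

```python
# Sketch of a one-dimensional FIR high-pass filter like circuit 621:
# a short kernel convolved with the luminance samples of one line.
# The kernel (-1, 2, -1) is a common illustrative choice, not the
# patent's actual coefficients.

def fir_highpass(samples, kernel=(-1, 2, -1)):
    k = len(kernel)
    return [sum(c * samples[i + j] for j, c in enumerate(kernel))
            for i in range(len(samples) - k + 1)]

assert fir_highpass([5, 5, 5, 5]) == [0, 0]            # flat signal: no output
assert fir_highpass([0, 0, 255, 255]) == [-255, 255]   # edge: strong response
```

The absolute values of this output are what the integration circuits 623 and 624 accumulate within the window.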
  • The window pulse generation circuit 625 includes a plurality of counters that operate based on the VD signal representing one vertical period, the HD signal representing one horizontal period, and the CLK signal representing one sample clock, all supplied from the CPU 4.
  • Based on the count values of these counters, the window pulse generation circuit 625 supplies an enable signal to the horizontal integration circuit 623 every sample clock CLK.
  • An enable signal is supplied to the vertical integration circuit 624 every horizontal period.
  • Initial count values are set in the counters so that the size of the window becomes 192 pixels X 60 pixels.
  • Therefore, the horizontal evaluation value E1 output from the first horizontal evaluation value generation circuit 62a represents the data obtained by integrating all the high-frequency components existing in the 192-pixel X 60-pixel window.
  • The counters are connected so that an offset value can be supplied from the CPU 4. The initial count values are set so that the center of the window coincides with the center of the imaging screen.
  • The offset value supplied from the CPU 4 is a count value added to the initial count values. Therefore, when an offset value is supplied from the CPU 4, the count values of the counters change, and the center position of the window changes accordingly.
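The counter logic above amounts to centering a fixed-size window on the screen and shifting its center by the CPU-supplied offset. The 768 x 240 screen size below is an assumption for illustration (it is consistent with the window sizes mentioned later in the text):

```python
# Sketch of the window-pulse logic: a window of fixed size is centered
# on the screen, and a CPU-supplied offset shifts its center. The
# screen size is an assumed value for illustration.

SCREEN_W, SCREEN_H = 768, 240

def window_bounds(win_w, win_h, offset_x=0, offset_y=0):
    """Return (left, top, right, bottom) of a window whose center is
    the screen center plus the CPU-supplied offset."""
    cx = SCREEN_W // 2 + offset_x
    cy = SCREEN_H // 2 + offset_y
    return (cx - win_w // 2, cy - win_h // 2,
            cx + win_w // 2, cy + win_h // 2)

# With no offset, the 192x60 window is centered on the screen:
assert window_bounds(192, 60) == (288, 90, 480, 150)
# An offset moves the window center accordingly:
assert window_bounds(192, 60, offset_x=100) == (388, 90, 580, 150)
```

This offset mechanism is what later lets the detection window follow the tracked subject.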
  • The other circuits, from the second horizontal evaluation value generation circuit 62b to the twelfth horizontal evaluation value generation circuit 62l, are configured similarly to the first horizontal evaluation value generation circuit 62a described above, each having a high-pass filter, an absolute value conversion circuit, integration circuits, and a window pulse generation circuit.
  • What differs from the first horizontal evaluation value generation circuit 62a to the twelfth horizontal evaluation value generation circuit 62l is the combination of filter coefficient and window size set in each circuit. The reason why such different filter coefficients are set is described below.
  • A high-pass filter with a high cut-off frequency is very suitable near just focus (the state in which the image is in focus), because near just focus the evaluation value changes at a large rate with respect to movement of the lens. Where the focus deviates greatly, however, the rate of change of the evaluation value is small even when the lens is moved, so a high-pass filter with a high cut-off frequency is not suitable where the focus deviates greatly.
  • Conversely, a high-pass filter with a low cut-off frequency is suitable where the focus deviates greatly, because moving the lens there produces a large rate of change in the evaluation value. Near just focus, however, the rate of change of the evaluation value is small even when the lens is moved, so a high-pass filter with a low cut-off frequency is not suitable near just focus. In other words, high-pass filters with a high cut-off frequency and those with a low cut-off frequency each have advantages and disadvantages, and it cannot be said in a word which filter is the most suitable.
  • window W1 is a 192-pixel X60-pixel window
  • window W2 is a 13-pixel X60-pixel window
  • window W3 is a 384-pixel
  • window W4 is a window of 264 pixels X 120 pixels
  • window W5 is a window of 768 pixels X 120 pixels.
  • window W6 is
  • By setting a plurality of windows in this way, different evaluation values corresponding to the respective windows can be generated.
  • Therefore, regardless of the size of the subject whose focus is to be adjusted, an appropriate evaluation value can be obtained from one of the horizontal evaluation value generation circuits 62a to 62l.
  • the vertical direction evaluation value generation circuit 63 is a circuit for generating a vertical direction evaluation value.
  • A vertical evaluation value is data indicating how much the level of the luminance signal changes when the luminance signal is sampled in the vertical direction; in other words, it indicates how much contrast exists in the vertical direction.
  • The vertical evaluation value generation circuit 63 consists of a first vertical evaluation value generation circuit 63a for generating a thirteenth evaluation value E13, and so on, through a twelfth vertical evaluation value generation circuit 63l for generating a twenty-fourth evaluation value E24.
  • The first vertical evaluation value generation circuit 63a of the vertical evaluation value generation circuit 63 has a horizontal average value generation circuit 631 that generates average value data of the level of the luminance signal for each horizontal line, a high-pass filter 632 that extracts the high-frequency component of the average value data of the luminance signal, an absolute value conversion circuit 633 that converts the extracted high-frequency component into absolute values so that all data have positive values, a vertical integration circuit 634 that cumulatively adds the vertical high-frequency component data by integrating the absolute value data in the vertical direction, and a window pulse generation circuit 635 that sends enable signals enabling the integration operation to the horizontal average value generation circuit 631 and the vertical integration circuit 634.
  • The high-pass filter 632 consists of a one-dimensional finite impulse response filter that filters the high-frequency component of the luminance signal in response to the one-horizontal-period signal HD from the window pulse generation circuit 635.
  • This high-pass filter 632 has the same cutoff frequency as the high-pass filter 621 of the first horizontal direction evaluation value generation circuit 62a.
  • The window pulse generation circuit 635 has a plurality of counters that operate based on the VD signal representing one vertical period, the HD signal representing one horizontal period, and the CLK signal representing one sample clock, all supplied from the CPU 4.
  • Based on the count values of these counters, the window pulse generation circuit 635 supplies an enable signal to the horizontal average value generation circuit 631 every sample clock, and supplies an enable signal to the vertical integration circuit 634 every horizontal period.
  • The window pulse generation circuit 635 has initial count values set so that the size of the window is 120 pixels X 80 pixels.
  • The evaluation value E13 output from the first vertical evaluation value generation circuit 63a therefore represents the data obtained by integrating the vertical high-frequency component within the 120-pixel X 80-pixel window.
  • These counters are connected so that an offset value can be supplied from the CPU 4.
  • The initial count values are set so that the center of the window coincides with the center of the imaging screen.
  • The offset value supplied from the CPU 4 is a count value added to the initial count values. Therefore, when an offset value is supplied from the CPU 4, the count values of the counters change, and the center position of the window changes accordingly.
  • The other circuits, from the second vertical evaluation value generation circuit 63b to the twelfth vertical evaluation value generation circuit 63l, are similar to the first vertical evaluation value generation circuit 63a described above, each having a horizontal average value generation circuit 631, a high-pass filter 632, an absolute value conversion circuit 633, a vertical integration circuit 634, and a window pulse generation circuit 635. As in the horizontal direction evaluation value generation circuit 62, what differs in each circuit is the combination of filter coefficient and window size set in it, so the evaluation values E13 to E24 take different values.
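The vertical path above differs from the horizontal one in that each line is first reduced to its average level before filtering. A software sketch (the first difference again standing in for the high-pass filter 632):

```python
# Sketch of the vertical evaluation path: each horizontal line is
# reduced to its average level (circuit 631), then the sequence of
# line averages is high-pass filtered, converted to absolute values,
# and accumulated vertically (circuits 632/633/634).

def vertical_evaluation_value(luma):
    line_means = [sum(row) / len(row) for row in luma]  # circuit 631
    # High-pass (first difference) + absolute value + vertical integration:
    return sum(abs(b - a) for a, b in zip(line_means, line_means[1:]))

horizontal_stripes = [[0] * 8, [255] * 8, [0] * 8, [255] * 8]  # strong vertical contrast
flat               = [[128] * 8] * 4                           # no vertical contrast
assert vertical_evaluation_value(horizontal_stripes) > vertical_evaluation_value(flat)
```

Averaging each line first makes this value respond to vertical contrast only, complementing the horizontal evaluation values E1 to E12.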
  • Window W7 is a window of 120 pixels X 80 pixels
  • window W8 is a window of 120 pixels X 60 pixels
  • window W9 has 24 windows.
  • window W 10 is a window of 240 pixels X 120 pixels
  • window W 1 1 is 480 pixels X 3
  • window W12 has a window of 480 pixels X 240 pixels.
  • The microcomputer 64 receives all 24 evaluation values E1 to E24 generated by the horizontal evaluation value generation circuit 62 and the vertical evaluation value generation circuit 63, and based on these 24 evaluation values determines the direction in which to move the lens and the lens position at which the evaluation value is maximized.
  • The microcomputer 64 has a ROM 65 storing a program for calculating the 24 evaluation values according to a predetermined flow. As shown in FIG. 7, this ROM 65 also stores 24 weight data Wi corresponding to the 24 evaluation values Ei (i = 1, 2, ..., 24) output from the 24 evaluation value generation circuits (62a to 62l and 63a to 63l).
  • The weight data Wi are data for giving priorities to the 24 evaluation values Ei: the higher the value of the weight data Wi, the higher the priority of the corresponding evaluation value Ei. The weight data Wi are fixed values set in advance at the time of factory shipment.
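One simple way the fixed weights Wi could prioritize the evaluation values Ei is a weighted sum, where a larger Wi gives the corresponding Ei more influence. The passage does not spell out the actual combination rule, so the following is only an illustration:

```python
# Hypothetical sketch: combine the 24 evaluation values E_i using the
# fixed weight data W_i from ROM 65 as priorities. The actual
# combination rule is not stated in this passage.

def combined_evaluation(evaluations, weights):
    assert len(evaluations) == len(weights) == 24
    return sum(w * e for w, e in zip(weights, evaluations))

weights = [10] + [1] * 23           # hypothetical: E_1 has the highest priority
evals_a = [5] + [0] * 23            # contrast appears only in E_1's window
evals_b = [0, 5] + [0] * 22         # contrast appears only in E_2's window
assert combined_evaluation(evals_a, weights) > combined_evaluation(evals_b, weights)
```

With such a rule, contrast detected by a high-priority circuit dominates the focus decision.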
  • The evaluation values E1(X1) to E24(X1) generated when the lens is at position X1 are stored in the RAM 66.
  • Further, when the lens is moved from position X1 to position X2, the evaluation values E1(X2) to E24(X2) generated at position X2 are stored in the RAM 66.
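Comparing the values stored at successive lens positions is the basis of hill-climbing focus control: keep moving while the evaluation value rises, reverse once when it falls, and stop at the peak. The function below is a hypothetical sketch of that loop, not the patent's actual algorithm:

```python
# Hypothetical hill-climbing sketch: evaluation values measured at
# successive lens positions (as stored in RAM 66) steer the lens
# toward the position where the evaluation value is maximized.

def hill_climb(evaluate, start, step, max_iters=100):
    """Move the lens until the evaluation value stops increasing."""
    pos, direction = start, 1
    best = evaluate(pos)
    for _ in range(max_iters):
        nxt = pos + direction * step
        val = evaluate(nxt)
        if val > best:
            pos, best = nxt, val  # evaluation rose: keep going
        elif direction == 1:
            direction = -1        # wrong way: reverse once
        else:
            break                 # past the peak in both directions: stop
    return pos

# A synthetic evaluation curve peaking at lens position 30:
assert hill_climb(lambda x: -(x - 30) ** 2, start=0, step=5) == 30
```

The direction data Dr mentioned earlier corresponds to the sign of `direction` here.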
  • The area search circuit 38 divides the imaging screen into 128 areas and searches for which divided areas contain pixel data having the same color as the object set as the target object.
  • A determination process is performed by a logic circuit in the area search circuit 38 on all pixel data supplied from the encoder. Specifically, as shown in FIG. 8, if one area is defined to be 48 pixels X 30 pixels, the screen is divided into 16 areas in the horizontal direction and 8 areas in the vertical direction, so that 128 areas are ultimately defined. As shown in FIG. 8, these 128 areas are designated in this application by the area numbers A0 to A127. First, a specific configuration of the area search circuit 38 will be described with reference to FIG. 9.
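The area numbering can be sketched as follows, assuming a 768 x 240 screen divided into 48 x 30 areas numbered row-major as described for FIG. 8 (the exact numbering order is an assumption):

```python
# Sketch of the area division: a 768x240 screen split into 128 areas
# of 48x30 pixels (16 across, 8 down), numbered A_0 .. A_127 row-major.
# Each pixel that matches the target color marks its area as a hit.

AREA_W, AREA_H, COLS = 48, 30, 16

def area_number(x, y):
    """Area index A_n for pixel (x, y)."""
    return (y // AREA_H) * COLS + (x // AREA_W)

def matching_areas(matching_pixels):
    """Set of area numbers containing at least one matching pixel."""
    return {area_number(x, y) for x, y in matching_pixels}

assert area_number(0, 0) == 0         # top-left pixel -> A_0
assert area_number(767, 239) == 127   # bottom-right pixel -> A_127
assert matching_areas([(50, 0), (50, 35)]) == {1, 17}
```

Reducing the search result to a set of hit areas is what lets the later processing read back only the relevant pixel data instead of the whole frame.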
  • The luminance signal Y and the color signals |R-Y| and |B-Y| are supplied from the encoder 37 to the area search circuit 38 for each pixel data. In addition, the upper limit luminance signal |Y0|U and the lower limit luminance signal |Y0|L are sent from the CPU 4 to the area search circuit 38.
  • The upper limit luminance signal |Y0|U and lower limit luminance signal |Y0|L, the upper limit color signal |R0-Y0|U and lower limit color signal |R0-Y0|L, and the upper limit color signal |B0-Y0|U and lower limit color signal |B0-Y0|L supplied from the CPU 4 will now be described.
  • One Y D I is a signal subject force main Raman has set aimed object is obtained on the basis of the luminance signal and color signals ⁇ . Therefore, once the target object is set, this value will not be changed.
   • The upper limit luminance signal |Y0|U and the lower limit luminance signal |Y0|L are set so as to have values close to the luminance signal |Y0| of the target object, so that when the luminance signal of a certain pixel data has a value between them it can be determined to be at substantially the same level as |Y0|. Similarly, the upper limit color signal |R0-Y0|U and the lower limit color signal |R0-Y0|L are set so as to have values close to the color signal |R0-Y0| of the set target object, so that when the color signal |R-Y| of a certain pixel data has a value between them it can be determined to be at substantially the same level as |R0-Y0|. Likewise, the upper limit color signal |B0-Y0|U and the lower limit color signal |B0-Y0|L are set so as to have values close to the color signal |B0-Y0| of the target object, so that when the color signal |B-Y| of a certain pixel data has a value between them it can be determined to be at substantially the same level as |B0-Y0|.
   • The area search circuit 38 includes a multiplication circuit 71a for multiplying the luminance signal Y supplied from the encoder 37 by a multiplication coefficient α4, a multiplication circuit 71b for multiplying the luminance signal Y by a multiplication coefficient α3, a multiplication circuit 71c for multiplying the luminance signal Y by a multiplication coefficient α6, and a multiplication circuit 71d for multiplying the luminance signal Y by a multiplication coefficient α5. Further, the area search circuit 38 includes a switch circuit 72a for selecting either the multiplied output from the multiplication circuit 71a or the upper limit color signal |R0-Y0|U, a switch circuit 72b for selecting either the multiplied output from the multiplication circuit 71b or the lower limit color signal |R0-Y0|L, a switch circuit 72c for selecting either the multiplied output from the multiplication circuit 71c or the upper limit color signal |B0-Y0|U, and a switch circuit 72d for selecting either the multiplied output from the multiplication circuit 71d or the lower limit color signal |B0-Y0|L.
   • Further, the area search circuit 38 includes a comparator 73a supplied with the luminance signal Y and the upper limit luminance signal |Y0|U, and a comparator 73b supplied with the luminance signal Y and the lower limit luminance signal |Y0|L. It also includes comparators 73c and 73d supplied with the color signal |R-Y| and the outputs of the switch circuits 72a and 72b, respectively, and comparators 73e and 73f supplied with the color signal |B-Y| and the outputs of the switch circuits 72c and 72d, respectively.
   • Further, the area search circuit 38 includes a gate circuit 74a to which the output of the comparator 73a and the output of the comparator 73b are supplied, a gate circuit 74b to which the output of the comparator 73c and the output of the comparator 73d are supplied, a gate circuit 74c to which the output of the comparator 73e and the output of the comparator 73f are supplied, and a gate circuit 75 to which the output of the gate circuit 74a, the output of the gate circuit 74b and the output of the gate circuit 74c are supplied.
   • Further, the area search circuit 38 has a flag signal generation circuit 76 composed of 128 chip circuits. The 128 chip circuits are provided so as to correspond to the 128 areas A0 to A127 shown in FIG. 8.
  • the output of the gate circuit 75, the pixel clock CLK, and the chip select CS are supplied to each of the chip circuits.
   • The pixel clock CLK and the chip select CS are supplied from the CPU 4 in correspondence with the luminance signal and the color signals supplied from the encoder 37 for each pixel data.
   • The pixel clock CLK is a clock corresponding to the processing timing of each pixel data. The chip select CS indicates the area to which the pixel data being processed by the preceding logic circuits belongs: a "Low" signal is supplied only to the selected chip circuit, out of the 128 chip circuits, that corresponds to that area, and a "High" signal is supplied to the other, unselected chip circuits.
   • Each chip circuit provided in the flag signal generation circuit 76 has a gate circuit 76a and a counter 76b. Therefore, the flag signal generation circuit 76 has 128 gate circuits 76a and 128 counters 76b.
   • The gate circuit 76a outputs "Low" only when the output of the gate circuit 75, the pixel clock CLK and the chip select CS are all "Low".
   • The counter 76b is a counter that counts up, in response to the clock timing of the pixel clock CLK, only when "Low" is supplied from the gate circuit 76a.
   • A flag signal is generated when the count value exceeds a predetermined number (five or more in this embodiment). The generated flag signal is supplied to a multiplexer 77.
   • The multiplexer 77 receives the flag signals output from the chip circuits of the flag signal generation circuit 76 and supplies them to the CPU 4. At this time, the multiplexer 77 also informs the CPU 4 which chip circuit output the flag signal.
   • Thus, the CPU 4 can recognize, based on this information, the areas in which pixel data having the same color component as the target object exists.
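The hardware flow above — a per-pixel color match, a chip select per area, and counters that raise a flag at five or more hits — can be sketched in software. This is a minimal illustration rather than the patented circuit: the 16 x 8 grid, the threshold of five, and the area numbering A0 to A127 follow the description above, while the image dimensions and the boolean match mask (standing in for the gate circuit 75 output) are placeholders.

```python
# Software sketch of the area search: the image is divided into a
# 16 x 8 grid of areas A0..A127; a counter per area (counter 76b)
# counts pixels whose color matches the target object, and a flag
# is raised when the count reaches a predetermined number (5).

def flag_areas(match_mask, cols=16, rows=8, threshold=5):
    """match_mask: 2-D list of booleans, True where the pixel color
    matches the target object (the gate-75 'Low' output)."""
    height = len(match_mask)
    width = len(match_mask[0])
    counts = [0] * (cols * rows)          # one counter per area
    for y in range(height):
        for x in range(width):
            if match_mask[y][x]:
                area = (y * rows // height) * cols + (x * cols // width)
                counts[area] += 1
    # the multiplexer reports which chip circuits raised a flag
    return [a for a, c in enumerate(counts) if c >= threshold]
```

For example, five matching pixels clustered inside a single area yield exactly that one area number, mirroring the single flag signal the CPU 4 would receive.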
   • Before the area search processing is performed, the switching operation of the switch circuits 72a, 72b, 72c and 72d provided in the area search circuit 38 must be performed, so this switching operation will be described.
   • In order to switch the switch circuits 72a, 72b, 72c and 72d, a subject mode must be selected on the basis of the luminance signal |Y0|, the color signal |R0-Y0| and the color signal |B0-Y0| of the subject set by the cameraman as the target object. Four subject modes are provided, and mode 0 to mode 3 will be described in order below.
   • Mode 0 is the mode selected when the subject set as the target object has sufficient color information. That is, both |R0-Y0| and |B0-Y0|, which represent the color components of the subject, are above a certain level; in other words, mode 0 is selected when the color of the selected target object is strong.
   • Specifically, when the relationship among the luminance signal |Y0|, the color signal |R0-Y0| and the color signal |B0-Y0| of the set subject satisfies the condition shown in equation (70), mode 0 is selected.
   • When mode 0 is selected as the subject mode, the CPU 4 supplies a control signal to the switch circuits 72a, 72b, 72c and 72d, and the switching states of the switch circuits 72a, 72b, 72c and 72d are set to "UP", "UP", "UP" and "UP", respectively. Once switched, this switching state does not change until the subject mode is changed.
   • Mode 1 is the mode selected when, of the color components of the subject set as the target object, the red component is equal to or higher than a predetermined level but the blue component is not. That is, |R0-Y0|, representing the red color component of the subject, is above a certain level, but |B0-Y0| is not.
   • Specifically, when the relationship among the luminance signal |Y0|, the color signal |R0-Y0| and the color signal |B0-Y0| satisfies the condition shown in equation (71), mode 1 is selected.
   • When mode 1 is selected as the subject mode, the CPU 4 supplies a control signal to the switch circuits 72a, 72b, 72c and 72d, and the switching states of the switch circuits 72a, 72b, 72c and 72d are set to "UP", "UP", "DOWN" and "DOWN", respectively.
   • Mode 2 is the mode selected when the blue component of the color components of the subject set as the target object is equal to or higher than a predetermined level but the red component is not. That is, |B0-Y0|, representing the blue color component of the subject, is above a certain level, but |R0-Y0| is not.
   • Specifically, when the relationship among the luminance signal |Y0|, the color signal |R0-Y0| and the color signal |B0-Y0| satisfies the condition shown in equation (72), mode 2 is selected.
   • When mode 2 is selected as the subject mode, the CPU 4 supplies a control signal to the switch circuits 72a, 72b, 72c and 72d, and the switching states of the switch circuits 72a, 72b, 72c and 72d are set to "DOWN", "DOWN", "UP" and "UP", respectively.
   • Mode 3 is the mode selected when neither the red component nor the blue component of the color components of the subject set as the target object exceeds a predetermined level. That is, mode 3 is selected when the relationships among the luminance signal |Y0|, the color signal |R0-Y0| and the color signal |B0-Y0| do not satisfy any of the above equations (70), (71) and (72).
   • When mode 3 is selected as the subject mode, the CPU 4 supplies a control signal to the switch circuits 72a, 72b, 72c and 72d, and the switching states of the switch circuits 72a, 72b, 72c and 72d are set to "DOWN", "DOWN", "DOWN" and "DOWN", respectively.
   • When the switching operation described above has been completed, the area search circuit 38 performs the object search processing operation. Next, this search processing operation will be described in order for each subject mode with reference to FIG.
   • The comparator 73a compares the luminance signal Y with the upper limit luminance signal |Y0|U, and the comparator 73b compares the luminance signal Y with the lower limit luminance signal |Y0|L.
   • The gate circuit 74a receives the output signals from the comparators 73a and 73b, and outputs "Low" to the subsequent gate circuit 75 when both output signals from the comparators 73a and 73b are "High".
   • In other words, the operation performed by the comparators 73a and 73b and the gate circuit 74a determines whether the luminance signal Y of the pixel data lies between the lower limit luminance signal |Y0|L and the upper limit luminance signal |Y0|U.
   • Since mode 0 is selected, the switching states of the switch circuits 72a and 72b are "UP" and "UP", respectively.
   • Accordingly, Y x α4 and |R-Y| are supplied to the comparator 73c, and Y x α3 and |R-Y| are supplied to the comparator 73d. Note that the luminance signal Y and the color signal |R-Y| are data supplied from the encoder 37. The comparator 73c compares Y x α4 with |R-Y|, and the comparator 73d compares Y x α3 with |R-Y|. The gate circuit 74b receives the output signals from the comparators 73c and 73d, and outputs "Low" to the subsequent gate circuit 75 when both output signals from the comparators 73c and 73d are "High".
   • Similarly, since the switching states of the switch circuits 72c and 72d are "UP" and "UP", respectively, Y x α6 and |B-Y| are supplied to the comparator 73e, and Y x α5 and |B-Y| are supplied to the comparator 73f. Note that the luminance signal Y and the color signal |B-Y| are data supplied from the encoder 37. The gate circuit 74c receives the output signals from the comparators 73e and 73f, and outputs "Low" to the subsequent gate circuit 75 when both output signals are "High".
   • The gate circuit 75 receives the output signals from the gate circuits 74a, 74b and 74c, and supplies "Low" to each chip circuit of the flag signal generation circuit 76 only when all the signals from the gate circuits 74a, 74b and 74c are "Low".
   • As described above, mode 0 is selected as the subject mode when the conditions of equations (70a), (70b) and (70c) are satisfied.
   • The fact that the condition of equation (700) is satisfied means that the luminance signal Y, the color signal |R-Y| and the color signal |B-Y| of the pixel data supplied from the encoder 37 substantially match the luminance signal Y0, the color signal |R0-Y0| and the color signal |B0-Y0| of the subject set as the target object. Then, only when the color of the target object and the color of the pixel data match in this sense is "Low" output from the gate circuit 75.
   • When mode 1 is selected, the operation is exactly the same as when mode 0 is selected, so a detailed description is omitted.
   • The operation performed by the comparators 73a to 73f, the gate circuits 74a to 74c and the gate circuit 75 is the determination of equation (701).
   • The fact that the condition of equation (701) is satisfied means that the luminance signal Y, the color signal |R-Y| and the color signal |B-Y| of the pixel data supplied from the encoder 37 substantially match the luminance signal Y0, the color signal |R0-Y0| and the color signal |B0-Y0| of the subject set as the target object.
   • As in mode 0, the gate circuit 75 outputs "Low" only when the color of the target object and the color of the pixel data match.
   • When mode 2 is selected, the operation is the same as when mode 0 and mode 1 are selected, so a detailed description is omitted.
   • The operation performed by the comparators 73a to 73f, the gate circuits 74a to 74c and the gate circuit 75 is the determination of equation (702).
   • The fact that the condition of equation (702) is satisfied means that the luminance signal Y, the color signal |R-Y| and the color signal |B-Y| of the pixel data supplied from the encoder 37 substantially match the luminance signal Y0, the color signal |R0-Y0| and the color signal |B0-Y0| of the subject set as the target object. Then, as in the preceding modes, "Low" is output from the gate circuit 75 only when the color of the target object and the color of the pixel data match.
   • The operation when mode 3 is selected is exactly the same as when mode 0, mode 1 and mode 2 are selected, and a detailed description thereof is omitted.
   • The fact that the condition of equation (703) is satisfied means that the luminance signal Y, the color signal |R-Y| and the color signal |B-Y| of the pixel data supplied from the encoder 37 substantially match the luminance signal Y0, the color signal |R0-Y0| and the color signal |B0-Y0| of the subject set as the target object. Then, as in mode 0, the gate circuit 75 outputs "Low" only when the color of the target object and the color of the pixel data match.
   • Here, taking as an example a case in which mode 0 is selected as the subject mode and a plurality of pixel data having the same color as the target object exist only in the area A35, the overall operation of the area search circuit 38 will be described.
   • First, a luminance signal Y, a color signal |R-Y| and a color signal |B-Y| are sequentially supplied from the encoder 37 to the area search circuit 38 in correspondence with the raster scan.
   • All pixel data from the encoder 37 are supplied to the area search circuit 38, and it is determined whether or not each pixel data satisfies the condition of equation (700). Note that although all pixel data are supplied to the area search circuit 38, the determination as to whether the data satisfies the condition defined by equation (700) is performed using the signals selected by the switch circuits.
   • When pixel data that does not have the same color as the target object is supplied, the gate circuit 75 outputs "High"; when pixel data having the same color as the target object is supplied, the gate circuit 75 outputs "Low".
   • The chip select CS supplies "Low" only to the 36th chip circuit, corresponding to the area A35, and supplies "High" to the other chip circuits.
   • The pixel clock CLK supplies "Low" to the chip circuit at the timing at which pixel data having the same color as the target object is supplied. Therefore, only when "Low" is supplied from the gate circuit 75, "Low" is supplied as the pixel clock CLK, and "Low" is supplied as the chip select CS does the gate circuit 76a35 in the 36th chip circuit supply "Low" to the counter 76b35.
   • The multiplexer 77 supplies the flag signals output from the chip circuits to the CPU 4. In this case, the flag signal output from the 36th chip circuit, corresponding to the area A35, is supplied to the CPU 4.
   • By the operation of the area search circuit 38, composed of hardware in this way, the CPU 4 can recognize in real time in which areas pixels having the same color as the set target object exist.
   • Next, the operation of the video camera device will be described with reference to the flowcharts of FIG. 10 to FIG. 15.
   • The transition from manual focus to autofocus is performed when the cameraman presses the autofocus button provided on the operation unit 5, whereby the autofocus mode is set.
   • The autofocus mode includes a continuous mode, in which, once the button is pressed, the autofocus mode continues until a command to switch to manual focus is given, and a non-continuous mode, in which the autofocus operation stops once the focus has been adjusted and the mode automatically shifts to manual focus. The following description of the flowchart is for the continuous mode.
   • First, in steps S100 to S131, a process for determining in which direction the focus lens is to be moved is performed.
   • Next, in steps S201 to S221, processing is performed to determine the lens position at which the evaluation value is maximized.
   • First, the focus lens is moved from the lens initial position X0 to a position X1 at a distance of D/2 in the Far direction; next, it is moved from X1 to a position X2 at a distance of D in the Near direction; finally, it is moved from X2 a distance of D/2 in the Far direction, performing a return operation to the lens initial position X0.
   • Here, the Near direction means a direction approaching the image sensor, the Far direction means a direction away from the image sensor, and D represents the depth of focus.
   • The microcomputer 64 stores the lens position X0. The depth of focus is data indicating the range, centered on the focal point, within which the image is regarded as in focus. Therefore, even if the focus lens is moved within the depth of focus, the shift in focus cannot be recognized by human eyes.
   • In other words, even when the lens moves from X1 to X2 it moves only across the depth of focus, so the focus shift caused by this movement hardly appears in the imaging signal. That is, by keeping the maximum movement of the lens within the depth of focus, the shift in focus cannot be recognized.
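The three-position measurement can be written out as a small sketch: starting from X0, the lens visits X1 = X0 + D/2 and X2 = X0 - D/2 and returns, so that no position ever deviates from X0 by more than half the depth of focus D. The `read_evaluation_values` callback is a placeholder for the horizontal and vertical evaluation value generation circuits.

```python
def wobble_and_measure(x0, depth_of_focus, read_evaluation_values):
    """Visit X0, then X1 = X0 + D/2 (Far), then X2 = X0 - D/2 (Near),
    sampling the 24 evaluation values at each stop; a final move of
    D/2 in the Far direction returns the lens to X0. The excursion
    never exceeds the depth of focus, so the focus shift is not
    visible in the picked-up image."""
    d = depth_of_focus
    x1 = x0 + d / 2          # move D/2 in the Far direction
    x2 = x1 - d              # move D in the Near direction
    samples = {x: read_evaluation_values(x) for x in (x0, x1, x2)}
    assert x2 + d / 2 == x0  # the return move lands exactly on X0
    return samples
```

The returned mapping corresponds to the E1..E24 tables stored in the RAM 66 for the three positions.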
   • In step S100, the microcomputer 64 stores the evaluation values E1(X0) to E24(X0) generated in the horizontal direction evaluation value generation circuit 62 and the vertical direction evaluation value generation circuit 63.
   • Then, the microcomputer 64 instructs the CPU 4 to move the focus lens by a distance of D/2 in the Far direction.
   • In step S101, the CPU 4 outputs a command to the focus lens motor drive circuit 12c to move the focus lens in the Far direction by a distance of D/2.
   • In step S102, the microcomputer 64 stores in the RAM 66 the evaluation values E1(X1) to E24(X1) newly generated in the horizontal evaluation value generation circuit 62 and the vertical evaluation value generation circuit 63.
   • Then, the microcomputer 64 issues a command to the CPU 4 to move the focus lens by the distance D in the Near direction.
   • In step S103, the CPU 4 outputs a command to the focus lens motor drive circuit 12c to move the focus lens by the distance D in the Near direction.
   • In step S104, the microcomputer 64 stores in the RAM 66 the evaluation values E1(X2) to E24(X2) newly generated in the horizontal evaluation value generation circuit 62 and the vertical evaluation value generation circuit 63.
   • Then, the microcomputer 64 instructs the CPU 4 to move the focus lens by a distance of D/2 in the Far direction.
   • Upon completion of step S104, the evaluation values obtained at the lens positions X0, X1 and X2 are stored in the RAM 66 of the microcomputer 64.
   • Steps S105 to S115 are steps for eliminating inappropriate evaluation values from among the 24 evaluation values. First, the basic idea of steps S105 to S115 will be described in general terms.
   • FIGS. 18A and 18B show a state in which the target object A, to which the focus is to be adjusted, is imaged within the window W2, while a high-contrast non-target object B, existing in front of the target object A and outside the window W2, is also being imaged.
   • In the state of FIG. 18A, the evaluation value E1 generated by the first horizontal direction evaluation value generation circuit 62a, in which the window W1 is set, includes the high-frequency component of the object B and is therefore not appropriate as an evaluation value of the object A. Accordingly, the value of the evaluation value E1 becomes considerably larger than the value of the evaluation value E2 generated by the second horizontal direction evaluation value generation circuit 62b, in which the window W2 is set. Similarly, the evaluation value E7 generated by the seventh horizontal direction evaluation value generation circuit 62g, in which the window W1 is set, includes the high-frequency component of the object B, so the value of the evaluation value E7 becomes considerably larger than the value of the evaluation value E8 generated by the eighth horizontal direction evaluation value generation circuit 62h, in which the window W2 is set.
   • FIG. 18B shows a case where the lens is moved so that the focus is adjusted to the object A. The more the focus is adjusted to the object A, the more the focus on the object B shifts. When the focus on the object B shifts greatly, the image of the object B blurs greatly, and the blurred image enters the window W2. Therefore, in the states of FIGS. 18A and 18B, the evaluation value E2 generated by the second horizontal direction evaluation value generation circuit 62b, in which the window W2 is set, can by no means be said to be appropriate. Likewise, the evaluation value E8 generated by the eighth horizontal direction evaluation value generation circuit 62h, in which the window W2 is set, can by no means be said to be appropriate.
   • Steps S105 to S115 will now be described specifically with reference to FIGS. 10 and 11, keeping in mind the basic idea described above.
   • In step S105, using E1(X0) to E24(X0) obtained when the lens position is X0, it is determined whether the evaluation values E1, E2, E7 and E8 satisfy equation (105). If the evaluation values E1, E2, E7 and E8 are values that satisfy equation (105), they are determined to be appropriate values, and the process proceeds to step S117. Conversely, if the evaluation values E1, E2, E7 and E8 are values that do not satisfy equation (105), at least the evaluation values E1, E2, E7 and E8 are determined to be inappropriate values, and the process proceeds to step S106.
   • In step S106, since the evaluation values E1 and E2 and the evaluation values E7 and E8 were determined to be inappropriate by the calculation result in step S105, the evaluation values E3 and E9 obtained on the basis of the window W3, the next largest window after the window W1, and the evaluation values E4 and E10 obtained on the basis of the window W4, the next largest window after the window W2, are used.
   • In step S106, as in step S105, using E1(X0) to E24(X0) obtained when the lens position is X0, it is determined whether the evaluation values E3, E4, E9 and E10 satisfy equation (106). If the evaluation values E3, E4, E9 and E10 are values that satisfy equation (106), they are determined to be appropriate values, and the process proceeds to step S107. Conversely, if the evaluation values E3, E4, E9 and E10 are values that do not satisfy equation (106), at least the evaluation values E3, E4, E9 and E10 are determined to be inappropriate values, and the process proceeds to step S108.
   • Here, the reason why the larger windows W3 and W4 are used will be described.
   • When the evaluation values E1 and E2 and the evaluation values E7 and E8 are inappropriate, the focus may be adjustable to neither the target object A nor the non-target object B. If the windows W3 and W4, which are larger than the windows W1 and W2, are used, it can be expected that the non-target object B enters the window W4. When the non-target object B falls completely within the window W4, the difference between the evaluation value E4 and the evaluation value E3 becomes small, and the difference between the evaluation value E10 and the evaluation value E9 also becomes small. In other words, the evaluation values E3, E4, E9 and E10 become appropriate values. If no appropriate evaluation value could be obtained, the autofocus control circuit 34 would repeat the control loop many times and keep moving the focus lens for a long time, so that a blurred imaging signal would continue to be output. By allowing the focus to be adjusted even to the non-target object B, the control loop is prevented from being repeated for a long time and a blurred imaging signal is prevented from being output continuously.
   • Step S107 is reached when the evaluation values E1, E2, E7 and E8 determined in step S105 are inappropriate values and the evaluation values E3, E4, E9 and E10 determined in step S106 are appropriate values. In this case, 1, 2, 7 and 8 are defined as unused numbers, and the process proceeds to step S117. Since the unused numbers 1, 2, 7 and 8 are defined in step S107, the evaluation values E1, E2, E7 and E8 are not used in the steps subsequent to step S107.
   • In step S108, since the evaluation values E3 and E4 and the evaluation values E9 and E10 were determined to be inappropriate on the basis of the calculation result in step S106, the evaluation values E5 and E11 obtained on the basis of the window W5, the next largest window after the window W3, and the evaluation values E6 and E12 obtained on the basis of the window W6, the next largest window after the window W4, are used.
   • In step S108, as in step S106, using E1(X0) to E24(X0) obtained when the lens position is X0, it is determined whether the evaluation values E5, E6, E11 and E12 satisfy equation (108). If they are values that satisfy equation (108), the evaluation values E5, E6, E11 and E12 are determined to be appropriate values, and the process proceeds to step S109. Conversely, if the evaluation values E5, E6, E11 and E12 are values that do not satisfy equation (108), at least the evaluation values E5, E6, E11 and E12 are determined to be inappropriate values, and the process proceeds to step S110.
   • In this way, in step S108, the evaluation values E5 and E11 obtained on the basis of the window W5, the next largest window after the window W3, and the evaluation values E6 and E12 obtained on the basis of the window W6, the next largest window after the window W4, are used.
   • In step S110, as in step S108, using E1(X0) to E24(X0) obtained when the lens position is X0, the determination of equation (110) is performed. If the evaluation values E13, E14, E19 and E20 are values that satisfy equation (110), they are determined to be appropriate values, and the process proceeds to step S111. Conversely, if the evaluation values E13, E14, E19 and E20 are values that do not satisfy equation (110), at least the evaluation values E13, E14, E19 and E20 are determined to be inappropriate values, and the process proceeds to step S112.
   • In step S112, as in step S110, using E1(X0) to E24(X0) obtained when the lens position is X0, a similar determination is performed on the evaluation values E15, E16, E21 and E22.
   • Step S113 is a step reached when the evaluation values E1, E2, E7 and E8 determined in step S105, the evaluation values E3, E4, E9 and E10 determined in step S106, and the evaluation values E5, E6, E11 and E12 determined in step S108 are all inappropriate values.
   • In step S114, as in step S110, using E1(X0) to E24(X0) obtained when the lens position is X0, a similar determination is performed. If the evaluation values E17, E18, E23 and E24 are values that satisfy the corresponding expression, they are determined to be appropriate values, and the process proceeds to step S115. Conversely, if the evaluation values E17, E18, E23 and E24 are values that do not satisfy the expression, they are determined to be inappropriate values.
   • Step S115 is a step reached when the evaluation values E1, E2, E7 and E8 determined in step S105, the evaluation values E3, E4, E9 and E10 determined in step S106, the evaluation values E5, E6, E11 and E12 determined in step S108, the evaluation values E13, E14, E19 and E20 determined in step S110, and the evaluation values E15, E16, E21 and E22 determined in step S112 are all inappropriate values.
   • Steps S117 to S131 shown in FIGS. 12 and 13 are specific operation steps for determining the lens movement direction. Steps S117 to S131 are steps performed by the microcomputer 64.
   • In step S117, i is set to 1, and the up-count value Ucnt, the down-count value Dcnt and the flat count value Fcnt are reset.
   • In step S118, it is determined whether i is a number defined as an unused number. If i is not defined as an unused number, the process proceeds to step S120. If i is a number defined as an unused number, i is incremented in step S119 to proceed to the next number i.
   • In step S120, it is determined whether Ei(X0) has not merely a value comparable to Ei(X2) but a value somewhat larger than Ei(X2), and whether Ei(X1) has not merely a value comparable to Ei(X0) but a value somewhat larger than Ei(X0); that is, whether the evaluation values increase in the order Ei(X2), Ei(X0), Ei(X1). The fact that the condition of equation (120) is satisfied means that the evaluation value increases as the focus lens is moved in the order X2, X0, X1; in that case the process proceeds to step S121, where the weight data Wi is added to the up-count value Ucnt, and then to step S126. If the condition of equation (120) is not satisfied, the process proceeds to step S122.
   • In step S122, it is determined whether Ei(X0) has not merely a value comparable to Ei(X1) but a value somewhat larger than Ei(X1), and whether Ei(X2) has not merely a value comparable to Ei(X0) but a value somewhat larger than Ei(X0). To explain more simply, this is the determination of whether the evaluation value decreases as the focus lens is moved in the order X2, X0, X1. If this condition is satisfied, the weight data Wi is added to the down-count value Dcnt in step S123, and the process proceeds to step S126. If the condition is not satisfied, the process proceeds to step S124.
   • In step S124, it is determined whether Ei(X0) has not merely a value comparable to Ei(X1) but a value somewhat larger than Ei(X1), and whether Ei(X0) has not merely a value comparable to Ei(X2) but a value somewhat larger than Ei(X2). To explain more simply, this is the determination of whether, as the focus lens is moved in the order X2, X0, X1, the peak of the evaluation value is at Ei(X0). If this condition is satisfied, the weight data Wi is added to the flat count value Fcnt in step S125, and the process proceeds to step S126.
   • In step S126, i is incremented, and the process proceeds to step S127.
   • In step S127, since 24 evaluation values E are generated in the horizontal evaluation value generation circuit 62 and the vertical evaluation value generation circuit 63, it is determined whether i is 24. If i is 24, it is determined that the calculation for all the evaluation values has been completed, and the process proceeds to step S128. If i is not 24, the loop composed of steps S118 to S127 is repeated until i becomes 24.
   • In step S128, the up-count value Ucnt, the down-count value Dcnt and the flat count value Fcnt are compared to determine which count value is the largest. If the up-count value Ucnt is the largest value, the process proceeds to step S129; if the down-count value Dcnt is the largest value, the process proceeds to step S130; and if the flat count value Fcnt is the largest value, the process proceeds to step S131.
   • In step S129, the microcomputer 64 determines that the direction of X1 is the hill-climbing direction of the evaluation value, that is, the direction in which the focus is achieved, and designates the lens movement direction to the CPU 4 as the Far direction.
   • In step S130, the microcomputer 64 determines that the direction of X2 is the hill-climbing direction of the evaluation value, that is, the direction in which the focus is achieved, and designates the lens movement direction to the CPU 4 as the Near direction.
   • In step S131, the microcomputer 64 determines that the position X0 is the position where the focus is achieved, and the process proceeds to step S218.
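Steps S117 to S131 amount to a weighted vote over the usable evaluation values: each index i contributes its weight Wi to an up, down, or flat counter according to how Ei behaves across X2, X0, X1, and the largest counter decides the outcome. The sketch below assumes a simple multiplicative margin for the "somewhat larger" test; the actual margin of equation (120) is not specified in this text.

```python
def decide_direction(e_x2, e_x0, e_x1, weights, unused, margin=1.05):
    """e_x2/e_x0/e_x1: dicts index -> evaluation value at X2, X0, X1.
    Adds weight Wi to Ucnt when values rise toward X1 (Far), to Dcnt
    when they rise toward X2 (Near), to Fcnt when X0 is the peak.
    `margin` stands in for the 'somewhat larger' test of eq. (120)."""
    ucnt = dcnt = fcnt = 0
    for i, w in weights.items():
        if i in unused:                   # step S118/S119: skip
            continue
        rising = e_x0[i] > margin * e_x2[i] and e_x1[i] > margin * e_x0[i]
        falling = e_x2[i] > margin * e_x0[i] and e_x0[i] > margin * e_x1[i]
        peak = e_x0[i] > margin * e_x2[i] and e_x0[i] > margin * e_x1[i]
        if rising:
            ucnt += w                     # step S121
        elif falling:
            dcnt += w                     # step S123
        elif peak:
            fcnt += w                     # step S125
    best = max(("Far", ucnt), ("Near", dcnt), ("Focused", fcnt),
               key=lambda t: t[1])
    return best[0]                        # steps S128-S131
```

"Far" and "Near" correspond to steps S129 and S130; "Focused" corresponds to step S131, where X0 is taken as the in-focus position.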
   • FIG. 7 is a diagram showing, as an example, the transition of the evaluation values Ei(X2), Ei(X0) and Ei(X1).
   • In the first loop, in step S118, it is determined whether or not i (= 1) is an unused number. Since i = 1 is not an unused number here, the process proceeds to step S120. Since E1(X2) < E1(X0) < E1(X1), the condition of step S120 is satisfied, and in step S121 the operation Ucnt = 0 + W1 is performed. In the second loop, since E2(X2) < E2(X0) < E2(X1), the condition of step S120 is again satisfied, and the process proceeds to step S121, where the operation Ucnt = W1 + W2 is performed.
   • In the subsequent third, fourth and fifth loops, the same operations as in the first and second loops described above are performed. For an evaluation value Ei for which Ei(X2) < Ei(X0) > Ei(X1), the condition of step S124 is satisfied instead, and the process proceeds to step S125, where the weight data Wi is added to the flat count value Fcnt.
   • When the loop has been completed for all the evaluation values, in step S128 the up-count value Ucnt has the largest value in the example shown in FIG. 9, so the process proceeds to step S129. Therefore, the focus direction is determined to be the X1 direction.
  • Steps S200 to S222 are steps for determining the lens position at which the evaluation value is maximized.
  • This operation flow is executed by the microcomputer 64. Steps S200 to S222 will now be explained in detail with reference to FIG. 13 to FIG. 15.
  • The distance indicated by ΔX is defined as the distance that the focus lens moves during one field. That is, the distance ΔX represents the distance the lens travels in one field period.
  • The distance ΔX not only represents the distance the lens travels in one field period, but its polarity is also determined based on the lens movement direction obtained in steps S100 to S131.
  • For example, if the lens movement direction is the Far direction, ΔX is set to have a positive polarity, and if the lens movement direction is the Near direction, ΔX is set to have a negative polarity.
  • In step S200, the microcomputer 64 instructs the CPU 4 to move the lens to the position Xk.
  • The lens position Xk is given by the following equation (200).
  • In step S202, the microcomputer 64 receives the evaluation values E1(Xk) to E24(Xk) newly generated in the horizontal evaluation value generation circuit 62 and the vertical evaluation value generation circuit 63, and stores them in the RAM 66.
  • The 24 evaluation values Ei are stored in a table as shown in FIG. 20.
  • In step S204, it is determined whether or not i is defined as an unused number. If i is not defined as an unused number, the process proceeds to step S206. If i is defined as an unused number, the process proceeds to step S214.
  • In step S206, a determination is made by performing the operation of equation (206).
  • The fact that the condition of equation (206) is met indicates that the evaluation value Ei(Xk) is up to some extent or more with respect to the evaluation value Ei(Xk-1). In this case, the process proceeds to step S207. When the condition of equation (206) is not satisfied, the process proceeds to step S209.
  • In step S207, since the evaluation value Ei(Xk) has increased to some extent or more with respect to the evaluation value Ei(Xk-1), the 2-bit data "01" indicating up is stored in the RAM 66 as the U/D information (up/down information), in association with the evaluation value Ei(Xk).
  • In step S208, similarly to step S122, the weight data Wi is added to the upcount value Ucnt, and the flow advances to step S214.
  • In step S209, a judgment is made as to whether or not the evaluation value Ei(Xk) obtained when the focus lens moves from Xk-1 to the position indicated by Xk is down to some extent or more with respect to the evaluation value Ei(Xk-1).
  • In step S210, since the evaluation value Ei(Xk) is down to a certain extent with respect to the evaluation value Ei(Xk-1), the 2-bit data "10" indicating down is stored in the RAM 66 as the U/D information (up/down information), together with the evaluation value Ei(Xk).
  • In step S211, as in step S123, the weight data Wi is added to the downcount value Dcnt, and the process proceeds to step S214.
  • Reaching step S212 means, considering the condition of step S206 and the condition of step S209, that the evaluation value Ei(Xk) obtained when the focus lens moves from Xk-1 to the position indicated by Xk has not changed by more than a certain degree with respect to the evaluation value Ei(Xk-1).
  • In step S212, the 2-bit data "00" indicating flat is stored in the RAM 66 as the U/D information (up/down information), in association with the evaluation value Ei(Xk).
  • In step S213, similarly to step S125, the weight data Wi is added to the flatcount value Fcnt, and the process proceeds to step S214.
  • In step S214, i is incremented, and the process proceeds to step S215.
  • In step S215, it is determined whether or not i is 24. If i is 24, it is determined that the calculations for all the evaluation values have been completed, and the flow proceeds to step S216. When i is not 24, the loop composed of steps S204 to S215 is repeated until i becomes 24.
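The loop of steps S204 to S215 can be sketched as follows. The relative change threshold stands in for equations (206) and (209), which are not reproduced in the text, so its form and value are assumptions.

```python
def classify_field(prev, curr, weights, thresh=0.05):
    """Tag each evaluation value up/down/flat for one field and
    accumulate the weighted count values (cf. steps S204-S215).

    prev, curr: Ei(Xk-1) and Ei(Xk) for each evaluation value i
    thresh    : assumed relative threshold replacing equations (206)/(209)
    Returns (ud_info, ucnt, dcnt, fcnt), where ud_info holds the 2-bit
    codes stored in the RAM 66: "01" = up, "10" = down, "00" = flat.
    """
    ud_info, ucnt, dcnt, fcnt = [], 0, 0, 0
    for p, c, w in zip(prev, curr, weights):
        if c > p * (1 + thresh):      # rose by more than the threshold
            ud_info.append("01"); ucnt += w
        elif c < p * (1 - thresh):    # fell by more than the threshold
            ud_info.append("10"); dcnt += w
        else:                         # no significant change -> flat
            ud_info.append("00"); fcnt += w
    return ud_info, ucnt, dcnt, fcnt
```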
  • Step S216 is a step for determining whether or not the downcount value Dcnt has the largest value.
  • FIG. 20 is a diagram showing the storage state of each item of evaluation data and each item of up/down information in the RAM 66.
  • The microcomputer 64 stores each evaluation value Ei and each item of up/down information in the RAM 66 in association with each other, so as to correspond to the moved lens position Xk.
  • When the loop from step S204 to step S215 has been repeated, the upcount value Ucnt, the downcount value Dcnt, and the flatcount value Fcnt take the following values.
  • Step S216 can therefore also be expressed as a step for judging whether or not the total evaluation value is down.
  • If the judgment result in step S216 is YES, the process proceeds to step S217.
  • step S217 j is incremented, and the process proceeds to step S218.
  • This j is a value indicating how many times the judgment result in step S216 has become YES, that is, a value indicating how many times the total evaluation has been reduced.
  • In step S218, let XK denote the first lens position at which the total evaluation value started to be continuously down.
  • It is judged whether or not the moving distance of the lens from XK to XK+j is greater than a predetermined multiple of the depth of focus. The actual judgment is made by equation (218).
  • Step S218 will be described with reference to FIG. 21.
  • The abscissa in FIG. 21 represents the lens position X, and the ordinate represents the evaluation value E(X) for that lens position.
  • XK+1 is the lens position at which the total evaluation value first went down. The right side (ΔX × j) of equation (218) therefore represents the distance between the lens position XK, just before the total evaluation value started to be down, and the current lens position XK+j.
  • As can be seen from FIG. 21, while the lens has moved only a short distance past the position at which the total evaluation value started to be continuously down, this distance is still small, and the judgment result in step S218 is NO.
  • If it is determined in step S216 that the downcount value Dcnt does not have the largest value, it is determined that the total evaluation value has not been down, and the process proceeds to step S219.
  • In step S219, j is reset. The reason why j is reset is that j is a value indicating how many times in succession the total evaluation value has been down. More specifically, reaching step S219 means that the total evaluation value was determined not to be down in the judgment of step S216, so the continuous down of the total evaluation value has been interrupted. Therefore, j is reset in this step S219.
  • In short, when the continuous down of the total evaluation value is interrupted, j is reset.
  • Even if a certain evaluation value E(XK) is a local maximum due to mere noise, j is reset in the operation loop for the evaluation value E(XK+1), E(XK+2), or E(XK+3), so that the evaluation value E(XK) is not recognized as the maximum.
  • In step S220, k is incremented in order to further move the focus lens, and the process returns to step S200.
  • When the judgment result of step S218 becomes YES, the process proceeds to step S222.
  • Reaching step S222 means that the total evaluation value has been continuously down a predetermined number of times (j times) from the lens position XK. Therefore, the microcomputer 64 determines the lens position XK to be the lens position at which the evaluation value is maximized.
  • To obtain the maximum evaluation value, it suffices to select the i whose up/down information stored in the RAM 66 matches the up/down state of the total evaluation value near the lens position XK. Assuming that, among the selected i, the one having the highest weight data Wi is g, the maximum evaluation value is defined as Eg(XK),
  • and the corresponding lower-limit evaluation value is defined as Eg(XK+1).
  • The maximum evaluation value Eg(XK) is updated every field even after focusing has been achieved by fixing the lens to XK, but the lower-limit evaluation value Eg(XK+1) is fixed.
  • When the lens position is XK, the total evaluation value is up based on the judgment of step S216, and when the lens position is XK+1, the total evaluation value is down based on that judgment.
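The peak-confirmation logic of steps S216 to S222 can be sketched as follows: the maximum is accepted only after the total evaluation value has been down for enough consecutive fields that the lens has travelled a sufficient distance past the candidate peak, and any interruption of the down run resets the counter so that a noise-induced local maximum is rejected. The travel threshold below stands in for equation (218), which the text relates to the depth of focus without reproducing its exact form.

```python
def find_peak(total_evals, delta_x, min_travel):
    """Confirm the evaluation-value maximum (cf. steps S216-S222).

    total_evals: total evaluation value per field, in lens-travel order
    delta_x    : lens travel per field (|dX|)
    min_travel : distance the lens must travel past the candidate peak
                 before it is accepted (assumed stand-in for eq. (218))
    Returns the index of the confirmed peak, or None if none is found.
    """
    j = 0
    for k in range(1, len(total_evals)):
        if total_evals[k] < total_evals[k - 1]:
            j += 1                        # cf. step S217: one more down
            if j * delta_x > min_travel:  # cf. step S218: far enough past peak
                return k - j              # cf. step S222: XK is the maximum
        else:
            j = 0                         # cf. step S219: down run interrupted
    return None
```

With the sample series [1, 2, 5, 4, 3, 2] the sustained down run starts just after the value 5, so that index is confirmed as the peak once the lens has travelled far enough past it.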
  • step S223 it is determined whether or not the CPU 4 has received a command to follow the subject.
  • The subject tracking command is a command for controlling the tilt and pan movements of the video camera so as to follow the movement of the subject, and for making the position of the evaluation value detection windows used for autofocus variable.
  • The tracking command is supplied to the CPU 4 when the cameraman presses a tracking command button provided on the operation unit 5. If the tracking command has been supplied from the operation unit 5, the process proceeds to step S300. If there is no tracking command, the process proceeds to step S224.
  • In step S224, it is determined whether or not an autofocus stop command has been issued. If the autofocus mode has been released by the cameraman's operation, the flow advances to step S225 to shift to the manual focus mode.
  • If there is no instruction to stop the autofocus mode in step S224, the process proceeds to step S226, where the maximum evaluation value Eg(XK) and the lower-limit evaluation value Eg(XK+1) are compared. If the value of the maximum evaluation value Eg(XK) decreases due to a change in the subject and becomes lower than the lower-limit evaluation value Eg(XK+1), the process proceeds to step S227 to restart the autofocus operation.
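The post-focus monitoring of steps S226 and S227 can be sketched as a simple watchdog: the maximum evaluation value is refreshed every field while the lens stays at XK, and autofocus restarts the first field it falls below the fixed lower-limit value.

```python
def monitor_focus(field_values, e_lower_limit):
    """Sketch of the post-focus monitor (cf. steps S226-S227).

    field_values : maximum evaluation value Eg(XK) observed each field
    e_lower_limit: the fixed lower-limit value Eg(XK+1)
    Returns the field number at which autofocus must restart, or None
    if focus was held for every observed field.
    """
    for field_no, e_max in enumerate(field_values):
        if e_max < e_lower_limit:
            return field_no   # cf. step S227: restart the autofocus operation
    return None
```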
  • Step S300 and the subsequent steps show the processing operation performed by the CPU 4. To make the description of this flow easier to understand, reference is also made to the example shown in FIG. 22.
  • FIG. 22 shows a state in which a round object A and a square object B are being imaged.
  • The color of the object A and the color of the object B both match the color set for the target object.
  • The origin is the raster scan start point (the upper left of the imaging screen), the horizontal scanning direction is defined as the X-axis direction, and the vertical scanning direction is defined as the Y-axis direction. Therefore, on the imaging screen, the coordinates of the raster scan start point are (0, 0), and the coordinates of the raster scan end point are (768, 240). The center of the imaging screen is (384, 120).
  • When the CPU 4 receives the subject tracking instruction from the operation unit 5 in step S223, the process moves to step S300.
  • In step S300, it is determined whether or not the cameraman has operated the operation unit 5 to set a subject as the target object.
  • a method of setting a subject as a target object will be described.
  • The cameraman first captures the desired subject as the target object so that it is located at the center of the imaging screen.
  • When the cameraman presses the target object determination button provided on the operation unit 5, the CPU 4 recognizes the object located at the center of the imaging screen as the cameraman's desired target object, and loads the color information of this object into the RAM 4a.
  • The setting of the target object is not limited to this method; for example, an object having a preset color may be set as the target object.
  • When the target object has been set, the flow shifts to step S301.
  • In step S301, the CPU 4 selects, from among the four subject modes described above (modes 0, 1, 2, and 3), the subject mode most suitable for the object set in step S300.
  • The CPU 4 controls the switching of the switch circuits 72a, 72b, 72c, and 72d according to the selected subject mode.
  • The area search circuit 38 searches for areas in which pixel data matching the color components of the subject set as the target object exists. This search processing is not performed by the CPU 4, but by the area search circuit 38, which is provided as a hardware circuit.
  • By providing the area search circuit 38 as a hardware circuit in this way, all the pixel data from the encoder 37 can be processed, which means that the condition judgment processing can be performed in real time.
  • The operation of the area search circuit 38 has already been described above.
  • In step S302, the CPU 4 recognizes which areas have pixel data of the same color as the target object, based on the area numbers supplied from the area search circuit 38. Thus, the CPU 4 can select only the areas having the same color as the subject set as the target object. In the example shown in FIG. 22, the areas A068, A069, A084, A085, A086, A087, A102, and A103 are selected as the areas in which the same color as the target object is present.
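The mapping from pixel coordinates to area numbers can be sketched as below. The text states only that the areas number 128 on the 768 × 240 screen; the 16 × 8 grid of 48 × 30-pixel areas and the raster-order numbering are assumptions made for illustration.

```python
COLS, ROWS = 16, 8                         # assumed layout of the 128 areas
AREA_W, AREA_H = 768 // COLS, 240 // ROWS  # 48 x 30 pixels per area (assumed)

def area_number(x, y):
    """Map a pixel coordinate to its search-area number, assuming the
    areas are numbered in raster order over a 16 x 8 grid."""
    return (y // AREA_H) * COLS + (x // AREA_W)
```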
  • In step S303, the CPU 4 reads out all the pixel data present in the areas selected in step S302 from the frame memory 39 in raster scan order.
  • The pixel data existing in the areas not selected in step S302 is not read at all.
  • The pixel data that is read out is composed of Y data, R-Y data, and B-Y data.
  • That is, the CPU 4 reads from the frame memory 39 only the pixel data existing in the eight areas A068, A069, A084, A085, A086, A087, A102, and A103.
  • Since the CPU 4 determines the areas from which pixel data is to be read from the frame memory 39 based on the result searched by the area search circuit 38, the number of pixel data items that the CPU 4 receives from the frame memory 39 can be reduced. Therefore, the CPU 4 can process the pixel data supplied from the frame memory 39 in real time.
  • In step S304, based on the read pixel data composed of the Y data, the R-Y data, and the B-Y data, the CPU 4 performs the condition judgment based on equation (700) if mode 0 has been selected as the subject mode, or based on equation (701), (702), or (703) if mode 1, 2, or 3 has been selected.
  • A program for performing the condition judgment based on equations (700) to (703) is stored in advance in the RAM 4a of the CPU 4.
  • If the condition judgment processing were performed on all the pixel data stored in the frame memory 39, the number of operations would be too large, and the condition judgment could not be performed in real time. However, in the present embodiment, since the above-described condition judgment is performed only on the pixel data existing in the selected areas, the CPU 4 can perform this condition judgment processing in real time.
  • For each item of pixel data in the selected areas, the CPU 4 obtains the result of the judgment as to whether the condition defined by equation (700), (701), (702), or (703) is met.
  • The fact that an item of pixel data matches the condition defined by equation (700), (701), (702), or (703) means that the color of that pixel data matches the color set for the target object.
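Since equations (700) to (703) themselves are not reproduced in the text, the condition judgment can only be sketched; the box-threshold test below on the (Y, R-Y, B-Y) components is an assumed stand-in for whichever of the four mode-dependent formulas is selected.

```python
def matches_target(pixel, target, tol):
    """Assumed stand-in for the condition of equations (700)-(703):
    accept a pixel when each of its Y, R-Y, and B-Y components lies
    within a per-component tolerance of the registered target color.

    pixel, target: (Y, R-Y, B-Y) triples
    tol          : per-component tolerances (assumed parameters)
    """
    y, ry, by = pixel
    ty, t_ry, t_by = target
    return (abs(y - ty) <= tol[0] and
            abs(ry - t_ry) <= tol[1] and
            abs(by - t_by) <= tol[2])
```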
  • step S304 the CPU 4 creates an object information table, which will be described later, in parallel with the condition determination processing, and stores it in a predetermined area of the RAM 4a.
  • The object information table indicates, by coordinates, on which line and from which pixel position to which pixel position each object whose color matches the color of the target object exists, together with an object identification number indicating which of the matching objects it is.
  • The line position indicates, by the Y coordinate, the number of the line on which an object having the same color as the target object exists.
  • The start pixel position indicates, by the X coordinate, the coordinate position of the first pixel data of the object having the same color as the target object on the line indicated by this line position.
  • The end pixel position indicates, by the X coordinate, the coordinate position of the last pixel data of the object having the same color as the target object on the line indicated by this line position.
  • The object identification number is a number indicating which of the objects recognized as having the same color as the target object the entry refers to.
  • The object information table contains information as shown in FIG. 23. For the 161st line, for example, "161" is stored as the line position, "245" is stored as the start pixel position, and "246" is stored as the end pixel position. Further, "1" is stored as the object identification number indicating the object A. The same applies to the following 162nd to 190th lines, and a description thereof will be omitted.
  • The object A exists between the 221st pixel and the 258th pixel.
  • The object B exists between the 318th pixel and the 319th pixel.
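The construction of the object information table can be sketched as a run-length pass over the condition-judgment results: each scan line yields runs of matching pixels (line position, start pixel, end pixel), and a run inherits the object identification number of any horizontally overlapping run on the previous line. The overlap rule is an assumption; the text does not state how runs are grouped into objects.

```python
def build_object_table(match_mask):
    """Build an object information table (cf. FIG. 23, sketched).

    match_mask: per scan line, a list of booleans marking the pixels
                whose color matched the target object.
    Each entry is (line, start_px, end_px, object_id).
    """
    table, prev_runs, next_id = [], [], 1
    for line, mask in enumerate(match_mask):
        runs, x = [], 0
        while x < len(mask):              # extract runs of matching pixels
            if mask[x]:
                start = x
                while x < len(mask) and mask[x]:
                    x += 1
                runs.append((start, x - 1))
            else:
                x += 1
        cur = []
        for start, end in runs:
            oid = None
            for ps, pe, pid in prev_runs:
                if start <= pe and end >= ps:   # overlaps previous line's run
                    oid = pid
                    break
            if oid is None:                      # a new object begins here
                oid, next_id = next_id, next_id + 1
            table.append((line, start, end, oid))
            cur.append((start, end, oid))
        prev_runs = cur
    return table
```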
  • In step S305, a minimum window including each object having the same color as the target object is set.
  • As the minimum window including the object A, a window WA defined in the range of 216 ≤ X ≤ 273 and 161 ≤ Y ≤ 202 is set, and as the minimum window including the object B,
  • a window WB defined in the range of 309 ≤ X ≤ 358 and 191 ≤ Y ≤ 231 is set.
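The minimum window of step S305 is simply the bounding box of an object's table entries, which can be sketched as:

```python
def minimum_window(table, object_id):
    """Smallest window enclosing one object (cf. step S305).

    table: object-information-table entries (line, start_px, end_px, id)
    Returns (x_min, x_max, y_min, y_max)."""
    rows = [(line, s, e) for line, s, e, oid in table if oid == object_id]
    return (min(s for _, s, _ in rows), max(e for _, _, e in rows),
            min(line for line, _, _ in rows), max(line for line, _, _ in rows))
```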
  • In step S306, m is initialized to the smallest object identification number stored in the object information table.
  • m is a mere variable, and takes values from the smallest object identification number stored in the object information table up to the largest object identification number.
  • In step S307, it is determined whether or not the expected position coordinates, described later, exist in the m-th window set in step S305, based on the window center vector stored in the target object history table described later. This is to determine which of the object A and the object B is the target object.
  • FIG. 24 is a diagram showing an example of the target object history table.
  • The target object history table stores information on the coordinate position of the object determined to be the target object in each field.
  • The field number is a provisional number that is reset every 30 fields, and is a sequential number assigned to each field in order.
  • The window X coordinate and the window Y coordinate indicate the window range set for the target object. For example, the window range set for the target object is 312 ≤ X ≤ 362 and 186 ≤ Y ≤ 228, which also indicates that the center position of the window is displaced from the center position of the imaging screen in the direction and by the distance indicated by the window center vector (−47, +87).
  • Looking at the window center vectors stored at field number 17, field number 18, and field number 19, there is no significant change in the values of the data representing these three window center vectors. This is not because the target object is not moving, but because the window center vector indicates the movement vector of the target object from the previous field.
  • That is, the CPU 4 controls the pan/tilt drive mechanism 16 every field so that the center position of the window indicating the moved target object is positioned at the center of the imaging screen. Therefore, by defining the window center vector as a vector indicating the direction and distance of the deviation from the center of the imaging screen, the window center vector comes to represent the movement vector of the target object from the previous field.
  • Step S307 will be described again with reference to the target object history table described above.
  • The expected position coordinates are position coordinates obtained from the window center vector of one field before, stored in the target object history table described above. For example, since the window center vector (ΔX19, ΔY19) set at the time of field number 19 is the vector (−49, +89), the window center vector to be obtained at field number 20 can also be expected to be a vector near the vector (−49, +89).
  • Since the window center vector stored in the target object history table indicates the amount and direction of the coordinate deviation from the imaging center coordinates (384, 120), the window center position to be set for the target object at the time of field number 20 can be expected to be (335, 209). These window center position coordinates are the expected position coordinates.
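The expected position coordinates follow directly from the worked example: the screen center displaced by the previous field's window center vector. A minimal sketch:

```python
SCREEN_CENTER = (384, 120)   # center of the 768 x 240 imaging screen

def expected_position(prev_vector):
    """Predicted window-center coordinates for the next field. The
    window center vector expresses the offset from the screen center
    and, because pan/tilt recenters the target every field, it also
    serves as the per-field movement vector of the target object."""
    dx, dy = prev_vector
    return (SCREEN_CENTER[0] + dx, SCREEN_CENTER[1] + dy)
```

With the values in the text, expected_position((-49, 89)) yields (335, 209), matching the expected position coordinates at field number 20.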
  • In step S307, it is first determined whether or not the expected position coordinates (335, 209) exist in the window WA, the smallest window including the object A, defined in the range of 216 ≤ X ≤ 273 and 161 ≤ Y ≤ 202. Since the expected position coordinates do not exist in the window WA, the object A is determined not to be the target object, and the process proceeds to step S308.
  • step S308 m is incremented, and the process returns to step S307 again.
  • In step S307, it is then determined whether or not the expected position coordinates (335, 209) exist in the window WB, the smallest window including the object B, defined in the range of 309 ≤ X ≤ 358 and 191 ≤ Y ≤ 231.
  • In this step, since the expected position coordinates (335, 209) exist within the window WB, the CPU 4 determines that the object B is the set target object, and proceeds to step S309.
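The loop of steps S306 to S308 can be sketched as a containment test over the candidate windows in object-identification-number order:

```python
def select_target(windows, expected):
    """Pick the target object (cf. steps S306-S308): return the id of
    the first candidate window containing the expected position
    coordinates, or None if no candidate matches.

    windows : {object_id: (x_min, x_max, y_min, y_max)}
    expected: the expected position coordinates (x, y)
    """
    ex, ey = expected
    for oid in sorted(windows):
        x0, x1, y0, y1 = windows[oid]
        if x0 <= ex <= x1 and y0 <= ey <= y1:
            return oid
    return None
```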
  • In step S309, the CPU 4 stores the coordinates of the window WB, defined in the range of 309 ≤ X ≤ 358 and 191 ≤ Y ≤ 231, in the target object history table of the RAM 4a as the window X coordinate and the window Y coordinate. Also, the CPU 4 calculates the center coordinates of the window WB from the coordinates of the window WB, and stores the deviation of these center coordinates from the center of the imaging screen in the RAM 4a as the window center vector.
  • In step S310, the CPU 4 controls the tilt/pan drive mechanism 16 so that the center of the window WB coincides with the center of the imaging screen, based on the window center vector newly stored in step S309. Specifically, the CPU 4 outputs a control signal to the motor drive circuit 16b based on the window center vector.
  • In step S311, the CPU 4 supplies an offset value based on the window center vector to the evaluation value generation circuits of the focus control circuit 34.
  • This offset value is supplied to each counter provided in the window pulse generation circuits 625 and 635 shown in FIG. 3 and FIG. 4.
  • Normally, the center coordinates of each of the windows W1 to W11 coincide with the coordinates of the center of the imaging screen.
  • When this offset value is supplied from the CPU 4 to the respective window counters of the window pulse generation circuits 625 and 635, the count value of each counter is changed based on this offset value. Therefore, the center coordinates of each of the windows W1 to W11 are changed based on this offset value.
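The effect of the offset value on the evaluation windows amounts to a translation of each window, which can be sketched as:

```python
def offset_window(window, offset):
    """Translate an evaluation window (cf. step S311): the offset value
    derived from the window center vector shifts the counters of the
    window pulse generation circuits, which moves the center of each
    window W1 to W11 toward the target object.

    window: (x_min, x_max, y_min, y_max); offset: (dx, dy)
    """
    x0, x1, y0, y1 = window
    dx, dy = offset
    return (x0 + dx, x1 + dx, y0 + dy, y1 + dy)
```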
  • When the offset value has been supplied to the focus control circuit 34 in step S311, the process returns to step S100.
  • the present invention has the following effects.
  • Since the area search circuit 38 selects the areas in which pixel data of the same color as the target object exists, and the condition judgment processing is performed only on the pixel data existing in the selected areas, the position of the target object can be grasped in real time without placing a processing burden on the CPU 4.
  • Since the subject mode is set according to the color of the set target object, and the condition judgment calculation by the area search circuit 38 and the condition judgment calculation by the CPU 4 are varied according to the set subject mode, the subject can be accurately recognized regardless of the color of the set subject.
  • Since the condition judgment processing performed in the area search circuit 38 is carried out by a hardware circuit, the condition judgment can be performed in real time on all the pixel data supplied from the encoder 37.
  • Since an object information table having position information on each object and a target object history table having information on the movement history of the target object are created, the target object can be accurately recognized.
  • The center position of each of the windows W1 to W11 is changed so as to correspond to the target object. Therefore, even if the target object moves, each window can be set accurately for the moved target object, and an accurate evaluation value can be obtained for the moved target object, so that automatic focus control can be performed.
  • a plurality of evaluation values can be obtained by combining a plurality of filter coefficients with windows of a plurality of sizes, so that various types of subjects can be handled.
  • Since weight data is assigned to each evaluation value generation circuit, and a total evaluation value is obtained based on the plurality of evaluation values and the weight data corresponding to those evaluation values, the accuracy of the finally obtained evaluation value is improved.
  • Since the evaluation value curve draws a clear parabola in the vicinity of the focus point, the maximum point of the evaluation value can be determined quickly. Therefore, the autofocus operation becomes faster.
  • Since an evaluation value determined to be inappropriate when calculating the total evaluation value is excluded from the plurality of evaluation values and is not used, the accuracy of the evaluation value is further improved. For example, if a small window does not provide a good evaluation value, the evaluation value corresponding to a window larger than that small window is used to achieve focus, so that it is at least possible to focus on some kind of subject, and the autofocus operation can be prevented from continuing indefinitely.
  • Since a majority decision method using weight data is applied to the changes in the plurality of evaluation values, the focus direction can be judged correctly with a small number of sample points and with small movements within the focal depth of the lens.
  • When determining whether or not the maximum point of the evaluation value is the true maximum point, the lens is moved from the maximum point by a distance equal to a predetermined multiple of the depth of focus, so that, for example, even if the peak of the evaluation value is flat, it can be determined whether the lens is at the maximum once it has moved a certain distance. Therefore, the focus point can be determined at high speed. In addition, when determining whether the maximum point is the true maximum point, the lens is prevented from moving so far that the image signal becomes largely blurred, thereby preventing an unnatural image from being output.
  • Since the evaluation value whose up/down state, as stored in the RAM 66, matches the up/down state of the total evaluation value, and whose weight data is the highest, is selected as the maximum evaluation value, the value of the maximum evaluation value can be obtained accurately.

Abstract

An area search circuit (38) receives all the pixel data of the image pickup signal from an encoder (37) and selects, from among 128 divided areas, the areas in which pixel data matching the colors of a preset target object are present. A CPU (4) receives the pixel data present in the selected areas from a frame memory (39) and prepares an object information table giving the coordinate positions of the objects whose pixel data match the colors of the target object. The CPU (4) then determines the current coordinate position of the target object on the basis of the data relating to the preceding field stored in a target object history table. The CPU (4) supplies an offset value, derived from the data representing the coordinate position of the object, to a focus control circuit (34) used for changing the center position of a window for detecting an evaluation value. The focus control circuit (34) calculates a plurality of evaluation values from the newly defined window and generates a control signal for adjusting the focus.
PCT/JP1996/001700 1995-06-19 1996-06-19 Dispositif de reconnaissance d'objets et dispositif de prise d'images WO1997000575A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP7/151395 1995-06-19
JP15139595 1995-06-19

Publications (1)

Publication Number Publication Date
WO1997000575A1 true WO1997000575A1 (fr) 1997-01-03

Family

ID=15517655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1996/001700 WO1997000575A1 (fr) 1995-06-19 1996-06-19 Dispositif de reconnaissance d'objets et dispositif de prise d'images

Country Status (1)

Country Link
WO (1) WO1997000575A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU739832B2 (en) * 1999-06-10 2001-10-18 Canon Kabushiki Kaisha Object camera description
US8035721B2 (en) 2004-08-05 2011-10-11 Panasonic Corporation Imaging apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS614011A (ja) * 1984-06-18 1986-01-09 Canon Inc カメラにおける自動追尾装置
JPH02117276A (ja) * 1988-10-27 1990-05-01 Canon Inc 追尾制御方法及び装置並びにぶれ補正方法及び装置
JPH042281A (ja) * 1990-04-19 1992-01-07 Mitsubishi Electric Corp 自動合焦装置
JPH04158683A (ja) * 1990-10-22 1992-06-01 Matsushita Electric Ind Co Ltd 自動焦点調整装置
JPH06165016A (ja) * 1992-09-28 1994-06-10 Sony Corp ビデオカメラシステム

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS614011A (ja) * 1984-06-18 1986-01-09 Canon Inc カメラにおける自動追尾装置
JPH02117276A (ja) * 1988-10-27 1990-05-01 Canon Inc 追尾制御方法及び装置並びにぶれ補正方法及び装置
JPH042281A (ja) * 1990-04-19 1992-01-07 Mitsubishi Electric Corp 自動合焦装置
JPH04158683A (ja) * 1990-10-22 1992-06-01 Matsushita Electric Ind Co Ltd 自動焦点調整装置
JPH06165016A (ja) * 1992-09-28 1994-06-10 Sony Corp ビデオカメラシステム

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU739832B2 (en) * 1999-06-10 2001-10-18 Canon Kabushiki Kaisha Object camera description
US8035721B2 (en) 2004-08-05 2011-10-11 Panasonic Corporation Imaging apparatus

Similar Documents

Publication Publication Date Title
EP2563006B1 (fr) Procédé pour l'affichage d'informations de caractère et dispositif de prise d'images
JP3791012B2 (ja) フォーカス制御装置
EP2793457B1 (fr) Dispositif de traitement d'image, procédé de traitement d'image et support d'enregistrement
US7456898B2 (en) Video camera apparatus including automatic focusing
US9996907B2 (en) Image pickup apparatus and image processing method restricting an image stabilization range during a live view operation
US7847855B2 (en) Image capturing apparatus and focusing method with focus evaluation
CN105100594A (zh) 摄像装置和摄像方法
US20050162540A1 (en) Autofocus system
JP2006258944A (ja) オートフォーカスシステム
KR101398475B1 (ko) 디지털 영상 처리장치 및 그 제어방법
US20040130648A1 (en) Electronic camera and focus controlling method
US9503650B2 (en) Zoom-tracking method performed by imaging apparatus
JP2010287919A (ja) 撮像装置
KR20100079832A (ko) 지능형 셀프 타이머 모드를 지원하는 디지털 카메라 및 그 제어방법
WO1997000575A1 (fr) Dispositif de reconnaissance d'objets et dispositif de prise d'images
US6275262B1 (en) Focus control method and video camera apparatus
US6222587B1 (en) Focus control method and video camera
JP3747474B2 (ja) フォーカス制御方法及びビデオカメラ装置
KR101411912B1 (ko) 디지털 영상 처리장치 및 그 제어방법
JP3774962B2 (ja) オートフォーカス装置
JP2002277730A (ja) 電子カメラの自動焦点制御方法、装置及びプログラム
JP6246705B2 (ja) フォーカス制御装置、撮像装置及びフォーカス制御方法
KR101109593B1 (ko) 디지털 이미지 처리장치의 자동초점조정 방법
KR101480401B1 (ko) 디지털 영상 처리장치 및 그 제어방법
JP2869370B2 (ja) ビデオカメラのレンズオフセット制御方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

ENP Entry into the national phase

Ref country code: US

Ref document number: 1997 809094

Date of ref document: 19970310

Kind code of ref document: A

Format of ref document f/p: F