WO2006028247A1 - Video shooting system, video shooting device and video shooting method - Google Patents

Video shooting system, video shooting device and video shooting method

Info

Publication number
WO2006028247A1
WO2006028247A1 (PCT/JP2005/016727)
Authority
WO
WIPO (PCT)
Prior art keywords
video
shooting
photographing
sub
imaging
Prior art date
Application number
PCT/JP2005/016727
Other languages
French (fr)
Japanese (ja)
Inventor
Masayuki Hosoi
Original Assignee
Pioneer Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Publication of WO2006028247A1 publication Critical patent/WO2006028247A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661: Transmitting camera control signals through networks, e.g. control via the Internet
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • Video shooting system, video shooting device, and video shooting method
  • The present invention relates to the technical field of video shooting systems, video shooting devices, and video shooting methods.
  • In the background art, an exposure value is transmitted from a first camera to a second camera, and the exposure value in the second camera is controlled based on the exposure value in the first camera. It is therefore said that the exposure of two or more cameras at remote locations can be controlled arbitrarily by operating one camera.
  • Patent Document 1: Japanese Patent Laid-Open No. 2001-281717
  • The present invention has been made in view of, for example, the above-described problems, and an object thereof is to provide a video shooting system, a video shooting device, and a video shooting method capable of obtaining high-quality video.
  • In order to achieve the above object, a video shooting system of the present invention comprises at least one main photographing device and at least one sub photographing device which are accommodated in a network and shoot video in conjunction with each other.
  • The main photographing device includes first photographing means for photographing the video; first control means for controlling the photographing conditions of the video in the first photographing means; control information generating means for generating control information for causing the sub photographing device to follow conditions, among the photographing conditions, that define the composition of the video to be photographed; and first communication means for transmitting the generated control information to the sub photographing device via the network.
  • The term "network" means, for example, a wired communication network such as a WAN (Wide Area Network) or a LAN (Local Area Network), a network compliant with USB (Universal Serial Bus) or IEEE 1394, or the like.
  • The "main photographing device" and the "sub photographing device" each represent one of the photographing devices that constitute the video photographing system according to the present invention.
  • The "photographing device" is a concept including any device capable of photographing a subject; it refers, for example, to a DV (Digital Video) camera, a video camera, a digital camera, and the like, or to a digital camera or video camera mounted on a mobile terminal such as a mobile phone.
  • In operation, video is captured by the first photographing means in the main photographing device and by the second photographing means in the sub photographing device.
  • The photographing conditions of the video in the first and second photographing means are controlled by the first and second control means, respectively.
  • The "photographing conditions" of the video in the present invention is a concept including any condition having some relationship with the video to be shot. Specifically, it refers to all or part of, for example: conditions relating to exposure, such as shutter speed, aperture value, or depth of field; conditions relating to the shooting method, such as fade-in or fade-out or various kinds of image processing; and conditions relating to composition, such as zoom-in or zoom-out, rotation of the first and second photographing means in the left-right direction (hereinafter referred to as "pan" as appropriate), or rotation of the first and second photographing means in the vertical direction (hereinafter referred to as "tilt" as appropriate).
  • The first and second control means control such photographing conditions, and various forms of such control can be adopted, for example, full-auto control, semi-auto control, or simple control based on instructions given by a user via some input device.
  • In full-auto control, these photographing conditions may be controlled without any user operation during shooting, in accordance with a control program stored in, for example, a ROM (Read Only Memory) or some other recording medium.
  • For example, when the main or sub photographing device is installed via fixing means such as a tripod, the pan or tilt may be controlled by driving the attachment portion between the fixing means and the main or sub photographing device with a motor or the like.
  • In semi-auto control, for example, a part of the photographing conditions, such as pan or tilt, may be controlled through an operation by the user.
  • Here, when the main photographing device is shooting while following the movement of, say, one athlete as a subject at an athletic meet or the like, it is difficult for a sub photographing device installed at another position to follow that shooting action and capture the competitor from another direction. Therefore, the videos taken by the main and sub photographing devices tend to be low-quality videos that lack a sense of unity.
  • In the present invention, however, the control information generating means of the main photographing device generates control information for causing the sub photographing device to follow the conditions defining the composition of the video to be shot, and this control information is transmitted to the sub photographing device via the network by the first communication means.
  • The "conditions defining the composition" are, for example, the position of the main photographing device, the direction in which the first photographing means is shooting the video (hereinafter referred to as the "shooting direction" as appropriate), the distance between the main photographing device and the subject, the zoom magnification, and the like; this is a concept that broadly represents the photographing conditions capable of largely defining the composition of the video shot by the first photographing means.
  • Such control information, generated in accordance with the photographing conditions in the main photographing device, is information that causes the sub photographing device to follow the main photographing device when, for example, an operation accompanied by a composition change occurs on the main photographing device side, such as a change in the installation position, a pan or tilt performed while the installation position is maintained (for example, in a state fixed by a tripod or the like), or a change in the zoom magnification.
  • The form of the control information may be a command signal that actively controls the operation of the sub photographing device; alternatively, if follow-up control with respect to the main photographing device is possible on the sub photographing device side according to some control program or control algorithm, the information may simply represent the amount of change in the photographing conditions.
  • In the sub photographing device, this control information is received by the second communication means via the network. Based on the received control information, the second control means of the sub photographing device controls the photographing conditions of the video in the second photographing means described above.
  • The control of the photographing conditions by the second control means can take various forms depending on the form of the control information transmitted from the main photographing device. For example, as described above, when the control information is a kind of command signal, the zoom magnification may be changed according to the command signal, or the sub photographing device may be panned or tilted.
  • Alternatively, the second control means may estimate the current operation of the main photographing device, and control may be performed so that the video is shot under photographing conditions that best match the video being shot by the main photographing device. Note that the basis for such an estimation may be given in advance by experiment, experience, simulation, or the like.
  • Alternatively, the photographing conditions may be controlled by providing the user with information prompting a change of the conditions. That is, the manner of controlling the photographing conditions performed by the second control means is not limited at all, as long as the photographing conditions on the sub photographing device side can be improved so as to follow the photographing conditions on the main photographing device side, as compared with the case where no such control is performed.
  • such "information for prompting the change of the second imaging condition" is provided through such a display means when the secondary imaging apparatus is provided with some display means such as a liquid crystal display. May be information. For example, if you want to pan the sub-photographing device by 30 ° to the right, display an arrow mark to the right on the display means, and when the user pans the sub-photographing device by 30 degrees, The second control means may control the photographing conditions by terminating the display. Alternatively, such information may be a kind of audio information when the sub-photographing device is provided with audio output means such as a speaker.
  • In the present invention, a sequence of control thus arises between the plurality of video photographing devices during shooting, but the configurations of the video photographing devices themselves may be equivalent among all the devices constituting the video photographing system. For example, such a control order may be determined each time a video photographing system is constructed and a video is shot.
  • By making the sub photographing device follow the shooting operation of the main photographing device in this way, for example, when the main photographing device captures the subject in close-up, the sub photographing device can shoot a wide-angle video that looks down on the subject.
  • Alternatively, when the main photographing device follows the movement of the subject and frequently pans or tilts, sub photographing devices installed at different positions can be panned or tilted in exactly the same way. In other words, it is possible to shoot extremely high-quality video.
  • In one aspect of the video shooting system of the present invention, at least one of the first and second control means controls the photographing conditions corresponding to that at least one, based on a preset shooting pattern.
  • The "preset shooting pattern" described here refers to a pattern optimized for each of various shooting purposes, such as an athletic meet, a music recital, or an outdoor event.
  • For example, the "athletic meet" shooting pattern is set as a shooting pattern that frequently pans the photographing device at predetermined timings, assuming a subject moving over a wide range.
  • On the other hand, the "music recital" shooting pattern is set as a shooting pattern that uses a lot of zoom, assuming a fixed subject.
  • By shooting based on such shooting patterns, the main or sub photographing device can easily shoot video in a full-auto mode.
  • Note that the panning range may be set by the user in advance.
  • Similarly, the timing for panning (or tilting) or zooming may be specified.
  • In these cases, the first or second control means may control the photographing conditions in accordance with such abstract designation information input by the user.
  • According to this aspect, since the photographing conditions are controlled based on a preset shooting pattern, it is possible to shoot higher-quality video with a plurality of video photographing devices while the sub photographing device follows the main photographing device.
  • In this aspect, at least one of the first and second control means may control the photographing conditions based on, as the preset shooting pattern, a shooting pattern corresponding to the relative positional relationship between the main photographing device and the sub photographing device.
  • the "relative positional relationship" in this aspect may be a positional relationship in a strict sense, but as one of the simplest aspects, for example, an abstraction such as "close” or "far” It may be a similar distance relationship.
  • an abstraction such as "close” or "far” It may be a similar distance relationship.
  • the photographing orientation of the first and second photographing means is the direction of the subject. It may be an angle difference with respect to the subject. In such cases The difference in angles may also be defined including abstract concepts such as when facing forces or when they are adjacent.
  • In this aspect, the photographing conditions in each photographing device are controlled based on a shooting pattern corresponding to such a relative positional relationship. For example, when the positional relationship between the two is "facing each other", panning may be performed so that, when the main photographing device pans to the right, the sub photographing device pans to the left, so that the same subject is shot. Alternatively, when the difference in distance from the subject is large, the zoom may be compensated within an appropriate range so that the size of the subject in the video is adjusted.
  • In another aspect of the video shooting system of the present invention, the main photographing device further comprises first acquisition means for acquiring first position information including at least one of (i) the distance to the subject, (ii) the current position of the main photographing device, and (iii) the shooting direction of the first photographing means.
  • According to this aspect, the first position information acquired by the first acquisition means makes it possible to add various kinds of supplementary information to the video shot by the main photographing device, further improving the video quality.
  • For example, the distance to the subject in the first position information need not be strictly specified as a distance value.
  • For example, the distance to the subject may be acquired with an ambiguity of "far" or "near" as viewed from a general standard.
  • Even so, the quality of the video can be further improved.
  • the "current position" in the first position information may be an accurate current position such as latitude information, longitude information, and altitude information power, and a certain area including the current position is It may be a specified degree.
  • a current position may be obtained using positioning technology such as GPS (Global Positioning System)! Or may be specified by a base station via a wireless network or the like.
  • The "shooting azimuth" in the first position information may be a simple azimuth such as east, west, south, or north, or an accurate absolute azimuth obtained via a geomagnetic sensor or the like.
  • In another aspect of the video shooting system of the present invention, the sub photographing device includes second acquisition means for acquiring second position information including at least one of (i) the distance to the subject, (ii) the current position of the sub photographing device, and (iii) the shooting direction of the second photographing means.
  • In this aspect, the first communication means further transmits the acquired first position information to the sub photographing device via the network, the second communication means further receives the transmitted first position information, and the second control means controls the photographing conditions based on the acquired second position information and the received first position information.
  • According to this aspect, the photographing conditions are controlled based also on the first position information of the main photographing device, so that it is possible to shoot extremely high-quality video with a sense of unity between the main and sub photographing devices.
  • In particular, as long as the main photographing device captures the desired subject, even if the subject is lost on the sub photographing device side, the subject can be quickly found and captured again, which is preferable.
  • In this aspect, the second communication means may transmit the acquired second position information to the main photographing device via the network, the first communication means may receive the transmitted second position information, and the first control means may control the photographing conditions based on the acquired first position information and the received second position information.
  • In another aspect of the video shooting system of the present invention, each of the main and sub photographing devices further includes authentication means for performing mutual authentication via the first and second communication means.
  • In the present invention, control information is transmitted between at least the main photographing device and the sub photographing device.
  • When wireless communication is performed between these photographing devices installed at locations separated from each other, interference may occur between the photographing device to which the control information is to be transmitted and another photographing device.
  • If authentication is performed in advance between the main and sub photographing devices through the authentication means, the possibility of such interference is remarkably reduced, and video can be shot with highly reliable data transmission and reception.
  • The form of the authentication means is not limited as long as mutual authentication is possible between the main and sub photographing devices; for example, authentication may be performed by exchanging in advance identification information, such as an ID, that is unique to each photographing device.
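  • A minimal sketch of such an ID exchange follows (not part of the patent; the class and link methods are illustrative assumptions):

        # Hypothetical sketch of the ID-exchange authentication described above.
        # The Camera class and link.send/link.receive are assumed names, not the patent's API.
        class Camera:
            def __init__(self, own_id, peer_ids):
                self.own_id = own_id            # unique ID stored in ROM in advance
                self.peer_ids = set(peer_ids)   # IDs of cameras allowed in this system

            def authenticate(self, link):
                link.send({"id": self.own_id})  # announce own ID to the peer
                peer = link.receive()           # receive the peer's ID
                return peer["id"] in self.peer_ids  # accept only known peers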
  • In another aspect of the video shooting system of the present invention, at least one of the main and sub photographing devices includes audio information acquisition means for acquiring audio information corresponding to the video shot by that at least one device.
  • According to this aspect, since audio information corresponding to the shot video can be acquired in the at least one device, the video quality is further improved.
  • In order to solve the above problem, the first video photographing device of the present invention is a video photographing device accommodated in a network, and comprises photographing means, control means, control information generating means, and communication means corresponding to those of the main photographing device described above.
  • According to the first video photographing device of the present invention, in operation, the same effects as those of the main photographing device in the video shooting system described above can be realized by each means, and it is possible to shoot high-quality video.
  • In order to solve the above problem, the second video photographing device of the present invention is a video photographing device that is accommodated in a network and shoots video together with another video photographing device accommodated in the network, and comprises: photographing means for shooting the video; communication means for receiving control information generated in the other video photographing device in accordance with conditions defining the composition of the video to be shot and transmitted via the network; and control means for controlling the photographing conditions of the video in the photographing means based on the received control information.
  • According to the second video photographing device of the present invention, in operation, the same effects as those of the sub photographing device in the video shooting system described above can be realized by each means, and it is possible to shoot high-quality video.
  • In order to solve the above problem, the video shooting method of the present invention is a video shooting method in a video shooting system including at least one main photographing device and at least one sub photographing device which are accommodated in a network and shoot video in conjunction with each other.
  • The method comprises, in the main photographing device: (i) a first shooting step of shooting the video; (ii) a first control step of controlling the photographing conditions of the video in the first shooting step; (iii) a control information generating step of generating control information for causing the sub photographing device to follow conditions, among the photographing conditions, that define the composition of the video to be shot; and (iv) a transmission step of transmitting the generated control information via the network.
  • The method further comprises, in the sub photographing device: (i) a second shooting step of shooting the video; (ii) a receiving step of receiving the transmitted control information via the network; and (iii) a second control step of controlling the photographing conditions of the video in the second shooting step based on the received control information.
  • According to the video shooting method of the present invention, it is possible to shoot high-quality video in the same manner as in the video shooting system, through the operation of each step corresponding to each means of the video shooting system described above.
  • As explained above, since the video shooting system includes the first photographing means, the first control means, the control information generating means, the first communication means, the second photographing means, the second communication means, and the second control means, it is possible to shoot high-quality video. Since the first video photographing device includes the photographing means, the control means, the control information generating means, and the communication means, it is possible to shoot high-quality video. Since the second video photographing device includes the photographing means, the communication means, and the control means, it is possible to shoot high-quality video.
  • Since the video shooting method includes the first shooting step, the first control step, the control information generating step, the transmission step, the second shooting step, the receiving step, and the second control step, it is likewise possible to shoot high-quality video.
  • FIG. 1 is a conceptual diagram of a video shooting system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a main camera in the video shooting system of FIG. 1.
  • FIG. 3 is a flowchart relating to the overall operation of the video shooting system of FIG. 1.
  • FIG. 4 is a flowchart of the photographing process in the flowchart of FIG. 3.
  • FIG. 5 is a schematic diagram showing the positional relationship between the subject and each camera in the video shooting system of FIG. 1.
  • FIG. 6 is a flowchart of the recording process in the flowchart of FIG. 3.
  • FIG. 7 is a flowchart of the video data processing in the recording process of FIG. 6.
  • FIG. 8 is a flowchart of the audio data processing in the recording process of FIG. 6.
  • FIG. 9 is a flowchart of the additional information processing in the recording process of FIG. 6.
  • FIG. 10 is a schematic diagram of the recording format in the recording process of FIG. 6.
  • FIG. 11 is a schematic diagram of the header and additional information in FIG. 10.
  • FIG. 12 is a schematic diagram of sub camera follow-up control according to a modification of the present invention.
  • 10: video shooting system, 20: subject, 30: network, 40: tripod, 100: main camera, 100a: user, 110: control unit, 111: CPU, 112: ROM, 113: RAM, 120: sound collection unit, 130: imaging unit, 140: camera rotation unit, 150: lens driving unit, 160: communication unit, 170: position information acquisition unit, 171: distance measurement unit, 172: position detection unit, 173: orientation detection unit, 180: recording unit, 190: input unit, 200: sub camera.
  • FIG. 1 is a conceptual diagram of the video shooting system 10.
  • In FIG. 1, the video shooting system 10 includes a main camera 100 and a sub camera 200 accommodated in a network 30, and is structured so that the main camera 100 and the sub camera 200 shoot the subject 20 in conjunction with each other.
  • Each of the main camera 100 and the sub camera 200 is fixed on a tripod 40.
  • Each camera is fixed to the tripod 40 via an attachment 41, and the attachment 41 is configured to be able to rotate freely in the vertical and horizontal directions. Therefore, each camera can be rotated three-dimensionally while remaining fixed to the tripod 40.
  • The installation state of the main camera 100 is adjusted in advance by the user 100a so that a desired composition including the subject 20 is obtained. In this embodiment, this installation state is referred to as the "standard state" as appropriate.
  • The network 30 is an example of the "network" according to the present invention, including a mobile communication network and a wired communication network.
  • The network 30 includes various lines, such as ADSL lines, optical fiber lines, or telephone lines, and the base stations and access points corresponding to them.
  • FIG. 2 is a block diagram of the main camera 100.
  • Note that the internal configurations of the main camera 100 and the sub camera 200 are the same; therefore, the configuration of the main camera 100 is described as representative in FIG. 2.
  • The reference numerals of the corresponding parts of the sub camera 200 are shown in parentheses in FIG. 2.
  • In FIG. 2, the main camera 100 includes a control unit 110, a sound collection unit 120, an imaging unit 130, a camera rotation unit 140, a lens driving unit 150, a communication unit 160, a position information acquisition unit 170, a recording unit 180, and an input unit 190.
  • The control unit 110 includes a CPU (Central Processing Unit) 111, a ROM (Read Only Memory) 112, and a RAM (Random Access Memory) 113.
  • The CPU 111 is a control unit that controls the operation of the main camera 100.
  • The ROM 112 is a non-volatile memory, and stores a unique ID number assigned in advance to the main camera 100 and a video shooting program, described later, that is executed by the CPU 111.
  • By executing the video shooting program, the CPU 111 is configured to function as an example of each of the "first control means", the "control information generating means", and the "authentication means" according to the present invention.
  • The ROM 212 in the sub camera 200 also stores an ID number and a video shooting program in the same manner as the ROM 112, and by executing the program the CPU 211 is configured to function as an example of each of the "second control means" and the "authentication means" according to the present invention.
  • The RAM 113 is a volatile memory, and is configured to function as a buffer that temporarily stores various data generated in the course of the CPU 111 executing the video shooting program.
  • The sound collection unit 120 is used to acquire sound around the main camera 100; it comprises a microphone (not shown) and a sound conversion unit (not shown) that converts the sound signal acquired by the microphone into audio information of a predetermined format, and is configured to function as an example of the "audio information acquisition means" according to the present invention.
  • The imaging unit 130 has a CCD (Charge Coupled Device) (not shown) that photoelectrically converts, pixel by pixel, the image formed by light condensed through a camera lens (not shown), and is an example of the "first photographing means" according to the present invention.
  • The imaging unit 230 in the sub camera 200 is an example of the "second photographing means" according to the present invention.
  • The camera rotation unit 140 is a drive mechanism including a motor (not shown) for panning and tilting the main camera 100.
  • The camera rotation unit 140 is configured to be able to three-dimensionally rotate the attachment 41 of the tripod 40 in accordance with instructions from the CPU 111, using the rotation angle and the rotation speed as parameters.
  • The lens driving unit 150 is a mechanism that drives the lens to control focus and zoom.
  • The lens driving unit 150 is configured to be able to drive the lens in accordance with instructions from the CPU 111, using the zoom speed and the zoom distance as parameters.
  • The communication unit 160 is connected to the network 30 via an antenna (not shown) and is configured to be capable of data communication with the sub camera 200; it is an example of the "first communication means" according to the present invention.
  • The communication unit 260 in the sub camera 200 is configured to function as an example of the "second communication means" according to the present invention.
  • The position information acquisition unit 170 includes a distance measurement unit 171, a position detection unit 172, and an orientation detection unit 173, and is configured to be able to acquire an example of the "first position information" according to the present invention; it is an example of the "first acquisition means" according to the present invention.
  • The position information acquisition unit 270 in the sub camera functions as an example of the "second acquisition means" according to the present invention.
  • The distance measurement unit 171 includes an infrared sensor (not shown) and the like, and can measure the distance to the subject 20.
  • The position detection unit 172 is a known position detection system using GPS or quasi-zenith satellites, and is configured to be able to specify the current position of the main camera 100.
  • The orientation detection unit 173 is configured to be able to specify the absolute orientation of the main camera 100 by means of a geomagnetic sensor or the like.
  • The recording unit 180 is a recording medium for recording video and audio data, additional information, and the like.
  • Here, the additional information refers to information indicating, for example, the distance to the subject 20, the current position of the main camera 100, the orientation of the main camera 100, the zoom magnitude, the pan angle (direction), and the tilt angle (direction).
  • The video data is recorded, for example, as data compressed in a compression format such as MPEG-2, MPEG-4, or H.264, and the audio data as data conforming to a format such as linear PCM or AC-3.
  • The input unit 190 is configured so that the user 100a can give various instructions to the CPU 111.
  • The input unit 190 consists of part or all of a touch panel device, operation buttons, operation dials, or operation levers (knobs).
  • Note that each of the main camera 100 and the sub camera 200 is provided with a display unit configured with, for example, a liquid crystal display, so that the user 100a can check the video being shot as appropriate.
  • FIG. 3 is a flowchart relating to the overall operation of the video imaging system 10.
  • In FIG. 3, when the main camera 100 and the sub camera 200 are powered on, mutual authentication is first performed (step A10).
  • That is, the CPU 111 and the CPU 211 exchange the ID numbers of the main camera 100 and the sub camera 200 stored in the ROM 112 and the ROM 212, respectively, via the communication unit 160 and the communication unit 260.
  • Thereafter, when data is transmitted from the main camera 100 to the sub camera 200, or from the sub camera 200 to the main camera 100, the data is transmitted with the sender's own ID number attached.
  • The CPU 111 and the CPU 211 each determine whether or not mutual authentication has been completed successfully (step A11).
  • If mutual authentication has not succeeded (step A11: NO), the CPU 111 and the CPU 211 repeat the authentication process; if authentication has succeeded (step A11: YES), the CPU 111 and the CPU 211 set the main camera 100 and the sub camera 200 to the idling mode (step A12).
  • Here, the idling mode is a mode of waiting for an instruction from the user 100a, such as shooting, playback, or various settings.
  • Note that the sub camera 200 is controlled so as to operate following the main camera 100 once set to the idling mode.
  • In the idling mode, the CPU 111 determines whether or not there is an operation input from the user 100a via the input unit 190 (step A13).
  • The determination relating to step A13 is executed constantly, based on a fixed clock.
  • Here, an operation input is, for example, an instruction to start a shooting mode for shooting the subject and recording the shot video, an instruction to start a playback mode for playing back previously shot video, or an instruction to stop the main camera 100.
  • If no operation input is detected (step A13: NO), the CPU 111 continues the idling mode; if some operation input is detected (step A13: YES), it determines whether or not the operation input is an instruction to start the shooting mode (step A14). When the start of the shooting mode is instructed (step A14: YES), the CPU 111 executes the shooting process and the recording process (step A15). The shooting process and the recording process will be described later.
  • On the other hand, if the operation input from the user 100a is not an instruction to start the shooting mode (step A14: NO), the CPU 111 then determines whether or not the operation input from the user 100a is an instruction to stop the main camera 100 (step A16). If it is a stop instruction (step A16: YES), the CPU 111 turns off the power and stops the main camera 100.
  • If it is not a stop instruction (step A16: NO), the CPU 111 determines whether it is some other operation input (step A17).
  • The other operation input described here means an instruction to start the above-described playback mode or a setting mode.
  • If it is not another operation input (step A17: NO), the CPU 111 treats it as a detection error and returns the process to step A13; if it is another operation input (step A17: YES), the CPU 111 performs the control corresponding to that operation input (step A18).
  • While executing it, the CPU 111 determines whether or not the control corresponding to the other operation input has ended (step A19). When the control has not ended (step A19: NO), the CPU 111 continues the control; when the control has ended, it returns the process to step A13 again and sets the main camera 100 to the idling mode.
  • Note that the control corresponding to these other operation inputs is the same as the control relating to video playback or various settings in an ordinary video camera, and a detailed description is therefore omitted in this embodiment.
  • FIG. 4 is a flowchart of the photographing process.
  • Note that the shooting process is realized by the CPU 111 and the CPU 211 executing the video shooting programs stored in the ROM 112 and the ROM 212 (that is, an example of the computer program according to the present invention). By the time this shooting process starts, the CPU 111 has already transmitted, via the communication unit 160, a command signal instructing the CPU 211 to start the shooting process.
  • Based on this command signal, the CPU 211 reads the video shooting program stored in the ROM 212 and waits for the next instruction from the CPU 111.
  • In FIG. 4, first, position information is acquired in each of the main camera 100 and the sub camera 200 (step B10).
  • That is, the CPU 111 instructs the position information acquisition unit 170 to acquire position information concerning the main camera 100. Based on this instruction, the distance measurement unit 171, the position detection unit 172, and the orientation detection unit 173 acquire three kinds of information representing, respectively, the distance between the main camera 100 and the subject 20, the current position of the main camera 100, and the shooting direction of the main camera 100, and this information is temporarily stored in the RAM 113. In parallel with instructing the position information acquisition unit 170 to acquire position information concerning the main camera 100, the CPU 111 transmits a command signal requesting the acquisition of position information in the sub camera 200 to the sub camera 200 via the communication unit 160.
  • Here, the CPU 111 first transmits a command signal requesting temporary position information concerning the sub camera 200.
  • In the sub camera 200, the CPU 211 instructs the position information acquisition unit 270 to acquire position information concerning the sub camera 200 based on this command signal.
  • As a result, the position detection unit 272 and the orientation detection unit 273 acquire two kinds of information representing the current position of the sub camera 200 and the shooting direction of the sub camera 200, respectively, and this information is temporarily stored in the RAM 213.
  • The acquired position information concerning the sub camera 200 is transmitted to the main camera 100 via the communication unit 260.
  • The transmitted position information concerning the sub camera 200 is temporarily stored in the RAM 113 of the main camera 100 as temporary position information concerning the sub camera 200. In this state, the process relating to step B10 ends.
  • Next, the CPU 111 sets the sub camera 200 to a state corresponding to the standard state of the main camera 100, that is, the standard state of the sub camera 200 (step B11).
  • More specifically, the CPU 111 compares the position information of the main camera 100 stored in the RAM 113 with the temporary position information, and determines whether or not the sub camera 200 is accurately facing the subject 20.
  • When the sub camera 200 is not directed toward the subject 20, the CPU 111 newly generates a command signal for panning and tilting the sub camera 200 so that it faces the subject 20, and transmits it to the sub camera 200 via the communication unit 160. In the sub camera 200 that has received this command signal, the CPU 211 controls the camera rotation unit 240 based on the command signal, and pans or tilts the sub camera 200 as instructed.
  • When the sub camera 200 comes to face the direction of the subject 20, the CPU 211 in the sub camera 200 controls the position information acquisition unit 270 to acquire the position information of the sub camera 200 again.
  • At this time, the distance between the subject 20 and the sub camera 200 is also measured, and the information is transmitted to the main camera 100 via the communication unit 260 as the true position information of the sub camera 200.
  • On the other hand, when the sub camera 200 already faces the direction of the subject 20, the CPU 111 generates a command signal requesting only the acquisition of information indicating the distance between the subject 20 and the sub camera 200, and transmits it to the sub camera 200 via the communication unit 160. In response to this command signal, the sub camera 200 acquires the distance information in the same manner as described above and transmits it to the main camera 100 as information complementing the temporary position information.
  • When the information indicating the distance between the sub camera 200 and the subject 20 has been acquired in either of these ways, the CPU 111 further detects, from the distance information, the difference between the distance from the subject 20 to the main camera 100 and the distance from the subject 20 to the sub camera 200 (hereinafter referred to as "Δd" as appropriate). From the detected Δd, the CPU 111 generates a command signal for zoom compensation on the sub camera 200 side so that the composition of the video in the standard state of the sub camera 200 is equivalent to that of the main camera 100, and transmits it to the sub camera 200. The value of Δd is temporarily stored in the RAM 113.
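  • The patent does not give a formula for this zoom compensation; the following is a minimal sketch assuming a simple pinhole model, in which keeping the subject at the same apparent size requires the focal length to scale with the distance. All names are illustrative.

        # Hedged sketch: zoom compensation so that the subject appears equally
        # large in both cameras. Assumes apparent size ~ focal_length / distance;
        # the actual compensation rule is not specified in the patent.
        def sub_camera_focal_length(f_main_mm, d_main_m, d_sub_m):
            """Sub camera focal length that matches the main camera's framing."""
            return f_main_mm * (d_sub_m / d_main_m)

        # Example: main camera at 10 m with a 35 mm setting and sub camera at 15 m
        # -> the sub camera would zoom to about 52.5 mm.
        print(sub_camera_focal_length(35.0, 10.0, 15.0))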
  • In the sub camera 200, the lens driving unit 250 varies the zoom distance and further adjusts the focus based on this command signal, so that the subject 20 is captured.
  • Thus, the standard state of the sub camera 200 is set.
  • Note that part of the processing relating to step B11 is not necessarily required when the composition on the sub camera 200 side has been set in advance so as to capture the subject 20.
  • In that case, information corresponding to the true position information may simply be transmitted from the sub camera 200, and only the zoom compensation in the sub camera 200 may be performed. Furthermore, such zoom compensation does not always have to be performed.
  • Next, based on the position information of both cameras stored in the RAM 113, the CPU 111 of the main camera 100 acquires the angle formed by the two cameras with respect to the subject 20 (hereinafter referred to as "Δθ" as appropriate) (step B12). The acquired value of Δθ is temporarily stored in the RAM 113.
  • FIG. 5 is a schematic diagram showing the positional relationship between the subject 20, the main camera 100, and the sub camera 200.
  • In FIG. 5, the distance between the main camera 100 and the subject 20 and the distance between the sub camera 200 and the subject 20 are represented as "d(main)" and "d(sub)", respectively. Δd is therefore defined as the absolute value of "d(main) − d(sub)".
  • Note that the distance between each camera and the subject 20 is expressed with reference to the lens end surface of each camera; however, as long as the distance is defined on a standard common to the main camera 100 and the sub camera 200, the definition of the distance may be freely determined.
  • Δθ refers to the smaller of the angles formed by the line segment connecting the subject 20 and the main camera 100 and the line segment connecting the subject 20 and the sub camera 200.
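  • Written out explicitly (an assumption consistent with the definitions above, which the patent states only in words), with p20, p100, and p200 denoting the positions of the subject 20, the main camera 100, and the sub camera 200:

        Δd = | d(main) − d(sub) |
        Δθ = arccos( ((p100 − p20) · (p200 − p20)) / (‖p100 − p20‖ · ‖p200 − p20‖) ), with 0° ≤ Δθ ≤ 180°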
  • In FIG. 4, the CPU 111 determines whether Δθ is within the range "120° ≤ Δθ ≤ 180°" (hereinafter referred to as the "first range" as appropriate) (step B13). If it is within the first range (step B13: YES), the CPU 111 controls the imaging unit 130 to perform shooting in the first shooting mode (step B14). The first shooting mode will be described later.
  • The CPU 111 then determines whether or not the video shooting in the first shooting mode has ended (step B15). If shooting has not ended (step B15: NO), the CPU 111 continues shooting in the first shooting mode; when shooting ends (step B15: YES), the shooting process ends. Note that the end of shooting refers, for example, to the case where the user 100a gives a stop instruction via the input unit 190 or where a preset shooting time elapses, and its form may vary.
  • On the other hand, if Δθ is not within the first range (step B13: NO), it is determined whether or not Δθ is within the range "45° ≤ Δθ < 120°" (hereinafter referred to as the "second range" as appropriate) (step B16). If it is within the second range (step B16: YES), the CPU 111 controls the imaging unit 130 to perform shooting in the second shooting mode (step B17). The second shooting mode will be described later.
  • The CPU 111 determines whether or not the video shooting in the second shooting mode has ended (step B18). If shooting has not ended (step B18: NO), the CPU 111 continues shooting in the second shooting mode; when shooting ends (step B18: YES), the shooting process ends.
  • Further, if Δθ is not within the second range (step B16: NO), it is determined whether Δθ is within the range "0° ≤ Δθ < 45°" (hereinafter referred to as the "third range" as appropriate) (step B19). If it is within the third range (step B19: YES), the CPU 111 controls the imaging unit 130 to perform shooting in the third shooting mode (step B20). The third shooting mode will be described later.
  • The CPU 111 determines whether or not the video shooting in the third shooting mode has ended (step B21). If shooting has not ended (step B21: NO), the CPU 111 continues shooting in the third shooting mode; when shooting ends (step B21: YES), the shooting process ends.
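  • A minimal sketch of this three-way branch is shown below (the range boundaries follow the reconstruction above, where the original characters were garbled; the function and mode names are illustrative):

        # Hedged sketch of steps B13-B20: choose a shooting mode from the
        # subject-centered angle between the two cameras, in degrees.
        def select_shooting_mode(delta_theta_deg):
            if 120.0 <= delta_theta_deg <= 180.0:
                return "first"    # cameras angularly far apart (e.g. track events)
            elif 45.0 <= delta_theta_deg < 120.0:
                return "second"   # standard positional relationship
            elif 0.0 <= delta_theta_deg < 45.0:
                return "third"    # cameras close together (e.g. indoor recitals)
            raise ValueError("delta_theta_deg must lie in [0, 180]")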
  • Here, the first range corresponds to the case where the main camera 100 and the sub camera 200 are relatively distant from each other in terms of angle. Examples of such situations include the shooting of track competitions at sporting events.
  • Shooting in the first shooting mode is performed, for example, by repeating the shooting routine of (1) to (12) below.
  • The "standard state" in the routine is the above-described standard state.
  • In the routine, the main camera 100 pans from this standard state by the action of the camera rotation unit 140.
  • At this time, the CPU 111 transmits a command signal to the sub camera 200 via the communication unit 160, and the sub camera 200 pans in phase in response to the command signal.
  • Here, "in phase" means, for example, that when the main camera 100 and the sub camera 200 are angularly close, the directions represented by "left" or "right" are equal for both.
  • However, "panning in phase" is a concept that refers to panning in a direction that keeps capturing the same subject, not to an absolute panning direction.
  • Therefore, in practice, the CPU 111 of the main camera 100 calculates each time, based on the relative positional relationship between the main camera 100 and the sub camera 200, in which direction and by how much the sub camera 200 should pan, generates a command signal based on the calculation result, and transmits it to the sub camera 200 (one possible calculation is sketched below).
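  • The patent does not disclose this calculation; as an illustrative sketch only, an in-phase pan command could be derived by re-aiming the sub camera at the point the main camera is now looking at. The ground-plane geometry, the sign convention, and all names are assumptions.

        import math

        # Hedged sketch: derive the sub camera's pan command from the main
        # camera's new aim point, so both cameras keep capturing the same target.
        # Positions are (x, y) ground-plane coordinates; angles are in degrees,
        # measured counterclockwise, so a positive pan is a turn to the left.
        def sub_pan_command(sub_pos, sub_heading_deg, target_pos):
            dx = target_pos[0] - sub_pos[0]
            dy = target_pos[1] - sub_pos[1]
            bearing = math.degrees(math.atan2(dy, dx))        # direction sub -> target
            pan = (bearing - sub_heading_deg + 180.0) % 360.0 - 180.0
            return pan                                        # signed pan angle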
  • Thereafter, the main camera 100 and the sub camera 200 each return to the standard state.
  • In the main camera 100, the CPU 111 controls the camera rotation unit 140; for the sub camera 200, a command signal is generated by the CPU 111 of the main camera 100 and transmitted via the communication unit 160, and the CPU 211 controls the camera rotation unit 240 so as to return to the standard state described above. Since the return to the standard state is basically the same throughout the following description, its description is omitted as appropriate.
  • Next, the main camera 100 is tilted in (6). In this case too, the tilt is realized by the CPU 111 controlling the camera rotation unit 140. Further, the sub camera 200 is tilted in phase with the main camera 100 by the action of the camera rotation unit 240 in accordance with the transmitted command signal.
  • Note that "tilting in phase", unlike panning, means tilting so that the directions represented by "up" or "down" are equal for both, regardless of the angle between the main camera 100 and the sub camera 200.
  • Conversely, "tilting in reverse phase" refers to tilting so that the directions represented by "up" and "down" differ from each other.
  • In (8), the main camera 100 performs zoom shooting of the subject 20.
  • In the main camera 100, the CPU 111 controls the lens driving unit 150 to adjust the zoom magnification and the focus.
  • At the same time, the CPU 111 generates a command signal for causing the sub camera 200 to perform zoom shooting in phase and transmits it to the sub camera 200 via the communication unit 160, and the CPU 211 of the sub camera 200 controls the lens driving unit 250 in response to the command signal to execute in-phase zoom shooting.
  • Here, "in-phase zoom shooting" is a concept meaning that the zoom directions represented by "telephoto (zoom in)" and "wide angle (zoom out)" are equal between the main camera 100 and the sub camera 200. That is, when the main camera 100 shoots the subject 20 on the telephoto side, the sub camera 200 likewise shoots on the telephoto side, and when the main camera 100 shoots at wide angle, the sub camera 200 likewise shoots at wide angle.
  • After the main camera 100 and the sub camera 200 return to the standard state again in (10), in (11) the main camera 100 and the sub camera 200 perform zoom shooting of the subject 20 in mutually opposite phases.
  • "Zoom shooting in reverse phase" means that, for example, if the main camera 100 shoots the subject on the telephoto side, the sub camera 200 shoots the subject on the wide-angle side.
  • Next, the second range corresponds to the case where the main camera 100 and the sub camera 200 are in a standard positional relationship in terms of angle, with no particularly characteristic situation. Shooting in this case is performed, for example, by repeating the shooting routine of (1) to (14) below.
  • (1) Standard state → (2) Left/right pan (in phase) → (3) Standard state → (4) Left/right pan (reverse phase) → (5) Standard state → (6) Zoom (in phase) → (7) Zoom (reverse phase) → (8) Standard state → (9) Up/down tilt (in phase) → (10) Up/down tilt (reverse phase) → (11) Zoom (in phase) → (12) Standard state → (13) Zoom (reverse phase) → (14) Standard state.
  • The pan, tilt, zoom, and other operations are the same as those in the first shooting mode described above, and their description is therefore omitted.
  • In the second shooting mode, shooting is executed in which pan and tilt operations and zoom operations such as zoom in and zoom out are mixed evenly.
  • Next, the third range corresponds to the case where the main camera 100 and the sub camera 200 are relatively close to each other in terms of angle. Examples of such situations include indoor performances such as piano recitals and plays.
  • Shooting in the third shooting mode is performed, for example, by repeating the shooting routine of (1) to (12) below.
  • The individual shooting states are the same as those in the first shooting mode described above, and their description is therefore omitted.
  • In the third shooting mode, shooting is performed mainly with zoom operations such as zooming in and out.
  • The CPU 111 and the CPU 211 may also control each part of the main camera 100 and the sub camera 200 so that shooting conforming to a so-called video grammar is performed.
  • Here, "video grammar" refers to a universal set of rules that links video material, video effects, and the concepts to be expressed. For example, if the concept to be expressed by the video material has been decided, the required video effect is specified to some extent. Conversely, if the video material and the video effects are determined, the intention of the photographer represented by the video can be conveyed to the viewer with a certain degree of accuracy. Since general users who are not accustomed to shooting video often do not know such a video grammar, the quality of video can easily be improved when shooting is performed based on it.
  • Shooting conforming to the video grammar means, for example, that in situations where the subject moves relatively much, such as an athletic meet, lively video is shot by increasing the speed of pan and tilt operations and of zoom operations such as zoom in and zoom out, and by switching cuts frequently. Conversely, in situations where the subject moves relatively little, such as a music recital, high-quality video is shot by slowing down the pan, tilt, and zoom speeds and shooting close-ups of the subject from various angles.
  • Such a shooting algorithm conforming to the video grammar may be reflected in the video to be shot by being incorporated, for example, into the video shooting programs stored in the ROM 112 and the ROM 212.
  • A shooting routine that pans at least one camera by, for example, 180 degrees or more may also be provided as appropriate.
  • With such a shooting routine, at an athletic meet, a competition, a concert, or a presentation, it is possible to easily capture, for example, the expressions of parents in the stands watching the subject (a competitor, a performer, or the like). In this case, it is also possible to add a sense of reality, drama, and other elements to the video.
  • FIG. 6 is a flowchart of the recording process.
  • The recording process is performed in parallel with the shooting process.
  • Note that FIG. 6 shows the recording process executed in the main camera 100; the processing on the sub camera 200 side is the same as that of the main camera 100, and its description is therefore omitted.
  • In FIG. 6, the CPU 111 determines, based on a clock signal or the like, whether or not one video frame (hereinafter, "one frame") has elapsed (step C10).
  • The "video frame" is the minimum unit of video and corresponds, for example, to "one frame" of a moving image.
  • If one frame has not elapsed (step C10: NO), the CPU 111 waits until one frame has elapsed; if one frame has elapsed (step C10: YES), the video data processing (step C11), the audio data processing (step C12), and the additional information processing (step C13) are executed in parallel.
  • FIG. 7 is a flowchart of the video data processing.
  • FIG. 8 is a flowchart of the audio data processing.
  • FIG. 9 is a flowchart of the additional information processing.
  • In the video data processing, sampling and digitization of the video data are performed first (step C111).
  • The digitized video data is encoded and temporarily stored in the RAM 113 (step C112).
  • When the video data has been stored in the RAM 113, the video data processing ends.
  • In the audio data processing, sampling and digitization of the audio data are performed first (step C121).
  • The digitized audio data is encoded and temporarily stored in the RAM 113 (step C122).
  • When the audio data has been stored in the RAM 113, the audio data processing ends.
  • In the additional information processing, distance information, orientation information, position information, zoom distance information, pan angle information, tilt angle information, speed information for pan, tilt, and zoom, and the like are acquired (step C131).
  • The acquired additional information is stored in the RAM 113 (step C132).
  • When the additional information has been stored in the RAM 113, the additional information processing ends.
  • When each processing has ended, the video data and audio data stored in the RAM 113 are multiplexed (step C14) and stored in the RAM 113 together with the additional information.
  • Next, the CPU 111 determines whether or not one GOP's worth of these data has accumulated in the RAM 113 (step C15).
  • Here, 1 GOP is a video unit composed of the video data, audio data, and additional information of about 1 to 15 frames.
  • If one GOP has not yet accumulated (step C15: NO), the CPU 111 returns the process to step C10 and executes the processing of the video data, audio data, and additional information of the next video frame.
  • If one GOP has accumulated (step C15: YES), the CPU 111 controls the recording unit 180 to record these data (step C16).
  • FIG. 10 is a schematic diagram of the recording format.
  • FIG. 11 is a schematic diagram of the header and the additional information in the recording format.
  • As shown in FIG. 10, the video stream, composed of continuous still images, has a configuration in which data is arranged sequentially in GOP units.
  • One GOP consists of a header, video/audio data, and additional information.
  • FIG. 11(a) shows the structure of the header.
  • The header consists of the number of frames included in the GOP, the GOP number, the address of the video/audio data on the recording medium, the size of the video/audio data, the address of the additional information on the recording medium, and the size of the additional information.
  • FIG. 11(b) shows the structure of the additional information.
  • The additional information includes information such as the distance, orientation, position, zoom magnitude, pan direction and angle, and tilt direction and angle.
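  • A minimal sketch of this per-GOP layout as data structures follows (field names and types are assumptions for illustration; the patent specifies the fields but not their encoding):

        from dataclasses import dataclass

        # Hedged sketch of the FIG. 10/11 recording format:
        # one GOP = header + multiplexed video/audio data + additional information.
        @dataclass
        class GopHeader:
            frame_count: int        # number of frames in this GOP
            gop_number: int
            av_data_address: int    # address of the video/audio data on the medium
            av_data_size: int
            add_info_address: int   # address of the additional information
            add_info_size: int

        @dataclass
        class AdditionalInfo:
            distance_m: float       # distance to the subject
            orientation_deg: float  # shooting orientation
            position: tuple         # e.g. (latitude, longitude)
            zoom: float             # zoom magnitude
            pan_deg: float          # pan direction and angle
            tilt_deg: float         # tilt direction and angle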
  • Thereafter, the CPU 111 determines whether or not there is still video to be recorded (step C17). If there is still video to be recorded (step C17: NO), the CPU 111 returns the process to step C10 again and processes the video data, audio data, and additional information relating to the next video frame. If there is no more video to record (step C17: YES), the recording process ends.
  • As described above, in this embodiment, the main camera 100 and the sub camera 200 are interlocked with each other, and the subject 20 can be captured effectively based on their mutual positional relationship. It is therefore possible to shoot extremely high-quality video.
  • In the embodiment described above, the sub camera 200 is configured to operate on a command signal for controlling the sub camera 200 that is generated and transmitted on the main camera 100 side. Alternatively, the CPU 211 of the sub camera 200 may itself decide how the sub camera 200 should move, according to the position information transmitted from the main camera 100, or according to the pan/tilt operation amount, the zoom operation amount, or the like of the main camera 100.
  • In the embodiment, each camera is configured to be able to acquire its current position, its shooting direction, and the distance to the subject. In a room or the like, however, a positioning signal such as GPS may not reach the camera, and the current position may not be identifiable. Even in such a case, if a consensus that the subject 20 is to be photographed has been established in advance between the main camera 100 and the sub camera 200, no problem arises.
  • In other words, the means for obtaining the position information are not necessarily required.
  • As long as the sub camera 200 can operate based on a command signal from the main camera 100, a signal conveying the operation of the main camera 100, or the like, the effects of the present invention can be enjoyed unchanged.
  • In the embodiment, both the main camera 100 and the sub camera 200 are described as operating in the full-auto mode. However, the main camera 100 may instead be actively operated by the user 100a to shoot the video.
  • In that case as well, the sub camera 200 can operate following the operation of the main camera 100.
  • FIG. 12 is a schematic diagram of the follow-up control according to the modified example of the present invention.
  • In FIG. 12, the same parts as those in the above-described embodiment are denoted by the same reference numerals, and their description is omitted.
  • In this modified example, the sub camera 200 includes a display unit 300, and the composition of the video currently being shot is displayed on its display screen.
  • Based on a command signal from the main camera 100, or based on a control signal conveying the operation of the main camera 100, the CPU 211 controls the display unit 300 to display a message such as “Pan 30° to the left” on the display screen.
  • By having the user follow such prompts, a tracking operation equivalent to that of the embodiment can easily be realized, as the sketch below illustrates.
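A rough sketch of how such a prompt might be generated on the sub camera 200 side is shown below. The function name, the sign convention (positive = left), and the 0.5° tolerance are assumptions; the publication only specifies that a message such as “Pan 30° to the left” is displayed.

```c
#include <stdio.h>

/* Hypothetical prompt generator for the modified example: instead of driving
   the camera rotation unit, the sub camera asks its user to pan by hand.
   Angles are in degrees; positive values mean "to the left" (assumption). */
void show_follow_prompt(double requested_pan_deg, double current_pan_deg)
{
    double remaining = requested_pan_deg - current_pan_deg;
    if (remaining > 0.5)
        printf("Pan %.0f degrees to the left\n", remaining);
    else if (remaining < -0.5)
        printf("Pan %.0f degrees to the right\n", -remaining);
    /* within 0.5 degrees of the target: stop displaying the message */
}
```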
  • In some cases, the photographing device is used hand-held rather than fixed to a tripod. In such a situation, it is difficult to pan or tilt the sub camera 200 by controlling the camera rotation unit 240, so this modification is very effective.
  • When a portable terminal such as a mobile phone is equipped with the video camera system, this modification is likewise very effective, because such a terminal is difficult to fix to a tripod.
  • The present invention is not limited to the above-described embodiments, and may be modified as appropriate without departing from the gist or spirit of the invention that can be read from the entire specification; a video shooting system, a video shooting device, and a video shooting method involving such modifications are also included in the technical scope of the present invention.
  • The video shooting system, the video shooting device, and the video shooting method according to the present invention are applicable, for example, as a video shooting system, a video shooting device, and a video shooting method capable of obtaining high-quality video.

Abstract

A video shooting system (10) is provided with a main camera (100) and a subcamera (200). A CPU (111) of the main camera (100) generates a command signal which corresponds to movement of the main camera (100), based on the current position, distance to an object and shooting direction of the main camera (100) and those of the subcamera (200), and transmits the command signal to the subcamera (200) through a communication section (160). The subcamera (200) operates by following the main camera (100) in response to the command signal.

Description

Specification
Video shooting system, video shooting device, and video shooting method
Technical Field
[0001] The present invention relates to the technical field of a video shooting system, a video shooting device, and a video shooting method.
Background Art
[0002] A video shooting system in which a plurality of video shooting devices are linked to each other has been proposed (see, for example, Patent Document 1).
[0003] According to the shooting system described in Patent Document 1 (hereinafter referred to as the “conventional technology”), the exposure value of a second camera is controlled based on the exposure value of a first camera, which is transmitted from the first camera. It is therefore said that the exposure of two or more cameras at remote locations can be controlled arbitrarily by operating a single camera.
[0004] Patent Document 1: Japanese Patent Laid-Open No. 2001-281717
Disclosure of the Invention
Problems to Be Solved by the Invention
[0005] However, the conventional technology has the following problems.
[0006] There are many factors other than the exposure value that determine the quality of video; even if the exposure value can be unified between two cameras, the quality of the video finally obtained is unlikely to be consistent between the two cameras. That is, with the conventional technology, it is practically difficult to shoot, using a plurality of shooting devices, high-quality video that satisfies the user.
[0007] The present invention has been made in view of, for example, the problems described above, and it is an object of the present invention to provide a video shooting system, a video shooting device, and a video shooting method capable of obtaining high-quality video.
Means for Solving the Problem
[0008] <Video Shooting System>
In order to solve the above problems, a video shooting system of the present invention includes at least one main shooting device and at least one sub shooting device which are accommodated in a network and which are for shooting video, wherein the main shooting device comprises: first shooting means for shooting the video; first control means for controlling the shooting conditions of the video in the first shooting means; control information generating means for generating, in accordance with a condition, among the shooting conditions, that defines the composition of the video to be shot, control information for causing the sub shooting device to follow; and first communication means for transmitting the generated control information to the sub shooting device via the network; and wherein the sub shooting device comprises: second shooting means for shooting the video; second communication means for receiving the transmitted control information via the network; and second control means for controlling the shooting conditions of the video in the second shooting means based on the received control information.
[0009] In the present invention, the “network” is a concept that includes, for example, wired communication networks such as a WAN (Wide Area Network) or a LAN (Local Area Network) compliant with USB (Universal Serial Bus), IEEE 1394, or the like, and wireless communication networks that access such wired communication networks via a base station, an access point, or the like; it includes at least anything that enables communication from the main shooting device to the sub shooting device according to the present invention.
[0010] In the present invention, the “main shooting device” and the “sub shooting device” each denote one of the shooting devices constituting the video shooting system according to the present invention. Here, “shooting device” is a concept that includes any device capable of shooting a subject, for example a DV (Digital Video) camera, a video camera, a camera, or a digital camera, or a digital camera or video camera mounted on a portable terminal such as a mobile phone.
[0011] According to the video shooting system of the present invention, during its operation, video is shot by the first shooting means in the main shooting device and by the second shooting means in the sub shooting device. The shooting conditions of the video in the first and second shooting means are controlled by the first and second control means, respectively.
[0012] Here, the “shooting conditions of the video” in the present invention is a concept that includes any condition having some relationship with the video to be shot. Specifically, it broadly refers to all or part of, for example: conditions relating to exposure, such as shutter speed, aperture value, or depth of field; conditions relating to shooting techniques, such as fade-in or fade-out or various kinds of image processing; and conditions relating to composition, such as zoom-in or zoom-out, rotation of the first and second shooting means in the left-right direction (hereinafter referred to as “pan” as appropriate), or rotation of the first and second shooting means in the up-down direction (hereinafter referred to as “tilt” as appropriate).
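To make the breadth of these shooting conditions concrete, they could be grouped in a structure such as the following minimal C sketch; the grouping and field names are illustrative assumptions rather than a format defined by the publication.

```c
/* Hypothetical grouping of the shooting conditions described above. */
typedef struct {
    /* exposure-related conditions */
    double shutter_speed_s;
    double aperture_f;
    double depth_of_field_m;
    /* shooting-technique conditions */
    int    fade;              /* e.g. 0 = none, 1 = fade-in, 2 = fade-out */
    /* composition-defining conditions */
    double zoom_magnification;
    double pan_deg;           /* rotation in the left-right direction */
    double tilt_deg;          /* rotation in the up-down direction    */
} ShootingConditions;
```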
[0013] In the present invention, the first and second control means control such shooting conditions, and this control may take various forms, for example full-auto control, semi-auto control, or control based on instructions given by the user via some input device. In the case of full-auto control, the shooting conditions may be controlled without user operation during shooting, for example according to a control program stored in a ROM (Read Only Memory) or some recording medium. In this case, if the main or sub shooting device is installed via fixing means such as a tripod, pan or tilt may be controlled by driving the attachment portion between the fixing means and the main or sub shooting device with a motor or the like. In the case of semi-auto control, part of the shooting conditions, such as pan or tilt, may be controlled through operation by the user.
[0014] Here, in particular, even if information such as the exposure value is shared between the main shooting device and the sub shooting device as in the conventional technology, it is difficult to obtain sufficient video quality depending on the type of subject (a person, an animal, a car, scenery, or the like) or the purpose of shooting (an athletic meet, a recital, various events, or the like).
[0015] For example, at an athletic meet or the like, when the main shooting device is shooting while following the movement of one competitor as the subject, it is difficult for a sub shooting device installed at another position to follow the shooting operation of the main shooting device and shoot that competitor from another direction. The video shot by each of the main and sub shooting devices therefore tends to be low-quality video lacking a sense of unity.
[0016] In the present invention, however, such a problem can be solved as follows. That is, during the operation of the present invention, the control information generating means of the main shooting device further generates, in accordance with the condition that defines the composition of the video to be shot, control information for causing the sub shooting device to follow, and this control information is transmitted to the sub shooting device by the first communication means via the network.
[0017] Here, the “shooting conditions that define the composition” refer, for example, to the position of the main shooting device, the direction in which the first shooting means is shooting the video (hereinafter referred to as the “shooting azimuth” as appropriate), the distance between the main shooting device and the subject, the zoom magnification, and so on; this is a concept broadly representing shooting conditions that can define, to any appreciable extent, the composition of the video shot by the first shooting means.
[0018] The control information generated in accordance with such a shooting condition refers to information for causing the sub shooting device to follow the main shooting device when an operation involving a change in composition is performed on the main shooting device side, for example a change in installation position, a pan or tilt with the installation position held (for example, fixed on a tripod or the like), or a change in zoom magnification.
[0019] Here, “causing to follow” means, for example, that when the main shooting device pans (or tilts), the sub shooting device likewise pans (or tilts) in the same or the opposite phase, or that when the zoom magnification of the main shooting device is changed to the telephoto side (zoom-in), the zoom magnification of the sub shooting device is changed to the telephoto side or to the wide-angle side (zoom-out). The control information may therefore take the form of a command signal that actively controls the operation of the sub shooting device, or, if follow-up control with respect to the main shooting device is possible on the sub shooting device side according to some control program or control algorithm, it may simply be information representing the amount of change in the shooting conditions, or the like.
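One way to read this follow-up rule is as a mapping from the main shooting device's composition change to a sub shooting device action, as in the sketch below. Whether pan/tilt is copied (same phase) or mirrored (opposite phase), and whether zoom is copied or inverted, are left open by the text, so they appear here as configuration flags; all names are hypothetical.

```c
typedef struct { double pan_deg, tilt_deg, zoom_factor; } CompositionChange;

/* Hypothetical follow rule: derive the sub device's action from the main
   device's composition change.  same_phase and mirror_zoom select between
   the alternatives mentioned in the text (same/opposite phase pan or tilt,
   and zoom-in answered by zoom-in or by zoom-out). */
CompositionChange derive_follow_action(CompositionChange main_change,
                                       int same_phase, int mirror_zoom)
{
    CompositionChange sub;
    sub.pan_deg     = same_phase ? main_change.pan_deg  : -main_change.pan_deg;
    sub.tilt_deg    = same_phase ? main_change.tilt_deg : -main_change.tilt_deg;
    sub.zoom_factor = mirror_zoom ? 1.0 / main_change.zoom_factor
                                  : main_change.zoom_factor;
    return sub;
}
```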
[0020] In the sub shooting device, the control information is received by the second communication means via the network. The second control means of the sub shooting device controls the shooting conditions of the video in the above-described second shooting means based on the received control information. The control of the shooting conditions by the second control means may take various forms depending on the form of the control information transmitted from the main shooting device. For example, as described above, when the control information is a kind of command signal, the zoom magnification may be changed, or the sub shooting device may be panned or tilted, according to the command signal. When information simply representing the shooting conditions on the main shooting device side is given as the control information, the second control means may estimate the current operation of the main shooting device and perform control such that video is shot under the shooting conditions best suited to the video being shot by the main shooting device. The basis for such estimation may be given in advance experimentally, empirically, by simulation, or the like.
[0021] Alternatively, when the sub shooting device is being operated manually by a user, the shooting conditions may be controlled by providing that user with information prompting a change in the shooting conditions. That is, as long as the shooting conditions on the sub shooting device side can be improved so as to follow the shooting conditions on the main shooting device side, compared with the case where no such control is performed, the manner in which the second control means controls the shooting conditions is not limited in any way.
[0022] For example, if the sub shooting device has some display means such as a liquid crystal display, such “information prompting a change in the second shooting condition” may be information provided via that display means. For example, when the sub shooting device is to be panned 30° to the right, the second control means may control the shooting conditions by displaying a rightward arrow mark or the like on the display means and ending the display of the mark once the user has panned the sub shooting device by 30 degrees. Alternatively, if the sub shooting device has audio output means such as a speaker, such information may be a kind of audio information.
[0023] In the video shooting system of the present invention, a control hierarchy arises among the plurality of video shooting devices while video is being shot, but the configuration of the video shooting devices themselves may be the same among all the video shooting devices constituting the video shooting system. For example, such a control hierarchy may be determined each time a video shooting system is constructed and video is shot.
[0024] As described above, according to the video shooting system of the present invention, by causing the sub shooting device to follow the shooting operation of the main shooting device, it becomes possible, for example, for the sub shooting device to shoot a wide-angle video overlooking the subject while the main shooting device shoots the subject close up. It also becomes possible, when the main shooting device shoots with frequent panning or tilting to follow the movement of the subject, to pan or tilt a sub shooting device installed at a different position in exactly the same way. That is, extremely high-quality video can be shot.
[0025] In one aspect of the video shooting system of the present invention, at least one of the first and second control means controls the shooting conditions corresponding to that at least one control means, based on a preset shooting pattern.
[0026] The “preset shooting pattern” mentioned here refers to a pattern optimized for each of various shooting purposes, such as an athletic meet, a music recital, or an outdoor event.
[0027] For example, a shooting pattern “for athletic meets” is set, assuming a subject moving over a wide range, as a pattern in which, for example, the shooting device pans frequently at predetermined timings. A shooting pattern “for music recitals” is set, assuming a fixed subject, as a pattern that, for example, makes heavy use of zoom. When the shooting conditions are controlled based on such a shooting pattern, it is also easy for the main or sub shooting device to shoot video in a full-auto mode. Even when shooting in such a full-auto mode, however, the panning range may, for example, be settable by the user in advance, and the timing of pan (or tilt) or zoom may be specifiable. Alternatively, if information abstractly specifying the frequency of occurrence of characteristic actions (pan, tilt, and the like) in the set shooting pattern, such as “frequently”, “normally”, or “sparingly”, can be input, the first or second control means may control the shooting conditions in accordance with such abstract specification information input by the user.
[0028] Such a shooting pattern may be any pattern that can be predicted, estimated, or specified in advance, by experimental or empirical techniques, simulation, or the like, as optimal for the various shooting purposes.
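Such purpose-optimized presets could be held in a small table, as in the following sketch; the concrete parameters (pan interval, pan range, zoom usage) are invented placeholders for whatever an implementation would actually store.

```c
/* Hypothetical shooting-pattern table for purpose-optimized presets. */
typedef struct {
    const char *purpose;    /* e.g. "athletic meet", "music recital"  */
    double pan_interval_s;  /* how often to pan in full-auto mode     */
    double max_pan_deg;     /* user-settable panning range            */
    double zoom_usage;      /* 0.0 (rare) .. 1.0 (heavy use of zoom)  */
} ShootingPattern;

static const ShootingPattern kPatterns[] = {
    { "athletic meet",  5.0, 90.0, 0.3 },  /* wide-moving subject: pan often */
    { "music recital", 60.0, 20.0, 0.9 },  /* fixed subject: rely on zoom    */
    { "outdoor event", 15.0, 45.0, 0.5 },
};
```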
[0029] According to this aspect, since the shooting conditions are controlled based on a preset shooting pattern, it becomes possible to shoot even higher-quality video using a plurality of video shooting devices while causing the sub shooting device to follow the main shooting device.
[0030] In another aspect of the video shooting system of the present invention, at least one of the first and second control means controls the shooting conditions based on, as the preset shooting pattern, a shooting pattern corresponding to the relative positional relationship between the main shooting device and the sub shooting device.
[0031] The “relative positional relationship” in this aspect may be a positional relationship in the strict sense, but in one of its simplest forms it may be an abstract distance relationship such as “near” or “far”. If a consensus has been established between the main and sub shooting devices that the subject is common to both and that the shooting azimuths of the first and second shooting means point toward that subject, it may also be the angular difference with respect to the subject. The angular difference in such a case may likewise be defined in abstract terms, for example whether the devices face each other or are adjacent to each other.
[0032] According to this aspect, the shooting conditions in each shooting device are controlled based on the shooting pattern corresponding to such a relative positional relationship. For example, when the positional relationship of the two devices is “facing each other”, control may be performed such that when the main shooting device pans to the right, the sub shooting device pans to the left so that the same subject is shot. Alternatively, when “the difference in distance from the subject is large”, control may be performed such that zoom is compensated within an appropriate range to adjust the size of the subject in the video.
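The two examples in this paragraph could be implemented roughly as follows; the "facing each other" flag and the clamping range for zoom are assumptions used only to make the idea concrete.

```c
/* Hypothetical control based on the relative positional relationship:
   mirror the pan when the cameras face each other, and compensate zoom
   when the distances to the subject differ. */
double follow_pan(double main_pan_deg, int facing_each_other)
{
    return facing_each_other ? -main_pan_deg : main_pan_deg;
}

double compensated_zoom(double main_zoom, double main_dist_m, double sub_dist_m)
{
    /* Scale zoom by the distance ratio so the subject appears a similar
       size in both videos; clamp to an assumed admissible range. */
    double zoom = main_zoom * (sub_dist_m / main_dist_m);
    if (zoom < 1.0)  zoom = 1.0;
    if (zoom > 10.0) zoom = 10.0;
    return zoom;
}
```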
[0033] Therefore, according to this aspect, it is possible to shoot high-quality video reflecting the positional relationship between the main shooting device and the sub shooting device.
[0034] In another aspect of the video shooting system of the present invention, the main shooting device further comprises first acquisition means for acquiring first position information including at least one of (i) the distance to the subject, (ii) the current position of the main shooting device, and (iii) the shooting azimuth of the first shooting means.
[0035] According to this aspect, the first position information acquired by the first acquisition means makes it possible to attach various kinds of supplementary information to the video shot by the main shooting device, so the quality of the video can be improved further.
[0036] Here, the “distance to the subject” in the first position information need not be specified strictly as a distance value. For example, the distance to the subject may be acquired with an ambiguity on the order of “far” or “near” by ordinary standards. However, if the distance to the subject is acquired accurately by a known laser ranging technique or a similar ranging technique, the quality of the video can be improved all the more.
[0037] The “current position” in the first position information may be an accurate current position consisting of latitude, longitude, and altitude information, or it may merely identify a certain area containing the current position. Such a current position may be acquired using a positioning technique such as GPS (Global Positioning System), or may be identified by a base station via a wireless network or the like.
The “shooting azimuth” in the first position information may be a simple direction such as north, south, east, or west, or an accurate absolute azimuth acquired via a geomagnetic sensor or the like.
[0038] For example, when all three of these kinds of elements are acquired accurately (that is, with a certain degree of reliability), the quality of the video shot by the main shooting device improves remarkably.
[0039] In one aspect of the video shooting system of the present invention in which the first position information is acquired in the main shooting device, the sub shooting device further comprises second acquisition means for acquiring second position information including at least one of (i) the distance to the subject, (ii) the current position of the sub shooting device, and (iii) the shooting azimuth of the second shooting means.
[0040] According to this aspect, second position information equivalent to the above-described first position information is also acquired in the sub shooting device, so the quality of the video can be improved.
[0041] In one aspect of the video shooting system of the present invention in which the first position information and the second position information are acquired, the first communication means further transmits the acquired first position information to the sub shooting device via the network, the second communication means further receives the transmitted first position information, and the second control means controls the shooting conditions based on the acquired second position information and the received first position information.
[0042] According to this aspect, in the sub shooting device, the shooting conditions are controlled based on the first position information of the main shooting device, so extremely high-quality video with a mutual sense of unity between the main and sub shooting devices can be shot.
[0043] Furthermore, if it is known in advance that the main shooting device is capturing the desired subject, then even if the subject is lost on the sub shooting device side, the subject can quickly be found and captured again, which is preferable.
[0044] When a plurality of shooting patterns corresponding to shooting purposes, subject types, or the relative positional relationship between the two devices are prepared in advance, the second control means may control the shooting conditions such that, based on the transmitted first position information and the acquired second position information, the shooting pattern best suited to the current positional relationship between the two devices is selected automatically and shooting is performed.
[0045] In another aspect of the video shooting system of the present invention in which the first position information and the second position information are acquired, the second communication means transmits the acquired second position information to the main shooting device via the network, the first communication means receives the transmitted second position information, and the first control means controls the shooting conditions based on the acquired first position information and the received second position information.
[0046] According to this aspect, the position information of the sub shooting device can be acquired on the main shooting device side, so even higher-quality video can be shot. In this case, it also becomes easy for the main shooting device side to transmit control information that takes the current position and the like of the sub shooting device into account. For example, if the sub shooting device is pointing in a direction unrelated to the video to be shot, a command signal for pointing it correctly toward the subject, or control information specifying the amount of pan or tilt, may be transmitted.
[0047] Furthermore, when each of the main and sub shooting devices holds the position information of the other, the mutual positional relationship between the two can be grasped accurately, so even finer control of the shooting conditions becomes possible and the quality of the shot video improves greatly.
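When each device holds the other's position information, the pan the sub shooting device needs in order to point at the subject can be computed geometrically. The sketch below assumes a flat local coordinate system and a subject position already derived (for example, from the main device's position, azimuth, and measured distance); all names are hypothetical.

```c
#include <math.h>

typedef struct { double x, y; } Point2D;  /* assumed flat local coordinates */

/* Hypothetical computation: given the subject position and the sub camera's
   own position and current azimuth, return the pan angle still required so
   that the sub camera points at the subject. */
double required_pan_deg(Point2D subject, Point2D sub_cam, double sub_azimuth_deg)
{
    const double PI = 3.14159265358979323846;
    double bearing = atan2(subject.y - sub_cam.y, subject.x - sub_cam.x)
                     * 180.0 / PI;          /* bearing toward the subject  */
    double pan = bearing - sub_azimuth_deg; /* rotation still required     */
    while (pan > 180.0)   pan -= 360.0;     /* normalize to (-180, 180]    */
    while (pan <= -180.0) pan += 360.0;
    return pan;
}
```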
[0048] In another aspect of the video shooting system of the present invention, the main and sub shooting devices each further comprise authentication means for performing mutual authentication via the first and second communication means.
[0049] In the video shooting system of the present invention, control information is transmitted at least from the main shooting device to the sub shooting device. For example, when these shooting devices, installed at locations remote from each other, are connected by a wireless communication network, interference may occur between the shooting device to which the control information should be transmitted and other shooting devices. According to this aspect, however, since authentication is performed in advance between the main and sub shooting devices via the authentication means, the possibility of such interference is remarkably reduced, and highly reliable data exchange and video shooting become possible.
[0050] The form of the authentication means is not limited in any way as long as mutual authentication between the main and sub shooting devices is possible; for example, authentication may be performed by exchanging in advance identification information such as IDs unique to the shooting devices.
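The ID-exchange authentication could look roughly like the following; send_id and receive_id are placeholder transport functions, since the publication only states that the devices exchange IDs stored in advance.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical mutual authentication by exchanging stored ID numbers.
   The transport functions are assumed to exist elsewhere. */
extern void send_id(const char *own_id);
extern void receive_id(char *peer_id, int max_len);

bool authenticate_peer(const char *own_id, const char *expected_peer_id)
{
    char peer_id[32];
    send_id(own_id);                      /* announce our own ID           */
    receive_id(peer_id, sizeof peer_id);  /* receive the counterpart's ID  */
    /* Accept only the counterpart we were paired with in advance. */
    return strcmp(peer_id, expected_peer_id) == 0;
}
```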
[0051] In another aspect of the video shooting system of the present invention, at least one of the main and sub shooting devices further comprises audio information acquisition means for acquiring audio information corresponding to the video shot by that at least one device.
[0052] According to this aspect, at least one of the main and sub shooting devices can acquire audio information corresponding to the video it shoots, so the quality of the video is improved further.
<First Video Shooting Device>
In order to solve the above problems, a first video shooting device of the present invention is a video shooting device which is accommodated in a network and which shoots video together with another video shooting device accommodated in the network, the device comprising: shooting means for shooting the video; control means for controlling the shooting conditions of the video in the shooting means; control information generating means for generating, in accordance with a condition, among the shooting conditions, that defines the composition of the video to be shot, control information for causing the other video shooting device to follow; and communication means for transmitting the generated control information to the other video shooting device via the network.
[0053] According to the first video shooting device of the present invention, during its operation, each of these means can realize the same effects as the main shooting device in the video shooting system described above, so high-quality video can be shot.
<Second Video Shooting Device>
In order to solve the above problems, a second video shooting device of the present invention is a video shooting device which is accommodated in a network and which shoots video together with another video shooting device accommodated in the network, the device comprising: shooting means for shooting the video; communication means for receiving, via the network, control information which is generated in the other video shooting device in accordance with a condition defining the composition of the video to be shot and which is transmitted via the network; and control means for controlling the shooting conditions of the video in the shooting means based on the received control information.
[0054] According to the second video shooting device of the present invention, during its operation, each of these means can realize the same effects as the sub shooting device in the video shooting system described above, so high-quality video can be shot.
<Video Shooting Method>
In order to solve the above problems, a video shooting method of the present invention is a method for shooting video in mutual coordination in a video shooting system which is accommodated in a network and which includes at least one main shooting device and at least one sub shooting device for shooting the video, the method comprising, in the main shooting device: (i) a first shooting step of shooting the video; (ii) a first control step of controlling the shooting conditions of the video in the first shooting step; (iii) a control information generating step of generating, in accordance with a condition, among the shooting conditions, that defines the composition of the video to be shot, control information for causing the sub shooting device to follow; and (iv) a transmitting step of transmitting the generated control information to the sub shooting device via the network; and, in the sub shooting device: (i) a second shooting step of shooting the video; (ii) a receiving step of receiving the transmitted control information via the network; and (iii) a second control step of controlling the shooting conditions of the video in the second shooting step based on the received control information.
[0055] According to the video shooting method of the present invention, high-quality video can be shot, as with the video shooting system described above, through the operations in the steps corresponding to the respective means of that system.
[0056] As explained above, the video shooting system comprises the first shooting means, the first control means, the control information generating means, the first communication means, the second shooting means, the second communication means, and the second control means, and can therefore shoot high-quality video. The first video shooting device comprises the shooting means, the control means, the control information generating means, and the communication means, and can therefore shoot high-quality video. The second video shooting device comprises the shooting means, the communication means, and the control means, and can therefore shoot high-quality video. The video shooting method comprises the first shooting step, the first control step, the control information generating step, the transmitting step, the second shooting step, the receiving step, and the second control step, and therefore makes it possible to shoot high-quality video.
[0057] These effects and other advantages of the present invention will become apparent from the embodiments described below.
Brief Description of the Drawings
[0058]
FIG. 1 is a conceptual diagram of a video shooting system according to an embodiment of the present invention.
FIG. 2 is a block diagram of the main camera in the video shooting system of FIG. 1.
FIG. 3 is a flowchart of the overall operation of the video shooting system of FIG. 1.
FIG. 4 is a flowchart of the shooting process in the flowchart of FIG. 3.
FIG. 5 is a schematic diagram showing the positional relationship between the subject and each camera in the video shooting system of FIG. 1.
FIG. 6 is a schematic diagram of the recording process in the flowchart of FIG. 3.
FIG. 7 is a flowchart of the video data processing in the recording process of FIG. 6.
FIG. 8 is a flowchart of the audio data processing in the recording process of FIG. 6.
FIG. 9 is a flowchart of the additional information processing in the recording process of FIG. 6.
FIG. 10 is a schematic diagram of the recording format in the recording process of FIG. 6.
FIG. 11 is a schematic diagram of the header and additional information in FIG. 10.
FIG. 12 is a schematic diagram of sub camera follow-up control according to a modification of the present invention.
Explanation of Reference Numerals
[0059] 10: video shooting system; 20: subject; 30: network; 40: tripod; 100: main camera; 100a: user; 110: control unit; 111: CPU; 112: ROM; 113: RAM; 120: sound collection unit; 130: imaging unit; 140: camera rotation unit; 150: lens driving unit; 160: communication unit; 170: position information acquisition unit; 171: distance measurement unit; 172: position detection unit; 173: azimuth detection unit; 180: recording unit; 190: input unit; 200: sub camera.
発明を実施するための最良の形態  BEST MODE FOR CARRYING OUT THE INVENTION
[0060] 以下、本発明を実施するための最良の形態について実施例毎に順に図面に基づ いて説明する。 Hereinafter, the best mode for carrying out the present invention will be described in each embodiment in order with reference to the drawings.
[0061] 以下、本発明の好適な実施例について、図面を参照して説明する。  Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.
[0062] <実施例の構成 >  <Configuration of Example>
始めに、図 1を参照して、本発明の実施例に係る映像撮影システムの構成につい て説明する。ここに、図 1は、映像撮影システム 10の概念図である。  First, with reference to FIG. 1, the configuration of a video shooting system according to an embodiment of the present invention will be described. FIG. 1 is a conceptual diagram of the video photographing system 10.
[0063] 図 1において、映像撮影システム 10は、メインカメラ 100及びサブカメラ 200がネット ワーク 30に収容されてなり、メインカメラ 100とサブカメラ 200が相互に連動して被写 体 20を撮影するように構成されて 、る。  In FIG. 1, the video imaging system 10 includes a main camera 100 and a sub camera 200 housed in a network 30, and the main camera 100 and the sub camera 200 shoot the object 20 in conjunction with each other. It is structured as follows.
[0064] メインカメラ 100及びサブカメラ 200は夫々三脚 40によって固定されて!、る。各カメ ラは、三脚 40に対し、アタッチメント 41を介して固定されており、このアタッチメント 41 は、上下左右方向へ自在に回動することが可能に構成されている。従って、各カメラ も、三脚 40に固定された状態で 3次元的に回動することが可能に構成されている。 [0065] 尚、メインカメラ 100は、ユーザ 100aによって、予め、被写体 20を含んだ所望の構 図が得られるように、その設置状態が調整されている。本実施例においては、この設 置状態を適宜「標準状態」と称することとする。 [0064] Each of the main camera 100 and the sub camera 200 is fixed by a tripod 40 !. Each camera is fixed to a tripod 40 via an attachment 41, and the attachment 41 is configured to be able to freely rotate in the vertical and horizontal directions. Therefore, each camera can also be rotated three-dimensionally while being fixed to the tripod 40. Note that the installation state of the main camera 100 is adjusted in advance by the user 100a so that a desired composition including the subject 20 can be obtained. In this embodiment, this installation state is referred to as “standard state” as appropriate.
[0066] ネットワーク 30は、移動体通信ネットワーク及び有線通信ネットワークを含む、本発 明に係る「ネットワーク」の一例であり、例えば、 ADSL回線、光ファイノく、又は電話回 線などの各種回線、及びそれらに対応する基地局やアクセスポイントなどを含んでな る。  [0066] The network 30 is an example of a "network" according to the present invention including a mobile communication network and a wired communication network. For example, the network 30 includes various lines such as an ADSL line, optical fiber line, or telephone line, and It includes base stations and access points corresponding to them.
[0067] 次に、図 2を参照して、メインカメラ 100の詳細な構成について説明する。ここに、図 2は、メインカメラ 100のブロック図である。尚、本実施例において、メインカメラ 100及 びサブカメラ 200の内部構成は相互に同一であり、従って、図 2においては代表的に メインカメラ 100の構成として説明するものとする。サブカメラ 200と対応する各部の 符号は、図 2において括弧内に示すものとする。  Next, the detailed configuration of the main camera 100 will be described with reference to FIG. FIG. 2 is a block diagram of the main camera 100. In the present embodiment, the internal configurations of the main camera 100 and the sub camera 200 are the same, and therefore, the configuration of the main camera 100 will be described as a representative in FIG. The reference numerals of the parts corresponding to the sub camera 200 are shown in parentheses in FIG.
[0068] 図 2において、メインカメラ 100は、制御部 110、集音部 120、撮像部 130、カメラ回 動部 140、レンズ駆動部 150、通信部 160、位置情報取得部 170、記録部 180、及 び入力部 190を備える。  In FIG. 2, the main camera 100 includes a control unit 110, a sound collection unit 120, an imaging unit 130, a camera rotation unit 140, a lens driving unit 150, a communication unit 160, a position information acquisition unit 170, a recording unit 180, And an input unit 190.
[0069] 制御部 110は、 CPU (Central Processing Unit) 111、 ROM112、及び RAM (Ran dom Access Memory)丄 13を面 る。  [0069] The control unit 110 faces a CPU (Central Processing Unit) 111, a ROM 112, and a RAM (Random Access Memory) 13.
[0070] CPUl l lは、メインカメラ 100の動作を制御する制御ユニットである。  CPUl l l is a control unit that controls the operation of the main camera 100.
[0071] ROM111は、不揮発性のメモリであり、メインカメラ 100に予め付与されている固有 の ID番号、及び CPUl l lが実行する、後述する映像撮影プログラムが格納されてい る。尚、 CPUl l lは、係る映像撮影プログラムを実行することによって、本発明に係る 「第 1制御手段」、「制御情報生成手段」、及び「認証手段」の夫々一例として機能す るように構成されている。  The ROM 111 is a non-volatile memory, and stores a unique ID number assigned in advance to the main camera 100 and a video shooting program to be described later that is executed by the CPU 11. The CPUll is configured to function as an example of each of the “first control unit”, “control information generation unit”, and “authentication unit” according to the present invention by executing the video shooting program. ing.
[0072] 尚、サブカメラ 200における ROM212にも、 ROM112と同様に ID番号及び映像 撮影プログラムが格納されており、 CPU211は、係るプログラムを実行することによつ て、本発明に係る「第 2制御手段」、及び「認証手段」の夫々一例として機能するよう に構成されている。  Note that the ROM 212 in the sub camera 200 also stores an ID number and a video shooting program in the same manner as the ROM 112, and the CPU 211 executes the “second” according to the present invention by executing the program. Each of the “control means” and the “authentication means” is configured to function as an example.
[0073] RAMI 13は、揮発性のメモリであり、 CPUl l lが映像撮影プログラムを実行する 過程で生じる様々なデータを一時的に格納するためのバッファとして機能するように 構成されている。 [0073] The RAMI 13 is a volatile memory, and the CPU ll executes a video shooting program. It is configured to function as a buffer for temporarily storing various data generated in the process.
[0074] 集音部 120は、メインカメラ 100の周囲における音声を取得するための、図示略の マイクロフォン、及び係るマイクロフォンによって取得された音声信号を予め定められ た形式の音声情報に変換するための図示略の音声変換部など力 なり、本発明に 係る「音声情報取得手段」の一例として機能するように構成されて 、る。  [0074] The sound collection unit 120 is used to acquire sound around the main camera 100, and a microphone (not shown) and a sound signal acquired by the microphone are converted into sound information of a predetermined format. It is configured to function as an example of the “voice information acquisition unit” according to the present invention, with the help of a voice conversion unit (not shown).
[0075] 撮像部 130は、図示略のカメラレンズによって結像された像を画素毎に光電変換 する図示略の CCD (Charge Coupled Diode)などを有し、カメラレンズによって集光さ れて形成された視野像を結像し、結像された視野像を光電変換して結像信号として 記録部 180に記録することが可能に構成された、本発明に係る「第 1撮影手段」の一 例である。尚、サブカメラ 200における撮像部 230は、本発明に係る「第 2撮影手段」 の一例である。  The imaging unit 130 has a CCD (Charge Coupled Diode) (not shown) that photoelectrically converts an image formed by a camera lens (not shown) for each pixel, and is formed by being condensed by the camera lens. An example of the “first photographing means” according to the present invention, which is configured such that a formed field image is formed, and the formed field image is photoelectrically converted and recorded as an imaging signal in the recording unit 180. It is. The imaging unit 230 in the sub camera 200 is an example of the “second imaging unit” according to the present invention.
[0076] カメラ回動部 140は、メインカメラ 100をパン及びティルトさせるための図示略のモ ータを含む駆動機構である。カメラ回動部 140は、 CPU111からの、回転角度及び 回転速度をパラメータとする指示によって、三脚 40のアタッチメント 41を 3次元的に 回動させることが可能に構成されて 、る。  The camera rotation unit 140 is a drive mechanism including a motor (not shown) for panning and tilting the main camera 100. The camera rotation unit 140 is configured to be able to three-dimensionally rotate the attachment 41 of the tripod 40 according to an instruction from the CPU 111 using the rotation angle and the rotation speed as parameters.
[0077] レンズ駆動部 150は、フォーカス、及びズームを制御するためにレンズを駆動する 機構である。レンズ駆動部 150は、 CPU111からの、ズーム速度及びズーム距離を ノ ラメータとする指示によってレンズを駆動することが可能に構成されている。  The lens driving unit 150 is a mechanism that drives a lens to control focus and zoom. The lens driving unit 150 is configured to be able to drive the lens according to an instruction from the CPU 111 with the zoom speed and zoom distance as parameters.
[0078] 通信部 160は、図示略のアンテナを介してネットワーク 30に接続し、サブカメラ 200 との間でデータ通信の送受信を行うことが可能に構成された、本発明に係る「第 1通 信手段」の一例である。また、サブカメラ 200における通信部 260は、本発明に係る「 第 2通信手段」の一例として機能するように構成されて 、る。  [0078] The communication unit 160 is connected to the network 30 via an antenna (not shown), and is configured to be capable of transmitting and receiving data communication with the sub camera 200. It is an example of “communication means”. In addition, the communication unit 260 in the sub camera 200 is configured to function as an example of the “second communication unit” according to the present invention.
[0079] 位置情報取得部 170は、距離測定部 171、位置検出部 172、及び方位検出部 17 3を備え、本発明に係る「第 1位置情報」の一例を取得可能に構成された、本発明に 係る「第 1取得手段」の一例である。尚、サブカメラにおける位置情報取得部 270は、 本発明に係る「第 2取得手段」の一例として機能する。  [0079] The position information acquisition unit 170 includes a distance measurement unit 171, a position detection unit 172, and an orientation detection unit 173, and is configured to be able to acquire an example of "first position information" according to the present invention. It is an example of the “first acquisition means” according to the invention. The position information acquisition unit 270 in the sub camera functions as an example of the “second acquisition unit” according to the present invention.
[0080] 距離測定部 171は、図示略の赤外線センサなどを含んでなり、メインカメラ 100と被 写体 20との距離を測定することが可能に構成されている。 [0080] The distance measuring unit 171 includes an infrared sensor (not shown) and the like, The distance from the subject 20 can be measured.
[0081] 位置検出部 172は、 GPS又は準天衛星などを利用した公知の位置検出システムで あり、メインカメラ 100の現在位置を特定可能に構成されている。  The position detection unit 172 is a known position detection system that uses GPS or a quasi-sky satellite, and is configured to be able to specify the current position of the main camera 100.
[0082] 方位検出部 173は、地磁気センサなど力もなり、メインカメラ 100の絶対方位を特定 可能に構成されている。  The direction detection unit 173 is configured to be able to specify the absolute direction of the main camera 100 by using a force such as a geomagnetic sensor.
[0083] 記録部 180は、映像及び音声データ、並びに付加情報などを記録するための記録 媒体である。尚、付加情報とは、例えば、被写体 20までの距離、メインカメラ 100の現 在位置、メインカメラの方位、ズームサイズ、パン角度 (方向)、及びティルト角度 (方 向)などを表す情報を指す。また、映像データは、例えば、 MPEG2、 MPEG4、又は H. 264などのデータ圧縮形式によって圧縮されたデータとして、音声データは、リニ ァ PCM、又は AC— 3などの形式に準拠したデータとして、夫々記録される。  The recording unit 180 is a recording medium for recording video and audio data, additional information, and the like. The additional information refers to information indicating, for example, the distance to the subject 20, the current position of the main camera 100, the orientation of the main camera, the zoom size, the pan angle (direction), and the tilt angle (direction). . Also, video data is, for example, data compressed by a data compression format such as MPEG2, MPEG4, or H.264, and audio data is data that conforms to a format such as linear PCM or AC-3, respectively. To be recorded.
[0084] The input unit 190 is configured so that the user 100a can give various instructions to the CPU 111, and comprises part or all of a touch panel device, operation buttons, an operation dial, an operation lever (knob), or the like.
[0085] The main camera 100 and the sub camera 200 each include a display unit composed of, for example, a liquid crystal display so that the user 100a can check the video being shot as appropriate; these display units are omitted from FIG. 2 to simplify the explanation.
<Operation of the embodiment>
Next, the operation of the video shooting system 10 having the above configuration will be described.

<Overall operation>
First, the overall operation of the video shooting system 10 will be described with reference to FIG. 3. FIG. 3 is a flowchart of the overall operation of the video shooting system 10.
[0086] In FIG. 3, when the main camera 100 and the sub camera 200 are powered on, mutual authentication is performed first (step A10). At this time, the CPU 111 and the CPU 211 exchange the ID numbers of the main camera 100 and the sub camera 200, stored in the ROM 112 and the ROM 212 respectively, via the communication unit 160 and the communication unit 260. Thereafter, whenever data is transmitted from the main camera 100 to the sub camera 200, or from the sub camera 200 to the main camera 100, the data is sent with the sender's own ID number attached.

[0087] Next, the CPU 111 and the CPU 211 each determine whether or not the mutual authentication has completed successfully (step A11). If the mutual authentication does not succeed (step A11: NO), the CPU 111 and the CPU 211 each repeat the authentication process; when the authentication succeeds (step A11: YES), the CPU 111 and the CPU 211 set the main camera 100 and the sub camera 200, respectively, to an idling mode (step A12). Here, the idling mode is a mode of waiting for instructions from the user 100a, such as shooting, playback, or various settings. In this embodiment, by being set to this idling mode, the sub camera 200 is controlled so as to operate following the main camera 100.
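As a rough illustration of the ID exchange in steps A10 and A11, the Python sketch below models two cameras that swap ID numbers and then tag every subsequent message with the sender's ID. This is a minimal sketch under assumed names: the class and methods (Camera, pair_with, send) are illustrative and do not appear in the patent.

```python
class Camera:
    """Minimal model of the mutual authentication of step A10 (illustrative names)."""

    def __init__(self, device_id: str):
        self.device_id = device_id   # the ID number held in ROM 112 / ROM 212
        self.peer_id = None          # learned during mutual authentication

    def pair_with(self, other: "Camera") -> bool:
        # Exchange ID numbers over the communication units (step A10).
        self.peer_id, other.peer_id = other.device_id, self.device_id
        # Authentication succeeds once both sides know the peer's ID (step A11).
        return self.peer_id is not None and other.peer_id is not None

    def send(self, payload: dict) -> dict:
        # Every subsequent transmission carries the sender's own ID number.
        return {"sender_id": self.device_id, **payload}


main_cam, sub_cam = Camera("MAIN-100"), Camera("SUB-200")
assert main_cam.pair_with(sub_cam)   # step A11: YES -> both enter the idling mode
print(main_cam.send({"command": "pan", "angle_deg": 30}))
```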
[0088] In this idling mode, the CPU 111 determines whether or not there is an operation input from the user 100a via the input unit 190 (step A13). The determination in step A13 is executed continuously based on a fixed clock. Such an operation input is, for example, an instruction to start a shooting mode in which a subject is shot and the shot video is recorded, an instruction to start a playback mode in which previously shot video is played back, or an instruction to start a setting mode for changing the electrical or mechanical settings of the main camera 100.
[0089] If no operation input is detected (step A13: NO), the CPU 111 continues the idling mode; if some operation input is detected (step A13: YES), the CPU 111 determines whether or not the operation input is an instruction to start the shooting mode (step A14). When the start of the shooting mode is instructed (step A14: YES), the CPU 111 executes shooting processing and recording processing (step A15). The shooting processing and recording processing will be described later.
[0090] On the other hand, if the operation input from the user 100a is not an instruction to start the shooting mode (step A14: NO), the CPU 111 next determines whether or not the operation input from the user 100a is an instruction to stop the main camera 100 (step A16). If it is a stop instruction (step A16: YES), the CPU 111 turns off the power of the main camera 100 and stops the main camera 100.
[0091] If the operation input from the user 100a is not a stop instruction (step A16: NO), the CPU 111 further determines whether or not it is some other operation input (step A17). The other operation inputs referred to here are the aforementioned instructions to start the playback mode or the setting mode.
[0092] If it is not one of these other operation inputs (step A17: NO), the CPU 111 treats it as a detection error and returns the processing to step A13; if it is another operation input (step A17: YES), the CPU 111 performs the control corresponding to that operation input (step A18).
[0093] The CPU 111 determines whether or not the control corresponding to the other operation input has ended (step A19). If the control has not ended (step A19: NO), the CPU 111 continues the control; when the control has ended, the processing returns to step A13 and the main camera 100 is controlled to the idling mode. The control corresponding to such other operation inputs is equivalent to the control of video playback or of various settings in an ordinary video camera, so a detailed description is omitted in this embodiment.
<Shooting processing>
Next, the details of the shooting processing according to this embodiment will be described with reference to FIG. 4. FIG. 4 is a flowchart of the shooting processing. This shooting processing is realized by the CPU 111 and the CPU 211 executing the video shooting programs (an example of the computer program according to the present invention) stored in the ROM 112 and the ROM 212, respectively. Accordingly, by the time this shooting processing starts, the CPU 111 has already transmitted to the CPU 211, via the communication unit 160, a command signal instructing the start of the shooting processing; based on this command signal, the CPU 211 has read out the video shooting program stored in the ROM 212 and stands by waiting for the next instruction from the CPU 111.
[0094] In FIG. 4, when the shooting processing starts, position information is first acquired in each of the main camera 100 and the sub camera 200 (step B10).
[0095] In step B10, the CPU 111 first instructs the position information acquisition unit 170 to acquire position information about the main camera 100. Based on this instruction, the distance measurement unit 171, the position detection unit 172, and the orientation detection unit 173 acquire three kinds of information representing, respectively, the distance between the main camera 100 and the subject 20, the current position of the main camera 100, and the shooting orientation of the main camera 100, and this information is temporarily stored in the RAM 113.

[0096] Meanwhile, in parallel with instructing the position information acquisition unit 170 to acquire position information about the main camera 100, the CPU 111 transmits to the sub camera 200, via the communication unit 160, a command signal requesting acquisition of position information in the sub camera 200.
[0097] However, since it is unknown at this point whether the sub camera 200 has been set up so as to suitably capture the subject 20, the CPU 111 first transmits a command signal requesting acquisition of provisional position information about the sub camera 200.
[0098] In the sub camera 200, based on this command signal, the CPU 211 instructs the position information acquisition unit 270 to acquire position information about the sub camera 200. Based on this instruction, the position detection unit 272 and the orientation detection unit 273 acquire two kinds of information representing, respectively, the current position of the sub camera 200 and the shooting orientation of the sub camera 200, and this information is temporarily stored in the RAM 213. The acquired position information about the sub camera 200 is then transmitted to the main camera 100 via the communication unit 260. The transmitted position information about the sub camera 200 is temporarily stored in the RAM 113 of the main camera 100 as provisional position information about the sub camera 200. At this point, the processing of step B10 ends.
[0099] Next, the CPU 111 sets the sub camera 200 to the state corresponding to the standard state of the main camera 100, that is, to the standard state of the sub camera 200 (step B11).
[0100] Specifically, upon acquiring the provisional position information about the sub camera 200, the CPU 111 compares the position information of the main camera 100 stored in the RAM 113 with this provisional position information and determines whether the sub camera 200 is accurately facing the direction of the subject 20.
[0101] If the sub camera 200 is not facing the direction of the subject 20, the CPU 111 newly generates a command signal for panning and tilting the sub camera 200 so that it faces the direction of the subject 20, and transmits it to the sub camera 200 via the communication unit 160. In the sub camera 200 that has received this command signal, the CPU 211 controls the camera rotation unit 240 based on the command signal and pans or tilts the sub camera 200 as instructed.
[0102] When the sub camera 200 has panned or tilted as instructed, the CPU 211 in the sub camera 200 controls the position information acquisition unit 270 to acquire the position information of the sub camera 200 again. At this time, in addition to the two kinds of information described above, information on the distance between the subject 20 and the sub camera 200 is measured by the distance measurement unit 271 and transmitted to the main camera 100 via the communication unit 260 as the true position information of the sub camera 200.
[0103] On the other hand, if the sub camera 200 was already facing the direction of the subject 20, the CPU 111 generates a command signal requesting only the acquisition of information representing the distance between the subject 20 and the sub camera 200, and transmits it to the sub camera 200 via the communication unit 160. In response to this command signal, the sub camera 200 acquires the distance information in the same manner as described above and transmits it to the main camera 100 as information supplementing the provisional position information.
[0104] When the information representing the distance between the sub camera 200 and the subject 20 has been acquired by either of the above procedures, the CPU 111 further detects, from this distance information, the difference between the distance from the subject 20 to the main camera 100 and the distance from the subject 20 to the sub camera 200 (hereinafter referred to as "Δd" as appropriate). From the detected Δd, the CPU 111 generates a command signal for zoom compensation on the sub camera 200 side such that the composition of the video in the sub camera 200 becomes equivalent to that of the main camera 100 in the standard state, and transmits it to the sub camera 200. The value of Δd is temporarily stored in the RAM 113.
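As a rough sketch of this zoom compensation, assume a simple pinhole model in which the subject's apparent size scales with focal length over distance; the proportional rule below is an illustrative assumption, since the patent does not specify the actual compensation formula.

```python
def sub_zoom_for_equal_composition(d_main_m: float, d_sub_m: float,
                                   f_main_mm: float) -> float:
    """Focal length the sub camera would need so that the subject appears the
    same size as seen by the main camera in its standard state.

    Assumes a pinhole model (apparent size ~ focal length / distance); this
    proportional rule is an illustrative assumption, not the patent's formula.
    """
    return f_main_mm * (d_sub_m / d_main_m)


d_main, d_sub = 8.0, 12.0                 # metres to the subject 20 (example values)
delta_d = abs(d_main - d_sub)             # the difference stored as delta-d
print(f"delta-d = {delta_d:.1f} m")
print(f"sub camera focal length ~ "
      f"{sub_zoom_for_equal_composition(d_main, d_sub, 35.0):.1f} mm")
```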
[0105] In the sub camera 200 that has received this command signal, the lens driving unit 250 varies the zoom distance based on the command signal and further adjusts the focus, thereby capturing the subject 20. In this way, the standard state of the sub camera 200 is set. Part of the processing of step B11 is not necessarily required when the composition on the sub camera 200 side has been set in advance so as to capture the subject 20. In that case, in the processing of step B10, information corresponding to the true position information may be transmitted from the sub camera 200, and only the zoom compensation in the sub camera 200 may be performed. Furthermore, such zoom compensation does not necessarily have to be performed at all.
[0106] When the main camera 100 and the sub camera 200 have each been set to the standard state, the CPU 111 of the main camera 100 acquires the angle between the main camera 100 and the sub camera 200 (hereinafter referred to as "Δθ" as appropriate) based on the position information of both cameras stored in the RAM 113 (step B12). The acquired value of Δθ is temporarily stored in the RAM 113.
[0107] Here, the details of Δθ and of the aforementioned Δd will be described with reference to FIG. 5. FIG. 5 is a schematic diagram showing the positional relationship between the subject 20, the main camera 100, and the sub camera 200.
[0108] In FIG. 5, the distance between the main camera 100 and the subject 20 and the distance between the sub camera 200 and the subject 20 are denoted "d(main)" and "d(sub)", respectively. Δd is therefore defined as the absolute value of "d(main) − d(sub)". In FIG. 5, the distance between each camera and the subject 20 is measured from the lens end face of each camera; however, the definition of the distance may be decided freely, as long as it does not depart from the purpose of representing the distance between each camera and the subject and is defined on a basis common to the main camera 100 and the sub camera 200.
[0109] Meanwhile, in FIG. 5, Δθ denotes the smaller of the angles formed by the line segment connecting the subject 20 and the main camera 100 and the line segment connecting the subject 20 and the sub camera 200.
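Given the camera and subject positions obtained in step B10, Δθ can be computed as the angle between the two subject-to-camera vectors. The sketch below uses the standard dot-product formula with illustrative 2-D coordinates; the patent does not prescribe a particular computation.

```python
import math

def delta_theta_deg(subject, cam_main, cam_sub) -> float:
    """Smaller angle (degrees) between the subject->main and subject->sub segments."""
    v1 = (cam_main[0] - subject[0], cam_main[1] - subject[1])
    v2 = (cam_sub[0] - subject[0], cam_sub[1] - subject[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # acos returns a value in [0, 180] degrees, which is exactly the smaller angle.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


# Illustrative positions in metres: subject at the origin, cameras around it.
print(delta_theta_deg((0, 0), (8, 0), (-6, 10)))   # ~121 deg -> first range
```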
[0110] Returning to FIG. 4, the CPU 111 determines whether or not Δθ is within the range "120° ≤ Δθ ≤ 180°" (hereinafter referred to as the "first range" as appropriate) (step B13). If Δθ is within the first range (step B13: YES), the CPU 111 controls the imaging unit 130 to perform shooting in the first shooting mode (step B14). The first shooting mode will be described later.
[0111] The CPU 111 determines whether or not shooting of the video in the first shooting mode has ended (step B15). If the shooting has not ended (step B15: NO), the CPU 111 continues shooting in the first shooting mode; when the shooting has ended (step B15: YES), the shooting processing ends. The end of shooting refers, for example, to the user 100a instructing a stop via the input unit 190 or to a preset shooting time elapsing; it may take various forms.
[0112] On the other hand, if Δθ is not within the first range (step B13: NO), the CPU 111 determines whether or not Δθ is within the range "45° < Δθ < 120°" (hereinafter referred to as the "second range" as appropriate) (step B16). If Δθ is within the second range (step B16: YES), the CPU 111 controls the imaging unit 130 to perform shooting in the second shooting mode (step B17). The second shooting mode will be described later.
[0113] The CPU 111 determines whether or not shooting of the video in the second shooting mode has ended (step B18). If the shooting has not ended (step B18: NO), the CPU 111 continues shooting in the second shooting mode; when the shooting has ended (step B18: YES), the shooting processing ends.
[0114] If Δθ is not within the second range (step B16: NO), the CPU 111 determines whether or not Δθ is within the range "Δθ ≤ 45°" (hereinafter referred to as the "third range" as appropriate) (step B19). If Δθ is within the third range (step B19: YES), the CPU 111 controls the imaging unit 130 to perform shooting in the third shooting mode (step B20). The third shooting mode will be described later.
[0115] The CPU 111 determines whether or not shooting of the video in the third shooting mode has ended (step B21). If the shooting has not ended (step B21: NO), the CPU 111 continues shooting in the third shooting mode; when the shooting has ended (step B21: YES), the shooting processing ends.
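The range checks of steps B13, B16, and B19 amount to a simple dispatch on Δθ; a minimal sketch (the function name is illustrative):

```python
def select_shooting_mode(delta_theta_deg: float) -> str:
    """Map the inter-camera angle to the shooting mode chosen in steps B13-B20."""
    if 120.0 <= delta_theta_deg <= 180.0:   # first range: cameras far apart
        return "first"                      # e.g. track events at a sports day
    if 45.0 < delta_theta_deg < 120.0:      # second range: standard placement
        return "second"
    if delta_theta_deg <= 45.0:             # third range: cameras close together
        return "third"                      # e.g. indoor recitals
    raise ValueError("delta_theta_deg must lie in [0, 180]")


assert select_shooting_mode(150.0) == "first"
assert select_shooting_mode(90.0) == "second"
assert select_shooting_mode(30.0) == "third"
```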
[0116] Here, the details of each shooting mode will be described.
[0117] <First shooting mode>
When the angle between the main camera 100 and the sub camera 200 is within the first range, shooting is performed in the first shooting mode. The first range corresponds to the case where the main camera 100 and the sub camera 200 are, in terms of angle, relatively far apart; an example situation is shooting track events at a sports day. The first shooting mode is performed, for example, by repeating the shooting routine (1) to (12) below.
[0118] That is: (1) standard state → (2) left-right pan (in-phase) → (3) standard state → (4) left-right pan (reverse-phase) → (5) standard state → (6) up-down tilt (in-phase) → (7) up-down tilt (reverse-phase) → (8) standard state → (9) zoom (in-phase) → (10) standard state → (11) zoom (reverse-phase) → (12) standard state.
[0119] In (1), the "standard state" is the standard state described above; in (2), the main camera 100 pans from this standard state by the action of the camera rotation unit 140. In parallel with panning the main camera, the CPU 111 transmits a command signal to the sub camera 200 via the communication unit 160, and in response to that command signal the sub camera 200 also pans in phase.
[0120] Here, "in-phase" means, for example, that when the main camera 100 and the sub camera 200 are angularly close to each other, the directions expressed as "left" or "right" are the same for both cameras. However, when the two cameras are far apart in angle, as in this first shooting mode, and in the simplest case face each other (Δθ = 180°), it means that when the main camera 100 pans to the left, the sub camera 200 pans to the right; that is, the two pan in opposite left-right directions. In other words, "panning in phase" refers to panning in directions that keep capturing the same subject, not to an absolute pan direction.
[0121] When panning the sub camera 200 "in phase" in this way, the CPU 111 of the main camera 100 calculates, each time, in which direction and by how much the sub camera 200 should pan, based on the relative positional relationship between the main camera 100 and the sub camera 200, generates a command signal from the result of that calculation, and transmits it to the sub camera 200.
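One way to realize "pan in directions that keep capturing the same subject" is to shift a shared aim point and re-derive each camera's pan from its own position. The geometry below is an illustrative assumption, since the patent leaves this calculation unspecified; it reproduces the facing-cameras example above, where the two pans come out in opposite absolute directions.

```python
import math

def pan_heading_deg(camera_xy, target_xy) -> float:
    """Absolute heading (degrees, counterclockwise from +x) from a camera to a target."""
    return math.degrees(math.atan2(target_xy[1] - camera_xy[1],
                                   target_xy[0] - camera_xy[0]))


# Facing cameras (delta-theta = 180 deg) with the subject midway between them.
main_cam, sub_cam = (-8.0, 0.0), (8.0, 0.0)
subject, new_aim = (0.0, 0.0), (0.0, 2.0)   # the shared aim point shifts 2 m sideways

main_turn = pan_heading_deg(main_cam, new_aim) - pan_heading_deg(main_cam, subject)
sub_turn = pan_heading_deg(sub_cam, new_aim) - pan_heading_deg(sub_cam, subject)
print(f"main pans {main_turn:+.1f} deg, sub pans {sub_turn:+.1f} deg")
# -> main pans +14.0 deg, sub pans -14.0 deg: opposite absolute directions,
#    yet both cameras keep the same aim point in frame ("in-phase" panning).
```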
[0122] In (3), the main camera 100 and the sub camera 200 each return to the standard state. The main camera 100 returns through the CPU 111 controlling the camera rotation unit 140; the sub camera 200 returns through the CPU 211 controlling the camera rotation unit 240 in response to a command signal generated by the CPU 111 of the main camera 100 and transmitted via the communication unit 160. Since this return to the standard state is essentially the same throughout the following description, its description is omitted below as appropriate.
[0123] In (4), the main camera 100 pans again. In this step, the CPU 111 generates and transmits a command signal for panning the sub camera 200 in reverse phase. With this command signal, the sub camera 200 pans in the phase opposite to the main camera 100. Here, "panning in reverse phase" is the opposite concept to the aforementioned "in-phase" and refers to panning in directions such that the captured subjects move away from each other.
[0124] After returning to the standard state again in (5), the main camera 100 tilts in (6). In this case too, the tilt is realized by the CPU 111 controlling the camera rotation unit 140. In response to the transmitted command signal, the sub camera 200 tilts in phase with the main camera 100 through the action of the camera rotation unit 240.
[0125] Here, "tilting in phase", unlike panning, means tilting so that the directions expressed as "up" or "down" are the same for both cameras, irrespective of the angle between the main camera 100 and the sub camera 200.

[0126] In (7), the main camera 100 and the sub camera 200 tilt in mutually reverse phase. Here, "tilting in reverse phase" means tilting so that the directions expressed as "up" or "down" differ between the two cameras.
[0127] After the main camera 100 and the sub camera 200 return to the standard state once in (8), the main camera 100 zoom-shoots the subject 20 in (9). At this time, the CPU 111 controls the lens driving unit 150 to adjust the zoom magnification and focus. At the same time, the CPU 111 generates a command signal for causing the sub camera 200 to zoom-shoot in phase and transmits it to the sub camera 200 via the communication unit 160; in response to that command signal, the CPU 211 of the sub camera 200 controls the lens driving unit 250 to execute the in-phase zoom shooting.
[0128] Here, zoom-shooting "in phase" means that the zoom directions, expressed as "telephoto (zoom in)" and "wide-angle (zoom out)", are the same for the main camera 100 and the sub camera 200. That is, when the main camera 100 shoots the subject 20 at telephoto, the sub camera likewise shoots at telephoto, and when the main camera shoots at wide angle, the sub camera likewise shoots at wide angle.
[0129] After returning to the standard state again in (10), in (11) the main camera 100 and the sub camera 200 zoom-shoot the subject 20 in mutually reverse phase. Here, shooting "in reverse phase" means, for example, that if the main camera 100 shoots the subject on the telephoto side, the sub camera 200 shoots the subject on the wide-angle side. Then, in (12), the main camera 100 and the sub camera 200 return to the standard state. Thus, in the first shooting mode, shooting centers mainly on panning and tilting.
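Since each shooting mode is a fixed cycle of states, the routines lend themselves to a data-driven representation. The encoding below is an illustrative sketch, not the patent's implementation; the termination check stands in for step B15.

```python
# Each step is (action, phase): "standard" resets both cameras; for the other
# actions the main camera moves and the sub camera follows in the given phase.
FIRST_MODE_ROUTINE = [
    ("standard", None), ("pan", "in"), ("standard", None), ("pan", "reverse"),
    ("standard", None), ("tilt", "in"), ("tilt", "reverse"), ("standard", None),
    ("zoom", "in"), ("standard", None), ("zoom", "reverse"), ("standard", None),
]

def run_routine(routine, shoot_step) -> None:
    """Repeat the routine until shoot_step reports that shooting has ended."""
    while True:
        for action, phase in routine:
            if not shoot_step(action, phase):   # e.g. user stop or time limit
                return


steps_left = 30   # demo stand-in for the end-of-shooting check of step B15
def demo_step(action, phase):
    global steps_left
    print(action, phase or "-")
    steps_left -= 1
    return steps_left > 0

run_routine(FIRST_MODE_ROUTINE, demo_step)
```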
[0130] <Second shooting mode>
When the angle between the main camera 100 and the sub camera 200 is within the second range, shooting is performed in the second shooting mode. The second range corresponds to the case where the main camera 100 and the sub camera 200 are, in terms of angle, in a standard positional relationship; it has no particularly characteristic situation, and shooting is performed, for example, by repeating the shooting routine (1) to (14) below.
[0131] That is: (1) standard state → (2) left-right pan (in-phase) → (3) standard state → (4) left-right pan (reverse-phase) → (5) standard state → (6) zoom (in-phase) → (7) zoom (reverse-phase) → (8) standard state → (9) up-down tilt (in-phase) → (10) up-down tilt (reverse-phase) → (11) zoom (in-phase) → (12) standard state → (13) zoom (reverse-phase) → (14) standard state.
[0132] The individual shooting states (pan, tilt, zoom, and so on) are the same as in the first shooting pattern described above, so their description is omitted. In this second shooting pattern, unlike the first, shooting mixes pan and tilt with zoom operations such as zooming in and out in roughly equal measure.
[0133] <Third shooting mode>
When the angle between the main camera 100 and the sub camera 200 is within the third range, shooting is performed in the third shooting mode. The third range corresponds to the case where the main camera 100 and the sub camera 200 are, in terms of angle, relatively close to each other; example situations include indoor recitals such as piano performances and plays. The third shooting mode is performed, for example, by repeating the shooting routine (1) to (12) below.
[0134] That is: (1) standard state → (2) left-right pan (in-phase) → (3) standard state → (4) left-right pan (reverse-phase) → (5) standard state → (6) zoom (in-phase) → (7) zoom (reverse-phase) → (8) standard state → (9) zoom (in-phase) → (10) standard state → (11) zoom (reverse-phase) → (12) standard state.
[0135] The individual shooting states (pan, tilt, zoom, and so on) are the same as in the first shooting pattern described above, so their description is omitted. In this third shooting pattern, unlike the first and second, shooting centers mainly on zoom operations such as zooming in and out.
[0136] In this embodiment, at each stage of video shooting in the first to third shooting modes described above, shooting that conforms to video grammar may be performed. In that case, the CPU 111 and the CPU 211 may control the respective parts of the main camera 100 and the sub camera 200 so that such shooting is carried out.
[0137] Here, "video grammar" refers to the universal rules that hold among video material, video effects, and the concept one wishes to express. For example, once the video material and the concept to be expressed have been decided, the required video effects are identified to some extent. Conversely, once the video material and the video effects have been decided, the photographer's intent represented by the video can be conveyed to the viewer fairly accurately. General users who are not accustomed to shooting video often do not even know that such video grammar exists, so when shooting is performed based on it, the quality of the video can be improved easily.
[0138] Shooting that conforms to video grammar means, for example, that in situations with comparatively rapid subject movement, such as a sports day, the speed of pans, tilts, and zoom operations such as zooming in and out is increased and cuts are switched frequently, so that dynamic video is shot. In situations with comparatively little subject movement, such as a music recital, it means slowing the pan, tilt, and zoom speeds and shooting close-ups of the subject from various angles, so that refined video that does not tire the viewer is shot. A shooting algorithm conforming to such video grammar may be reflected in the shot video by being incorporated, for example, into the video shooting programs stored in the ROM 112 and the ROM 212.
[0139] In each of the first, second, and third shooting modes described above, in addition to the shooting routines described above, a shooting routine in which at least one camera pans, for example, 180 degrees or more may be added as appropriate in order to shoot the situation at a location relatively distant from the subject. Using such a shooting routine, it is easy, for example, to shoot the expressions of parents watching the subject (an athlete, a performer, or the like) from the spectator seats at a sports day, competition, concert, or recital. In this case, elements such as a sense of presence and drama can be added to the video all the more.
<Recording processing>
Next, the recording processing for recording the video shot in the shooting processing described above will be described with reference to FIG. 6. FIG. 6 is a flowchart of the recording processing. In this embodiment, the recording processing is performed in parallel with the shooting processing. The processing in FIG. 6 is the recording processing executed in the main camera 100; the processing on the sub camera 200 side is equivalent to that of the main camera 100, so its description is omitted.
[0140] In FIG. 6, the CPU 111 first determines, based on a clock signal or the like, whether or not one video frame (hereinafter, "one frame") has elapsed (step C10). Here, a "video frame" is the minimum unit of video, corresponding, for example, to a single frame of a motion picture.

[0141] If one frame has not yet elapsed (step C10: NO), the CPU 111 waits until one frame has elapsed; when one frame has elapsed (step C10: YES), video data processing (step C11), audio data processing (step C12), and additional information processing (step C13) are executed in parallel.
[0142] These processes will now be described with reference to FIGS. 7 to 9. FIG. 7 is a flowchart of the video data processing, FIG. 8 is a flowchart of the audio data processing, and FIG. 9 is a flowchart of the additional information processing.
[0143] In FIG. 7, the video data is first sampled and digitized (step C111). The digitized video data is encoded and temporarily stored in the RAM 113 (step C112). When the video data has been stored in the RAM 113, the video data processing ends.
[0144] In FIG. 8, the audio data is first sampled and digitized (step C121). The digitized audio data is encoded and temporarily stored in the RAM 113 (step C122). When the audio data has been stored in the RAM 113, the audio data processing ends.
[0145] In FIG. 9, distance information, orientation information, position information, zoom distance information, pan angle information, tilt angle information, and speed information for pan, tilt, and zoom are first acquired as additional information from the corresponding units (step C131). The acquired additional information is stored in the RAM 113 (step C132). When the additional information has been stored in the RAM 113, the additional information processing ends.
[0146] Returning to FIG. 6, the video data and audio data stored in the RAM 113 are multiplexed (step C14) and stored in the RAM 113 together with the additional information.
[0147] The CPU 111 determines whether or not one GOP of these data has been stored in the RAM 113 (step C15). Here, one GOP is a video unit composed of roughly 1 to 15 frames of video data, audio data, and additional information.
[0148] If one GOP of data has not yet been stored (step C15: NO), the CPU 111 returns the processing to step C10 and processes the video data, audio data, and additional information of the next video frame. When one GOP of data has been stored in the RAM 113 (step C15: YES), the CPU 111 controls the recording unit 180 to record these data (step C16).
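The per-frame loop of steps C10 to C16 can be pictured as accumulating multiplexed frames into a GOP buffer and flushing the buffer to the recording unit when it is full. A minimal sketch, assuming a fixed 15-frame GOP (the patent allows roughly 1 to 15):

```python
GOP_FRAMES = 15   # assumed fixed GOP length; the patent allows about 1-15 frames

def record_stream(frames, write_gop) -> None:
    """Accumulate multiplexed frames and flush one GOP at a time (steps C10-C16)."""
    buffer = []
    for frame in frames:               # one frame's video + audio + additional info
        buffer.append(frame)           # steps C11-C14: process and multiplex
        if len(buffer) == GOP_FRAMES:  # step C15: has one GOP accumulated?
            write_gop(buffer)          # step C16: hand the GOP to recording unit 180
            buffer = []
    if buffer:                         # flush a final partial GOP when shooting stops
        write_gop(buffer)


record_stream(range(35), lambda gop: print(f"recorded a GOP of {len(gop)} frames"))
```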
[0149] Here, the recording format according to this embodiment will be described with reference to FIGS. 10 and 11. FIG. 10 is a schematic diagram of the recording format, and FIG. 11 is a schematic diagram of the header and additional information in that recording format.
[0150] In FIG. 10, a video stream consisting of consecutive still images has a configuration in which data is arranged sequentially in GOP units. One GOP is composed of a header, video/audio data, and additional information.
[0151] FIG. 11(a) shows the header. The header consists of the number of frames included in the GOP, the GOP number, the address of the video/audio data on the recording medium, the size of the video/audio data, the address of the additional information on the recording medium, and the size of the additional information.
[0152] FIG. 11(b) shows the additional information. The additional information consists of information such as the distance, orientation, position, zoom size, pan direction and angle, and tilt direction and angle.
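These two records translate naturally into data structures. The field types and units below are illustrative assumptions, since FIG. 11 gives only the field names.

```python
from dataclasses import dataclass

@dataclass
class GopHeader:
    """Header of one GOP as in FIG. 11(a); field types are assumed."""
    frame_count: int              # number of frames in this GOP (about 1-15)
    gop_number: int
    av_data_address: int          # address of the video/audio data on the medium
    av_data_size: int
    additional_info_address: int  # address of the additional information
    additional_info_size: int

@dataclass
class AdditionalInfo:
    """Per-GOP shooting metadata as in FIG. 11(b); units are assumed."""
    distance_m: float             # distance to the subject 20
    orientation_deg: float        # shooting orientation (e.g. geomagnetic)
    position: tuple               # current position, e.g. (latitude, longitude)
    zoom_size: float
    pan_direction_angle_deg: float
    tilt_direction_angle_deg: float
```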
[0153] Returning to FIG. 6, when one GOP of video has been recorded, the CPU 111 determines whether or not there is more video to be recorded (step C17). If video to be recorded still exists (step C17: NO), the CPU 111 returns the processing to step C10 and processes the video data, audio data, and additional information of the next video frame. When there is no more video to be recorded (step C17: YES), the recording processing ends.
[0154] As described above, in the video shooting system 10 according to this embodiment, the main camera 100 and the sub camera 200 operate in coordination with each other and can shoot the subject 20 effectively based on their mutual positional relationship. Extremely high-quality video can therefore be shot.
[0155] In this embodiment, the sub camera 200 is configured to operate upon receiving a command signal for controlling the sub camera 200 transmitted from the main camera 100 side; however, the CPU 211 of the sub camera 200 may instead determine how the sub camera 200 should move, for example in accordance with the position information transmitted from the main camera 100, or in accordance with the pan and tilt operation amounts or the zoom operation amount of the main camera 100.

[0156] Also, in this embodiment, the current position, the shooting orientation, and the distance to the subject are each configured to be acquirable; indoors, however, a positioning signal such as GPS may not reach, and the current position may become impossible to identify. Even in such a case, no problem arises as long as a consensus that the subject 20 is to be shot has been established in advance between the main camera 100 and the sub camera 200.
[0157] Moreover, the individual means for acquiring position information are not necessarily required. For example, when the consensus described above has been established, the effects of the present invention are enjoyed unchanged as long as the sub camera 200 can operate based on a command signal from the main camera 100, a signal conveying the operation of the main camera 100, or the like.
[0158] Also, although this embodiment has been described with both the main camera 100 and the sub camera 200 operating in a full-auto mode, the main camera 100 may, for example, be operated actively by the user 100a to shoot video. In this case as well, the sub camera 200 can operate so as to follow the operation of the main camera 100.
[0159] <Modification>
The manner in which the sub camera 200 is made to follow the main camera 100 is not limited to the embodiment described above. For example, the arrangement shown in FIG. 12 is also possible. FIG. 12 is a schematic diagram of follow-up control according to a modification of the present invention. In the following description, parts that overlap the embodiment described above are given the same reference numerals and their description is omitted.
[0160] In FIG. 12, the sub camera 200 includes a display unit 300, and the composition of the video currently being shot is shown on its display screen. Here, for example, when the main camera 100 pans 30° to the left, the CPU 211 controls the display unit 300, based on a command signal from the main camera 100 or on a control signal conveying the operation of the main camera 100, so that the message "Please pan 30° to the left" is displayed on the display screen.
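A small sketch of turning a received follow command into the on-screen prompt of FIG. 12; the message format and function name are illustrative.

```python
def follow_prompt(command: dict) -> str:
    """Render a main-camera follow command as a message for the display unit 300."""
    return (f"Please {command['action']} "
            f"{command['angle_deg']:.0f} degrees {command['direction']}")


# e.g. the command signal sent when the main camera pans 30 degrees to the left:
print(follow_prompt({"action": "pan", "angle_deg": 30, "direction": "left"}))
# -> "Please pan 30 degrees left"
```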
[0161] When such a message prompts the user operating the sub camera 200 to follow the main camera 100 and the user pans the sub camera 200 accordingly, a follow-up operation equivalent to that of the embodiment is easily realized. A shooting device is not used only when fixed to a tripod or the like; it is often used handheld. In such a situation it is difficult to pan or tilt the sub camera 200 even by controlling the camera rotation unit 240, so this modification is very effective. It is likewise very effective when a video camera system is provided in a portable terminal, typified by a mobile phone, since fixing such a terminal to a tripod is difficult.
[0162] The present invention is not limited to the embodiment described above and may be modified as appropriate within a scope not contrary to the gist or spirit of the invention as read from the claims and the specification as a whole; a video shooting system, a video shooting device, and a video shooting method involving such modifications are also included within the technical scope of the present invention.
Industrial applicability
[0163] The video shooting system, the video shooting device, and the video shooting method according to the present invention can be used, for example, as a video shooting system, a video shooting device, and a video shooting method capable of obtaining high-quality video.

Claims

[1] A video shooting system including at least one main photographing device and at least one sub-photographing device which are accommodated in a network and which are for shooting video,
wherein the main photographing device comprises:
a first photographing means for shooting the video;
a first control means for controlling shooting conditions of the video in the first photographing means; a control information generating means for generating control information for causing the sub-photographing device to follow in accordance with, among the shooting conditions, a condition defining the composition of the video to be shot; and a first communication means for transmitting the generated control information to the sub-photographing device via the network;
and wherein the sub-photographing device comprises:
a second photographing means for shooting the video;
a second communication means for receiving the transmitted control information via the network; and a second control means for controlling shooting conditions of the video in the second photographing means based on the received control information.
[2] The video shooting system according to claim 1, wherein at least one of the first and second control means controls the shooting condition corresponding to that at least one means based on a preset shooting pattern.
[3] The video shooting system according to claim 2, wherein at least one of the first and second control means controls the shooting condition based on, as the preset shooting pattern, a shooting pattern corresponding to the relative positional relationship between the main photographing device and the sub-photographing device.
[4] The video shooting system according to claim 1, wherein the main photographing device further comprises a first acquisition means for acquiring first position information including at least one of (i) a distance to a subject, (ii) a current position of the main photographing device, and (iii) a shooting orientation of the first photographing means.
[5] The video shooting system according to claim 4, wherein the sub-photographing device further comprises a second acquisition means for acquiring second position information including at least one of (i) a distance to the subject, (ii) a current position of the sub-photographing device, and (iii) a shooting orientation of the second photographing means.
[6] The video shooting system according to claim 5, wherein the first communication means further transmits the acquired first position information to the sub-photographing device via the network, the second communication means further receives the transmitted first position information, and the second control means controls the shooting condition based on the acquired second position information and the received first position information.
[7] The video shooting system according to claim 5, wherein the second communication means transmits the acquired second position information to the main photographing device via the network, the first communication means receives the transmitted second position information, and the first control means controls the shooting condition based on the acquired first position information and the received second position information.
[8] The video shooting system according to claim 1, wherein the main and sub-photographing devices each further comprise an authentication means for performing mutual authentication via the first and second communication means.
[9] The video shooting system according to claim 1, wherein at least one of the main and sub-photographing devices further comprises an audio information acquisition means for acquiring audio information corresponding to the video shot by that at least one device.
[10] A video shooting device which is accommodated in a network and which shoots video together with another video shooting device accommodated in the network, the video shooting device comprising:
a photographing means for shooting the video;
a control means for controlling shooting conditions of the video in the photographing means;
a control information generating means for generating control information for causing the other video shooting device to follow in accordance with, among the shooting conditions, a condition defining the composition of the video to be shot; and
a communication means for transmitting the generated control information to the other video shooting device via the network.
[11] A video photographing device that is accommodated in a network and shoots video together with another video photographing device accommodated in the network, the device comprising:
photographing means for shooting the video;
communication means for receiving, via the network, control information that is generated in the other video photographing device in accordance with a condition defining the composition of the video to be shot and is transmitted via the network; and
control means for controlling the photographing conditions of the video in the photographing means based on the received control information.
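On the receiving side, the device of claim 11 parses the control information and applies it to its own photographing conditions. The sketch below is the counterpart to the assumed encoding shown after claim 10; the SubCamera class and its condition dictionary are illustrative stand-ins for real pan/tilt/zoom actuators.

    import json

    class SubCamera:
        """Stand-in for the sub-photographing device's control means."""

        def __init__(self) -> None:
            self.conditions = {"pan_deg": 0.0, "tilt_deg": 0.0, "zoom_ratio": 1.0}

        def apply_control_info(self, raw: bytes) -> None:
            """Parse received control information and update own conditions."""
            message = json.loads(raw)
            if message.get("type") != "follow":
                return  # ignore anything that is not follow-control information
            self.conditions.update(message["conditions"])

    cam = SubCamera()
    cam.apply_control_info(
        b'{"type": "follow", "conditions": {"pan_deg": 12.5, "zoom_ratio": 2.0}}')
    print(cam.conditions)  # pan and zoom now follow the main device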
[12] A video shooting method for shooting the video in mutual coordination in a video shooting system that is accommodated in a network and includes at least one main photographing device and at least one sub-photographing device for shooting video, the method comprising:
in the main photographing device, (i) a first photographing step of shooting the video, (ii) a first control step of controlling the photographing conditions of the video in the first photographing step, (iii) a control information generating step of generating, in accordance with the condition among the photographing conditions that defines the composition of the video to be shot, control information for causing the sub-photographing device to follow, and (iv) a transmission step of transmitting the generated control information to the sub-photographing device via the network; and
in the sub-photographing device, (i) a second photographing step of shooting the video, (ii) a reception step of receiving the transmitted control information via the network, and (iii) a second control step of controlling the photographing conditions of the video in the second photographing step based on the received control information.
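Claim 12 chains these elements into a method: the main device generates and transmits (steps iii and iv), and the sub device receives and controls (steps ii and iii). The sketch below runs that sequence over a loopback socket pair standing in for the network; the one-JSON-line message framing is an assumption for illustration.

    import json
    import socket

    main_end, sub_end = socket.socketpair()  # loopback stand-in for the network

    # Main photographing device: (iii) generate control information from the
    # composition-defining conditions, (iv) transmit it over the network.
    composition = {"pan_deg": 30.0, "tilt_deg": 5.0, "zoom_ratio": 1.5}
    main_end.sendall(json.dumps(composition).encode() + b"\n")

    # Sub-photographing device: (ii) receive the control information,
    # (iii) control the conditions used by its second photographing step.
    received = sub_end.makefile("rb").readline()
    sub_conditions = {"pan_deg": 0.0, "tilt_deg": 0.0, "zoom_ratio": 1.0}
    sub_conditions.update(json.loads(received))
    print(sub_conditions)  # the sub device now matches the main composition

    main_end.close()
    sub_end.close()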
PCT/JP2005/016727 2004-09-10 2005-09-12 Video shooting system, video shooting device and video shooting method WO2006028247A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004263809 2004-09-10
JP2004-263809 2004-09-10

Publications (1)

Publication Number Publication Date
WO2006028247A1 true WO2006028247A1 (en) 2006-03-16

Family

ID=36036526

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/016727 WO2006028247A1 (en) 2004-09-10 2005-09-12 Video shooting system, video shooting device and video shooting method

Country Status (1)

Country Link
WO (1) WO2006028247A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09502331A * 1994-06-22 1997-03-04 Philips Electronics N.V. Surveillance camera system
JPH08181902A (en) * 1994-12-22 1996-07-12 Canon Inc Camera control system
JPH0965175A (en) * 1995-08-25 1997-03-07 Canon Inc Remote control panhead device
JP2000113166A (en) * 1998-09-30 2000-04-21 Canon Inc Camera control system, camera control method, camera control server, camera device, user interface device, camera linkage control server, and program storage medium
JP2003158664A (en) * 2001-11-21 2003-05-30 Matsushita Electric Ind Co Ltd Camera controller
JP2003284050A (en) * 2002-03-25 2003-10-03 Hitachi Kokusai Electric Inc Television monitoring system
JP2003348428A (en) * 2002-05-24 2003-12-05 Sharp Corp Photographing system, photographing method, photographing program, and computer-readable recording medium having the photographing program recorded thereon
JP2004088558A (en) * 2002-08-28 2004-03-18 Sony Corp Monitoring system, method, program, and recording medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010166218A (en) * 2009-01-14 2010-07-29 Tokyo Broadcasting System Holdings Inc Camera system and method of controlling the same
JP2011211387A (en) * 2010-03-29 2011-10-20 Hitachi Computer Peripherals Co Ltd Imaging apparatus and monitoring device
JP2013520927A (en) * 2010-04-01 2013-06-06 Cameron James Frame-linked 2D/3D camera system
WO2020188957A1 (en) * 2019-03-20 2020-09-24 Sony Corporation Remote control device, imaging control device and control method thereof
US11792508B2 (en) 2019-03-20 2023-10-17 Sony Group Corporation Remote control device, imaging controlling device, and methods for them
CN115336247A (en) * 2020-06-10 2022-11-11 Jvc建伍株式会社 Image processing device and image processing system
CN115336247B (en) * 2020-06-10 2024-03-08 Jvc建伍株式会社 Image processing device and image processing system

Similar Documents

Publication Publication Date Title
JP4929940B2 (en) Shooting system
US8760518B2 (en) Photographing apparatus, photographing system and photographing method
JPWO2004066632A1 (en) Remote video display method, video acquisition device, method and program thereof
JP2004274625A (en) Photographing system
JP2006245650A (en) Information processing system, information processing apparatus and method, and program
JP4736381B2 (en) Imaging apparatus and method, monitoring system, program, and recording medium
WO2006028247A1 (en) Video shooting system, video shooting device and video shooting method
JP2018082298A (en) Image display system, communication system, image display method and program
JP2006033257A (en) Image distribution apparatus
JP4583717B2 (en) Imaging apparatus and method, image information providing system, program, and control apparatus
JP6950793B2 (en) Electronics and programs
JP3937355B2 (en) Imaging system, imaging main apparatus and imaging main method
JP5861420B2 (en) Electronic camera
WO2021251127A1 (en) Information processing device, information processing method, imaging device, and image transfer system
JP2019179963A (en) Photographing device and photographing method
JP7366594B2 (en) Information processing equipment and its control method
WO2021131349A1 (en) Imaging device, imaging device control method, control program, information processing device, information processing device control method, and control program
JP3931768B2 (en) Image photographing apparatus and image photographing system
JP2005348449A (en) Imaging, displaying, recording, reproducing, transmitting device and recording medium
WO2021153507A1 (en) Imaging device, control method for imaging device, control program, information processing device, control method for information processing device, and control program
JP2004241834A (en) Moving picture generating apparatus and method, moving picture transmission system, program, and recording medium
JP4211377B2 (en) Image signal processing device
JP4127080B2 (en) Imaging apparatus and method
JP2003037831A (en) Picture distribution system
JP2007074417A (en) Camera control system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP