WO2008010345A1 - Panorama image photographing system and panorama image photographing method - Google Patents

Panorama image photographing system and panorama image photographing method

Info

Publication number
WO2008010345A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
circle boundary
panoramic
outer circle
degree
Prior art date
Application number
PCT/JP2007/060193
Other languages
French (fr)
Japanese (ja)
Inventor
Masayuki Ito
Original Assignee
Opt Corporation
Shinya Makoto
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Opt Corporation, Shinya Makoto filed Critical Opt Corporation
Publication of WO2008010345A1 publication Critical patent/WO2008010345A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a panorama developed image imaging apparatus and a panorama developed image imaging system.
  • Patent Document 1 discloses an omnidirectional photographing apparatus.
  • the omnidirectional imaging device develops and displays or records the video captured by one camera module that has a lens that can directly capture a 360-degree annular image.
  • Patent Document 1 Japanese Patent Application Laid-Open No. 2006-148787 (abstract, detailed description of the invention, drawings, particularly, paragraph 0024, paragraph 0049, FIG. 8, FIG. 9, etc.)
  • The lens that directly captures a 360-degree annular image is a special convex glass lens, specifically one having a zenith reflecting surface and a shape surrounding that zenith reflecting surface.
  • Patent Document 1 suggests that a fish-eye lens can be used in place of the special convex glass lens.
  • Because the omnidirectional imaging device of Patent Document 1 uses such a special convex glass lens, the image height that can be captured is limited. Because of this restriction on the image height, the omnidirectional photographing apparatus of Patent Document 1 in practice imposes a restriction on the distance between the glass lens and the subject. For example, if the subject gets too close to the glass lens, the subject's head will be cut off in the 360-degree annular image. To prevent the subject's head from being cut off, a sufficient distance between the glass lens and the subject must be secured.
  • An object of the present invention is to obtain a panorama developed image capturing apparatus and a panorama developed image capturing system capable of capturing a 360-degree panorama developed image without the subject being cut off.
  • The panorama developed image capturing apparatus generates a 360-degree panorama developed image by imaging.
  • the panoramic developed image imaging device includes a fish-eye lens, an imaging unit that forms a circular image by the fish-eye lens on a light-receiving surface on which a plurality of light-receiving elements are arranged, and generates captured image data having the circular image;
  • The apparatus also includes a memory that stores the position or size of the inner circle boundary and the outer circle boundary that define the ring-shaped range used to generate the 360-degree panorama developed image;
  • a user update unit that, based on the user's operation, updates the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory;
  • and panorama developed image generating means for generating a 360-degree panorama developed image based on the image within the ring-shaped range sandwiched between the inner circle boundary and the outer circle boundary stored in the memory, in the captured image generated by the imaging unit.
  • a circular image formed by a fisheye lens is formed on the imaging means.
  • the imaging means generates a captured image having a circular image.
  • The subject is included in the circular image without its head being cut off.
  • The panorama developed image generating means generates a 360-degree panorama developed image based on the image in the ring-shaped range sandwiched between the inner circle boundary and the outer circle boundary in the captured image.
  • The positions and sizes of the inner circle boundary and the outer circle boundary can be updated by the user update unit based on the user's operation. Therefore, even when a sufficient distance between the fisheye lens and the subject cannot be physically secured, the inner circle boundary and the outer circle boundary can be adjusted by the user's operation so that the subject falls within the ring-shaped range. As a result, a 360-degree panorama developed image can be generated without the subject's head being cut off.
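The ring-shaped range bounded by the two circle boundaries can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and field names are invented, and the two boundaries are assumed concentric.

```python
from dataclasses import dataclass

@dataclass
class CircleBoundary:
    cx: float      # center x in the captured image (pixels)
    cy: float      # center y in the captured image (pixels)
    radius: float  # boundary radius (pixels)

@dataclass
class RingRange:
    inner: CircleBoundary
    outer: CircleBoundary

    def contains(self, x: float, y: float) -> bool:
        """True if (x, y) lies inside the ring between the two boundaries."""
        d2 = (x - self.inner.cx) ** 2 + (y - self.inner.cy) ** 2
        return self.inner.radius ** 2 <= d2 <= self.outer.radius ** 2

# A user operation might enlarge the outer boundary so that a subject close
# to the lens still falls inside the ring-shaped range:
ring = RingRange(CircleBoundary(320, 240, 40), CircleBoundary(320, 240, 200))
ring.outer.radius = 230   # update based on the user's operation
```

Only pixels for which `contains` is true would then be used to generate the 360-degree panorama developed image.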
  • A panorama developed image imaging device has the following characteristics. That is, the fisheye lens is of the stereographic projection type.
  • The memory stores an image height correction table for correcting the balance of the imaging size of the subject, obtained based on the relationship between the incident angle of the stereographic projection fisheye lens and the image height.
  • The panorama developed image generation means refers to the image height correction table and generates a 360-degree panorama developed image in which the balance of the imaging size of the subject is adjusted.
  • With the stereographic projection fisheye lens, the peripheral portion of the formed image is less distorted and carries a larger amount of information than with a general equidistant projection fisheye lens. Therefore, when the panorama developed image pickup device is placed on a table with the fisheye lens facing upward, images of people or a blackboard around it can be obtained with high resolution, and the characters on the blackboard can be reproduced in the 360-degree panorama developed image. In addition, a well-balanced image of subjects such as people or a blackboard can be obtained with the fisheye lens.
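The difference between the two projection types can be checked numerically. The formulas r = 2f·tan(θ/2) (stereographic) and r = f·θ (equidistant) are the standard definitions of these projections, not values taken from the patent; `f` is a normalized focal length.

```python
import math

def image_height_stereographic(theta_deg: float, f: float = 1.0) -> float:
    # Stereographic projection: r = 2 f tan(theta / 2)
    return 2 * f * math.tan(math.radians(theta_deg) / 2)

def image_height_equidistant(theta_deg: float, f: float = 1.0) -> float:
    # Equidistant projection: r = f theta (theta in radians)
    return f * math.radians(theta_deg)

# Image height allotted to a 10-degree band near the axis vs. near the edge.
# Near the axis the two projections nearly agree; near 90 degrees the
# stereographic lens spreads the same 10 degrees over roughly 1.8x more
# image height, i.e. more sensor pixels for the peripheral subjects.
for lo, hi in [(0, 10), (80, 90)]:
    s = image_height_stereographic(hi) - image_height_stereographic(lo)
    e = image_height_equidistant(hi) - image_height_equidistant(lo)
    print(f"{lo}-{hi} deg: stereographic {s:.3f}, equidistant {e:.3f}")
```

This is why a stereographic fisheye resolves people and blackboard text around the camera better than an equidistant one of the same sensor resolution.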
  • A panorama developed image capturing apparatus has the following features in addition to the above-described components of the present invention. That is, after updating the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory, the user update means updates the image height correction table stored in the memory based on the updated inner circle boundary and outer circle boundary and on the relationship between the incident angle and the image height of the stereographic projection fisheye lens.
  • the panoramic developed image capturing apparatus has the following features in addition to the above-described components of the present invention.
  • The user update means updates the inner circle boundary or the outer circle boundary stored in the memory only when the changed size of the inner circle boundary or the outer circle boundary has a radius greater than or equal to a predetermined number of pixels.
  • If, for example, the radius of the inner circle boundary became as small as one pixel, the same pixel in the captured image would be used many times to obtain the pixel values of the many display pixels at the top of the 360-degree panorama developed image. The load of generating the 360-degree panorama developed image would then increase without improving its image quality.
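A guard like the one just described might look like the following sketch. The threshold value is purely illustrative, since the patent only specifies "a predetermined number of pixels".

```python
MIN_RADIUS_PIXELS = 16  # illustrative; the patent does not give a value

def try_update_radius(current: float, requested: float,
                      min_radius: float = MIN_RADIUS_PIXELS) -> float:
    """Accept a new boundary radius only when it stays at or above the
    minimum; otherwise keep the stored value, as the user update means does."""
    return requested if requested >= min_radius else current
```

A request to shrink the inner circle boundary below the minimum is simply ignored, so the stored boundary data never enters the degenerate regime.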
  • The panorama developed image capturing apparatus has the following features in addition to the above-described components of the present invention. That is, the panorama developed image generation means calculates the distance and direction from the center of the circular image for each display pixel of the 360-degree panorama developed image, calculates coordinates in the captured image using each display pixel's calculated distance and direction, and acquires the pixel value of the captured-image pixel located at each calculated coordinate as the pixel value of that display pixel.
  • A panorama developed image imaging device has the following features in addition to the above-described components of the present invention.
  • That is, the panorama developed image generation means divides the range between the inner circle boundary and the outer circle boundary in the captured image into two ranges of 180 degrees each, and generates two horizontally long panorama divided developed images, one for each divided range.
  • Thereby, the display means can display the 360-degree panorama developed image in a large size.
  • A panorama developed image imaging system includes a panorama developed image imaging device that generates a 360-degree panorama developed image by imaging, and a computer apparatus that displays or stores the panorama developed image generated by the panorama developed image imaging device.
  • the panoramic developed image imaging device includes a fish-eye lens, an imaging unit that forms a circular image by the fish-eye lens on a light-receiving surface on which a plurality of light-receiving elements are arranged, and generates data of a captured image having a circular image;
  • The panorama developed image imaging device further includes a memory that stores the position or size of the inner circle boundary and the outer circle boundary that define the ring-shaped range used to generate the 360-degree panorama developed image in the captured image; user update means for updating, based on the user's operation, the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory; and panorama developed image generating means for generating a 360-degree panorama developed image based on the image within the ring-shaped range sandwiched between the inner circle boundary and the outer circle boundary stored in the memory, in the captured image generated by the imaging means.
  • a 360-degree panoramic developed image can be taken without the subject being cut off.
  • FIG. 1 is a perspective view showing a panoramic developed image imaging system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing an electric circuit of the 360-degree video camera device in FIG. 1.
  • FIG. 3 is an explanatory view showing an optical arrangement of the fisheye lens and the image sensor in FIG.
  • FIG. 4 is a diagram illustrating an example of a captured image generated by the color conversion processing unit in FIG. 1 based on luminance distribution data of the image sensor.
  • FIG. 5 is an explanatory diagram showing an image obtained by superimposing a circular mark indicating the position of the inner circle boundary and a circular mark indicating the position of the outer circle boundary on the captured image in the 360-degree video camera device of FIG. is there.
  • FIG. 6 is an image height (field angle difference) characteristic diagram of the stereographic projection fisheye lens in FIG.
  • FIG. 7 is a flow chart showing the flow of a divided expanded image generation process by the expanded still image generation unit for one divided image range generated in the 360-degree video camera device of FIG.
  • FIG. 8 is a diagram showing an example of an array of a plurality of display pixels in a divided expanded image generated in the 360-degree video camera device of FIG.
  • FIG. 9 is an explanatory diagram illustrating directions in a ring-shaped image (directions in a divided image range) of each developed pixel row in FIG. 8.
  • FIG. 10 is an explanatory diagram of coordinate positions in an original image (circular image) of a specific display pixel in the 360-degree video camera device of FIG.
  • FIG. 11 is a display example of a 360-degree panorama developed image displayed on the liquid crystal display device of the PC in FIG. 1.
  • FIG. 12 is a flowchart showing the operation of the process management unit in the change mode of the 360-degree video camera device of FIG.
  • The panorama developed image capturing apparatus will be described by taking a 360-degree video camera apparatus for conferences as an example.
  • The panorama developed image imaging system will be described using an example in which a 360-degree video camera device is connected to a personal computer.
  • FIG. 1 is a perspective view showing a panoramic developed image imaging system according to an embodiment of the present invention.
  • The panorama developed image capturing system includes a 360-degree video camera device 1 as the panorama developed image capturing device and a PC (personal computer) 3 as the computer device.
  • the 360-degree video camera device 1 is connected to the PC 3 by a USB (Universal Serial Bus) cable 2.
  • the PC 3 includes a USB connector 11, a liquid crystal display device 12, a speaker 13, an input device 14, and the like.
  • The PC 3 has a CPU (Central Processing Unit) (not shown). The CPU executes a program in a storage device (not shown), so that a communication processing unit 15, a reproduction processing unit 16, and a command generation unit 17 are realized in the PC 3.
  • the communication processing unit 15 controls data communication using the USB connector 11.
  • the reproduction processing unit 16 controls the content displayed on the liquid crystal display device 12 and causes the speaker 13 to output sound.
  • The command generation unit 17 generates a command based on input data entered from the input device 14. Examples of the input device 14 include a keyboard and a pointing device.
  • the 360-degree video camera device 1 has a cubic-shaped housing 21.
  • the housing 21 is provided with a fisheye lens 22, a USB connector 23, a video output connector 24, an audio output connector 25, and the like.
  • the fisheye lens 22 is disposed on the upper surface of the housing 21.
  • A ventilation hole 20 for the microphone 26 is formed on the upper surface of the housing 21.
  • the USB connector 23, the video output connector 24 and the audio output connector 25 are arranged on the side surface of the housing 21.
  • FIG. 2 is a block diagram showing an electric circuit of 360-degree video camera device 1 in FIG.
  • In the 360-degree video camera device 1, an electric circuit for generating a 360-degree panorama developed image by imaging is incorporated.
  • The 360-degree video camera device 1 includes an image sensor 27 as a part of the imaging means, an FPGA (Field Programmable Gate Array) 28, a DSP (Digital Signal Processor) 29, an external memory 30, an audio IC (Integrated Circuit) 31, a video encoder 32, and the like.
  • FIG. 3 is an explanatory diagram showing an optical arrangement of the fisheye lens 22 and the image sensor 27.
  • the image sensor 27 is, for example, a CMOS (Complementary Metal-Oxide Semiconductor) image sensor.
  • the image sensor 27 has a light receiving surface 33.
  • On the light receiving surface 33, a plurality of light receiving elements (not shown) are arranged in a matrix with an aspect ratio of, for example, 3:4.
  • Each light receiving element outputs a value corresponding to the amount of received light.
  • The image sensor 27 generates luminance distribution data composed of the received-light-amount values output from the plurality of light receiving elements.
  • the image sensor 27 generates luminance distribution data at a predetermined cycle.
  • The fisheye lens 22 has a wide viewing angle of, for example, 180 degrees or more, and is of the stereographic projection type. Compared with a general equidistant projection fisheye lens, the stereographic projection fisheye lens 22 causes less distortion in the peripheral portion of the formed image and increases the amount of information in the peripheral portion.
  • As the projection method of the fisheye lens 22, in addition to the stereographic projection method and the equidistant projection method, an equisolid angle projection method, an orthographic projection method, and the like can be adopted.
  • However, when the 360-degree video camera device 1 is placed on a table in a conference room with the fisheye lens 22 facing upward, the amount of information about subjects such as people or a blackboard that appear in the peripheral area increases, so the stereographic projection method is best.
  • The fisheye lens 22 is disposed above the light receiving surface 33 of the image sensor 27. Thus, a circular image formed by the fisheye lens 22 (hereinafter referred to as a circular image) is formed on the light receiving surface 33 of the image sensor 27.
  • The image sensor 27 periodically generates luminance distribution data and outputs it to the FPGA 28.
  • In the FPGA 28, a color conversion processing unit 36 is realized as a part of the imaging means.
  • the color conversion processing unit 36 replaces the data of each pixel of the luminance distribution data using a color conversion table (not shown).
  • The color conversion processing unit 36 replaces, for example, the data of each pixel in the circular image with predetermined color data, using that pixel's value and the values of its peripheral pixels in the luminance distribution data. Thereby, captured image data having appropriate color data is generated.
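The patent does not specify the color conversion algorithm. As a stand-in, the sketch below shows the general shape of a conversion that uses a pixel's value together with its peripheral pixel values: here, simply a 3×3 neighborhood average over a luminance grid. The function name and grid layout are illustrative.

```python
def neighbor_average(lum, x, y):
    """Average of the 3x3 neighborhood of (x, y) in the luminance grid
    `lum` (a list of rows), clipped at the image edges. A stand-in for a
    color conversion that combines a pixel with its peripheral pixels."""
    h, w = len(lum), len(lum[0])
    vals = [lum[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)
```

A real color conversion table would additionally map the combined value to color data (e.g. demosaicing a color-filter mosaic), which the patent leaves unspecified.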
  • FIG. 4 is a diagram illustrating an example of a captured image that the color conversion processing unit 36 generates based on the luminance distribution data of the image sensor 27. As shown in FIG. 4, the captured image has a circular image at the center. An image of the subject is formed inside the circular image. The color conversion processing unit 36 outputs the generated captured image data to the DSP 29.
  • the microphone 26 generates a waveform signal corresponding to the sound.
  • the waveform signal is converted into an audio signal by the audio IC 31 and supplied to the audio output connector 25.
  • a speaker unit or headphones can be connected to the audio output connector 25. Audio can be heard through the speaker unit or headphones connected to the audio output connector 25.
  • An audio storage processing unit 37 is realized in the audio IC 31. The audio storage processing unit 37 samples the waveform signal supplied from the microphone 26 and stores the audio data 38 generated by the sampling in the external memory 30.
  • the DSP 29 has an EEPROM (Electrically Erasable Programmable Read-Only Memory) 41 as a memory.
  • the EEPROM 41 stores inner circle boundary data 42, outer circle boundary data 43, an incident angle image height table 44, an image height correction table 45, a direction table 46, and the like.
  • FIG. 5 is an explanatory diagram showing an image in which a circular mark 47 indicating the position of the inner circle boundary and a circular mark 48 indicating the position of the outer circle boundary are superimposed on the captured image.
  • the inner circle boundary and the outer circle boundary are boundaries that specify an image range to be expanded as a panoramic expansion image.
  • the image portion between the inner circle boundary and the outer circle boundary is developed as a panoramic image.
  • The inner circle boundary data 42 is data indicating the position and size of the inner circle boundary in the captured image.
  • the outer circle boundary data 43 is data indicating the position and size of the outer circle boundary in the captured image.
  • FIG. 6 is an image height (field angle difference) characteristic diagram of the stereographic projection fisheye lens 22.
  • the horizontal axis is the relative field angle when the optical axis direction of the fish-eye lens 22 is 0 degrees
  • the vertical axis is the image height (field angle difference).
  • the image height (field angle difference) of the subject existing in the direction of the angle of view of the fisheye lens 22 is about 0.1.
  • The image height (field angle difference) of a subject existing in the direction of 90 degrees is about 0.2.
  • the incident angle image height table 44 is obtained by converting the image height (view angle difference) characteristics with respect to the angle of view of the fisheye lens 22 into data.
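A lookup into a table like the incident angle image height table 44 might be realized by linear interpolation between sampled points. The sample values below are illustrative, not taken from FIG. 6.

```python
import bisect

angles = [0.0, 30.0, 60.0, 90.0]   # incident angle (degrees); illustrative
heights = [0.0, 0.54, 1.15, 2.0]   # normalized image height; illustrative

def image_height(angle: float) -> float:
    """Linearly interpolate the image height for an incident angle,
    clamping outside the tabulated range."""
    i = bisect.bisect_right(angles, angle)
    if i == 0:
        return heights[0]
    if i == len(angles):
        return heights[-1]
    a0, a1 = angles[i - 1], angles[i]
    h0, h1 = heights[i - 1], heights[i]
    return h0 + (h1 - h0) * (angle - a0) / (a1 - a0)
```

The image height correction table 45 could then be derived by comparing this curve against the image height that would give the subject a natural balance.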
  • The stereographic projection fisheye lens has a larger amount of peripheral information than a conventional equidistant projection fisheye lens, so blind spots are reduced and the image is less distorted. The distortion is reduced, however, not to zero.
  • The image height correction table 45 stores correction data for obtaining, from the image formed on the image sensor 27 by the stereographic projection fisheye lens 22, a developed image in which the subject appears with a natural balance.
  • The DSP 29 has a CPU (not shown). This CPU executes a program, so that in the DSP 29 a still image storage processing unit 51, a developed still image generation unit 52 as the panorama developed image generation means, a compressed still image generation unit 53, a streaming generation unit 54 as display data generation means, a process management unit 55 as the user update means, a communication processing unit 56, and the like are realized.
  • External memory 30 is connected to the DSP 29.
  • The external memory 30 consists of storage devices such as SRAM (Static RAM) and DDR-SDRAM (Double Data Rate SDRAM), and is accessible from the CPU of the DSP 29.
  • the external memory 30 stores still image data 39, audio data 38, two compressed and expanded still image data 61 and 62, and the like.
  • the external memory 30 has a first VRAM (VideoRAM) area 63 and a second VRAM area 64.
  • One of the two compressed decompressed still image data 61 and 62 is stored in the first VRAM area 63, and the other is stored in the second VRAM area 64.
  • The first VRAM area 63 stores the compressed expanded still image data 61, and the second VRAM area 64 stores the compressed expanded still image data 62.
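The alternating use of the first and second VRAM areas is a classic double-buffering scheme, which can be sketched as follows; the class and names are illustrative, not from the patent.

```python
class DoubleBuffer:
    """Two storage areas used alternately: each newly developed image goes
    into the area that is not holding the most recently completed image."""

    def __init__(self):
        self.areas = [None, None]   # first and second VRAM areas
        self.write_index = 0

    def store(self, developed_image):
        """Store into one area, then flip so the next store uses the other."""
        self.areas[self.write_index] = developed_image
        self.write_index ^= 1

    def latest(self):
        """The most recently completed image (the area not being written)."""
        return self.areas[self.write_index ^ 1]

buf = DoubleBuffer()
buf.store("frame-1")
buf.store("frame-2")
```

This lets the streaming generation unit read a complete image from one area while the developed still image generation unit writes the next image into the other.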
  • The still image storage processing unit 51 implemented in the DSP 29 stores the captured image data in the external memory 30 as still image data 39.
  • The developed still image generation unit 52 generates developed still image data from the still image data 39 stored in the external memory 30. Developed still image data is image data obtained by expanding the circular image in the captured image into a 360-degree panorama developed image. The developed still image generation unit 52 generates and stores developed still image data alternately in the first VRAM area 63 and the second VRAM area 64.
  • the compressed still image generation unit 53 compresses the expanded still image data stored in the first VRAM area 63 or the second VRAM area 64.
  • An image compression method such as the JPEG (Joint Photographic Experts Group) compression method or the MPEG (Moving Picture Experts Group) compression method may be used.
  • the compressed expanded still image data 61 is stored in the first VRAM area 63
  • the compressed expanded still image data 62 is stored in the second VRAM area 64.
  • the streaming generation unit 54 reads the compressed and expanded still image data 61 and 62 and the audio data 38 from the external memory 30, and generates streaming data having these content data.
  • A streaming format such as MPEG or DivX may be adopted.
  • the communication processing unit 56 controls data communication using the USB connector 23.
  • the process management unit 55 manages the execution of the still image storage processing unit 51, the developed still image generation unit 52, the compressed still image generation unit 53, the streaming generation unit 54, the communication processing unit 56, and the like.
  • the process management unit 55 instructs the still image storage processing unit 51, the developed still image generation unit 52, the compressed still image generation unit 53, the streaming generation unit 54, the communication processing unit 56, and the like to start or stop them.
  • the video encoder 32 reads the compressed decompressed still image data 61 and 62 from the external memory 30, and generates a video signal. Examples of video signals include NTSC (National TV Standards Committee) and PAL (Phase Alternating Line). The video encoder 32 outputs the generated video signal to the video output connector 24. A television receiver or the like can be connected to the video output connector 24. With a television receiver connected to the video output connector 24, the video signal can be reproduced and the video can be viewed.
  • When power is supplied to the 360-degree video camera device 1 from, for example, the USB cable 2, the DSP 29 and the like operate, and in the 360-degree video camera device 1 the color conversion processing unit 36, the audio storage processing unit 37, the still image storage processing unit 51, the developed still image generation unit 52, the compressed still image generation unit 53, the streaming generation unit 54, the process management unit 55, and the like are realized.
  • the process management unit 55 instructs the still image storage processing unit 51, the developed still image generation unit 52, the compressed still image generation unit 53, and the streaming generation unit 54 to start them.
  • On the image sensor 27 of the 360-degree video camera device 1, an image formed by the light collected by the fisheye lens 22 is formed.
  • the image sensor 27 generates luminance distribution data including the luminance distribution of the circular image.
  • the color conversion processing unit 36 generates captured image data having a circular image as illustrated in FIG. 4 from the luminance distribution data using a color conversion table (not shown).
  • The still image storage processing unit 51 saves the captured image data in the external memory 30 as still image data 39.
  • the image sensor 27 periodically generates luminance distribution data. Therefore, the still image data 39 in the external memory 30 is updated to new captured image data every predetermined period.
  • When the still image data 39 in the external memory 30 is updated, the developed still image generation unit 52 generates developed still image data from the updated still image data 39.
  • The developed still image generation unit 52 first reads the inner circle boundary data 42 and the outer circle boundary data 43 from the EEPROM 41, and determines the ring-shaped range used to generate the panorama developed image in the captured image based on the updated still image data 39.
  • the developed still image generation unit 52 determines two division ranges obtained by equally dividing the ring-shaped image range into two at 180 degrees. In FIG. 5, dotted lines 71 and 72 for horizontally dividing the ring-shaped image range into two are shown.
  • The developed still image generation unit 52 divides the ring-shaped image between the outer circle boundary and the inner circle boundary into two parts of 180 degrees each with these two dotted lines 71 and 72, and executes the divided developed image generation process for each of the two upper and lower divided image ranges 73 and 74.
  • FIG. 7 is a flowchart showing the flow of a divided expanded image generation process by the expanded still image generation unit 52 for one divided image range 73 (74).
  • The developed still image generation unit 52 first confirms whether or not the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 have been updated since the divided developed image was last generated (step ST1).
  • The developed still image generation unit 52 calculates the direction within the divided image range 73 (74) of each developed pixel column of the divided developed image, and generates the direction table 46 (step ST2).
  • FIG. 8 is a diagram illustrating an example of an array of a plurality of display pixels of the divided development image.
  • In this example, a plurality of display pixels are arranged in 3 rows × 4 columns in the divided developed image.
  • The developed still image generation unit 52 calculates the direction in the ring-shaped image for each of the four developed pixel columns (1, Z) 81, (2, Z) 82, (3, Z) 83, and (4, Z) 84.
  • FIG. 9 is an explanatory diagram illustrating the direction in the divided image range 73 (74) of each of the developed pixel columns 81, 82, 83, and 84 in FIG. 8.
  • The developed still image generation unit 52 calculates the angle of the developed pixel column (2, Z) 82 as "θ1 + equal division angle × 2".
  • The developed still image generation unit 52 calculates the angle of the fourth developed pixel column (4, Z) 84 as "θ1 + equal division angle × 3".
  • the developed still image generating unit 52 calculates the direction of each of the developed pixel columns 81, 82, 83, and 84 as described above.
  • the developed still image generating unit 52 generates a direction table 46 in which each direction is associated with a plurality of developed pixel columns 81, 82, 83, and 84.
  • the developed still image generation unit 52 stores the generated direction table 46 in the EEPROM 41.
  • the direction table 46 may be stored in the external memory 30 or the like.
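Generation of the direction table 46 can be sketched as below. The patent's "θ1 + equal division angle × n" convention is not fully specified in this excerpt, so this is an illustrative variant assuming the columns evenly divide one 180-degree half starting at an angle θ1; the function name and defaults are invented.

```python
def build_direction_table(num_columns: int, start_deg: float = 0.0,
                          span_deg: float = 180.0) -> list:
    """Angle (degrees) in the ring-shaped image for each developed pixel
    column of one divided image range. The equal division angle is the
    half-range span divided by the number of columns."""
    step = span_deg / num_columns
    return [start_deg + step * i for i in range(num_columns)]

# For the 4-column example of FIG. 8, the columns point at four equally
# spaced directions inside one 180-degree half:
table = build_direction_table(4)
```

Each entry is then associated with its developed pixel column, as the direction table 46 associates directions with the columns 81, 82, 83, and 84.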
  • next, the expanded still image generating unit 52 specifies the first display pixel of the divided expanded image (step ST3), for example the upper left display pixel ((1, 1) in FIG. 8).
  • the developed still image generation unit 52 specifies the pixel in the captured image corresponding to the specified display pixel, acquires the pixel value of that pixel in the captured image, and stores the acquired pixel value as the pixel value of the specified display pixel.
  • the developed still image generation unit 52 first refers to the image height correction table 45 and calculates, for the specified display pixel, the distance R from the center of the original image (circular image) (step ST4). For example, referring to the image height correction table 45, the developed still image generation unit 52 associates one developed pixel column of the divided developed image with a pixel array in the radial direction within the ring-shaped range of the captured image, and calculates the distance R from the center of the donut at the position of the original image (circular image) corresponding to the specific display pixel. Next, the developed still image generation unit 52 calculates the abscissa of the specific display pixel in the original image (circular image) from the direction of the specific display pixel stored in the direction table 46 and the calculated distance R (step ST5). The developed still image generation unit 52 likewise calculates the ordinate of the specific display pixel in the original image (circular image) from the direction stored in the direction table 46 and the calculated distance R (step ST6).
  • FIG. 10 is an explanatory diagram of the coordinate position in the original image (circular image) of the specific display pixel.
  • the center (IOX, IOY) of the original image (circular image) is the common center of the inner circle boundary and the outer circle boundary; that is, (IOX, IOY) is the center of the donut bounded by the inner and outer circle boundaries.
  • the coordinate position (IX, IY) of the specific display pixel in the original image (circular image) is calculated by the following equations, where the coordinate position (IX, IY) is referenced to the upper left of the original image (circular image):
  • IX = IOX + R × cos θ ・・・ Equation 1
  • IY = IOY + R × sin θ ・・・ Equation 2
  • when the coordinate position (IX, IY) of the specific display pixel in the original image (circular image) has been calculated, the developed still image generating unit 52 acquires the value of the pixel 91 of the original image (circular image) at that coordinate and stores it as the pixel value of the specified display pixel. The developed still image generation unit 52 stores the pixel value of the specified display pixel in the first VRAM area 63 or the second VRAM area 64 of the external memory 30 (step ST7).
  • in steps ST4 to ST7, the developed still image generation unit 52 determines and stores the pixel value of the one display pixel specified in step ST3. Thereafter, the developed still image generating unit 52 checks whether any display pixels of the divided expanded image still remain without a determined pixel value, and repeats the pixel-value determination process described above until no such display pixels remain.
  • specifically, the developed still image generating unit 52 first determines whether or not pixel values have been determined for one full column of display pixels of the divided expanded image (step ST8). If the column has not been completed (No in step ST8), the developed still image generation unit 52 specifies the display pixel in the next row of the same column (step ST9) and determines its pixel value (steps ST4 to ST7).
  • if one column has been completed (Yes in step ST8), the expanded still image generation unit 52 further determines whether or not pixel values have been determined for all columns of the divided expanded image (step ST10). If the processing has not been completed for all columns (No in step ST10), the developed still image generation unit 52 specifies the display pixel at the upper end (top row) of the next column (step ST11) and then determines the pixel value of that specific display pixel (steps ST4 to ST7).
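Steps ST4 to ST11 above amount to a column-major scan of the divided developed image in which each display pixel is mapped back to a source pixel of the circular image by Equation 1. A hedged sketch of that loop, with two simplifications of our own: the image height correction table 45 is replaced by linear interpolation between the inner and outer radii (the real table is non-linear), and the top row is assumed to lie at the inner circle boundary:

```python
import math

def develop(circular, center, r_inner, r_outer, directions_deg, num_rows):
    """circular: dict mapping (x, y) -> pixel value; center: (IOX, IOY).
    Returns the divided developed image as num_rows x num_columns values."""
    iox, ioy = center
    out = [[None] * len(directions_deg) for _ in range(num_rows)]
    for col, theta_deg in enumerate(directions_deg):       # steps ST10/ST11
        theta = math.radians(theta_deg)
        for row in range(num_rows):                        # steps ST8/ST9
            # step ST4: distance R from the donut center (linear stand-in
            # for the image height correction table 45; top row at r_inner)
            r = r_inner + (r_outer - r_inner) * row / max(num_rows - 1, 1)
            # steps ST5/ST6: Equation 1, origin at the upper left
            ix = int(round(iox + r * math.cos(theta)))
            iy = int(round(ioy + r * math.sin(theta)))
            # step ST7: store the source pixel value as the display pixel
            out[row][col] = circular.get((ix, iy))
    return out
```

With center (100, 100), radii 10 and 20, and a single column at 0 degrees, the two rows sample the circular image at (110, 100) and (120, 100) respectively.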
  • when pixel values have been determined for all columns (Yes in step ST10), the expanded still image generation unit 52 finishes the divided expanded image generation processing of FIG. 7, and the divided expanded image for one divided image range 73 (74) is completed.
  • the developed still image generation unit 52 executes a divided expanded image generation process for the remaining one divided image range 74 (73).
  • two divided expanded images based on the still image data 39 are stored in the first VRAM area 63 or the second VRAM area 64 of the external memory 30.
  • the compressed still image generating unit 53 compresses the two divided expanded images.
  • the compressed and expanded still image data 61 and 62 are stored in the first VRAM area 63 or the second VRAM area 64.
  • the streaming generation unit 54 reads the compressed developed still image data 61 and 62 in order from the first VRAM area 63 and the second VRAM area 64, also reads the audio data 38 from the external memory 30, and generates streaming data containing both as content data. In the streaming data, the two divided expanded images are arranged one above the other.
  • the streaming generation unit 54 supplies the generated streaming data to the communication processing unit 56.
  • the communication processing unit 56 of the 360-degree video camera apparatus 1 transmits streaming data to the communication processing unit 15 of the PC 3 via the USB connector 23, the USB cable 2, and the USB connector 11 of the PC 3.
  • the communication processing unit 15 of the PC 3 supplies the received streaming data to the reproduction processing unit 16.
  • the playback processing unit 16 of the PC 3 extracts the compressed developed still image data from the streaming data, decodes it, and supplies the data of the two divided expanded images to the liquid crystal display device 12 as display data.
  • the liquid crystal display device 12 displays two divided expanded images.
  • the playback processing unit 16 of the PC 3 extracts the audio data from the streaming data and supplies it to the speaker 13.
  • the speaker 13 outputs a sound based on the extracted audio data.
  • FIG. 11 is a display example of a 360-degree panoramic developed image displayed on the liquid crystal display device 12 of the PC 3 by imaging.
  • a 360-degree image expanded by imaging is displayed by two divided expanded screens 101 and 102 arranged vertically. Since the two divided development screens 101 and 102 are arranged one above the other, the display screen of the liquid crystal display device 12 can be used more effectively than when these are one development image, for example.
  • a 360-degree panoramic image can be displayed large on the liquid crystal display device 12.
  • the two split development screens 101 and 102 in FIG. 11 display the two divided developed images, one per 180 degrees, obtained by dividing the ring-shaped image range of FIG. 5 in two along the dotted lines 71 and 72. The two split developed images are continuous with each other at both ends, and a 360-degree panoramic image is displayed seamlessly as a whole. With this display, the user of the PC 3 can view the 360-degree video image developed by the 360-degree video camera device 1. Further, the user of the PC 3 can hear the sound acquired by the 360-degree video camera device 1 through the sound emitted from the speaker 13.
  • the image sensor 27 of the 360-degree video camera device 1 generates brightness distribution data for each period.
  • the color conversion processing unit 36 generates captured image data from the luminance distribution data.
  • the still image storage processing unit 51 updates the still image data 39 in the external memory 30 with the captured image data generated every period.
  • the developed still image generation unit 52 reads still image data 39 from the external memory 30, and alternately generates two divided expanded image data based on the read still image data 39 in the first VRAM area 63 and the second VRAM area 64.
  • the streaming generation unit 54 reads the compressed data of the two divided expanded images written in the first VRAM area 63 or the second VRAM area 64, and generates streaming data.
  • while the streaming generation unit 54 is reading the data of the two compressed divided expanded images from one of the first VRAM area 63 and the second VRAM area 64, the developed still image generation unit 52 can generate two divided expanded images in the other VRAM area, and the compressed still image generating unit 53 can compress the two divided expanded images in that other VRAM area.
  • therefore, the streaming generation unit 54 can place the two divided development images based on the luminance distribution data generated by the image sensor 27 every period into the streaming data without image omission (dropped frames). Accordingly, the moving image based on the luminance distribution data periodically generated by the image sensor 27 is displayed on the liquid crystal display device 12 of the PC 3.
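The alternation between the first VRAM area 63 and the second VRAM area 64 described above is a classic ping-pong (double) buffer: while the streaming side reads one area, the generation side fills the other, so no frame is dropped. An illustrative sketch (the class and method names are ours, not from the patent):

```python
class PingPongBuffer:
    """Two buffers: the writer fills one while the reader drains the other."""
    def __init__(self):
        self.buffers = [None, None]   # stand-ins for VRAM areas 63 and 64
        self.write_index = 0

    def write(self, frame):
        self.buffers[self.write_index] = frame

    def swap(self):
        # After a frame is complete, the roles of the two areas exchange.
        self.write_index ^= 1

    def read(self):
        # The reader always sees the most recently completed buffer.
        return self.buffers[self.write_index ^ 1]

buf = PingPongBuffer()
buf.write("frame-1"); buf.swap()   # frame-1 complete; writer moves to the other area
buf.write("frame-2")
assert buf.read() == "frame-1"     # streaming reads one area while the other is written
buf.swap()
assert buf.read() == "frame-2"
```

The swap corresponds to the moment a pair of compressed divided expanded images is finished in one VRAM area and the roles of the two areas exchange.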
  • the command generation unit 17 of the PC 3 generates an auto pan instruction command based on the input data input from the input device 14.
  • the communication processing unit 15 of the PC 3 transmits the auto pan instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2.
  • the communication processing unit 56 of the 360-degree video camera device 1 supplies the received autopan command to the processing management unit 55.
  • when the auto pan instruction command is supplied, the process management unit 55 periodically updates the direction table 46 stored in the EEPROM 41. Specifically, the process management unit 55 periodically repeats the process of adding a predetermined constant angle to the direction data of each developed pixel column in the direction table 46.
  • as a result, in the flowchart of FIG. 7, the direction of the specific display pixel increases periodically by a predetermined constant angle. That is, the angle θ1 of the first developed pixel column (1, Z) 81 in FIG. 9 increases periodically by a predetermined constant angle.
  • the direction of the specific display pixel used in steps ST5 and ST6 in FIG. 7 also periodically increases by a predetermined constant angle.
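Auto pan, as described, is just a periodic addition of a constant angle to every entry of the direction table 46, wrapping around 360 degrees so that the developed view keeps rotating. A minimal sketch (the function name and step size are illustrative):

```python
def autopan_step(direction_table_deg, pan_step_deg):
    """Add a predetermined constant angle to every developed pixel
    column direction in the table, wrapping at 360 degrees."""
    return [(d + pan_step_deg) % 360.0 for d in direction_table_deg]

table = [0.0, 45.0, 90.0, 135.0]
table = autopan_step(table, 10.0)   # one update period later
# -> [10.0, 55.0, 100.0, 145.0]
```

Calling this once per period reproduces the behavior of the process management unit 55 while an auto pan instruction command is active.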
  • the command generation unit 17 of the PC 3 generates a change mode start instruction command based on the input data input from the input device 14.
  • the communication processing unit 15 of the PC 3 transmits a change mode start instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2.
  • the communication processing unit 56 of the 360-degree video camera device 1 supplies the received change mode start instruction command to the processing management unit 55.
  • FIG. 12 is a flowchart showing the operation of the process management unit 55 in the change mode.
  • when the change mode start instruction command is supplied (step ST21), the process management unit 55 supplies a boundary display instruction to the stream generation unit 54 (step ST22). After that, the process management unit 55 waits to receive a boundary change instruction command or a change mode end instruction command (steps ST23 and ST24).
  • the streaming generation unit 54 to which the boundary display instruction is supplied reads the still image data 39 from the external memory 30. Each time it reads the still image data 39, the streaming generation unit 54 then reads the inner circle boundary data 42 and the outer circle boundary data 43 from the EEPROM 41. As shown in FIG. 5, the streaming generation unit 54 generates a display image in which the inner circle boundary mark 47 and the outer circle boundary mark 48 are superimposed on the captured image of the still image data 39. Note that the streaming generation unit 54 may instead read the inner circle boundary data 42 and the outer circle boundary data 43 only once for every several reads of the still image data 39.
  • the streaming generation unit 54 generates streaming data including the generated display image data, and supplies the streaming data to the communication processing unit 56.
  • the communication processing unit 56 of the 360-degree video camera device 1 transmits the streaming data to the communication processing unit 15 of the PC 3 via the USB cable 2.
  • the communication processing unit 15 of the PC 3 supplies the received streaming data to the reproduction processing unit 16.
  • the reproduction processing unit 16 of the PC 3 extracts display image data from the streaming data and supplies it to the liquid crystal display device 12 as display data.
  • the liquid crystal display device 12 displays a captured image in which a circular mark 47 on the inner circle boundary and a circular mark 48 on the outer circle boundary are superimposed.
  • the command generation unit 17 of the PC 3 generates, based on a user operation on the input device 14, a boundary change instruction command for moving the positions of the inner circle boundary and the outer circle boundary and for enlarging or reducing their radii.
  • the communication processing unit 15 of the PC 3 transmits a boundary change instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2.
  • the communication processing unit 56 of the 360 degree video camera apparatus 1 supplies the received boundary change instruction command to the processing management unit 55.
  • the process management unit 55 first determines whether the instruction is appropriate (step ST25). Specifically, the process management unit 55 determines whether, for example, the radius of the inner circle boundary after the change or the radius of the outer circle boundary after the change is equal to or larger than a predetermined radius (for example, 144 pixels).
  • if it is determined that the instruction is appropriate (Yes in step ST25), the process management unit 55 updates the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 in accordance with the change instruction (step ST26). Further, the process management unit 55 updates the image height correction table 45 stored in the EEPROM 41 based on the new combination of the inner circle boundary and the outer circle boundary (step ST27).
  • the subject is imaged at a different size depending on the angle of view. Therefore, as described with reference to FIG. 6, even if the update leaves the distance between the inner circle boundary and the outer circle boundary unchanged and merely moves their positions, the division ratio of the plurality of pixels in the imaged space between them changes.
  • the processing management unit 55 updates the image height correction table 45 in accordance with the change in the division ratio of the plurality of pixels accompanying the change between the inner circle boundary and the outer circle boundary. Thereafter, the process management unit 55 returns to the reception waiting state for the boundary change instruction command and the change mode end instruction command (steps ST23 and ST24).
  • if the instruction is not appropriate (No in step ST25), for example if the changed radius of the inner circle boundary is smaller than the predetermined radius, the process management unit 55 ignores the change instruction and returns to the waiting state for receiving a boundary change instruction command or a change mode end instruction command (steps ST23 and ST24).
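The validity check of step ST25 can be expressed as a simple guard: a boundary change is accepted only if the changed radii stay at or above the minimum (144 pixels in this embodiment); otherwise it is ignored. A sketch with illustrative names; the check that the inner circle stays strictly inside the outer circle is an extra assumption of ours, not stated in the patent:

```python
MIN_RADIUS_PX = 144  # predetermined minimum radius from this embodiment

def apply_boundary_change(inner_r, outer_r, new_inner_r, new_outer_r):
    """Step ST25: accept the change only if both new radii are valid."""
    if new_inner_r < MIN_RADIUS_PX or new_outer_r < MIN_RADIUS_PX:
        return inner_r, outer_r          # No in ST25: ignore the instruction
    if new_inner_r >= new_outer_r:
        return inner_r, outer_r          # degenerate ring (our assumption): ignore
    return new_inner_r, new_outer_r      # Yes in ST25: update (step ST26)

assert apply_boundary_change(150, 300, 100, 300) == (150, 300)  # too small: ignored
assert apply_boundary_change(150, 300, 160, 320) == (160, 320)  # accepted
```

On acceptance, the caller would then rebuild the image height correction table 45 (step ST27).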
  • the streaming generation unit 54 to which the boundary display instruction is supplied reads the inner circle boundary data 42 and the outer circle boundary data 43 from the EEPROM 41 each time the still image data 39 is read from the external memory 30. Therefore, when the process management unit 55 updates the inner circle boundary data 42 and the outer circle boundary data 43, the streaming generation unit 54 also updates the position and size of the overlap of the circular marks 47 and 48 with respect to the captured image. The positions and sizes of the circular marks 47 and 48 displayed on the liquid crystal display device 12 so as to overlap the captured image are changed according to the boundary change instruction command. The user can know that the inner circle boundary data 42 and the outer circle boundary data 43 have been updated by the display on the liquid crystal display device 12.
  • the command generation unit 17 of the PC 3 generates a change mode end instruction command based on a user operation on the input device 14.
  • the communication processing unit 15 of the PC 3 transmits a change mode end instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2.
  • the communication processing unit 56 of the 360-degree video camera apparatus 1 supplies the received change mode end instruction command to the process management unit 55.
  • the process management unit 55 then ends the change mode processing of FIG. 12.
  • as a result, the inner circle boundary data 42, the outer circle boundary data 43, and the image height correction table 45 are replaced with new ones, and the developed still image generation unit 52 of the 360-degree video camera device 1 generates the 360-degree panorama developed image using this changed data.
  • a circular image by the fisheye lens 22 is formed on the image sensor 27 of the 360-degree video camera device 1.
  • the still image storage processing unit 51 generates captured image data having a circular image based on the luminance distribution data of the image sensor 27. This circular image includes the subject's head without being cut off.
  • the developed still image generating unit 52 generates a 360-degree panorama developed image as two divided developed images, based on the image within the ring-shaped range sandwiched between the inner circle boundary and the outer circle boundary in the captured image. The position and size of the inner circle boundary and the outer circle boundary can be set freely by the user through the processing management unit 55. By having the process management unit 55 update the position or size of the inner circle boundary or the outer circle boundary, the subject in the circular image can be adjusted to fall within the ring-shaped range, so that a 360-degree panorama developed image can be generated with no part of the subject, such as the head, cut off.
  • the display screen of the liquid crystal display device 12 can be used more effectively than if, for example, a 360-degree panoramic development image is a single development image.
  • the liquid crystal display device 12 can therefore display the 360-degree panoramic image at a large size.
  • the fisheye lens 22 is of the stereographic projection type. Compared with the case of using an ordinary equidistant projection fisheye lens, the stereographic projection fisheye lens 22 produces less distortion at the periphery of the image it forms, and the amount of information at the periphery increases.
  • the EEPROM 41 also stores an image height correction table 45 that corrects the balance of the imaging size of the subject obtained based on the relationship between the incident angle and the image height of this stereographic projection fisheye lens 22.
  • the developed still image generation unit 52 refers to the image height correction table 45 and generates a 360-degree panoramic developed image in which the balance of the imaging size of the subject is adjusted.
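The image height correction compensates for the fisheye's projection geometry. The patent does not give the formulas, but a stereographic-projection fisheye is commonly modeled as r = 2f·tan(θ/2), versus r = f·θ for an equidistant one; comparing the two shows why the stereographic lens carries more information at the periphery:

```python
import math

def image_height_stereographic(theta, f):
    """Stereographic projection model (an assumption): r = 2 f tan(theta / 2)."""
    return 2.0 * f * math.tan(theta / 2.0)

def image_height_equidistant(theta, f):
    """Equidistant projection model: r = f * theta (theta in radians)."""
    return f * theta

# At a peripheral incident angle (theta = 90 degrees) the stereographic
# fisheye spreads the scene over more image height than an equidistant
# lens with the same focal length, so peripheral detail is better resolved.
f = 1.0
theta = math.pi / 2
assert image_height_stereographic(theta, f) > image_height_equidistant(theta, f)
```

An image height correction table like table 45 could be derived by inverting such a model, mapping each radius within the ring back to an incident angle so that subject sizes stay balanced.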
  • the image height correction table 45 stored in the EEPROM 41 is updated each time the inner circle boundary data 42 or the outer circle boundary data 43 is updated. Even if the positions of the inner circle boundary and the outer circle boundary merely move, the division ratio of the plurality of pixels in the image space between them changes, and the image height correction table 45 is kept consistent with these changes.
  • the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 are updated only when the changed size has a radius of 144 pixels or more. For example, if the radius of the inner circle boundary were allowed to become 1 pixel, the same pixel in the captured image would be used many times to obtain the pixel values of the top-row display pixels of the 360-degree panorama developed image. In this case, the load of generating the 360-degree panorama developed image increases without any improvement in its image quality. By preventing the radius of the inner circle boundary or the outer circle boundary from becoming smaller than 144 pixels, as in this embodiment, such disadvantages can be avoided.
  • the developed still image generation unit 52 calculates, for each display pixel of the 360-degree panorama developed image, the distance R from the center of the circular image and the direction, uses the calculated distance R and direction to calculate the coordinates (IX, IY) of that display pixel in the captured image, and acquires the pixel value of the pixel 91 located at each calculated coordinate (IX, IY) as the pixel value of that display pixel.
  • this embodiment is an example of a preferred embodiment of the present invention.
  • in the embodiment described above, the fisheye lens 22 is of the stereographic projection type, but a general equidistant projection, equisolid-angle projection, or orthographic projection fisheye lens, for example, may be used as the fisheye lens 22.
  • the developed still image generating unit 52 generates two divided expanded images every 180 degrees as a 360-degree panoramic expanded image.
  • however, the developed still image generating unit 52 may generate the 360-degree panoramic expanded image as a single image, or may divide it into three or four images.
  • the process management unit 55 updates both the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 based on a user operation.
  • the process management unit 55 may update only one of the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 based on a user operation.
  • the PC 3 is connected to the 360 degree video camera apparatus 1 by the USB cable 2.
  • the 360 degree video camera apparatus 1 may be connected to an input board for outputting a command, for example, via a USB cable 2 or the like.
  • the PC 3 connected to the 360-degree video camera device 1 displays a moving image based on a 360-degree panoramic image on the liquid crystal display device 12.
  • the PC 3 may also store the 360-degree panoramic moving image in a storage device (not shown).
  • the present invention can be widely used as a panorama developed image capturing apparatus, a panorama expanded image capturing system, and the like for capturing an image of a meeting.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)

Abstract

A 360-degree panorama image of an object is photographed without its discontinuity. A panorama image photographing device (1) generates a photographed 360-degree spread panorama image. The panorama image photographing device (1) is comprised of a memory (41) for storing positions and sizes of an inner circle boundary (42) and an outer circle boundary (43) to define a ring shaped region used for generating the 360-degree spread panorama image in a photographed image generated by photographing means (27, 36) based on an image formed by a fish-eye lens (22), a user updating means for updating the position or size of at least either the inner circle boundary (42) or the outer circle boundary (43) stored in the memory (41) in accordance with user's operations, and a panorama image generating means (52) for generating the 360-degree spread panorama image based on the image in the ring shaped region put between the inner circle boundary (42) and the outer circle boundary (43) stored in the memory (41).

Description

Specification

Panorama developed image imaging apparatus and panorama developed image imaging system

Technical Field

[0001] The present invention relates to a panorama developed image imaging apparatus and a panorama developed image imaging system.

Background Art

[0002] Patent Document 1 discloses an omnidirectional photographing apparatus. The omnidirectional photographing apparatus develops and displays, or records, video captured by a single camera module having a lens capable of directly capturing a 360-degree annular image.

[0003] Patent Document 1: Japanese Patent Application Laid-Open No. 2006-148787 (abstract, detailed description of the invention, drawings; in particular, paragraphs 0024 and 0049, FIGS. 8 and 9, etc.)

Disclosure of the Invention

Problems to Be Solved by the Invention
[0004] In Patent Document 1, the lens that directly captures a 360-degree annular image is a special convex glass lens. Specifically, it has a zenith reflecting surface, a substantially annular lens surface formed so as to surround the zenith reflecting surface, a substantially annular back-side reflecting surface disposed below the lens surface, and a transmission surface surrounded by the back-side reflecting surface. Patent Document 1 also suggests that a fisheye lens can be used in place of this special convex glass lens.

[0005] However, in the omnidirectional photographing apparatus of Patent Document 1, the image height that can be captured in the 360-degree annular image is limited. In this annular image, only an image within the range of view angles extending from the incident light that is refracted by the lower part of the lens surface, reflected at the outer peripheral part of the back-side reflecting surface, and further reflected by the outer peripheral part of the zenith reflecting surface, to the incident light that passes through the upper part of the lens surface, is reflected at the inner peripheral part of the back-side reflecting surface, and is further reflected by the inner peripheral part of the zenith reflecting surface, can be obtained. At the center of the annular image, the zenith reflecting surface is photographed as a circle. Since the omnidirectional photographing apparatus of Patent Document 1 uses such a special convex glass lens, the image height that can be captured is restricted.

[0006] Because of this restriction on image height, the omnidirectional photographing apparatus of Patent Document 1 is, in practice, limited in the distance between the glass lens and the subject. For example, if the subject comes too close to the glass lens, the subject's head or the like will be cut off in the 360-degree annular image. To prevent the subject's head or the like from being cut off, a sufficient distance between the glass lens and the subject must be secured.

[0007] In particular, when the omnidirectional photographing apparatus is placed at the center of a table to capture a conference scene or the like, this practical distance limitation greatly impairs the convenience of the apparatus. For example, in a small conference room it may be impossible to secure a sufficient distance between the glass lens and the subjects, so that shooting without the subjects' heads or the like being cut off may become impossible.

[0008] An object of the present invention is to provide a panorama developed image imaging apparatus and a panorama developed image imaging system capable of capturing a 360-degree panorama developed image without the subject being cut off.
Means for Solving the Problem
[0009] The panorama developed image imaging apparatus according to the present invention generates a 360-degree panorama developed image by imaging. The panorama developed image imaging apparatus has: a fisheye lens; imaging means that has a light-receiving surface on which a plurality of light-receiving elements are arranged and on which a circular image is formed by the fisheye lens, and that generates data of a captured image containing the circular image; a memory that stores the positions or sizes of an inner circle boundary and an outer circle boundary defining the ring-shaped range, in the captured image, used for generating the 360-degree panorama developed image; user updating means for updating the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory, based on a user operation; and panorama developed image generating means for generating the 360-degree panorama developed image based on the image within the ring-shaped range sandwiched between the inner circle boundary and the outer circle boundary stored in the memory, in the captured image generated by the imaging means.

[0010] With this configuration, a circular image formed by the fisheye lens is formed on the imaging means. The imaging means generates a captured image containing the circular image. The subject is contained in the circular image without its head or the like being cut off.

[0011] The panorama developed image generating means generates the 360-degree panorama developed image based on the image within the ring-shaped range sandwiched between the inner circle boundary and the outer circle boundary in this captured image. The positions and sizes of the inner circle boundary and the outer circle boundary can be updated by the user updating means based on a user operation. Therefore, even when the distance between the fisheye lens and the subject cannot be physically secured, the user can, by operation, update the position of the inner circle boundary or the outer circle boundary so that the subject in the circular image falls within the ring-shaped range. As a result, a 360-degree panorama developed image can be generated without the subject's head or the like being cut off.
[0012] In addition to the configuration described above, the panoramic unfolded image capturing apparatus according to the present invention has the following features. The fisheye lens uses stereographic projection. The memory stores an image height correction table, derived from the relationship between the angle of incidence and the image height of the stereographic fisheye lens, for correcting the balance of the imaged sizes of subjects. The panoramic unfolded image generating means refers to the image height correction table and generates a 360-degree panoramic unfolded image in which the imaged sizes of subjects are balanced.
[0013] With this configuration, compared with the common equidistant-projection fisheye lens, the image formed by the fisheye lens has less distortion in its peripheral region, and the peripheral region carries more information. Therefore, when the panoramic unfolded image capturing apparatus is placed on a table with its fisheye lens facing upward, images of the people and blackboards around it can be obtained at high resolution; characters on a blackboard, for example, can be reproduced in the 360-degree panoramic unfolded image. Moreover, the images of the people and blackboards around the fisheye lens are obtained in well-balanced proportions.
[0014] In addition to the configurations described above, the panoramic unfolded image capturing apparatus according to the present invention has the following feature. After updating the position or size of at least one of the inner and outer circle boundaries stored in the memory, the user updating means updates the image height correction table stored in the memory, based on the updated inner and outer circle boundaries and on the relationship between the angle of incidence and the image height of the stereographic fisheye lens.
[0015] When an update changes the spacing between the inner circle boundary and the outer circle boundary, the way the image space between them is divided among the display pixels changes. The division also changes even when the positions of the boundaries merely move without their spacing changing. With this configuration, the image height correction table can be kept consistent with the current boundaries.
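The regeneration described in [0014]/[0015] can be illustrated with a minimal sketch. The patent does not give a formula for the table, so this assumes the standard stereographic relation r = 2f·tan(θ/2) and that the ring spans elevation angles from the zenith (inner boundary) to the horizon (outer boundary); the function name and parameters are hypothetical:

```python
import math

def rebuild_height_table(r_inner, r_outer, rows, f=1.0):
    """Hypothetical rebuild of the image height correction table.

    For each output row between the inner and outer circle boundaries,
    find the source radius in the fisheye image at which that row's
    elevation angle lands under stereographic projection r = 2f*tan(theta/2).
    r_inner / r_outer are radii in pixels; rows is the panorama height.
    """
    theta_outer = math.pi / 2  # assumed: horizon at the outer boundary
    # Scale so that theta_outer maps exactly onto r_outer.
    scale = (r_outer - r_inner) / (2 * f * math.tan(theta_outer / 2))
    table = []
    for row in range(rows):
        # Equal angular steps per output row (an assumption, not stated
        # in the patent), so subjects keep balanced proportions.
        theta = theta_outer * row / (rows - 1)
        table.append(r_inner + scale * 2 * f * math.tan(theta / 2))
    return table
```

Rebuilding the table whenever a boundary changes, as [0014] requires, keeps the row-to-radius mapping consistent with the new ring.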
[0016] In addition to the configurations described above, the panoramic unfolded image capturing apparatus according to the present invention has the following feature. The user updating means updates the inner or outer circle boundary stored in the memory in response to a boundary-change instruction only when the radius of the boundary after the change is at least a predetermined number of pixels.
[0017] If, for example, the radius of the inner circle boundary were reduced to one pixel, the same pixel of the captured image would be sampled many times to obtain the pixel values of the topmost row of display pixels of the 360-degree panoramic unfolded image. In that case the load of generating the 360-degree panoramic unfolded image would increase without any improvement in its image quality. By preventing the radius of the inner or outer circle boundary from becoming smaller than the predetermined number of pixels, this configuration avoids that disadvantage.
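The guard described in [0016]/[0017] amounts to rejecting updates below a radius floor. A minimal sketch; the threshold value and names are hypothetical, since the patent only speaks of "a predetermined number of pixels":

```python
MIN_RADIUS_PX = 8  # hypothetical floor; the patent does not name a value

def try_update_boundary(stored_boundary, requested_radius):
    """Apply a user's boundary-change request only if the new radius
    stays at or above the minimum; otherwise leave the stored value."""
    if requested_radius >= MIN_RADIUS_PX:
        stored_boundary["radius"] = requested_radius
        return True
    # Reject: too small a ring would oversample the same source pixels,
    # raising the unfolding cost without improving image quality.
    return False
```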
[0018] In addition to the configurations described above, the panoramic unfolded image capturing apparatus according to the present invention has the following feature. The panoramic unfolded image generating means computes, for each display pixel of the 360-degree panoramic unfolded image, the distance and direction from the center of the circular image; uses the computed distance and direction to compute that display pixel's coordinates in the captured image; and acquires the pixel value of the captured-image pixel located at those coordinates as the pixel value of the display pixel.
[0019] With this configuration, the 360-degree panoramic unfolded image is generated using the pixel values of the circular image in the captured image as they are; there is no need to compute each display pixel's value arithmetically. Unlike the case where each display pixel value is derived by computation, no computational image-quality degradation occurs, and the time needed to generate one panoramic unfolded image is shortened. As a result, a 360-degree panoramic unfolded image can be generated in a short time, and by repeating the generation process continuously, smoothly moving video consisting of 360-degree panoramic unfolded images can be obtained.
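The lookup described in [0018]/[0019] can be sketched as a nearest-neighbour polar-to-Cartesian mapping. This is an illustrative sketch, not the patent's actual implementation; names and the row-to-radius interpolation are assumptions (the patent's image height correction table would replace the linear radius step here):

```python
import math

def unfold_panorama(src, cx, cy, r_inner, r_outer, out_w, out_h):
    """Nearest-neighbour panorama unfolding.

    src              -- captured image indexed as src[y][x]
    cx, cy           -- centre of the circular fisheye image
    r_inner, r_outer -- ring boundaries in pixels
    out_w, out_h     -- size of the 360-degree panorama
    """
    panorama = [[None] * out_w for _ in range(out_h)]
    for row in range(out_h):
        # Row 0 = inner boundary (top of panorama), last row = outer boundary.
        r = r_inner + (r_outer - r_inner) * row / (out_h - 1)
        for col in range(out_w):
            angle = 2 * math.pi * col / out_w  # bearing around the ring
            x = int(round(cx + r * math.cos(angle)))
            y = int(round(cy + r * math.sin(angle)))
            # Copy the source pixel value as-is, with no interpolation,
            # matching the direct value transfer described in [0019].
            panorama[row][col] = src[y][x]
    return panorama
```

Because each output pixel is a single indexed read, the per-frame cost is low enough to repeat continuously, which is what makes the panoramic video of [0019] practical.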
[0020] In addition to the configurations described above, the panoramic unfolded image capturing apparatus according to the present invention has the following feature. The panoramic unfolded image generating means divides the region of the captured image between the inner circle boundary and the outer circle boundary into two 180-degree ranges, and generates two horizontally long divided panoramic unfolded images, one per range. The apparatus further has display data generating means that generates display data in which the two horizontally long divided panoramic unfolded images are displayed one above the other.
[0021] With this configuration, the display screen of the display means that shows the display data can be used more effectively than if the panorama were, for example, a single strip; the 360-degree panoramic unfolded image can be displayed large on the display means.
[0022] A panoramic unfolded image capturing system according to the present invention has a panoramic unfolded image capturing apparatus that generates a 360-degree panoramic unfolded image by imaging, and a computer device that displays or saves the panoramic unfolded image generated by the apparatus. The panoramic unfolded image capturing apparatus comprises: a fisheye lens; imaging means having a light-receiving surface on which a plurality of light-receiving elements are arrayed and on which a circular image is formed by the fisheye lens, the imaging means generating captured-image data containing the circular image; a memory that stores the positions or sizes of an inner circle boundary and an outer circle boundary defining the ring-shaped region of the captured image used to generate the 360-degree panoramic unfolded image; user updating means that, based on a user operation, updates the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory; and panoramic unfolded image generating means that generates the 360-degree panoramic unfolded image based on the portion of the captured image lying within the ring-shaped region between the inner and outer circle boundaries stored in the memory.
[0023] With this configuration, the 360-degree panoramic unfolded image can be displayed or saved.
Effects of the Invention
[0024] According to the present invention, a 360-degree panoramic unfolded image can be captured without the subject being cut off.
Brief Description of the Drawings
[0025] FIG. 1 is a perspective view showing a panoramic unfolded image capturing system according to an embodiment of the present invention. FIG. 2 is a block diagram showing the electric circuit of the 360-degree video camera device in FIG. 1.
FIG. 3 is an explanatory diagram showing the optical arrangement of the fisheye lens and the image sensor in FIG. 1.
FIG. 4 shows an example of a captured image generated by the color conversion processing unit in FIG. 1 based on the luminance distribution data of the image sensor.
FIG. 5 is an explanatory diagram showing, for the 360-degree video camera device of FIG. 1, an image in which a circular mark indicating the position of the inner circle boundary and a circular mark indicating the position of the outer circle boundary are superimposed on the captured image.
FIG. 6 is an image height (angle-of-view difference) characteristic diagram of the stereographic fisheye lens in FIG. 1.
FIG. 7 is a flowchart showing the flow of the divided unfolded image generation process performed by the unfolded still image generation unit for one divided image range in the 360-degree video camera device of FIG. 1.
FIG. 8 shows an example of the arrangement of display pixels in a divided unfolded image generated by the 360-degree video camera device of FIG. 1.
FIG. 9 is an explanatory diagram illustrating the direction, within the ring-shaped image (that is, within the divided image range), of each unfolded pixel column of FIG. 8.
FIG. 10 is an explanatory diagram of the coordinate position, in the original (circular) image, of a specific display pixel in the 360-degree video camera device of FIG. 1.
FIG. 11 is a display example of a captured 360-degree panoramic unfolded image shown on the liquid crystal display device of the PC in FIG. 1.
FIG. 12 is a flowchart showing the operation of the process management unit in the change mode of the 360-degree video camera device of FIG. 1.
Explanation of Reference Numerals
1 360-degree video camera device (panoramic unfolded image capturing apparatus)
3 Personal computer (computer device)
22 Fisheye lens
27 Image sensor (part of the imaging means)
33 Light-receiving surface
36 Color conversion processing unit (part of the imaging means)
41 EEPROM (memory)
45 Image height correction table
52 Unfolded still image generation unit (panoramic unfolded image generating means)
54 Streaming generation unit (display data generating means)
55 Process management unit (user updating means)
Best Mode for Carrying Out the Invention
[0027] Hereinafter, a panoramic unfolded image capturing apparatus and a panoramic unfolded image capturing system according to an embodiment of the present invention will be described with reference to the drawings. The panoramic unfolded image capturing apparatus is described taking a 360-degree video camera device for conferences as an example; the panoramic unfolded image capturing system is described taking a 360-degree video camera device connected to a personal computer as an example.
[0028] FIG. 1 is a perspective view showing the panoramic unfolded image capturing system according to the embodiment of the present invention. The system has a 360-degree video camera device 1 as the panoramic unfolded image capturing apparatus, and a PC (personal computer) 3 as the computer device. The 360-degree video camera device 1 is connected to the PC 3 by a USB (Universal Serial Bus) cable 2.
[0029] The PC 3 has a USB connector 11, a liquid crystal display device 12, a speaker 13, an input device 14, and the like. A CPU (Central Processing Unit), not shown, reads and executes a program from a storage device, not shown, whereby a communication processing unit 15, a reproduction processing unit 16, a command generation unit 17, and the like are realized in the PC 3.
[0030] The communication processing unit 15 controls data communication via the USB connector 11. The reproduction processing unit 16 controls the content displayed on the liquid crystal display device 12 and causes the speaker 13 to output sound. The command generation unit 17 generates commands based on input data entered from the input device 14. The input device 14 is, for example, a keyboard or a pointing device.
[0031] The 360-degree video camera device 1 has a cube-shaped housing 21. A fisheye lens 22, a USB connector 23, a video output connector 24, an audio output connector 25, and the like are arranged on the housing 21. The fisheye lens 22 is arranged on the top surface of the housing 21, in which a vent hole 20 for a microphone 26 is formed. The USB connector 23, the video output connector 24, and the audio output connector 25 are arranged on a side surface of the housing 21.
[0032] FIG. 2 is a block diagram showing the electric circuit of the 360-degree video camera device 1 in FIG. 1. An electric circuit for generating a 360-degree panoramic unfolded image from captured images is built into the housing 21 of the 360-degree video camera device 1.
[0033] In addition to the fisheye lens 22, the USB connector 23, the video output connector 24, and the audio output connector 25, the 360-degree video camera device 1 has an image sensor 27 as part of the imaging means, an FPGA (Field Programmable Gate Array) 28, a DSP (Digital Signal Processor) 29, an external memory 30, an audio IC (Integrated Circuit) 31, a video encoder 32, and the like.
[0034] FIG. 3 is an explanatory diagram showing the optical arrangement of the fisheye lens 22 and the image sensor 27.
[0035] The image sensor 27 is, for example, a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor. The image sensor 27 has a light-receiving surface 33 on which a plurality of light-receiving elements, not shown, are arrayed in a matrix, for example at a 3:4 aspect ratio. Each light-receiving element outputs a value corresponding to the amount of light it receives. The image sensor 27 generates, at a predetermined cycle, luminance distribution data consisting of the received-light values output by the light-receiving elements.
[0036] The fisheye lens 22 has a wide viewing angle of, for example, 180 degrees or more, and uses stereographic projection. Compared with the common equidistant-projection fisheye lens, the stereographic fisheye lens 22 produces less distortion in the peripheral region of the image it forms, and that region carries more information. Besides stereographic and equidistant projection, equisolid-angle projection, orthographic projection, and the like could also be adopted as the projection scheme of the fisheye lens 22. However, when the 360-degree video camera device 1 is placed on a conference room table with the fisheye lens 22 facing upward, the people, blackboards, and so on captured in the peripheral region carry a large amount of information, so stereographic projection is optimal.
[0037] The fisheye lens 22 is arranged above the light-receiving surface 33 of the image sensor 27. A circular image formed by the fisheye lens 22 (hereinafter called the circular image) thereby falls on the light-receiving surface 33 of the image sensor 27. The image sensor 27 periodically generates luminance distribution data and outputs it to the FPGA 28, in which a color conversion processing unit 36 is realized as part of the imaging means.
[0038] The color conversion processing unit 36 replaces the data of each pixel of the luminance distribution data using a color conversion table, not shown. For example, it replaces the data of a pixel in the circular image with predetermined color data derived from that pixel's value and the values of the surrounding pixels in the luminance distribution data. Captured-image data having appropriate color data is thereby generated.
[0039] FIG. 4 shows an example of a captured image generated by the color conversion processing unit 36 based on the luminance distribution data of the image sensor 27. As shown in FIG. 4, the captured image has the circular image at its center, inside which the image of the subject is formed. The color conversion processing unit 36 outputs the generated captured-image data to the DSP 29.
[0040] The microphone 26 generates a waveform signal corresponding to sound. The audio IC 31 converts the waveform signal into an audio signal and supplies it to the audio output connector 25, to which, for example, a speaker unit or headphones can be connected, allowing the sound to be heard. An audio storage processing unit 37 is also realized in the audio IC 31. The audio storage processing unit 37 samples the waveform signal supplied from the microphone 26 and stores the resulting audio data 38 in the external memory 30.
[0041] The DSP 29 has, as the memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory, a kind of non-volatile memory) 41. The EEPROM 41 stores inner circle boundary data 42, outer circle boundary data 43, an incident angle/image height table 44, an image height correction table 45, a direction table 46, and the like.
[0042] FIG. 5 is an explanatory diagram showing an image in which a circular mark 47 indicating the position of the inner circle boundary and a circular mark 48 indicating the position of the outer circle boundary are superimposed on the captured image. The inner and outer circle boundaries designate the image range to be unfolded as the panoramic unfolded image: the image portion between them is what is unfolded. The inner circle boundary data 42 indicates the position and size of the inner circle boundary in the captured image; the outer circle boundary data 43 indicates the position and size of the outer circle boundary in the captured image.
[0043] FIG. 6 is the image height (angle-of-view difference) characteristic diagram of the stereographic fisheye lens 22. The horizontal axis is the angle of view relative to the optical axis of the fisheye lens 22 (the optical axis direction being 0 degrees); the vertical axis is the image height (per angle-of-view difference). With the stereographic fisheye lens 22 having the characteristics of FIG. 6, when, for example, a subject in the 0-degree direction yields an image height (angle-of-view difference) of about 0.1, a subject in the 90-degree direction yields an image height (angle-of-view difference) of about 0.2. This means that a subject toward the periphery of the fisheye lens 22 is imaged at roughly twice the size of a subject toward the center, and that when a subject spans from the peripheral region to the central region, its proportions are distorted. The incident angle/image height table 44 is the image height (angle-of-view difference) characteristic of the fisheye lens 22 as a function of angle of view, stored as data.
[0044] With a fisheye lens, the image of a subject generally becomes more distorted toward the periphery. Stereographic projection exists as a scheme for reducing this distortion and increasing the amount of peripheral information. With the stereographic fisheye lens 22, a subject occupies a larger angular extent when imaged in the peripheral region than when imaged in the central region; a subject imaged at the center forms a smaller image, in angle-of-view terms, than one imaged at the periphery. As FIG. 6 illustrates with four light-receiving elements near the center of the image sensor 27 and four near its periphery, this means that a light-receiving element near an angle of view of 0 degrees (near the zenith) receives an image covering a wider angle of view than a light-receiving element near an angle of view of 90 degrees (the horizontal direction).
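The roughly twofold difference between central and peripheral image-height increments described above can be checked numerically with the standard projection formulas. These formulas are not taken from the patent, which only plots the characteristic in FIG. 6, but they are the usual definitions of the two schemes it names:

```python
import math

def image_height(theta_deg, f=1.0, projection="stereographic"):
    """Image height r for a ray at incidence angle theta (degrees).

    stereographic: r = 2 f tan(theta / 2)
    equidistant:   r = f theta   (theta in radians)
    """
    theta = math.radians(theta_deg)
    if projection == "stereographic":
        return 2 * f * math.tan(theta / 2)
    return f * theta  # equidistant

def slope(theta_deg, projection):
    # Image-height increment per degree of angle of view around theta_deg.
    return (image_height(theta_deg + 0.5, projection=projection)
            - image_height(theta_deg - 0.5, projection=projection))
```

Under stereographic projection the increment near the 90-degree (horizontal) direction is close to twice the increment near the 0-degree (zenith) direction, while under equidistant projection the two are identical, matching the document's claim that the stereographic lens devotes more sensor area to the periphery.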
[0045] Thus, compared with the conventional equidistant-projection fisheye lens, stereographic projection captures more peripheral information, giving fewer blind spots and an image with less distortion. The distortion is reduced, however, but not to zero. In the panoramic unfolded image, by contrast, it is desirable that subjects be displayed in their natural proportions. The image height correction table 45 stores correction data for obtaining, from the image formed on the image sensor 27 by the stereographic fisheye lens 22, an unfolded image in which subjects appear in their natural proportions.
[0046] The DSP 29 also has a CPU, not shown. By executing a program, this CPU realizes in the DSP 29 a still image storage processing unit 51, an unfolded still image generation unit 52 as the panoramic unfolded image generating means, a compressed still image generation unit 53, a streaming generation unit 54 as the display data generating means, a process management unit 55 as the user updating means, a communication processing unit 56, and the like.
[0047] An external memory 30 is connected to the DSP 29. The external memory 30 consists of storage devices such as SRAM (Static RAM) or DDR-SDRAM (Double Data Rate SDRAM) and is accessible to the CPU of the DSP 29.
[0048] The external memory 30 stores still image data 39, audio data 38, two sets of compressed unfolded still image data 61 and 62, and the like. The external memory 30 also has a first VRAM (Video RAM) area 63 and a second VRAM area 64. One of the two sets of compressed unfolded still image data 61 and 62 is stored in the first VRAM area 63, and the other in the second VRAM area 64. FIG. 2 shows the state in which the (n-1)th compressed unfolded still image data 61 is stored in the first VRAM area 63 and the nth compressed unfolded still image data 62 is stored in the second VRAM area 64.
[0049] When new captured-image data is supplied from the color conversion processing unit 36, the still image storage processing unit 51 realized in the DSP 29 stores that captured-image data in the external memory 30 as still image data 39.
[0050] The unfolded still image generation unit 52 generates unfolded still image data from the still image data 39 stored in the external memory 30. The unfolded still image data is the data of an image obtained by unfolding the circular image in the captured image into a 360-degree panoramic unfolded image. The unfolded still image generation unit 52 generates and stores the unfolded still image data alternately in the first VRAM area 63 and the second VRAM area 64.
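The alternating use of the two VRAM areas described in [0048] and [0050] is the classic double-buffering (ping-pong) pattern: while one frame is being written, the previously completed frame remains readable in the other area. A minimal sketch with hypothetical names:

```python
class DoubleBuffer:
    """Ping-pong between two buffers, as the unfolded still image
    generation unit does with the first and second VRAM areas."""

    def __init__(self):
        self.areas = [None, None]  # first / second VRAM area
        self.write_index = 0       # where the next frame goes

    def store_frame(self, frame):
        self.areas[self.write_index] = frame
        self.write_index ^= 1      # alternate on every frame

    def latest(self):
        # The most recently completed frame sits in the *other* area.
        return self.areas[self.write_index ^ 1]
```

Downstream consumers such as the compressor or video encoder read `latest()` while the generator overwrites the other area, so no frame is read while half-written.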
[0051] The compressed still image generation unit 53 compresses the unfolded still image data stored in the first VRAM area 63 or the second VRAM area 64, using an image compression method such as JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) compression. As a result, compressed unfolded still image data 61 is stored in the first VRAM area 63, and compressed unfolded still image data 62 is stored in the second VRAM area 64. [0052] The streaming generation unit 54 reads the compressed unfolded still image data 61 and 62 and the audio data 38 from the external memory 30, and generates streaming data containing these content data. A streaming format such as MPEG or DivX may be adopted.
[0053] The communication processing unit 56 controls data communication via the USB connector 23.
[0054] The process management unit 55 manages the execution of the still image storage processing unit 51, the developed still image generation unit 52, the compressed still image generation unit 53, the streaming generation unit 54, the communication processing unit 56, and so on, instructing these units to start and stop.
[0055] The video encoder 32 reads the compressed developed still image data 61 and 62 from the external memory 30 and generates a video signal, for example of the NTSC (National Television System Committee) or PAL (Phase Alternating Line) type. The video encoder 32 outputs the generated video signal to the video output connector 24, to which a television receiver or the like can be connected. The video signal can then be reproduced, and the picture viewed, on the television receiver or other device connected to the video output connector 24.
[0056] Next, the operation of the panoramic developed image capturing system having the above configuration will be described.
[0057] When power is supplied to the 360-degree video camera device 1, for example through the USB cable 2, the DSP 29 and other components operate, and the color conversion processing unit 36, the audio storage processing unit 37, the still image storage processing unit 51, the developed still image generation unit 52, the compressed still image generation unit 53, the streaming generation unit 54, the process management unit 55, and so on are realized in the 360-degree video camera device 1. The process management unit 55 instructs the still image storage processing unit 51, the developed still image generation unit 52, the compressed still image generation unit 53, and the streaming generation unit 54 to start.
[0058] An image of the light collected by the fisheye lens 22 is formed on the image sensor 27 of the 360-degree video camera device 1. The image sensor 27 generates luminance distribution data containing the luminance distribution of a circular image. Using a color conversion table (not shown), the color conversion processing unit 36 generates captured image data having a circular image, as illustrated in FIG. 4, from the luminance distribution data. The still image storage processing unit 51 stores the captured image data in the external memory 30 as still image data 39.
[0059] The image sensor 27 generates luminance distribution data periodically. The still image data 39 in the external memory 30 is therefore updated with new captured image data at every predetermined period.
[0060] When the still image data 39 in the external memory 30 is updated, the developed still image generation unit 52 generates developed still image data from the updated still image data 39.
[0061] The developed still image generation unit 52 first reads the inner circle boundary data 42 and the outer circle boundary data 43 from the EEPROM 41, and determines the ring-shaped range, within the captured image given by the updated still image data 39, from which the panoramic developed image is to be generated. The developed still image generation unit 52 also determines two divided ranges obtained by dividing the ring-shaped image range equally into 180-degree halves. FIG. 5 shows the dotted lines 71 and 72 that divide the ring-shaped image range horizontally in two. The developed still image generation unit 52 divides the ring-shaped image between the outer circle boundary and the inner circle boundary into two 180-degree parts, for example along these two dotted lines 71 and 72, and executes the divided developed image generation processing for each of the upper and lower divided image ranges 73 and 74.
[0062] FIG. 7 is a flowchart showing the flow of the divided developed image generation processing performed by the developed still image generation unit 52 for one divided image range 73 (74). The developed still image generation unit 52 first checks whether the inner circle boundary data 42 or the outer circle boundary data 43 stored in the EEPROM 41 has been updated since the last developed image was generated (step ST1).
[0063] If the inner circle boundary data 42 or the outer circle boundary data 43 has been updated since the last developed image was generated (No in step ST1), the developed still image generation unit 52 calculates the direction, within the divided image range 73 (74), of each developed pixel column of the divided developed image, and generates the direction table 46 (step ST2).
[0064] FIG. 8 shows an example of the arrangement of the display pixels of a divided developed image. For simplicity of explanation, the display pixels of this divided developed image are arranged in three rows by four columns. In this case, the developed still image generation unit 52 calculates the direction in the ring-shaped image (the direction in the divided image range 73 (74)) for each of the four developed pixel columns (1, Z) 81, (2, Z) 82, (3, Z) 83, and (4, Z) 84.
[0065] FIG. 9 illustrates the directions, in the ring-shaped image (in the divided image range 73 (74)), of the developed pixel columns 81, 82, 83, and 84 of FIG. 8. Taking the intersection of the two axes in FIG. 9 as the center of the circular image, the developed still image generation unit 52 calculates the angle obtained by dividing 180 degrees (= 360 degrees ÷ 2) equally by the number of columns (= 4).
[0066] As shown in FIG. 9, if, for example, the first developed pixel column (1, Z) 81 of FIG. 8 lies in the direction of angle θ1, the developed still image generation unit 52 calculates the angle of the second developed pixel column (2, Z) 82 as "θ1 + the equal division angle", the angle of the third developed pixel column (3, Z) 83 as "θ1 + the equal division angle × 2", and the angle of the fourth developed pixel column (4, Z) 84 as "θ1 + the equal division angle × 3".
[0067] In this way, the developed still image generation unit 52 calculates the direction of each of the developed pixel columns 81, 82, 83, and 84, generates the direction table 46 in which the developed pixel columns 81, 82, 83, and 84 are associated with their respective directions, and stores the generated direction table 46 in the EEPROM 41. The direction table 46 may instead be stored in the external memory 30 or the like.
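The equal-division calculation of paragraphs [0064] to [0067] can be sketched as follows. This is an illustrative sketch only, not part of the original disclosure; the function name and the use of degrees are assumptions.

```python
def build_direction_table(num_columns, theta1_deg=0.0, span_deg=180.0):
    """Assign a direction to each developed pixel column of one divided
    image range: column 1 lies at theta1, and each subsequent column adds
    the equal division angle span / num_columns (paragraphs [0065]-[0066])."""
    equal_division_angle = span_deg / num_columns  # 180 deg / 4 columns = 45 deg
    return [theta1_deg + equal_division_angle * k for k in range(num_columns)]
```

For the three-row by four-column example of FIG. 8 with θ1 = 0, this yields the directions 0°, 45°, 90°, and 135° for columns 1 through 4.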
[0068] After generating the direction table 46, the developed still image generation unit 52 identifies the first display pixel of the divided developed image (step ST3), for example the upper-left display pixel ((1, 1) in FIG. 8). The developed still image generation unit 52 likewise identifies the first display pixel of the divided developed image (step ST3) when the inner circle boundary data 42 and the outer circle boundary data 43 have not been updated (Yes in step ST1).
[0069] The developed still image generation unit 52 then identifies the pixel in the captured image corresponding to the identified display pixel, acquires the pixel value of that pixel in the captured image, and stores the acquired pixel value as the pixel value of the identified display pixel.
[0070] Specifically, the developed still image generation unit 52 first refers to the image height correction table 45 and calculates the distance R of the identified display pixel from the center of the original image (circular image) (step ST4). For example, referring to the image height correction table 45, the developed still image generation unit 52 compares one developed pixel column of the divided developed image with the radial pixel array within the ring-shaped range of the captured image, and calculates the distance R, from the center of the ring, of the position in the original image (circular image) corresponding to the specific display pixel.
[0071] Next, from the direction of the specific display pixel stored in the direction table 46 and the calculated distance R, the developed still image generation unit 52 calculates the abscissa of the specific display pixel in the original image (circular image) (step ST5). From the same direction and distance R, it calculates the ordinate of the specific display pixel in the original image (circular image) (step ST6).
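The table lookup of step ST4 might be sketched as below. The actual format of the image height correction table 45 is not specified in this passage, so the sketch assumes a table of radii sampled evenly between the inner and outer circle boundaries, read out with linear interpolation; the function name and the table layout are hypothetical.

```python
def radius_from_row(height_table, row, num_rows):
    """Map a display pixel's row index (0 = innermost row of the developed
    pixel column) to its distance R from the circle center, using a table
    of sampled radii (assumed layout) and linear interpolation."""
    t = row / (num_rows - 1)                # normalized radial position, 0.0..1.0
    pos = t * (len(height_table) - 1)       # fractional index into the table
    i = int(pos)
    if i >= len(height_table) - 1:          # clamp at the outer circle boundary
        return float(height_table[-1])
    frac = pos - i
    return height_table[i] * (1.0 - frac) + height_table[i + 1] * frac
```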
[0072] FIG. 10 illustrates the coordinate position of a specific display pixel in the original image (circular image). In FIG. 10, the center (IOX, IOY) of the original image (circular image) is the center of the inner circle boundary and also the center of the outer circle boundary; that is, (IOX, IOY) is the center of the ring formed by the inner and outer circle boundaries. If the distance of the specific display pixel from the center is R and its direction is θ, the coordinate position (IX, IY) of the specific display pixel in the original image (circular image) is calculated by Equations 1 and 2 below. The coordinate position (IX, IY) is referenced to the upper left of the original image (circular image).
[0073] IX = IOX + R × cos θ  … Equation 1
IY = IOY − R × sin θ  … Equation 2
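Equations 1 and 2 can be written directly in code. The following is an illustrative sketch, not part of the original disclosure (names are hypothetical); the minus sign in Equation 2 reflects the upper-left origin, where image rows grow downward.

```python
import math

def to_source_coords(iox, ioy, r, theta_deg):
    """Map a display pixel's distance R and direction theta to its
    coordinate position (IX, IY) in the original circular image,
    referenced to the upper-left corner (paragraph [0073])."""
    theta = math.radians(theta_deg)
    ix = iox + r * math.cos(theta)   # Equation 1
    iy = ioy - r * math.sin(theta)   # Equation 2: rows grow downward
    return ix, iy
```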
[0074] Having calculated the coordinate position (IX, IY) of the specific display pixel in the original image (circular image), the developed still image generation unit 52 acquires the pixel value of the pixel 91 of the original image (circular image) at those coordinates and stores it as the pixel value of the identified display pixel. The developed still image generation unit 52 stores the pixel value of the identified display pixel in the first VRAM area 63 or the second VRAM area 64 of the external memory 30 (step ST7).
[0075] Through the processing of steps ST4 to ST7 above, the developed still image generation unit 52 determines and stores the pixel value of the single display pixel identified in step ST3. Thereafter, the developed still image generation unit 52 judges whether any display pixels whose pixel values have not yet been determined remain among the display pixels of the divided developed image, and repeats the above pixel value determination processing until no such display pixels remain.
[0076] Specifically, for example, the developed still image generation unit 52 first judges whether pixel value determination has finished for one column of display pixels of the divided developed image (step ST8). If the column is not finished (No in step ST8), the developed still image generation unit 52 identifies the display pixel in the next row of the same column (step ST9) and determines the pixel value of that specific display pixel (steps ST4 to ST7).
[0077] When the pixel values of all the display pixels in one column have been determined (Yes in step ST8), the developed still image generation unit 52 further judges whether pixel value determination has finished for all columns of the divided developed image (step ST10). If not all columns are finished (No in step ST10), it identifies the display pixel at the top (uppermost row) of the next column (step ST11) and determines the pixel value of that specific display pixel (steps ST4 to ST7).
[0078] When pixel value determination has finished for all columns of the divided developed image (Yes in step ST10), the developed still image generation unit 52 ends the divided developed image generation processing of FIG. 7. This completes the divided developed image generation processing for one divided image range 73 (74).
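The traversal order of steps ST3 and ST8 to ST11 amounts to a column-major scan of the divided developed image. A minimal sketch follows (not part of the original disclosure; the function name and the 1-based (column, row) coordinates of FIG. 8 are assumptions):

```python
def scan_order(num_rows, num_columns):
    """Yield display pixels in the order of the FIG. 7 flowchart: walk
    down one column (steps ST8-ST9), then move to the top of the next
    column (steps ST10-ST11), until every column is done."""
    for column in range(1, num_columns + 1):
        for row in range(1, num_rows + 1):
            yield (column, row)
```

For the three-row by four-column example of FIG. 8, the scan starts at (1, 1), finishes column 1 at (1, 3), and then moves to (2, 1).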
[0079] Thereafter, the developed still image generation unit 52 executes the divided developed image generation processing for the remaining divided image range 74 (73). As a result, two divided developed images based on the still image data 39 are stored in the first VRAM area 63 or the second VRAM area 64 of the external memory 30.
[0080] When the two divided developed images have been stored in the first VRAM area 63 or the second VRAM area 64 of the external memory 30, the compressed still image generation unit 53 compresses them. As a result, the compressed developed still image data 61 and 62 are stored in the first VRAM area 63 or the second VRAM area 64.
[0081] The streaming generation unit 54 reads the compressed developed still image data 61 and 62 in order from the first VRAM area 63 and the second VRAM area 64, reads the audio data 38 from the external memory 30, and generates streaming data containing these content data. In the streaming data, the two divided developed images are arranged one above the other.
[0082] The streaming generation unit 54 supplies the generated streaming data to the communication processing unit 56. The communication processing unit 56 of the 360-degree video camera device 1 transmits the streaming data to the communication processing unit 15 of the PC 3 via the USB connector 23, the USB cable 2, and the USB connector 11 of the PC 3.
[0083] The communication processing unit 15 of the PC 3 supplies the received streaming data to the reproduction processing unit 16. The reproduction processing unit 16 of the PC 3 extracts the compressed developed still image data from the streaming data, decodes it, and supplies the data of the two divided developed images to the liquid crystal display device 12 as display data. The liquid crystal display device 12 displays the two divided developed images. The reproduction processing unit 16 of the PC 3 also extracts the audio data from the streaming data and supplies it to the speaker 13. The speaker 13 outputs sound based on the extracted audio data.
[0084] FIG. 11 shows a display example of a captured 360-degree panoramic developed image on the liquid crystal display device 12 of the PC 3. On the liquid crystal display device 12, the captured 360-degree panoramic developed image is displayed as two divided developed screens 101 and 102 arranged one above the other. Because the two divided developed screens 101 and 102 are stacked vertically, the display screen of the liquid crystal display device 12 can be used more effectively than if, for example, they formed a single developed image, and the 360-degree panoramic developed image can be displayed at a larger size on the liquid crystal display device 12.
[0085] The two divided developed screens 101 and 102 in FIG. 11 display the two 180-degree divided developed images obtained by dividing the ring-shaped image range of FIG. 5 in two along the dotted lines 71 and 72. The two divided developed images are continuous with each other at both ends, so that as a whole they display a 360-degree panoramic image without a break. With this display, the user of the PC 3 can view the 360-degree panoramic developed image captured by the 360-degree video camera device 1, and can hear, through the sound emitted by the speaker 13, the sound acquired by the 360-degree video camera device 1.
[0086] As described above, the image sensor 27 of the 360-degree video camera device 1 generates luminance distribution data every period. The color conversion processing unit 36 generates captured image data from the luminance distribution data. The still image storage processing unit 51 updates the still image data 39 in the external memory 30 with the captured image data generated every period. The developed still image generation unit 52 reads the still image data 39 from the external memory 30 and writes the data of the two divided developed images based on the read still image data 39 alternately into the first VRAM area 63 and the second VRAM area 64. The streaming generation unit 54 reads the compressed data of the two divided developed images written in the first VRAM area 63 or the second VRAM area 64 and generates the streaming data.
[0087] Therefore, while the streaming generation unit 54 is reading the compressed data of two divided developed images from one of the first VRAM area 63 and the second VRAM area 64, the developed still image generation unit 52 can generate two divided developed images in the other VRAM area, and the compressed still image generation unit 53 can compress the two divided developed images in that other VRAM area. The streaming generation unit 54 can thus allocate the two divided developed images based on the luminance distribution data generated by the image sensor 27 every period to the streaming data without dropped images (dropped frames). As a result, a moving image based on the luminance distribution data periodically generated by the image sensor 27 is displayed on the liquid crystal display device 12 of the PC 3.
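The alternation between the two VRAM areas in paragraphs [0086] and [0087] is a double-buffering scheme. The following is a minimal sketch of that pattern (a hypothetical class, not part of the original disclosure):

```python
class PingPongBuffer:
    """Two buffer areas used alternately: the producer writes a new frame
    into one area while the consumer reads the most recently completed
    frame from the other, so no frame is dropped."""
    def __init__(self):
        self.areas = [None, None]    # stand-ins for the first and second VRAM areas
        self.write_index = 0

    def write(self, frame):
        self.areas[self.write_index] = frame
        self.write_index ^= 1        # alternate areas on every write

    def read(self):
        # the area not currently targeted for writing holds the
        # most recently completed frame
        return self.areas[self.write_index ^ 1]
```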
[0088] Next, the operation of auto-panning the panoramic developed image will be described.
[0089] The command generation unit 17 of the PC 3 generates an auto-pan instruction command based on the input data entered through the input device 14. The communication processing unit 15 of the PC 3 transmits the auto-pan instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2. The communication processing unit 56 of the 360-degree video camera device 1 supplies the received auto-pan instruction command to the process management unit 55.
[0090] When the auto-pan instruction command is supplied, the process management unit 55 periodically updates the direction table 46 stored in the EEPROM 41. Specifically, the process management unit 55 periodically repeats the processing of adding a predetermined constant angle to the direction data of each developed pixel column in the direction table 46.
[0091] As a result, the directions of the specific display pixels used by the developed still image generation unit 52 in the flowchart of FIG. 7 periodically increase by the predetermined constant angle. For example, the angle θ1 of the first developed pixel column (1, Z) 81 in FIG. 9 periodically increases by the predetermined constant angle, and so do the directions of the specific display pixels used in steps ST5 and ST6 of FIG. 7.
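One periodic auto-pan update of the direction table (paragraph [0090]) can be sketched as follows. This is an illustrative sketch only; the wrap-around at 360 degrees is an assumption not stated in the text.

```python
def auto_pan_step(direction_table, pan_step_deg):
    """Add a predetermined constant angle to every column direction in
    the direction table; repeating this every period pans the developed
    image sideways."""
    return [(theta + pan_step_deg) % 360.0 for theta in direction_table]
```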
[0092] As a result, the coordinate position of each specific display pixel in the original image (circular image) rotates by a constant angle each period, and the pixel value of each specific display pixel changes in accordance with that rotation. Consequently, the subject of the original image (circular image) moves sideways in the divided developed images, and an auto-pan operation is realized over the 360-degree panoramic developed image.
[0093] Next, the operation of adjusting the height of the panoramic developed image will be described.
[0094] The command generation unit 17 of the PC 3 generates a change mode start instruction command based on the input data entered through the input device 14. The communication processing unit 15 of the PC 3 transmits the change mode start instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2. The communication processing unit 56 of the 360-degree video camera device 1 supplies the received change mode start instruction command to the process management unit 55.
[0095] FIG. 12 is a flowchart showing the operation of the process management unit 55 in the change mode.
When the change mode start instruction command is supplied (step ST21), the process management unit 55 supplies a boundary display instruction to the streaming generation unit 54 (step ST22). Thereafter, the process management unit 55 waits to receive a boundary change instruction command or a change mode end instruction command (steps ST23 and ST24).
[0096] The streaming generation unit 54 supplied with the boundary display instruction reads the still image data 39 from the external memory 30. Each time it reads the still image data 39, the streaming generation unit 54 follows that read by reading the inner circle boundary data 42 and the outer circle boundary data 43 from the EEPROM 41. As shown in FIG. 5, the streaming generation unit 54 generates a display image in which a mark 47 for the inner circle boundary and a mark 48 for the outer circle boundary are superimposed on the captured image of the still image data 39. Alternatively, the streaming generation unit 54 may read the inner circle boundary data 42 and the outer circle boundary data 43 once for every predetermined number of reads of the still image data 39.
[0097] The streaming generation unit 54 further generates streaming data containing the data of the generated display image and supplies it to the communication processing unit 56. The communication processing unit 56 of the 360-degree video camera device 1 transmits the streaming data to the communication processing unit 15 of the PC 3 via the USB cable 2.
[0098] The communication processing unit 15 of the PC 3 supplies the received streaming data to the reproduction processing unit 16. The reproduction processing unit 16 of the PC 3 extracts the display image data from the streaming data and supplies it to the liquid crystal display device 12 as display data. As shown in FIG. 5, the liquid crystal display device 12 displays the captured image on which the circular mark 47 of the inner circle boundary and the circular mark 48 of the outer circle boundary are superimposed.
[0099] Based on the user's operation of the input device 14, the command generation unit 17 of the PC 3 generates a boundary change instruction command for moving the positions of the inner circle boundary and the outer circle boundary or for enlarging or reducing their radii. The communication processing unit 15 of the PC 3 transmits the boundary change instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2. The communication processing unit 56 of the 360-degree video camera device 1 supplies the received boundary change instruction command to the process management unit 55.
[0100] 境界変更指示コマンドを受信する (ステップ ST23で Yesとなる)と、処理管理部 55 は、まず、その指示の適否を判断する (ステップ ST25)。処理管理部 55は、具体的 にはたとえば、変更後の内円境界の半径あるいは変更後の外円境界の半径が、所 定の半径 (たとえば 144ピクセル)以上であるか否かを判断する。  [0100] When the boundary change instruction command is received (Yes in step ST23), the process management unit 55 first determines whether the instruction is appropriate (step ST25). Specifically, the process management unit 55 determines whether, for example, the radius of the inner circle boundary after the change or the radius of the outer circle boundary after the change is equal to or larger than a predetermined radius (for example, 144 pixels).
[0101] If it determines that the instruction is appropriate (Yes in step ST25), the processing management unit 55 updates the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 based on the change instruction (step ST26). The processing management unit 55 also updates the image height correction table 45 stored in the EEPROM 41 based on the new combination of the inner circle boundary and the outer circle boundary (step ST27).
[0102] When the change alters the spacing between the inner circle boundary and the outer circle boundary, the number of pixels between the two boundaries in the captured image changes. As a result, the range of space imaged by those pixels changes, and so does the ratio by which the pixels divide that imaged space.
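The effect described in [0102] can be sketched numerically. This is an illustrative calculation, not code from the patent; the 90-degree field span and the function name are assumptions:

```python
def degrees_per_radial_pixel(r_inner, r_outer, fov_deg=90.0):
    """How many degrees of the imaged field each radial pixel of the ring
    covers, under a uniform split of the field across the ring width.
    fov_deg is an assumed field span, not a value from the patent."""
    return fov_deg / (r_outer - r_inner)

# Widening the ring (more pixels between the boundaries) gives each pixel
# a smaller share of the imaged space, i.e. finer radial sampling.
coarse = degrees_per_radial_pixel(144, 216)  # 72-pixel-wide ring
fine = degrees_per_radial_pixel(144, 288)    # 144-pixel-wide ring
```

Changing either boundary therefore changes this per-pixel share, which is why the image height correction table must be rebuilt after every boundary update.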
[0103] Furthermore, with the stereographic-projection fisheye lens 22, a subject is imaged at a different size depending on the angle of view. Therefore, as explained with reference to FIG. 6, even when an update does not change the spacing between the inner circle boundary and the outer circle boundary, merely moving the positions of the boundaries changes the ratio by which the pixels divide the imaged space.
[0104] The processing management unit 55 updates the image height correction table 45 in accordance with the change in this division ratio that accompanies the change of the inner circle boundary and the outer circle boundary. The processing management unit 55 then returns to waiting for a boundary change instruction command or a change mode end instruction command (steps ST23 and ST24).
[0105] If the instruction is not appropriate (No in step ST25), for example when the radius of the inner circle boundary is smaller than the predetermined radius, the processing management unit 55 ignores the change instruction and returns to waiting for a boundary change instruction command or a change mode end instruction command (steps ST23 and ST24).
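Steps ST23 to ST26 amount to a simple guard on the requested radii. A minimal sketch follows; the dictionary layout and function name are illustrative, not from the patent:

```python
MIN_RADIUS_PX = 144  # the predetermined minimum radius given as an example

def apply_boundary_change(current, new_inner_r, new_outer_r):
    """Return updated boundary data if the change is appropriate (both
    radii at or above the minimum); otherwise ignore the command and
    keep the current data, as steps ST25/ST26 describe."""
    if new_inner_r < MIN_RADIUS_PX or new_outer_r < MIN_RADIUS_PX:
        return current  # instruction ignored (No in step ST25)
    return {"inner_r": new_inner_r, "outer_r": new_outer_r}
```

In the device, accepting the change would additionally trigger the rebuild of the image height correction table (step ST27).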
[0106] Meanwhile, the streaming generation unit 54, to which the boundary display instruction has been supplied, reads the inner circle boundary data 42 and the outer circle boundary data 43 from the EEPROM 41 every time it reads the still image data 39 from the external memory 30. Therefore, when the processing management unit 55 updates the inner circle boundary data 42 or the outer circle boundary data 43, the streaming generation unit 54 also updates the position and size at which the circular marks 47 and 48 are superimposed on the captured image. The position and size of the circular marks 47 and 48 displayed over the captured image on the liquid crystal display device 12 are thus changed according to the boundary change instruction command. From the display on the liquid crystal display device 12, the user can tell that the inner circle boundary data 42 and the outer circle boundary data 43 have been updated.
[0107] The command generation unit 17 of the PC 3 generates a change mode end instruction command based on a user operation on the input device 14. The communication processing unit 15 of the PC 3 transmits the change mode end instruction command to the communication processing unit 56 of the 360-degree video camera device 1 via the USB cable 2. The communication processing unit 56 of the 360-degree video camera device 1 supplies the received change mode end instruction command to the processing management unit 55. Upon receiving the change mode end instruction command (Yes in step ST24), the processing management unit 55 ends the change mode processing of FIG. 12. After this change mode processing, the inner circle boundary data 42, the outer circle boundary data 43, and the image height correction table 45 have been replaced with new ones, and the developed still image generation unit 52 of the 360-degree video camera device 1 generates 360-degree panoramic developed images using the changed data.
[0108] As described above, according to this embodiment, a circular image formed by the fisheye lens 22 is focused on the image sensor 27 of the 360-degree video camera device 1. The still image storage processing unit 51 generates captured image data containing a circular image based on the luminance distribution data of the image sensor 27. This circular image contains the subject without, for example, the subject's head being cut off.
[0109] The developed still image generation unit 52 generates a 360-degree panoramic developed image, made up of two split developed images, based on the portion of the captured image that lies within the ring-shaped range between the inner circle boundary and the outer circle boundary. The position and size of the inner circle boundary and the outer circle boundary can be set freely by the user through the processing management unit 55.
[0110] Therefore, by updating the position or size of the inner circle boundary or the outer circle boundary through the processing management unit 55, the image can be adjusted so that the subject in the circular image fits within the ring-shaped range. As a result, even when the subject is close to the fisheye lens 22 and a sufficient physical distance between the fisheye lens 22 and the subject cannot be secured, adjusting the position of the inner circle boundary and so on makes it possible to generate a 360-degree panoramic developed image in which, for example, the subject's head is not cut off. It also becomes possible to generate a 360-degree panoramic developed image in which, for example, the head and body of a person serving as the subject are displayed in good balance.
[0111] In addition, the two horizontally long split developed images, each covering 180 degrees, are displayed one above the other on the liquid crystal display device 12 of the PC 3. The display screen of the liquid crystal display device 12 can therefore be used more effectively than if, for example, the 360-degree panoramic developed image were a single developed image, and the 360-degree panoramic developed image can be displayed at a large size on the liquid crystal display device 12.
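The split-and-stack display described in [0111] can be sketched as follows. This is a hypothetical helper, not an implementation given by the patent:

```python
import numpy as np

def split_and_stack(pano):
    """Split a 360-degree panorama into two 180-degree halves and stack
    them vertically, so the wide strip fits a conventional display."""
    h, w = pano.shape[:2]
    first_half, second_half = pano[:, : w // 2], pano[:, w // 2:]
    return np.vstack([first_half, second_half])
```

The stacked result is twice as tall and half as wide as the raw strip, which is what lets the display screen be used more fully.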
[0112] In this embodiment, the fisheye lens 22 is of the stereographic projection type. Compared with a typical equidistant-projection fisheye lens, the stereographic-projection fisheye lens 22 produces less distortion in the peripheral part of the image it forms, and carries more information in that peripheral part. The EEPROM 41 also stores an image height correction table 45 that corrects the balance of the imaging sizes of subjects, obtained based on the relationship between the incident angle and the image height of the stereographic-projection fisheye lens 22. The developed still image generation unit 52 refers to the image height correction table 45 and generates a 360-degree panoramic developed image in which the imaging sizes of subjects are balanced.
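For reference, a stereographic-projection fisheye maps an incident angle θ to an image height r = 2f·tan(θ/2), whereas an equidistant lens gives r = f·θ. A correction table of the kind described can be sketched from that relationship; the table layout, focal length, and angle range below are assumptions for illustration, not values from the patent:

```python
import math

def stereographic_height(theta, f):
    """Image height of a stereographic-projection fisheye: r = 2 f tan(theta/2)."""
    return 2.0 * f * math.tan(theta / 2.0)

def build_height_table(f, r_inner, r_outer, rows, theta_max=math.radians(90)):
    """For each panorama row (equal steps of incident angle), the radial
    position inside the ring where that angle lands on the sensor."""
    r_max = stereographic_height(theta_max, f)
    table = []
    for i in range(rows):
        theta = theta_max * i / (rows - 1)
        ratio = stereographic_height(theta, f) / r_max  # 0..1 across the ring
        table.append(r_inner + ratio * (r_outer - r_inner))
    return table
```

Because tan(θ/2) grows faster at wide angles, equal angle steps land on progressively wider radial steps; the table captures that nonlinearity so the developed image can keep subject sizes balanced.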
[0113] Therefore, when the 360-degree video camera device 1 is placed on a table with its fisheye lens 22 facing upward, images of the people, blackboard, and so on around it can be obtained at high resolution. Characters on the blackboard can be reproduced in the 360-degree panoramic developed image, and well-balanced images of the people and blackboard around the fisheye lens 22 can be obtained.
[0114] In this embodiment, the image height correction table 45 stored in the EEPROM 41 is updated every time the inner circle boundary data 42 or the outer circle boundary data 43 is updated. When a change alters the spacing between the inner circle boundary and the outer circle boundary, the ratio by which the pixels divide the image space between them changes. Even when the positions of the inner circle boundary and the outer circle boundary merely move, that division ratio changes as well. By updating the image height correction table 45 every time the inner circle boundary data 42 or the outer circle boundary data 43 is updated, the image height correction table 45 is kept consistent with them.
[0115] In this embodiment, the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 are updated only when the changed size has a radius of 144 pixels or more. If, for example, the radius of the inner circle boundary were one pixel, the same pixel of the captured image would be used over and over to obtain the pixel values of the topmost row of display pixels of the 360-degree panoramic developed image. In that case, the load of generating the 360-degree panoramic developed image would increase without any improvement in its image quality. By preventing the radius of the inner circle boundary or the outer circle boundary from becoming smaller than 144 pixels, as in this embodiment, such a disadvantage can be avoided.
[0116] In this embodiment, the developed still image generation unit 52 calculates, for each display pixel of the 360-degree panoramic developed image, the distance R and the direction from the center of the circular image; uses the calculated distance R and direction to calculate the coordinates (IX, IY) of that display pixel in the captured image; and acquires, as the pixel value of each display pixel, the pixel value of the pixel 91 of the captured image located at each calculated coordinate (IX, IY).
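The per-pixel lookup of [0116] can be sketched as a nearest-neighbour inverse mapping. The function name, output size, and axis conventions are assumptions for illustration; the patent does not give code:

```python
import numpy as np

def unwrap_panorama(src, cx, cy, r_inner, r_outer, out_w=720, out_h=180):
    """Nearest-neighbour unwarp of the ring between the inner and outer
    circle boundaries into a rectangular panorama."""
    pano = np.zeros((out_h, out_w) + src.shape[2:], dtype=src.dtype)
    for y in range(out_h):
        # distance R: rows run from the inner circle boundary to the outer one
        R = r_inner + (r_outer - r_inner) * y / (out_h - 1)
        for x in range(out_w):
            theta = 2.0 * np.pi * x / out_w          # direction of this column
            ix = int(round(cx + R * np.cos(theta)))  # coordinate IX
            iy = int(round(cy + R * np.sin(theta)))  # coordinate IY
            if 0 <= ix < src.shape[1] and 0 <= iy < src.shape[0]:
                pano[y, x] = src[iy, ix]             # source pixel value used as-is
    return pano
```

Because each display pixel simply copies one source pixel, no interpolated values are computed, which is the property paragraph [0117] relies on.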
[0117] Therefore, a 360-degree panoramic developed image can be generated that uses the pixel values of the pixels of the circular image in the captured image as they are. There is no need to obtain the pixel value of each display pixel of the 360-degree panoramic developed image by computation, so the image quality degradation that occurs when display pixel values are computed does not arise. The generation time of a single panoramic developed image is also shortened. As a result, a 360-degree panoramic developed image can be generated in a short time, and by continuously repeating the panoramic developed image generation processing, a moving image with continuous motion can be obtained from the 360-degree panoramic developed images.

[0118] As described above, this embodiment is an example of a preferred embodiment of the present invention; however, the present invention is not limited to it, and various modifications and changes are possible without departing from the gist of the invention.
[0119] For example, in the embodiment above, a stereographic-projection lens is used as the fisheye lens 22. Alternatively, for example, a typical equidistant-projection lens, an equisolid-angle-projection lens, or an orthographic-projection lens may be used as the fisheye lens 22.
[0120] In the embodiment above, the developed still image generation unit 52 generates two split developed images, each covering 180 degrees, as the 360-degree panoramic developed image. Alternatively, for example, the developed still image generation unit 52 may generate the 360-degree panoramic developed image as a single image, or split it into three or four images.
[0121] In the embodiment above, the processing management unit 55 updates both the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 based on user operations. Alternatively, for example, the processing management unit 55 may update only one of the inner circle boundary data 42 and the outer circle boundary data 43 stored in the EEPROM 41 based on user operations.
[0122] In the embodiment above, the PC 3 is connected to the 360-degree video camera device 1 by the USB cable 2. Alternatively, for example, an input board that outputs commands may be connected to the 360-degree video camera device 1 via the USB cable 2 or the like.
[0123] In the embodiment above, the PC 3 connected to the 360-degree video camera device 1 displays a moving image made of 360-degree panoramic developed images on the liquid crystal display device 12. Alternatively, for example, the PC 3 may store the moving image made of 360-degree panoramic developed images in a storage device (not shown).
Industrial applicability
[0124] The present invention can be used widely as a panoramic developed image imaging device, a panoramic developed image imaging system, and the like for imaging scenes such as meetings.

Claims

[1] A panoramic developed image imaging device that generates a 360-degree panoramic developed image by imaging, the device comprising:

a fisheye lens;

imaging means having a light receiving surface in which a plurality of light receiving elements are arrayed, on which the fisheye lens forms a circular image, the imaging means generating data of a captured image containing the circular image;

a memory that stores the position or size of an inner circle boundary and an outer circle boundary that define a ring-shaped range of the captured image used to generate the 360-degree panoramic developed image;

user update means for updating, based on a user operation, the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory; and

panoramic developed image generation means for generating the 360-degree panoramic developed image based on the image within the ring-shaped range of the captured image generated by the imaging means, the range being sandwiched between the inner circle boundary and the outer circle boundary stored in the memory.
[2] The panoramic developed image imaging device according to claim 1, wherein the fisheye lens is of a stereographic projection type; the memory stores an image height correction table that corrects the balance of the imaging sizes of subjects, obtained based on the relationship between the incident angle and the image height of the stereographic-projection fisheye lens; and the panoramic developed image generation means refers to the image height correction table and generates a 360-degree panoramic developed image in which the imaging sizes of subjects are balanced.
[3] The panoramic developed image imaging device according to claim 2, wherein, after updating the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory, the user update means updates the image height correction table stored in the memory based on the updated inner circle boundary and outer circle boundary and on the relationship between the incident angle and the image height of the stereographic-projection fisheye lens.
[4] The panoramic developed image imaging device according to claim 1, wherein the user update means updates the inner circle boundary or the outer circle boundary stored in the memory, based on an instruction to change the inner circle boundary or the outer circle boundary, when the changed size of the inner circle boundary or the outer circle boundary is a radius equal to or larger than a predetermined number of pixels.

[5] The panoramic developed image imaging device according to claim 1, wherein the panoramic developed image generation means calculates, for each display pixel of the 360-degree panoramic developed image, the distance and direction from the center of the circular image; uses the calculated distance and direction to calculate the coordinates of each display pixel in the captured image; and acquires, as the pixel value of each display pixel, the pixel value of the pixel of the captured image located at each calculated coordinate.
[6] The panoramic developed image imaging device according to claim 1, wherein the panoramic developed image generation means divides the range of the captured image sandwiched between the inner circle boundary and the outer circle boundary into two ranges of 180 degrees each, and generates two horizontally long panoramic split developed images, one for each of the divided image ranges; and the device has display data generation means for generating display data that displays the two horizontally long panoramic split developed images arranged one above the other.
[7] A panoramic developed image imaging system having a panoramic developed image imaging device that generates a 360-degree panoramic developed image by imaging, and a computer device that displays or stores the panoramic developed image generated by the panoramic developed image imaging device, wherein the panoramic developed image imaging device comprises:

a fisheye lens;

imaging means having a light receiving surface in which a plurality of light receiving elements are arrayed, on which the fisheye lens forms a circular image, the imaging means generating data of a captured image containing the circular image;

a memory that stores the position or size of an inner circle boundary and an outer circle boundary that define a ring-shaped range of the captured image used to generate the 360-degree panoramic developed image;

user update means for updating, based on a user operation, the position or size of at least one of the inner circle boundary and the outer circle boundary stored in the memory; and

panoramic developed image generation means for generating the 360-degree panoramic developed image based on the image within the ring-shaped range of the captured image generated by the imaging means, the range being sandwiched between the inner circle boundary and the outer circle boundary stored in the memory.
PCT/JP2007/060193 2006-07-20 2007-05-18 Panorama image photographing system and panorama image photographing method WO2008010345A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006197848A JP2008028606A (en) 2006-07-20 2006-07-20 Imaging device and imaging system for panoramically expanded image
JP2006-197848 2006-07-20

Publications (1)

Publication Number Publication Date
WO2008010345A1 true WO2008010345A1 (en) 2008-01-24

Family

ID=38956681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/060193 WO2008010345A1 (en) 2006-07-20 2007-05-18 Panorama image photographing system and panorama image photographing method

Country Status (2)

Country Link
JP (1) JP2008028606A (en)
WO (1) WO2008010345A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4787292B2 (en) * 2008-06-16 2011-10-05 富士フイルム株式会社 Omni-directional imaging device
JP2010068071A (en) * 2008-09-09 2010-03-25 Fujifilm Corp Imaging device and method, image processor and method, and image processing program
JP5573349B2 (en) 2010-05-17 2014-08-20 パナソニック株式会社 Panorama development image photographing system and method
WO2014091736A1 (en) * 2012-12-10 2014-06-19 パナソニック株式会社 Display device for panoramically expanded image
JP6271917B2 (en) * 2013-09-06 2018-01-31 キヤノン株式会社 Image recording apparatus and imaging apparatus
US10587799B2 (en) 2015-11-23 2020-03-10 Samsung Electronics Co., Ltd. Electronic apparatus and method for controlling electronic apparatus thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000242773A (en) * 1999-02-19 2000-09-08 Fitto:Kk Image data converting device
JP2002325246A (en) * 2001-04-25 2002-11-08 Univ Waseda Relay system for athletic field
JP2002374519A (en) * 2001-06-12 2002-12-26 Sharp Corp Image monitor system, image monitor method and image monitor program
JP2006098942A (en) * 2004-09-30 2006-04-13 Elmo Co Ltd Fisheye lens and photographing apparatus with the same


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9349195B2 (en) 2012-03-19 2016-05-24 Google Inc. Apparatus and method for spatially referencing images
US9740962B2 (en) 2012-03-19 2017-08-22 Google Inc. Apparatus and method for spatially referencing images
US10262231B2 (en) 2012-03-19 2019-04-16 Google Llc Apparatus and method for spatially referencing images
US10891512B2 (en) 2012-03-19 2021-01-12 Google Inc. Apparatus and method for spatially referencing images
JP2016076853A (en) * 2014-10-07 2016-05-12 キヤノン株式会社 Image processing apparatus, image processing method, and imaging apparatus
US9811946B1 (en) 2016-05-30 2017-11-07 Hong Kong Applied Science and Technology Research Institute Company, Limited High resolution (HR) panorama generation without ghosting artifacts using multiple HR images mapped to a low resolution 360-degree image
CN113612976A (en) * 2021-10-11 2021-11-05 成都派沃特科技股份有限公司 Security plan optimization method and security plan optimization device

Also Published As

Publication number Publication date
JP2008028606A (en) 2008-02-07

Similar Documents

Publication Publication Date Title
US20200296282A1 (en) Imaging system, imaging apparatus, and system
WO2008010345A1 (en) Panorama image photographing system and panorama image photographing method
US20210084221A1 (en) Image processing system and image processing method
JP6927382B2 (en) Imaging systems, methods, programs, video display devices and image processing devices.
US8570334B2 (en) Image processing device capable of efficiently correcting image data and imaging apparatus capable of performing the same
JP3992045B2 (en) Video signal processing apparatus and method, and virtual reality generation apparatus
JP2007531333A (en) Panoramic video system for real-time display of undistorted images
JP2008129903A (en) Image processor and image processing method
US20100020202A1 (en) Camera apparatus, and image processing apparatus and image processing method
JP7424076B2 (en) Image processing device, image processing system, imaging device, image processing method and program
JP7306089B2 (en) Image processing system, imaging system, image processing device, imaging device, and program
JPH11308608A (en) Dynamic image generating method, dynamic image generator, and dynamic image display method
JP2016171577A (en) Imaging apparatus and imaging system
WO2008012983A1 (en) Imaging device and imaging system
KR20000058739A (en) Panorama video surveillance system and controlling method therefore
JP2023139983A (en) Photographing apparatus, photographing method, and program
JP3934345B2 (en) Imaging device
JPH10285474A (en) Image-pickup device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07743628

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07743628

Country of ref document: EP

Kind code of ref document: A1