WO2018087856A1 - Video composition device and video composition method - Google Patents

Video composition device and video composition method

Info

Publication number
WO2018087856A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
imaging
pixel
imaging device
imaging devices
Prior art date
Application number
PCT/JP2016/083316
Other languages
English (en)
Japanese (ja)
Inventor
浩平 岡原
古木 一朗
司 深澤
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to PCT/JP2016/083316
Priority to JP2018549688A (JP6513305B2)
Publication of WO2018087856A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • The present invention relates to a video composition device and a video composition method for generating one composite video from a plurality of videos (that is, a plurality of video data) acquired by a plurality of imaging devices.
  • In a conventional apparatus, a video composition process combines a plurality of videos acquired by shooting with a plurality of imaging devices (that is, a plurality of cameras) to generate one composite video.
  • In such video composition, video processing such as lens distortion correction processing, viewpoint conversion processing, and projection conversion processing is performed on each of the plurality of videos output from the plurality of imaging devices. Since the processing load of this video processing is very large, it is difficult to perform it in real time with an ordinary arithmetic unit (CPU: Central Processing Unit). Therefore, in the conventional apparatus, the video composition processing is performed by a GPU (Graphics Processing Unit), which is a parallel arithmetic device that can operate in parallel with an ordinary arithmetic device.
  • The present invention has been made to solve the above-described conventional problem, and its object is to provide a video composition device and a video composition method capable of performing, in a short time, video composition processing that generates one composite video from a plurality of videos acquired by a plurality of imaging devices, even when the number of imaging devices is increased.
  • A video composition device according to the present invention is a device that generates one composite video from a plurality of videos acquired by a plurality of imaging devices, and includes a video receiving unit that receives the plurality of videos, a parameter input unit to which camera parameters of the plurality of imaging devices are input, and a video processing unit that generates the composite video from the plurality of videos. Using the camera parameters input in advance, the video processing unit creates a reference table including, for each pixel of the composite video, first imaging device identification information for identifying the corresponding imaging device among the plurality of imaging devices, a corresponding first pixel position in the imaging device identified by the first imaging device identification information, and a first weighting coefficient at the corresponding first pixel position. Referring to the reference table, the video processing unit generates the composite video by substituting, for each pixel of the composite video, a first value obtained by multiplying the pixel value at the corresponding first pixel position in the imaging device identified by the first imaging device identification information by the first weighting coefficient.
  • A video composition method according to the present invention is a method for generating one composite video from a plurality of videos acquired by a plurality of imaging devices, and includes: creating, from the camera parameters input in advance for the plurality of imaging devices, a first reference table including, for each pixel of the composite video, first imaging device identification information for identifying the corresponding imaging device among the plurality of imaging devices, a corresponding first pixel position in the imaging device identified by the first imaging device identification information, and a first weighting coefficient at the corresponding first pixel position; and, referring to the first reference table, substituting, for each pixel of the composite video, the value obtained by multiplying the pixel value at the corresponding first pixel position in the imaging device identified by the first imaging device identification information by the first weighting coefficient.
  • According to the present invention, video composition processing for generating one composite video from a plurality of videos acquired by a plurality of imaging devices can be performed in a short time.
  • FIG. 1 is a functional block diagram schematically showing a configuration of the video composition device according to Embodiment 1.
  • FIG. 2 is a hardware configuration diagram schematically showing the video composition device according to Embodiment 1.
  • FIG. 3 is a diagram illustrating an example of the correspondence between a composite video pixel and the pixels of a plurality of imaging devices in the video composition device according to Embodiment 1.
  • FIG. 4 is a diagram illustrating an example of an overlapping region of the imaging ranges of a plurality of imaging devices in the video composition device according to Embodiment 1.
  • FIG. 5 is a diagram illustrating the pixel ranges of the imaging devices included in the first reference table in the video composition device according to Embodiment 1.
  • FIG. 6 is a diagram illustrating the pixel ranges of the imaging devices included in the second reference table in the video composition device according to Embodiment 1.
  • FIG. 7 is a flowchart showing the operation of the video composition device according to Embodiment 1 (that is, the video composition method according to Embodiment 1).
  • FIG. 8 is a diagram illustrating an example of an overlapping region of the trapezoidal imaging ranges of a plurality of imaging devices in the video composition device according to Embodiment 2.
  • FIG. 9 is a diagram illustrating an example in which the imaging ranges of a plurality of imaging devices in the video composition device according to Embodiment 2 are simplified.
  • FIG. 10 is a diagram illustrating the pixel ranges of the imaging devices included in the first reference table in the video composition device according to Embodiment 2.
  • FIGS. 11 to 13 are diagrams illustrating the pixel ranges (superimposed regions) of the imaging devices included in the second, third, and fourth reference tables in the video composition device according to Embodiment 2.
  • FIG. 1 is a functional block diagram schematically showing a configuration of a video composition device 1 according to Embodiment 1 of the present invention.
  • The video composition device 1 is an apparatus that can perform the video composition method according to the first embodiment.
  • The video composition device 1 generates one composite video (that is, one item of composite video data) from a plurality of videos (that is, a plurality of video data) output from a plurality of imaging devices (that is, a plurality of cameras) Cam1, ..., Cami, ..., CamN.
  • N is an integer of 2 or more
  • i is an arbitrary integer of 1 or more and N or less.
  • When the video is a moving image, the video composition device 1 creates one composite video frame from the N video frames output from the N imaging devices Cam1, ..., CamN, and repeats this processing each time video frames are input from Cam1, ..., CamN, thereby generating moving image data as the composite video data. The generated composite video data is output to the display device 2.
  • The display device 2 displays a video based on the received composite video data.
  • Examples of the composite video include a panoramic video, which is a horizontally long video with a wide field of view, and an overhead video, which is a video looking down from a high position.
  • A composite video generated by combining a plurality of videos arranged in the left-right direction (one-dimensional direction) acquired by a plurality of imaging devices is a panoramic video.
  • A composite video generated by combining a plurality of videos arranged in the vertical and horizontal directions (two-dimensional directions) acquired by a plurality of imaging devices is an overhead video.
  • The video composition device 1 creates in advance a reference table holding information on the pixels of the imaging devices Cam1, ..., CamN corresponding to each composite video pixel, and substitutes (sets) the pixel values of the composite video pixels using this reference table.
  • As shown in FIG. 1, the video composition device 1 includes a video receiving unit 4, a parameter input unit 5, a video processing unit 6 having a storage unit 6a, and a display processing unit 7.
  • The storage unit 6a may be provided outside the video processing unit 6.
  • The video composition device 1 shown in FIG. 1 can be realized (for example, by a computer) with a memory that stores a program as software and a processor as an information processing unit that executes the program stored in the memory.
  • The video receiving unit 4 receives a plurality of video data output from the plurality of imaging devices Cam1, ..., CamN and outputs the received video data to the video processing unit 6.
  • The video receiving unit 4 may also decode the video data and output the decoded video data to the video processing unit 6.
  • The parameter input unit 5 receives information indicating the camera parameters of the plurality of imaging devices Cam1, ..., CamN obtained by calibration performed in advance (that is, parameter estimation for the lens and the image sensor) and outputs it to the video processing unit 6.
  • The camera parameters include, for example, internal parameters that are camera parameters unique to each of the imaging devices Cam1, ..., CamN, external parameters that are camera parameters indicating the positions and orientations of the imaging devices Cam1, ..., CamN, and lens distortion correction coefficients (for example, a lens distortion correction map) used to correct distortion specific to each lens (for example, distortion in the radial direction of the lens and distortion in the circumferential direction of the lens). One possible way to group these parameters per camera is sketched below.
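As an illustration only (the patent does not prescribe a data layout, and all field names here are assumptions), the calibrated parameters of one imaging device could be grouped as follows:

```cpp
// Hypothetical per-camera parameter record; field names are illustrative.
struct CameraParams {
    // Internal (intrinsic) parameters unique to the imaging device.
    float fx, fy;          // focal lengths, in pixels
    float cx, cy;          // principal point, in pixels
    // External (extrinsic) parameters: position and orientation of the camera.
    float rotation[9];     // 3x3 rotation matrix, row-major
    float translation[3];  // camera position
    // Lens distortion correction coefficients
    // (e.g., radial k1..k3 and circumferential/tangential p1, p2).
    float k1, k2, k3, p1, p2;
};
```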
  • At initialization, the video processing unit 6 creates the reference tables for video composition using the camera parameters calculated by the calibration performed in advance, and stores the reference tables in the storage unit 6a.
  • The video processing unit 6 then refers to the reference tables and generates composite video data from the plurality of video data (video frames) output from the video receiving unit 4.
  • The display processing unit 7 outputs the composite video data generated by the video processing unit 6 to the display device 2.
  • FIG. 2 is a hardware configuration diagram schematically showing the video composition device 1 according to the first embodiment.
  • The video composition device 1 includes a main processor 10, a main memory 11, an auxiliary memory 12, a video processing processor 13 that is a parallel processing device such as a GPU, a video processing memory 14, an input interface 15, a file interface 16, a display interface 17, and a video input interface 18.
  • The video processing unit 6 of FIG. 1 includes the main processor 10, the main memory 11, the auxiliary memory 12, the video processing processor 13, and the video processing memory 14 shown in FIG. 2.
  • The storage unit 6a of FIG. 1 includes the main memory 11, the auxiliary memory 12, and the video processing memory 14 shown in FIG. 2.
  • The parameter input unit 5 of FIG. 1 includes the file interface 16 shown in FIG. 2, and the video receiving unit 4 of FIG. 1 includes the video input interface 18 shown in FIG. 2.
  • The display processing unit 7 of FIG. 1 includes the display interface 17 shown in FIG. 2.
  • FIG. 2 merely shows an example of the hardware configuration of the video composition device 1 shown in FIG. 1, and the hardware configuration can be variously changed. Further, the correspondence between the functional blocks 4 to 7 shown in FIG. 1 and the hardware elements 10 to 18 shown in FIG. 2 is not limited to the above example.
  • The parameter input unit 5 of FIG. 1 acquires the camera parameter information calculated by the calibration executed in advance from the auxiliary memory 12 and writes it to the main memory 11.
  • The auxiliary memory 12 may store the camera parameters calculated by the previously executed calibration.
  • The main processor 10 may store the camera parameters in the main memory 11 through the file interface 16.
  • When creating a composite video from still images, the main processor 10 may store the still image files in the auxiliary memory 12.
  • The input interface 15 receives device inputs such as mouse, keyboard, and touch panel inputs and sends the input information to the main processor 10.
  • The video processing memory 14 stores the input video data transferred from the main memory 11 and the composite video data created by the video processing processor 13.
  • The display interface 17 and the display device 2 are connected by, for example, an HDMI (registered trademark) (High-Definition Multimedia Interface) cable.
  • The composite video is output to the display device 2 via the display interface 17 serving as the display processing unit 7.
  • The video input interface 18 serving as the video receiving unit 4 receives the video inputs from the imaging devices Cam1, ..., CamN connected to the video composition device 1 and stores the input videos in the main memory 11.
  • The imaging devices Cam1, ..., CamN are, for example, network cameras, analog cameras, USB (Universal Serial Bus) cameras, HD-SDI (High Definition Serial Digital Interface) cameras, and the like. Note that the video input interface 18 uses a standard conforming to the connected devices.
  • The video processing unit 6 of FIG. 1 determines the resolution W_synth × H_synth of the composite video to be created and reserves a memory area for storing the composite video in the storage unit 6a of FIG. 1.
  • W_synth indicates the number of pixels in the horizontal direction of the rectangular composite video, and H_synth indicates the number of pixels in the vertical direction of the composite video.
  • In terms of hardware, the video processing processor 13 determines the resolution W_synth × H_synth of the composite video to be created and reserves a memory area for storing the composite video in the video processing memory 14, as sketched below.
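A minimal sketch of this reservation step, assuming CUDA as the parallel-processing environment and an RGBA pixel format (both assumptions; the patent fixes neither):

```cpp
#include <cuda_runtime.h>

int main() {
    const int W_SYNTH = 3840, H_SYNTH = 1080;  // example resolution, not from the patent
    uchar4* d_synth = nullptr;                 // one RGBA value per composite pixel
    // Reserve the composite-video area on the video processing memory (GPU).
    cudaMalloc(&d_synth, sizeof(uchar4) * W_SYNTH * H_SYNTH);
    // ... the composition passes described below write into this area ...
    cudaFree(d_synth);
    return 0;
}
```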
  • Using the camera parameters (internal parameters, external parameters, lens distortion correction data, projection plane, etc.) of the imaging devices Cam1, ..., CamN input from the parameter input unit 5 of FIG. 1, the video processing unit 6 of FIG. 1 creates the reference tables for the imaging devices Cam1, ..., CamN and stores them in the storage unit 6a.
  • In terms of hardware, the video processing processor 13 creates the reference tables for the imaging devices Cam1, ..., CamN from the camera parameters of the imaging devices Cam1, ..., CamN input through the file interface 16 and stores them in the video processing memory 14.
  • FIG. 3 is a diagram illustrating an example of the correspondence between a composite video pixel and the pixels of the plurality of imaging devices Cam1, ..., CamN in the video composition device 1 according to the first embodiment.
  • As shown in FIG. 3, the reference tables for the imaging devices Cam1, ..., CamN associate each pixel of the composite video with the pixels (x_cam1, y_cam1), ..., (x_camN, y_camN) of the imaging devices Cam1, ..., CamN and with an α value.
  • The α value is a weighting coefficient used for the blending process in the overlapping regions of the imaging ranges of the imaging devices Cam1, ..., CamN. One possible layout of a table entry is sketched below.
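For illustration (the patent does not specify a storage layout), one reference-table entry per composite pixel could look like:

```cpp
#include <cstdint>

// Hypothetical layout of one reference-table entry: the camera number i,
// the corresponding pixel (x_cami, y_cami) in that camera's video, and the
// weighting coefficient alpha used for blending.
struct RefTableEntry {
    int16_t cam;    // camera number i; a sentinel such as -1 can mean
                    // "no contribution to this composite pixel in this pass"
    int16_t x_cam;  // corresponding pixel position in camera i
    int16_t y_cam;
    float   alpha;  // weighting coefficient (1 outside overlapping regions)
};
// A W_synth x H_synth array of such entries forms one reference table.
```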
  • FIG. 4 is a diagram illustrating an example of the overlapping region of the imaging ranges of the plurality of imaging devices Cam1, ..., Cam4 in the video composition device 1 according to the first embodiment.
  • As shown in FIG. 4, an overlapping (superimposed) region occurs where the imaging ranges of adjacent imaging devices Cam1, ..., Cam4 overlap.
  • Blend processing is applied to the superimposed region: the pixel values of the different imaging devices Cam1, ..., Cam4 are referred to, each pixel value is weighted by multiplying it by a weighting coefficient α, and the weighted pixel values are combined.
  • Because the blending process combines pixel values from two imaging devices, a processing wait occurs between the video data output from the imaging devices Cam1, ..., Cam4. Note that since the blending process is performed in the overlapping region, the overlapping region is also referred to as a blend region.
  • FIG. 5 is a diagram illustrating the pixel ranges of the imaging devices Cam1, ..., Cam4 included in the reference table (first reference table) in the video composition device 1 according to the first embodiment.
  • FIG. 6 is a diagram showing the pixel ranges of the imaging devices Cam1, ..., Cam4 included in another reference table (second reference table) in the video composition device 1 according to the first embodiment.
  • The video processing unit 6 creates the reference tables for video composition from the reference tables of the imaging devices Cam1, ..., CamN.
  • The reference tables for video composition are two tables holding information on the left imaging device and the right imaging device of each blend region, that is, the upper table (first reference table) shown in FIG. 5 and the lower table (second reference table) shown in FIG. 6.
  • For each composite pixel, the first reference table for video composition holds the camera number i as the first imaging device identification information, the corresponding pixel (x_cami, y_cami) of the imaging device Cami, and the α value of that pixel. Note that the α value of pixels outside the overlapping regions is 1.
  • FIG. 5 shows an example, and the pixel ranges of the imaging devices Cam1, ..., Cam4 included in the first reference table are not limited to the example of FIG. 5.
  • For each composite pixel in an overlapping region, the second reference table for video composition holds, as second imaging device identification information for identifying the other imaging device whose imaging range overlaps among the imaging ranges captured by the plurality of imaging devices Cam1, ..., CamN, the camera number i, the corresponding pixel (x_cami, y_cami) of the imaging device Cami, and the α value of the pixel in the overlapping region of the imaging device Cami.
  • FIG. 6 shows an example, and the pixel ranges of the imaging devices Cam1, ..., Cam4 included in the second reference table are not limited to the example of FIG. 6.
  • The pixels outside the overlapping regions of the imaging ranges of the imaging devices Cam1, ..., CamN and the pixels corresponding to the imaging device on the left side of each overlapping region (or the imaging device on the right side) can be substituted into the composite video pixels simultaneously; this information is stored in the upper reference table (first reference table) shown in FIG. 5. The information on the pixels corresponding to the imaging device on the right side (or the imaging device on the left side) of each overlapping region is stored in the lower reference table (second reference table) shown in FIG. 6.
  • Accordingly, the pixel values of the pixels in the imaging ranges of the imaging devices Cam1, Cam2, Cam3, and Cam4 shown in FIG. 5 can be substituted into the composite video pixels simultaneously using the first reference table.
  • Likewise, the pixel values of the pixels in the overlapping regions of the imaging ranges of the imaging devices Cam2, Cam3, and Cam4 shown in FIG. 6 can be substituted into the composite video pixels simultaneously using the second reference table.
  • In the first embodiment, a video processing processor that is a parallel processing device such as a GPU is used together with the first and second reference tables, so that no processing wait occurs in the video composition processing, and regardless of the number of imaging devices Cam1, ..., CamN, the composite video can be generated in two steps: a substitution process using the first reference table and a substitution process using the second reference table.
  • The video input interface 18 of the video receiving unit 4 acquires one frame of video data from each of the imaging devices Cam1, ..., CamN and stores the video data in the main memory 11. The acquired video data are transferred from the main memory 11 to the video processing memory 14.
  • The video processing processor 13 of the video processing unit 6 substitutes the pixel values of the input videos transferred to the video processing memory 14 into the composite video pixels using the first reference table and the second reference table. This processing procedure is described below; the video composition processing is executed by the video processing processor 13 in parallel with the processing of the main processor 10.
  • <1> First, as the first process, the video processing processor 13 extracts from the first reference table, for each pixel (x_synth, y_synth) of the composite video, the camera number i, the pixel position (x_cami, y_cami) in the imaging device Cami corresponding to the camera number i, and the weighting coefficient α.
  • <2> Next, as the second process, the video processing processor 13 refers to the pixel value at (x_cami, y_cami) in the input video of camera number i on the video processing memory 14, multiplies this pixel value by the weighting coefficient α, and substitutes the result into the pixel (x_synth, y_synth) of the composite video on the video processing memory 14.
  • <3> Next, as the third process, the video processing processor 13 extracts from the second reference table, for each pixel (x_synth, y_synth) of the composite video, the camera number i, the pixel position (x_cami, y_cami) in the imaging device Cami corresponding to the camera number i, and the weighting coefficient α.
  • <4> Finally, as the fourth process, the video processing processor 13 refers to the pixel value at (x_cami, y_cami) in the input video of camera number i on the video processing memory 14, multiplies this pixel value by the weighting coefficient α, and adds the result to the pixel (x_synth, y_synth) of the composite video on the video processing memory 14. As a result, the blend processing is performed on the pixels in the superimposed regions of the composite video. A sketch of these two passes follows.
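A minimal CUDA sketch of processes <1> to <4>, under several assumptions not fixed by the patent: frames are planar RGB float arrays of uniform size, a negative camera number marks "no contribution", and the second pass accumulates so that the overlap is blended:

```cpp
#include <cuda_runtime.h>
#include <cstdint>

// Hypothetical reference-table entry (same layout as the sketch above).
struct RefTableEntry { int16_t cam; int16_t x_cam; int16_t y_cam; float alpha; };

// One thread per composite pixel. With the first reference table the weighted
// value is written (processes <1> and <2>); with the second reference table it
// is accumulated (processes <3> and <4>), which realizes the blend.
__global__ void composeKernel(const RefTableEntry* table,
                              const float* const* camFrames,  // camFrames[i]: planar RGB, wCam*hCam*3
                              int wCam,
                              float* synth,                   // wSynth*hSynth*3
                              int wSynth, int hSynth,
                              bool accumulate)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= wSynth || y >= hSynth) return;

    int p = y * wSynth + x;
    RefTableEntry e = table[p];
    if (e.cam < 0) return;  // this table contributes nothing to this pixel

    const float* src = camFrames[e.cam] + 3 * (e.y_cam * wCam + e.x_cam);
    for (int c = 0; c < 3; ++c) {
        float v = e.alpha * src[c];              // multiply by the weighting coefficient
        if (accumulate) synth[3 * p + c] += v;   // second pass: blend into the overlap
        else            synth[3 * p + c]  = v;   // first pass: substitute
    }
}

// Host side: the whole composition is two kernel launches, regardless of N.
void composeFrame(const RefTableEntry* d_table1, const RefTableEntry* d_table2,
                  const float* const* d_camFrames, int wCam,
                  float* d_synth, int wSynth, int hSynth)
{
    cudaMemset(d_synth, 0, sizeof(float) * 3 * wSynth * hSynth);  // uncovered pixels stay black
    dim3 block(16, 16);
    dim3 grid((wSynth + block.x - 1) / block.x, (hSynth + block.y - 1) / block.y);
    composeKernel<<<grid, block>>>(d_table1, d_camFrames, wCam, d_synth, wSynth, hSynth, false);
    composeKernel<<<grid, block>>>(d_table2, d_camFrames, wCam, d_synth, wSynth, hSynth, true);
    cudaDeviceSynchronize();
}
```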
  • FIG. 7 is a flowchart showing the operation of the video composition apparatus according to the first embodiment (that is, the video composition method according to the first embodiment).
  • After creating the reference tables in the initialization process (step S1), the video processing unit 6 repeats the video input process (step S2) and the video composition process (step S3) until the video input is completed (step S4).
  • When a positional shift occurs between the videos, the video processing unit 6 corrects the positional shift using feature points on the videos and creates a new reference table in the background. By replacing the reference table currently in use with the new reference table, an aligned composite video can be created, as sketched below.
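One way to realize this background replacement (a sketch only; buildRealignedTable() and the entry layout are hypothetical, not from the patent) is an atomic pointer swap between frames:

```cpp
#include <atomic>

struct RefTableEntry { short cam, x_cam, y_cam; float alpha; };

// Hypothetical worker-thread routine: re-detects feature points, re-estimates
// the shift, and regenerates the reference table while composition continues.
static RefTableEntry* buildRealignedTable() {
    // ... feature detection, shift estimation, table re-generation ...
    return new RefTableEntry[1]();  // placeholder result
}

std::atomic<RefTableEntry*> g_activeTable{nullptr};

void onRealignmentFinished() {  // called on the background thread
    g_activeTable.store(buildRealignedTable(), std::memory_order_release);
}
// The per-frame composition loads g_activeTable with memory_order_acquire,
// so the new table takes effect from the next frame onward.
```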
  • The display processing unit 7 transmits the panoramic composite video data created by the video processing unit 6 to the display device 2 as the composite video data.
  • The display device 2 displays a video based on the received panoramic composite video data. Note that the display device 2 may display the panoramic composite video on a single display screen or across a plurality of display screens. The display device 2 may also cut out and display only a partial area of the panoramic composite video.
  • ⟨1-3⟩ Effect: As described above, according to the video composition device 1 and the video composition method of the first embodiment, the decoding load of the input video data increases with the number of imaging devices Cam1, ..., CamN, but the load of the video composition processing of the videos acquired by the imaging devices Cam1, ..., CamN hardly increases.
  • By contrast, when a reference table is prepared for each of the imaging devices Cam1, ..., CamN and the lens distortion correction processing, viewpoint conversion processing, and projection conversion processing are combined per device, a processing wait occurs in the overlapping regions of the imaging devices Cam1, ..., CamN, so the processing time increases as the number of imaging devices Cam1, ..., CamN increases.
  • In the first embodiment, each reference table is configured only from data that can be substituted into the composite video pixels simultaneously.
  • Therefore, the video composition process can be realized in a number of steps equal to the maximum number of imaging devices Cam1, ..., CamN that share an overlapping region. That is, in the first embodiment, since the composite video is a panoramic video in which at most two imaging devices share an overlapping region, the composition process can be executed in two steps: a step using the first reference table and a step using the second reference table.
  • Embodiment 2 ⟨2-1⟩ Configuration
  • In the first embodiment, a video composition device and a video composition method for generating one composite video (a panoramic video) from a plurality of videos arranged in the left-right direction have been described.
  • In the second embodiment, a video composition device and a video composition method for generating one composite video (an overhead video) from a plurality of videos arranged in the vertical and horizontal directions are described.
  • The difference from Embodiment 1 is that the video composition processing is performed using four reference tables (first to fourth reference tables). Except for this point, the second embodiment is the same as the first embodiment; therefore, in the description of the second embodiment, reference is also made to FIGS. 1, 2, and 7 used in the description of the first embodiment.
  • FIG. 8 is a diagram illustrating an example of the overlapping regions of the trapezoidal imaging ranges of the plurality of imaging devices Cam1, ..., Cam9 in the video composition device according to the second embodiment.
  • FIG. 8 shows an example of the arrangement of the plurality of imaging devices Cam1, ..., Cam9 and does not limit the arrangement method of the plurality of imaging devices.
  • When a plurality of imaging devices Cam1, ..., Cam9 are arranged in the vertical and horizontal directions, the maximum number of imaging devices corresponding to the same pixel in the superimposed region of the composite video (41 in FIG. 8) is four.
  • Therefore, the video composition device generates four types of reference tables, that is, first to fourth reference tables, each composed only of pixels that can be simultaneously substituted into the composite video pixels; regardless of the number of imaging devices, the video composition processing can be executed in four steps.
  • FIG. 9 is a diagram illustrating an example of the overlapping regions of the imaging ranges of the plurality of imaging devices Cam1, ..., Cam9 in the video composition device according to the second embodiment.
  • In FIG. 9, each imaging range is drawn as a rectangle for simplicity.
  • FIG. 10 is a diagram illustrating the first reference table for these imaging ranges in the video composition device according to Embodiment 2.
  • FIGS. 11 to 13 show the second, third, and fourth reference tables, which are the other reference tables for the imaging ranges (superimposed regions) in the video composition device according to the second embodiment.
  • FIGS. 9 to 13 show the images after projection conversion of the imaging devices Cam1, ..., CamN as rectangles for the sake of simplicity, but the processing is the same for trapezoids and other shapes.
  • The first to fourth reference tables shown in FIGS. 10 to 13 are examples, and the shape and number of the reference tables are not limited to these examples.
  • ⟨2-2⟩ Video composition processing: The pixel values of the input videos transferred to the video processing memory 14 are substituted into the composite video using the first to fourth reference tables. The processing procedure is shown below.
  • The video processing processor 13 of the video processing unit 6 executes the following operations in parallel.
  • <11> As the eleventh process, the video processing processor 13 extracts from the first reference table shown in FIG. 10, for each pixel (x_synth, y_synth) of the composite video, the camera number i, the pixel position (x_cami, y_cami) in the imaging device Cami corresponding to the camera number i, and the weighting coefficient α.
  • <12> As the twelfth process, the video processing processor 13 refers to the pixel value of the pixel (x_cami, y_cami) in the input video of the imaging device Cami with camera number i on the video processing memory 14, multiplies this pixel value by the weighting coefficient α, and substitutes the result into the pixel (x_synth, y_synth) of the composite video on the video processing memory 14.
  • The video processing processor 13 then executes the same processes as the eleventh and twelfth processes for the second reference table, for the third reference table, and for the fourth reference table, accumulating the weighted pixel values in the superimposed regions. A sketch of this four-pass procedure follows.
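Reusing the composeKernel and RefTableEntry sketches from Embodiment 1 (still an illustration under the same assumptions), the whole Embodiment 2 composition is four launches of the same per-pixel kernel:

```cpp
// Four passes, one per reference table; the first pass substitutes and the
// remaining passes accumulate, which blends up to four cameras per pixel.
void composeFrame4(const RefTableEntry* const d_tables[4],
                   const float* const* d_camFrames, int wCam,
                   float* d_synth, int wSynth, int hSynth)
{
    cudaMemset(d_synth, 0, sizeof(float) * 3 * wSynth * hSynth);
    dim3 block(16, 16);
    dim3 grid((wSynth + block.x - 1) / block.x, (hSynth + block.y - 1) / block.y);
    for (int t = 0; t < 4; ++t) {
        bool accumulate = (t > 0);  // pass 1 writes, passes 2-4 blend
        composeKernel<<<grid, block>>>(d_tables[t], d_camFrames, wCam,
                                       d_synth, wSynth, hSynth, accumulate);
    }
    cudaDeviceSynchronize();  // the frame is complete after four passes
}
```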
  • The processing procedure of the video processing unit as a whole is the same as that of FIG. 7.
  • The operation of the display processing unit is the same as that of the first embodiment.
  • ⟨2-3⟩ Effect: According to the video composition device and the video composition method of the second embodiment, the decoding load of the input video data increases with the number of imaging devices Cam1, ..., CamN, but the load of the video composition processing of the videos acquired by the imaging devices Cam1, ..., CamN hardly increases.
  • Since each reference table is composed only of data that can be substituted into the composite video pixels simultaneously, taking into account the processing wait between the videos of imaging devices with adjacent overlapping regions, the video composition processing can be executed in at most four steps regardless of the number of imaging devices.
  • 1: video composition device, 2: display device, 4: video receiving unit, 5: parameter input unit, 6: video processing unit, 6a: storage unit, 7: display processing unit, 10: main processor, 11: main memory, 12: auxiliary memory, 13: video processing processor, 14: video processing memory, 15: input interface, 16: file interface, 17: display interface, 18: video input interface, Cam1, ..., Cami, ..., CamN: imaging devices (cameras).

Abstract

According to the invention, a video composition device (1) comprises a video receiving unit (4), a parameter input unit (5), and a video processing unit (6). The video processing unit (6) uses camera parameters input in advance to create a reference table including, for each pixel of a composite video: first imaging device identification information (Cami) identifying the corresponding imaging device among a plurality of imaging devices (Cam1, ..., CamN); a corresponding first pixel position (x_cami, y_cami) in the imaging device identified by the first imaging device identification information; and a first weighting coefficient (α) at the corresponding first pixel position. The video processing unit (6) then refers to the reference table and generates the composite video by substituting, for each pixel (x_synth, y_synth) of the composite video, a first value obtained by multiplying the pixel value at the corresponding first pixel position in the imaging device identified by the first imaging device identification information by the first weighting coefficient (α).
PCT/JP2016/083316 2016-11-10 2016-11-10 Video composition device and video composition method WO2018087856A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2016/083316 WO2018087856A1 (fr) 2016-11-10 2016-11-10 Video composition device and video composition method
JP2018549688A JP6513305B2 (ja) 2016-11-10 2016-11-10 Video composition device and video composition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/083316 WO2018087856A1 (fr) 2016-11-10 2016-11-10 Video composition device and video composition method

Publications (1)

Publication Number Publication Date
WO2018087856A1 (fr) 2018-05-17

Family

ID=62109506

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/083316 WO2018087856A1 (fr) 2016-11-10 2016-11-10 Video composition device and video composition method

Country Status (2)

Country Link
JP (1) JP6513305B2 (fr)
WO (1) WO2018087856A1 (fr)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008048266A (ja) * 2006-08-18 2008-02-28 Matsushita Electric Ind Co Ltd In-vehicle image processing device and viewpoint conversion information generation method therefor
WO2015029934A1 (fr) * 2013-08-30 2015-03-05 クラリオン株式会社 Device, system, and method for calibrating cameras

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2019225255A1 (ja) * 2018-05-21 2021-02-18 富士フイルム株式会社 Image correction device, image correction method, and image correction program
US11198393B2 (en) * 2019-07-01 2021-12-14 Vadas Co., Ltd. Method and apparatus for calibrating a plurality of cameras
WO2021192096A1 (fr) * 2020-03-25 2021-09-30 三菱電機株式会社 Image processing device, image processing method, and image processing program
JPWO2021192096A1 (fr) * 2020-03-25 2021-09-30
JP7038935B2 (ja) 2020-03-25 2022-03-18 三菱電機株式会社 Image processing device, image processing method, and image processing program

Also Published As

Publication number Publication date
JP6513305B2 (ja) 2019-05-15
JPWO2018087856A1 (ja) 2019-04-11

Similar Documents

Publication Publication Date Title
US9196022B2 (en) Image transformation and multi-view output systems and methods
JP2009124685A (ja) Method and system for combining multiple videos and displaying them in real time
KR101521008B1 (ko) Correction method for distorted images obtained using a fisheye lens, and image display system for implementing it
JP2005339313A (ja) Image presentation method and apparatus
JP6882868B2 (ja) Image processing device, image processing method, and system
JP6735908B2 (ja) Panoramic video compression method and device
JP2007089110A (ja) Image division method for a television wall
WO2018087856A1 (fr) Video composition device and video composition method
TW200839734A (en) Video compositing device and video output device
JP2002014611A (ja) Method and apparatus for projecting video onto a planetarium or spherical screen
US20090059018A1 (en) Navigation assisted mosaic photography
WO2009090727A1 (fr) Display
JP2004056359A (ja) Image composition device and image composition program
KR101819984B1 (ko) Real-time image composition method
JP2009065519A (ja) Image processing apparatus
JPH10108003A (ja) Image composition device and image composition method
Shete et al. Real-time panorama composition for video surveillance using GPU
JP5645448B2 (ja) Image processing device, image processing method, and program
JP4676385B2 (ja) Image composition method, image composition device, and image composition program
KR102074072B1 (ko) Focus-plus-context image processing apparatus using a mobile device equipped with dual cameras, and method therefor
JP6417204B2 (ja) Image processing apparatus and image processing method
JP2023550764A (ja) Method, apparatus, smart terminal, and medium for creating a panoramic image based on a large display
JP2012150614A (ja) Free viewpoint image generation device
CN110519530B (zh) Hardware-based picture-in-picture display method and device
JP5387276B2 (ja) Image processing device and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16921180

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018549688

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16921180

Country of ref document: EP

Kind code of ref document: A1