WO2022264418A1 - Video compositing system, method, and program - Google Patents

Video compositing system, method, and program

Info

Publication number
WO2022264418A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
correction
video
overlapping partial
Prior art date
Application number
PCT/JP2021/023247
Other languages
English (en)
Japanese (ja)
Inventor
広夢 宮下
真二 深津
英一郎 松本
麻衣子 井元
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社 filed Critical 日本電信電話株式会社
Priority to PCT/JP2021/023247
Publication of WO2022264418A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to technology for synthesizing multiple videos.
  • Patent Document 1 discloses a technique that analyzes input images whose shooting areas partially overlap, detects an object in each input image, and integrates the object detection results in the overlapping area between the input images, so that object tracking information is displayed on the panoramic image.
  • Patent Document 1 proposes a video effect in which the synthesized image and the object tracking information are output separately, and the object tracking information is overlaid on the synthesized image in subsequent processing.
  • With such a technique it is possible to change the composite video as a whole, but a video effect cannot be applied in units of the original videos (the videos input to the combining process). For example, when a first input image and a second input image are synthesized to generate a panoramic image, the two images are blended in the process, so change processing such as correcting the luminance of only the pixels corresponding to the first input image cannot be implemented.
  • The purpose of the present invention is to provide a technology that ensures real-time video viewing while controlling the change processing applied to the synthesized video for each input video.
  • According to one aspect, a video compositing system synthesizes a first video and a second video whose shooting areas partially overlap. The system comprises: an acquisition unit that acquires a first image included in the first video and a second image included in the second video; a combining unit that combines the first image and the second image to generate a combined image; an analysis unit that analyzes the first image and the second image and generates correction information for correcting the combined image; a correction unit that corrects the combined image using the correction information and an overlapping partial image, which is a portion of the first image that overlaps a portion of the second image; and an output unit that outputs the corrected combined image.
  • According to the present invention, it is possible to provide a technology that ensures real-time video viewing while controlling the change processing applied to the synthesized video for each input video.
  • FIG. 1 is a block diagram showing a video synthesizing system according to one embodiment of the present invention.
  • FIG. 2 is a diagram for explaining processing in the image synthesizing unit shown in FIG. 1.
  • FIG. 3 is a diagram for explaining processing in the image correction unit shown in FIG. 1.
  • FIG. 4 is a block diagram showing the hardware configuration of a computer according to one embodiment of the invention.
  • FIG. 5 is a flow chart illustrating a video composition method according to an embodiment of the present invention.
  • In the following, panoramic video synthesis, in which a panoramic video is generated by synthesizing two videos (moving images) whose shooting areas partially overlap, is taken as an example.
  • In panoramic image synthesis, two images with partially overlapping shooting areas are input, and the images are combined so that the overlapping portions coincide. Note that it is also possible to generate a panoramic video by synthesizing three or more videos.
  • FIG. 1 schematically shows a configuration example of a video synthesizing system 100 according to one embodiment of the present invention.
  • The video synthesizing system 100 includes a transmission server 110 and a reception server 120 as information processing devices.
  • The transmission server 110 is communicatively connected to the reception server 120. For example, the transmission server 110 is connected to the reception server 120 by a video transmission cable or an IP transmission network.
  • The transmission server 110 is connected to the imaging devices 101 and 102, which capture video, and receives video from the imaging devices 101 and 102.
  • As the imaging devices 101 and 102, for example, video cameras that output video signals in real time can be used. A video player or the like may be used instead of the imaging devices 101 and 102.
  • The imaging devices 101 and 102 are arranged so that videos with partially overlapping shooting areas can be obtained.
  • The reception server 120 is connected to the display device 103. The display device 103 may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
  • The transmission server 110 includes an image acquisition unit 111, an image synthesizing unit 112, an image encoding unit (also referred to as a compression unit) 113, an image transmission unit 114, an image analysis unit 115, and a correction information transmission unit 116.
  • The image acquisition unit 111 acquires two images whose shooting areas partially overlap. These images are hereinafter referred to as input image A and input image B.
  • The input image A is a frame included in the video obtained by the imaging device 101, and the input image B is a frame included in the video obtained by the imaging device 102.
  • Each of the imaging devices 101 and 102 generates frames at a predetermined frame rate and sequentially transmits them to the transmission server 110. The image acquisition unit 111 thus sequentially receives pairs of the input image A and the input image B from the imaging devices 101 and 102.
  • The image synthesizing unit 112 synthesizes the input image A and the input image B acquired by the image acquisition unit 111 to generate a synthesized image. The image synthesis processing will be described later.
  • The image encoding unit 113 compresses the synthesized image generated by the image synthesizing unit 112 in order to reduce its data amount. Specifically, the image encoding unit 113 encodes the synthesized image to obtain encoded data. Furthermore, the image encoding unit 113 compresses the overlapping partial image, i.e., the overlapping portion of one of the input image A and the input image B, in order to reduce its data amount. The overlapping partial image taken from the input image A is hereinafter referred to as the overlapping partial image A.
  • The image transmission unit 114 transmits the encoded data of the composite image and of the overlapping partial image A obtained by the image encoding unit 113 to the reception server 120. In this manner, the composite image and the overlapping partial image A are encoded and transmitted from the transmission server 110 to the reception server 120.
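As a rough illustration of this encode-and-send step, the sketch below uses per-frame JPEG encoding over a length-prefixed TCP stream; the patent specifies neither a codec nor a transport, so both, along with the send_frame helper itself, are assumptions for illustration only.

```python
# Hypothetical sketch of the transmission side's encode-and-send step.
# Codec (JPEG) and transport (length-prefixed TCP) are assumptions.
import socket
import struct

import cv2
import numpy as np

def send_frame(sock: socket.socket, composite: np.ndarray, overlap_a: np.ndarray) -> None:
    """Encode the composite image and overlapping partial image A, then send both."""
    for image in (composite, overlap_a):
        ok, encoded = cv2.imencode(".jpg", image)  # lossy compression reduces data amount
        if not ok:
            raise RuntimeError("image encoding failed")
        payload = encoded.tobytes()
        sock.sendall(struct.pack("!I", len(payload)))  # 4-byte big-endian length prefix
        sock.sendall(payload)
```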
  • The image analysis unit 115 analyzes the input image A and the input image B acquired by the image acquisition unit 111 and generates correction information for correcting the synthesized image generated by the image synthesizing unit 112. The image analysis processing and the correction information will be described later.
  • The correction information transmission unit 116 transmits the correction information generated by the image analysis unit 115 to the reception server 120.
  • The reception server 120 includes an image reception unit 121, an image decoding unit (also referred to as a restoration unit) 122, a correction information reception unit 123, an image correction unit 124, and an output unit 125.
  • The image reception unit 121 receives the encoded data of the composite image and the overlapping partial image A from the transmission server 110.
  • The image decoding unit 122 restores the composite image and the overlapping partial image A. Specifically, the image decoding unit 122 decodes the encoded data received by the image reception unit 121 to obtain the composite image and the overlapping partial image A.
  • The correction information reception unit 123 receives the correction information from the transmission server 110.
  • The image correction unit 124 corrects the composite image obtained by the image decoding unit 122, using the overlapping partial image A obtained by the image decoding unit 122 and the correction information received by the correction information reception unit 123. The correction processing will be described later.
  • The output unit 125 outputs the synthesized image corrected by the image correction unit 124. For example, the output unit 125 displays the corrected composite image on the display device 103.
  • The image synthesizing unit 112 receives the input image A and the input image B from the image acquisition unit 111, and deforms the input image A and the input image B according to the deformation matrix, which is a synthesis parameter.
  • The right portion of the input image A overlaps the left portion of the input image B. The input image A is subjected to a global left translation, and the input image B is subjected to a global right translation.
  • The image synthesizing unit 112 seamlessly combines the deformed input image A and input image B by alpha blending to generate one synthesized image.
  • The alpha value referred to in alpha blending is determined for each coordinate of the synthesized image and indicates the ratio at which the pixel value of the input image A and the pixel value of the input image B are mixed. Alpha values range from 0 to 1.
  • Let A(m, n) be the pixel value of input image A at coordinates (m, n), B(m, n) the pixel value of input image B at coordinates (m, n), and α(m, n) the alpha value at coordinates (m, n). The pixel value C(m, n) of the synthesized image is then calculated as

    C(m, n) = α(m, n) · A(m, n) + (1 − α(m, n)) · B(m, n) … (1)
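As a concrete sketch of equation (1) and the preceding warp step, the following assumes pure horizontal translations, equally sized inputs, and a linear alpha ramp across the overlap; none of these specifics (nor the helper name blend_pair) come from the patent.

```python
import cv2
import numpy as np

def blend_pair(img_a: np.ndarray, img_b: np.ndarray, overlap_px: int) -> np.ndarray:
    """Warp two equally sized images onto a shared canvas and alpha-blend the overlap."""
    h, w = img_a.shape[:2]
    out_w = 2 * w - overlap_px  # e.g. 1920 + 1920 - 320 = 3520

    # The deformation here is a pure translation; in general the synthesis
    # parameter is a full deformation (homography) matrix.
    m_a = np.float32([[1, 0, 0], [0, 1, 0]])               # A anchored at the left edge
    m_b = np.float32([[1, 0, w - overlap_px], [0, 1, 0]])  # B shifted to the right
    warp_a = cv2.warpAffine(img_a, m_a, (out_w, h))
    warp_b = cv2.warpAffine(img_b, m_b, (out_w, h))

    # Alpha map: 1 where only A contributes, 0 where only B does, and a
    # linear ramp across the overlap. Equation (1): C = alpha*A + (1-alpha)*B.
    alpha = np.zeros((h, out_w), dtype=np.float32)
    alpha[:, : w - overlap_px] = 1.0
    alpha[:, w - overlap_px : w] = np.linspace(1.0, 0.0, overlap_px, dtype=np.float32)
    if img_a.ndim == 3:
        alpha = alpha[..., None]  # broadcast over color channels
    return (alpha * warp_a + (1.0 - alpha) * warp_b).astype(img_a.dtype)
```

With 1920×1080 inputs and overlap_px = 320, this yields the 3520×1080 composite size mentioned below.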
  • In the example of FIG. 2, the composite image has the same size as input image A and input image B; the left portion of input image A and the right portion of input image B are cropped during image synthesis. The composite image may, however, have a different size from the input images A and B. For example, when the size of input image A and input image B is 1920×1080 and the size of the overlapping portion is 320×1080, a composite image with a size of 3520×1080 is obtained.
  • The overlapping portion is the area in which pixels of the two images overlap in the alpha blending, as shown in FIG. 2.
  • The image encoding unit 113 encodes the composite image and the overlapping partial image A, and the image transmission unit 114 transmits the encoded data of the composite image and the overlapping partial image A to the reception server 120.
  • The image encoding unit 113 may make adjustments such as lowering the resolution or increasing the compression rate in order to further reduce the data amount of the overlapping partial image A. For example, the image encoding unit 113 may lower the resolution of the overlapping partial image A and then encode it, reducing the size of the overlapping partial image A to, for example, 1/2 or 1/4.
  • When the color space of the image is YCbCr, which is often used in video processing, only the Y signal, i.e., the luminance signal, of the overlapping partial image A may be transmitted from the transmission server 110 to the reception server 120. In that case, the image encoding unit 113 extracts the Y signal from the overlapping partial image A.
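A minimal sketch of this data-reduction step, assuming a BGR input (OpenCV's default channel order); the helper name shrink_overlap and the default factor are illustrative, not from the patent.

```python
import cv2
import numpy as np

def shrink_overlap(overlap_a: np.ndarray, factor: int = 4, y_only: bool = True) -> np.ndarray:
    """Reduce the overlapping partial image before encoding, e.g. 320x1080 -> 80x270."""
    h, w = overlap_a.shape[:2]
    small = cv2.resize(overlap_a, (w // factor, h // factor), interpolation=cv2.INTER_AREA)
    if y_only:
        # Keep only the luminance plane; OpenCV's YCrCb ordering stores Y first.
        small = cv2.cvtColor(small, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    return small
```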
  • Alternatively, an uncompressed transmission method that transmits the data of the composite image and the overlapping partial image A as they are may be used. In that case, the image encoding unit 113 is eliminated.
  • In some cases, the imaging devices 101 and 102 have different image sensor sensitivity settings. Ideally, camera parameters such as gain, shutter speed, and color temperature should be unified between the imaging devices in order to eliminate luminance differences between the input images. However, when the imaging devices rely on automatic correction functions such as auto gain control and auto shutter, it is not possible to unify the sensitivity settings of the image sensors across the imaging devices.
  • The input image A and the input image B shown in FIG. 2 were captured with different camera parameters, and the input image A is relatively brighter than the input image B. Therefore, in the synthesized image, the left side portion corresponding to the input image A is relatively brighter than the right side portion corresponding to the input image B.
  • To deal with this, the reception server 120 corrects the composite image. The image analysis unit 115 generates the information used for the image correction performed in the reception server 120.
  • The image analysis unit 115 receives the input image A and the input image B from the image acquisition unit 111 and calculates the luminance difference between the input image A and the input image B.
  • When the reference luminance is lum and the average luminance of the input image A is ave_A, the luminance difference d_A of the input image A is calculated as

    d_A = ave_A / lum … (2)

  • The image analysis unit 115 then calculates the correction value f_A as

    f_A = 1 / d_A … (3)
  • The image analysis unit 115 performs the same calculation on the input image B to obtain the correction value f_B.
  • The correction value f_A is a correction value for adjusting the input image A to the reference luminance, and the correction value f_B is a correction value for adjusting the input image B to the reference luminance.
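Equations (2) and (3) amount to a per-image scale factor. A small self-contained sketch follows; the random arrays merely stand in for the luminance planes of the two overlapping portions, and the helper name is an illustrative assumption.

```python
import numpy as np

def correction_value(luma: np.ndarray, lum: float) -> float:
    """Equations (2) and (3): d = ave / lum, f = 1 / d = lum / ave."""
    ave = float(luma.mean())  # average luminance (e.g., of the overlapping portion)
    d = ave / lum             # luminance difference relative to the reference
    return 1.0 / d            # correction value pulling the image to the reference

rng = np.random.default_rng(0)
overlap_a_y = rng.uniform(100, 140, (270, 80))  # stand-in for A's overlap (brighter)
overlap_b_y = rng.uniform(80, 120, (270, 80))   # stand-in for B's overlap (darker)

lum = float(overlap_b_y.mean())           # here the reference is B's average
f_a = correction_value(overlap_a_y, lum)  # < 1: darkens A toward the reference
f_b = correction_value(overlap_b_y, lum)  # = 1 by construction
```

Choosing B's average as the reference matches the example in step S504 below, in which only f_A then needs to be transmitted.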
  • the correction information transmission unit 116 transmits correction information including the correction values f A and f B obtained by the image analysis unit 115 to the receiving server 120 .
  • Correction information is very small compared to image data. Therefore, the time required for transmission processing in the correction information transmission unit 116 is much shorter than the sum of the time required for encoding processing in the image encoding unit 113 and the time required for transmission processing in the image transmission unit 114. is assumed.
  • The image reception unit 121 receives the encoded data of the composite image and the overlapping partial image A from the transmission server 110.
  • The image decoding unit 122 decodes the encoded data of the synthesized image to obtain the synthesized image, and decodes the encoded data of the overlapping partial image A to obtain the overlapping partial image A. If the overlapping partial image A was reduced in the transmission server 110, the image decoding unit 122 enlarges it back to its original size.
  • The correction information reception unit 123 receives the correction information including the correction values f_A and f_B from the transmission server 110.
  • The image correction unit 124 performs luminance correction on the composite image using the overlapping partial image A and the correction values f_A and f_B. Due to the luminance difference between the input image A and the input image B, luminance unevenness occurs on the synthesized image; the luminance correction is performed to eliminate it.
  • To this end, the image correction unit 124 first divides the composite image into an area corresponding to the input image A, an overlapping area, and an area corresponding to the input image B. The overlapping area lies between the area corresponding to the input image A and the area corresponding to the input image B, and corresponds to the portion where the input image A and the input image B are superimposed.
  • The image correction unit 124 performs luminance correction on the area corresponding to the input image A based on the correction value f_A. Specifically, the image correction unit 124 multiplies each pixel value in that area by the correction value f_A. For each coordinate (m, n) in the area corresponding to the input image A, when the pixel value before correction is C_A(m, n), the pixel value after correction D_A(m, n) is calculated as

    D_A(m, n) = f_A · C_A(m, n) … (4)
  • Similarly, the image correction unit 124 performs luminance correction on the area corresponding to the input image B based on the correction value f_B, multiplying each pixel value in that area by the correction value f_B.
  • The image correction unit 124 performs luminance correction on the overlapping area based on the overlapping partial image A and the correction values f_A and f_B. Specifically, for each coordinate (m, n) in the overlapping area, let Ap(m, n) be the pixel value of the overlapping partial image A, C_W(m, n) the pixel value before correction, and D_W(m, n) the pixel value after correction. The image correction unit 124 calculates the post-correction pixel value as

    D_W(m, n) = f_A · α(m, n) · Ap(m, n) + f_B · (C_W(m, n) − α(m, n) · Ap(m, n)) … (5)

    where α(m, n) is the alpha value used at synthesis time; the first term corrects the contribution of the input image A, and the second term corrects the remaining contribution of the input image B.
  • The luminance correction for each area may be performed by parallel processing.
  • The image correction unit 124 integrates the luminance-corrected areas (specifically, the area corresponding to the input image A, the overlapping area, and the area corresponding to the input image B) to obtain an output image.
  • In the above description, the image correction unit 124 divides the composite image into three areas, performs luminance correction on each area, and integrates the three corrected areas. Alternatively, the image correction unit 124 may divide the composite image only virtually, by limiting the processing range on the composite image; in this case, the integration process is omitted.
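Putting equations (4) and (5) together, here is a sketch of the divide-correct-integrate procedure. It assumes single-channel 8-bit images and that the alpha map used at synthesis time is available on the receiving side (equation (5) requires it); wa and wo denote the widths of the A-only and overlapping regions, and the function name is illustrative.

```python
import numpy as np

def correct_composite(composite: np.ndarray, overlap_a: np.ndarray, alpha: np.ndarray,
                      wa: int, wo: int, f_a: float, f_b: float) -> np.ndarray:
    """Divide the composite into three regions, correct each, and reassemble."""
    c = composite.astype(np.float32)
    region_a = f_a * c[:, :wa]      # equation (4): D_A = f_A * C_A
    cw = c[:, wa:wa + wo]           # overlapping region of the composite
    aap = alpha * overlap_a.astype(np.float32)
    # Equation (5): D_W = f_A*alpha*Ap + f_B*(C_W - alpha*Ap),
    # i.e. A's contribution and B's contribution are corrected separately.
    region_w = f_a * aap + f_b * (cw - aap)
    region_b = f_b * c[:, wa + wo:]  # B-side analogue of equation (4)
    out = np.concatenate([region_a, region_w, region_b], axis=1)
    return np.clip(out, 0.0, 255.0).astype(composite.dtype)  # assumes 8-bit pixels
```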
  • FIG. 4 schematically shows a hardware configuration example of a computer 400 according to one embodiment of the invention.
  • The computer 400 shown in FIG. 4 corresponds to the transmission server 110 or the reception server 120 shown in FIG. 1.
  • The computer 400 comprises a processing circuit 401, a memory 402, an input/output interface 403, and a communication interface 404. The processing circuit 401 is communicatively coupled to the memory 402, the input/output interface 403, and the communication interface 404.
  • When the computer 400 corresponds to the transmission server 110, the processing circuit 401 is configured to perform the series of operations described with respect to the transmission server 110; when it corresponds to the reception server 120, the processing circuit 401 is configured to perform the series of operations described with respect to the reception server 120.
  • The processing circuit 401 may include a general-purpose processor such as a CPU (central processing unit).
  • The memory 402 may include a random access memory (RAM) and a storage device. The RAM includes a volatile memory such as an SDRAM and is used by the general-purpose processor as a working memory. The storage device includes a non-volatile memory such as a flash memory and stores various data including a video synthesizing program.
  • The video synthesizing program includes computer-executable instructions. The general-purpose processor loads the video synthesizing program from the storage device into the RAM, and interprets and executes it.
  • The video synthesizing program, when executed by the general-purpose processor, causes the general-purpose processor to perform the series of processes described with respect to the transmission server 110 or the reception server 120, depending on which server the computer 400 corresponds to.
  • The program may be provided to the computer 400 while being stored in a computer-readable recording medium. In this case, the computer 400 has a drive for reading data from the recording medium and obtains the program from the recording medium. Examples of recording media include magnetic disks, optical disks (CD-ROM, CD-R, DVD-ROM, DVD-R, etc.), magneto-optical disks (MO, etc.), and semiconductor memories.
  • Alternatively, the program may be distributed through a network. Specifically, the program may be stored in a server on the network, and the computer 400 may download the program from the server.
  • The processing circuit 401 may include a dedicated processor such as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array). In this case, the memory 402 may store configuration data that define the operation of the dedicated processor, and the memory 402 may be internal to the dedicated processor.
  • The input/output interface 403 is an interface for connecting peripheral devices, and the communication interface 404 is an interface for communicating with an external device.
  • When the computer 400 corresponds to the transmission server 110, the processing circuit 401 includes a video capture card, receives videos from the imaging devices 101 and 102 via the input/output interface 403, and transmits the encoded data of the composite image, the encoded data of the overlapping partial image, and the correction information to the reception server 120 via the communication interface 404.
  • When the computer 400 corresponds to the reception server 120, the processing circuit 401 receives the encoded data of the composite image, the encoded data of the overlapping partial image, and the correction information from the transmission server 110 via the communication interface 404, and outputs the corrected composite image to the display device 103 via the input/output interface 403.
  • FIG. 5 schematically shows an example of a video compositing method executed by the video compositing system 100. The flow shown in FIG. 5 is executed each time the video synthesizing system 100 acquires a video frame.
  • The processing shown in steps S501 to S505 of FIG. 5 is executed by the transmission server 110. The processes shown in steps S502 and S503 and the processes shown in steps S504 and S505 may be executed in parallel, as sketched after this list.
  • The processing shown in steps S506 to S509 is executed by the reception server 120. The processing shown in step S506 and the processing shown in step S507 may be executed in parallel.
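On the transmission server, the permitted parallelism of steps S502-S503 and S504-S505 can be sketched with two worker threads; synthesize_and_send and analyze_and_send are hypothetical stand-ins for the two branches, not names from the patent.

```python
from concurrent.futures import ThreadPoolExecutor

def synthesize_and_send(input_a, input_b):
    ...  # S502 (synthesize) and S503 (encode + transmit images); stub for illustration

def analyze_and_send(input_a, input_b):
    ...  # S504 (analyze) and S505 (generate + transmit correction info); stub

def process_frame(input_a, input_b) -> None:
    # The two branches only read the input frames, so they can run concurrently;
    # per-frame latency becomes the max of the branches rather than their sum.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(synthesize_and_send, input_a, input_b),
                   pool.submit(analyze_and_send, input_a, input_b)]
        for fut in futures:
            fut.result()  # propagate any exceptions from the workers
```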
  • In step S501, the image acquisition unit 111 acquires the input image A from the imaging device 101 and the input image B from the imaging device 102.
  • In step S502, the image synthesizing unit 112 synthesizes the input image A and the input image B to generate a synthesized image. Specifically, the image synthesizing unit 112 transforms the input image A and the input image B according to the transformation matrix, and combines the transformed images by alpha blending.
  • In step S503, the image encoding unit 113 encodes the synthesized image to obtain encoded data, and the image transmission unit 114 transmits the encoded data of the synthesized image to the reception server 120. Further, the image encoding unit 113 lowers the resolution of the overlapping partial image A, specifically by reducing its size, then encodes the overlapping partial image A to obtain encoded data, and the image transmission unit 114 transmits the encoded data of the overlapping partial image A to the reception server 120.
  • In step S504, the image analysis unit 115 compares the pixels included in the overlapping portion of the input image A with the pixels included in the overlapping portion of the input image B to calculate the luminance difference of the input image A with respect to the input image B. For example, the image analysis unit 115 obtains the luminance difference of the input image A by dividing the average luminance of the overlapping portion of the input image A by the average luminance of the overlapping portion of the input image B.
  • In step S505, the image analysis unit 115 generates correction information from the luminance difference of the input image A, and the correction information transmission unit 116 transmits the correction information to the reception server 120. For example, the image analysis unit 115 obtains the reciprocal of the luminance difference of the input image A as the correction value f_A, and the correction information transmission unit 116 transmits the correction value f_A to the reception server 120 as the correction information.
  • In step S506, the image reception unit 121 receives the encoded data of the composite image and the overlapping partial image A from the transmission server 110, and the image decoding unit 122 decodes the encoded data to obtain the composite image and the overlapping partial image A. If necessary, the image decoding unit 122 restores the overlapping partial image A to its original size.
  • In step S507, the correction information reception unit 123 receives the correction information including the correction value f_A from the transmission server 110.
  • In step S508, the image correction unit 124 divides the composite image into an area corresponding to the input image A, an area corresponding to the input image B, and an overlapping area, and performs luminance correction on these areas based on the correction information. For example, the image correction unit 124 performs luminance correction on the area corresponding to the input image A using the correction value f_A according to equation (4) above, and performs luminance correction on the overlapping area using the correction value f_A and the overlapping partial image A according to equation (5) above.
  • In step S509, the image correction unit 124 integrates the area corresponding to the input image A, the area corresponding to the input image B, and the overlapping area to generate an output image, and the output unit 125 displays the output image on the display device 103.
  • The video synthesizing system 100 executes the above-described flow for each frame, thereby obtaining a synthesized video comprising a series of synthesized images. As a result, the composite video is displayed on the display device 103 in real time.
  • As described above, the video synthesizing system 100 acquires the input image A, which is a frame included in the video obtained by the imaging device 101, and the input image B, which is a frame included in the video obtained by the imaging device 102; synthesizes the input image A and the input image B to generate a composite image; analyzes the input image A and the input image B to generate correction information; corrects the composite image using the correction information and the overlapping partial image, which is the overlapping portion of the input image A; and outputs the corrected composite image.
  • Equation (5) can also be written as

    D_W(m, n) = f_A · α(m, n) · Ap(m, n) + f_B · (1 − α(m, n)) · Bp(m, n)

    where Bp(m, n) represents the pixel value of the overlapping partial image B. In this manner, correction processing is performed for each input image within the overlapping area of the composite image.
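Under the reconstructed forms above, the two expressions for D_W are algebraically identical whenever C_W = α·Ap + (1 − α)·Bp, as equation (1) guarantees. A quick numerical check with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(1)
h, wo = 270, 80
alpha = rng.uniform(0.0, 1.0, (h, wo))
ap = rng.uniform(0.0, 255.0, (h, wo))    # overlapping partial image A
bp = rng.uniform(0.0, 255.0, (h, wo))    # overlapping partial image B
f_a, f_b = 0.9, 1.1
cw = alpha * ap + (1.0 - alpha) * bp     # overlap pixels of the composite, eq. (1)

dw_a_form = f_a * alpha * ap + f_b * (cw - alpha * ap)   # equation (5)
dw_b_form = f_a * alpha * ap + f_b * (1.0 - alpha) * bp  # modified form
assert np.allclose(dw_a_form, dw_b_form)
```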
  • In conventional panoramic video synthesis, it is assumed that the sensitivity settings of the image sensors in the imaging devices are manually unified, and it is almost impossible to correct or process each input image individually. Even when correction or processing is required, it is basically performed as preprocessing for image synthesis; that is, conventional panoramic video synthesis proceeds in the sequence of analysis processing, correction processing, and synthesis processing.
  • In the present embodiment, by contrast, the correction processing is performed on the composite image. This enables the synthesizing process and the analyzing process to be performed in parallel, thereby shortening the processing delay.
  • The video synthesizing system 100 having the above configuration can thus control the change processing applied to the synthesized video for each input video while ensuring real-time video viewing.
  • The video synthesizing system 100 generates correction information including the correction value f_A for adjusting the input image A to the reference luminance and the correction value f_B for adjusting the input image B to the reference luminance, and performs luminance correction on the composite image using the correction values f_A and f_B and the overlapping partial image A.
  • Specifically, the video synthesizing system 100 uses the correction value f_A to perform luminance correction on the area corresponding to the input image A in the synthesized image; uses the correction values f_A and f_B and the overlapping partial image A to perform luminance correction on the area corresponding to the portion where the input image A and the input image B are superimposed; and uses the correction value f_B to perform luminance correction on the area corresponding to the input image B. Thereby, luminance correction can be performed for each of the input images A and B on the composite image. As a result, even when automatic correction is applied in the imaging devices, the luminance unevenness that occurs in the composite image can be eliminated without a device for distributing unified settings.
  • The video synthesizing system 100 includes the transmission server 110 and the reception server 120 connected in series, and the series of processes is executed by the transmission server 110 and the reception server 120.
  • For example, the transmission server 110 is installed at the shooting base and the reception server 120 is installed at the projection base. There is no guarantee that the network bandwidth from the shooting base to the projection base is abundant, so it is desirable that the amount of transmitted data be as small as possible.
  • For this reason, the transmission server 110 may compress the overlapping partial image A and transmit it to the reception server 120. For example, the transmission server 110 may reduce the resolution of the overlapping partial image A, e.g., reducing its size to 1/2 or 1/4. By compressing the overlapping partial image A, the amount of transmitted data, and hence the required transmission band, can be reduced.
  • The overlapping partial image is only referenced in the correction processing and is only indirectly involved in the quality of the final output image. In a test comparing the output image obtained with the full-size overlapping partial image against the output image obtained with the overlapping partial image reduced to 1/4 of its size (80×270), no noticeable deterioration in viewing quality was observed.
  • In the above description, the video compositing system 100 includes two information processing devices, specifically the transmission server 110 and the reception server 120. Alternatively, the video compositing system 100 may be implemented by a single information processing device. In that case, the image encoding unit 113, the image transmission unit 114, the correction information transmission unit 116, the image reception unit 121, the image decoding unit 122, and the correction information reception unit 123 may be omitted.
  • A processing unit may be provided between the image correction unit 124 and the output unit 125 to perform additional correction or processing on the synthesized image. Alternatively, a processing unit may be provided in a further server connected to the reception server 120 via a video transmission network; in that case, the output unit 125 transmits the composite image to that server.
  • The present invention is not limited to the above-described embodiments and can be variously modified at the implementation stage without departing from the gist of the invention. The embodiments may also be combined as appropriate, in which case combined effects are obtained. Furthermore, the above embodiments encompass various inventions, which can be extracted by combinations selected from the disclosed components. For example, even if some components are deleted from all the components shown in an embodiment, as long as the problem can be solved and the effects are obtained, the configuration from which these components are deleted can be extracted as an invention.
  • DESCRIPTION OF SYMBOLS: 100 … Video synthesizing system; 101, 102 … Imaging device; 103 … Display device; 110 … Transmission server; 111 … Image acquisition unit; 112 … Image synthesizing unit; 113 … Image encoding unit; 114 … Image transmission unit; 115 … Image analysis unit; 116 … Correction information transmission unit; 120 … Reception server; 121 … Image reception unit; 122 … Image decoding unit; 123 … Correction information reception unit; 124 … Image correction unit; 125 … Output unit; 400 … Computer; 401 … Processing circuit; 402 … Memory; 403 … Input/output interface; 404 … Communication interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

According to one aspect of the present invention, a video compositing system combines a first video and a second video having partially overlapping imaging regions. The system comprises: an acquisition unit that acquires a first image included in the first video and a second image included in the second video; a combining unit that combines the first and second images to form a composite image; an analysis unit that analyzes the first and second images to generate correction information for correcting the composite image; a correction unit that corrects the composite image using the correction information and an image of an overlapping section, which is a section of the first image overlapping a section of the second image; and an output unit that outputs the corrected composite image.
PCT/JP2021/023247 2021-06-18 2021-06-18 Video compositing system, method, and program WO2022264418A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/023247 WO2022264418A1 (fr) 2021-06-18 2021-06-18 Video compositing system, method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/023247 WO2022264418A1 (fr) 2021-06-18 2021-06-18 Video compositing system, method, and program

Publications (1)

Publication Number Publication Date
WO2022264418A1 true WO2022264418A1 (fr) 2022-12-22

Family

ID=84525993

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/023247 WO2022264418A1 (fr) 2021-06-18 2021-06-18 Video compositing system, method, and program

Country Status (1)

Country Link
WO (1) WO2022264418A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014033886A1 (fr) * 2012-08-30 2014-03-06 富士通株式会社 Image processing apparatus, image processing method, and program
US20140267593A1 (en) * 2013-03-14 2014-09-18 Snu R&Db Foundation Method for processing image and electronic device thereof
JP2020086651A (ja) * 2018-11-19 2020-06-04 朝日航洋株式会社 Image processing device and image processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21946092; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)