WO2004077821A1 - Video Compositing Apparatus (映像合成装置) - Google Patents
Video Compositing Apparatus
- Publication number
- WO2004077821A1 (PCT/JP2004/001990)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Definitions
- the present invention relates to a video compositing apparatus for combining a plurality of videos into one screen, and in particular to a scheme in which the videos important to the user are selected for compositing and arranged in the screen so that they are visually easy to see.
- a video receiving terminal that receives the videos of a plurality of remote cameras is preferably an inexpensive general-purpose display terminal with only one display screen, rather than an expensive dedicated terminal.
- as surveillance systems that reproduce the videos of a plurality of cameras on a video receiving terminal with only one display screen, systems that time-divide the camera videos and display them sequentially on one screen are common. Because different videos are switched onto the single screen at fixed intervals, the correspondence between the displayed video and the camera capturing it is very hard to follow.
- when videos are displayed in a time-division manner, important scenes may not be displayed. As a monitoring system that instead combines multiple camera videos on one screen and displays them simultaneously, the system shown in Japanese Patent Application Laid-Open No. Hei 4-24059 is known.
- this conventional system is shown in FIG. 1.
- it comprises multiple (here, three) surveillance cameras 1-1, 1-2, 1-3, A/D conversion units connected to the surveillance cameras 1-1 to 1-3, a signal processing circuit 7, a control unit 9, a D/A conversion unit 11, and a monitor 13 for displaying the image.
- the signal processing circuit 7 has a selection circuit 15 for selecting an image signal, and a screen reduction and synthesis circuit 17 for reducing a plurality of images and combining them into one screen.
- the images of the surveillance cameras 1-1 to 1-3 are output to the memory units 5-1 to 5-3 through the A/D conversion units 3-1 to 3-3.
- Screen reduction and synthesis circuit 17 reduces and combines all the images into one image, and outputs it to selection circuit 15.
- when the signal processing circuit 7 receives an image selection signal from the control unit 9, the selection circuit 15 selects either one of the surveillance camera images or the reduced composite image according to the image selection signal, and outputs it to the D/A conversion unit 11.
- the D/A conversion unit 11 outputs the video signal to the monitor 13.
- a plurality of videos can be displayed on a terminal having only one display screen, and a user can easily grasp the whole image using a plurality of videos. Also, since the user can switch the video, the user can select and view one video.
- An object of the present invention is to provide a video compositing apparatus that combines a plurality of videos on one screen, can automatically display the videos important to the user, and can composite and display the screen so that the important videos are visually easy to view.
- the video compositing apparatus of the invention composes a plurality of videos on one screen and comprises: a video input means for inputting the videos; a trigger generation means for generating a trigger indicating the importance of a video; a screen configuration calculation means for calculating the screen configuration according to the importance indicated by the generated trigger; an image creation means for creating, from the input videos, the images to be composited based on the calculated screen configuration; and a screen compositing means for compositing a plurality of images, including the created images, on one screen.
- Figure 1 shows an example of a conventional surveillance system
- FIG. 2 is a block diagram showing the configuration of a video combining apparatus according to an embodiment of the present invention
- FIG. 3 is a flowchart for explaining the operation of the video synthesizing apparatus according to the present embodiment
- FIG. 4 is a flowchart showing the contents of the operation example 1 of the screen combining parameter calculation process in FIG. 3;
- FIG. 5 is an explanatory view showing an outline of screen synthesis by still image synthesis in the operation example 1;
- FIG. 6 is a flowchart showing the contents of the operation example 2 of the screen synthesis parameter calculation process in FIG. 3;
- FIG. 7 is an explanatory diagram of a cutout area calculation method in the operation example 2.
- FIG. 8 is an explanatory view showing an outline of screen synthesis by cutout synthesis in the operation example 2;
- FIG. 9 is a flowchart showing the contents of the operation example 3 of the screen combining parameter calculation process in FIG.
- FIG. 10 is a flowchart showing the contents of the operation example 3 of the image storage processing in FIG. 3;
- FIG. 11 is an explanatory diagram showing an outline of screen synthesis by loop synthesis in the operation example 3;
- FIG. 12 is a flowchart showing the contents of operation example 4 of the screen combining parameter calculation process in FIG.
- FIG. 13 is a flowchart showing the contents of the operation example 5 of the screen synthesis parameter calculation process in FIG.
- FIG. 14 is a flowchart showing the contents in Operation Example 6 of the screen combining parameter calculation process in FIG. 3.
- the gist of the present invention is to calculate a screen configuration (specifically, screen compositing parameters) using a trigger indicating the importance of a video when compositing a plurality of videos on one screen, and to perform screen compositing based on the calculation result. For example, the video at the trigger occurrence time may be composited as a still image ("still image synthesis", described later), the video at the trigger occurrence position may be enlarged and composited ("cutout synthesis", described later), or the scenes before and after the trigger occurrence may be played back slowly in a loop ("loop synthesis", described later).
- the screen compositing parameters are controlled according to the magnitude of the trigger: the display time (for "still image synthesis"), the zoom ratio (for "cutout synthesis"), or the playback speed, loop interval, and loop count (for "loop synthesis"). Specifically, the larger the trigger, the longer the display time, the larger the magnification, the slower the playback speed, the longer the loop section, or the greater the number of loops.
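The trigger-to-parameter control above can be sketched as follows. This is a minimal illustration, not taken from the patent: the function names, scale factors, and linear form are assumptions; the patent only requires that each parameter grow (or shrink) monotonically with the trigger magnitude.

```python
def composition_params(trigger_value, max_trigger=100.0):
    """Illustrative mapping from trigger magnitude to screen-compositing
    parameters. Larger triggers yield longer display, higher zoom, slower
    playback, and more loop repetitions (constants are assumptions)."""
    ratio = min(trigger_value / max_trigger, 1.0)
    return {
        "display_time_s": 2.0 + 8.0 * ratio,   # still image synthesis: longer display
        "zoom": 1.0 + 3.0 * ratio,             # cutout synthesis: larger magnification
        "playback_speed": 1.0 - 0.75 * ratio,  # loop synthesis: slower playback
        "loop_count": 1 + int(4 * ratio),      # loop synthesis: more repetitions
    }
```

Any monotonic mapping would satisfy the text; the linear ramp is chosen only for clarity.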
- the type of screen composition is expressed using the image. For example, the color or shape of the frame of the image display area expresses the type of screen composition.
- "a plurality of videos" covers both the case where the outputs of a plurality of cameras are used and the case where a plurality of video streams are generated from the output of one camera.
- each image constituting an input video is defined as an input image; the image that occupies the entire screen is defined as the "main image", and an image that is composed from a partial area of an input image and composited with the main image is defined as a "sub-image".
- FIG. 2 is a block diagram showing the configuration of the video combining apparatus according to the embodiment of the present invention.
- the video compositing apparatus 100 composes a plurality of videos on one screen and comprises: a video input unit 102 that inputs the images constituting each video; a trigger generation unit 104 that generates a trigger indicating the importance of a video; a synthesis trigger calculation unit 106 that uses the trigger from the trigger generation unit 104 to calculate a synthesis trigger used to compute the screen compositing parameters described later; a screen configuration calculation unit 108 that determines the presence or absence of screen compositing using the synthesis trigger and calculates the screen configuration (specifically, the screen compositing parameters); a video storage unit 110 that stores video; a sub-image creation unit 112 that creates the images (sub-images) to be composited with the main image from the input images; an image information addition unit 114; and a screen compositing unit 116. Video is supplied to the apparatus by a video signal generation unit 200 for generating a video signal.
- the video signal generation unit 200 consists of, for example, a camera and an A/D conversion unit.
- the number of cameras (and A/D conversion units) is not particularly limited.
- One or more videos output from the video signal generation unit 200 are supplied to the video input unit 102 in the video combining device 100.
- the video input unit 102 inputs and processes the video signal output from the video signal generation unit 200 for each video. Specifically, the synchronization signal is detected from the input video signal, and the images constituting the video are output to the screen configuration calculation unit 108 and the video storage unit 110 for each screen. At this time, the video input unit 102 adds an image number which is unique to each image and whose value monotonically increases as time progresses, and outputs it.
- the trigger generation unit 104 generates a trigger indicating the degree of importance of the image, and outputs the trigger to the combination trigger calculation unit 106.
- the trigger is emitted when the video input to the video compositing apparatus 100 contains an image judged important to the user, and includes a value indicating the degree of importance (hereinafter referred to as the "trigger value").
- assuming this video compositing apparatus 100 is used in a monitoring system that watches for abnormal states, the trigger generation unit 104 can be, for example, one of the following sensors.
- the motion detection sensor outputs a trigger when it detects an area in which a sudden movement has occurred, such as the appearance of an intruder, in a captured image.
- the motion detection sensor is, for example, an infrared sensor. In this case, the trigger is alarm information indicating that a preset condition, such as an abnormality, has been met, emitted for example by a sensor attached to the surveillance camera or by a sensor installed near it.
- the motion recognition sensor outputs a trigger when an object (including a person) having a motion other than a normal motion registered in advance is present in the input video.
- the larger the abnormal movement the larger the trigger value and the greater the degree of importance.
- the motion recognition sensor is configured of, for example, a camera.
- the trigger is, for example, motion detection information indicating the magnitude of the motion of the object obtained by detecting the motion of the object in the video.
- the image recognition sensor outputs a trigger when there is a pre-registered object in the input video.
- the higher the recognition result the larger the trigger value and the greater the degree of importance.
- the image recognition sensor is, for example, an image processing apparatus.
- the trigger is an image recognition result indicating that a specific object is present in the image obtained by image recognition (for example, by a method such as pattern matching) of the specific object in the image.
- when a scene considered important is captured in the input video, the trigger generation unit 104 outputs trigger position information, representing where in the image the trigger occurred, to the synthesis trigger calculation unit 106 in addition to the trigger itself.
- the trigger generation unit 104 is not limited to the motion detection sensor, the motion recognition sensor, and the image recognition sensor described above.
- it may be a device that accepts a screen compositing request from the user.
- the trigger is a screen compositing request from the user.
- since the criteria for judging what is important in the input video vary with the application of the system, the trigger is not limited to one emitted by a sensor or a user request; it may be output by any means, as long as it contains the trigger occurrence position in the image and a value (trigger value) indicating the degree of importance of the video.
- the trigger sources (the various sensors above, user requests) may be used alone or in combination.
- the synthesis trigger calculation unit 106 calculates the synthesis trigger using the trigger from the trigger generation unit 104 and outputs the synthesis trigger to the screen configuration calculation unit 108.
- the synthesis trigger is a value used to calculate the screen compositing parameters: a signal having a trigger type indicating the kind of importance of the input video and a trigger value indicating its degree.
- the synthesis trigger calculation unit 106 selects the trigger type of the synthesis trigger from, for example, (1) "important shot", meaning the input video contains an important instant; (2) "important area", meaning a specific area of the input video contains something important; and (3) "important scene", meaning the input video contains an important span of time. For the trigger value of the synthesis trigger, the magnitude of the trigger input from the trigger generation unit 104 is used as it is, since it already indicates the degree of importance.
- the synthesis trigger calculation unit 106 determines the trigger type as follows: (1) if the trigger comes from a motion detection sensor, the image at the moment a suspicious intruder appeared is important, so the type is set to "important shot" to prevent that instant from being missed; (2) if the trigger comes from an image recognition sensor, a pre-registered suspicious object or person is important, so the type is set to "important area" so that it can be viewed enlarged; (3) if the trigger comes from a motion recognition sensor, the scene containing an object or person with abnormal motion is important, so the type is set to "important scene".
- triggers by various sensors can be converted into trigger types that clearly indicate the meaning of importance in the image. Therefore, according to the trigger type that indicates the importance of the image, it is possible to determine the screen composition parameters so that important scenes can be easily viewed. The determination method will be described in detail later.
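The sensor-to-type conversion above reduces to a small lookup. A minimal sketch, using the rules stated in the text; the string keys and the fallback value are illustrative assumptions:

```python
def trigger_type(source):
    """Map a trigger's source to its trigger type, per the text:
    motion detection -> "important shot", image recognition -> "important area",
    motion recognition -> "important scene" (key names are assumptions)."""
    mapping = {
        "motion_detection": "important shot",    # the instant of intrusion matters
        "image_recognition": "important area",   # the registered object's region matters
        "motion_recognition": "important scene", # the span around abnormal motion matters
    }
    # Fallback for other sources (e.g. a user request) is an assumption.
    return mapping.get(source, "important shot")
```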
- the combination trigger calculation unit 106 outputs the trigger position information from the trigger generation unit 104 to the screen configuration calculation unit 108 as it is.
- the screen configuration calculation unit 108 uses the synthesis trigger (and, as needed, the trigger position information) from the synthesis trigger calculation unit 106 to determine the presence or absence of screen compositing and to calculate the screen configuration. That is, it determines whether to perform screen compositing using the synthesis trigger, and when compositing is to be performed, it calculates the screen compositing parameters and outputs them to the video storage unit 110, the sub-image creation unit 112, the image information addition unit 114, and the screen compositing unit 116.
- the input image from the video input unit 102 is output to the screen combining unit 116 regardless of the determination result of the presence or absence of the screen combining.
- when the screen configuration calculation unit 108 receives an image from the video input unit 102, it receives the synthesis trigger and trigger position information from the synthesis trigger calculation unit 106 and stores the trigger type, trigger value, and trigger position in an internal memory (not shown). If no synthesis trigger is output from the synthesis trigger calculation unit 106, the trigger value is stored in the internal memory as zero (0).
- one of the screen combining parameters calculated by the screen configuration calculating unit 108 is a combining type.
- the synthesis type is a parameter indicating how to compose the screen, for example: "still image synthesis" (composite a still sub-image into a part of the input image), "cutout synthesis" (composite an enlarged cut-out of the input image as a sub-image), or "loop synthesis" (composite a repeatedly played-back scene as a sub-image).
- the synthesis type is determined as follows: if the trigger type of the synthesis trigger is "important shot", the input video contains an important image at the time the trigger was output, so "still image synthesis" is chosen; if it is "important area", the area where the trigger was output contains an important object or the like, so "cutout synthesis" is chosen; if it is "important scene", the times around the trigger output contain an important scene, so "loop synthesis" is chosen.
- the remaining parameters of the screen composition parameters differ depending on the composition type.
- for "still image synthesis", the screen compositing parameters calculated by the screen configuration calculation unit 108 are, besides the synthesis type, the target sub-image (a parameter indicating the image number of the image used for sub-image creation) and the sub-image display time (a parameter indicating how long the composited sub-image is displayed continuously) (3 in total).
- for "cutout synthesis", the screen compositing parameters are, besides the synthesis type, the cutout center coordinates (a parameter indicating the center coordinates, in the input image, of the region cut out as the sub-image) and the cutout size (a parameter indicating the size of the region cut out as the sub-image) (3 in total).
- for "loop synthesis", the screen compositing parameters are, besides the synthesis type: in the first pattern (hereinafter "pattern 1"), the composited scene center time (a parameter indicating the image number of the image at the center time of the scene to be composited) and the playback speed (a parameter indicating the playback speed of the scene repeatedly played back as the sub-image) (3 in total); in the second pattern ("pattern 2"), the composited scene center time and the loop interval (a parameter indicating the number of images constituting the repeatedly played scene) (3 in total); and in the third pattern ("pattern 3"), the composited scene center time, the loop count (a parameter indicating the number of times the scene is repeatedly played back as the sub-image), and a frame counter (a parameter indicating the number of images remaining to be composited as the sub-image) (4 in total).
- in each synthesis type, the sub-image size (a parameter indicating the composited size of the sub-image) may additionally be used as a screen compositing parameter.
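The three parameter sets above can be summarized as plain records. This is only a structural sketch of what the text enumerates; the class and field names are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class StillImageParams:
    """Parameters for "still image synthesis" (field names assumed)."""
    target_sub_image: int     # image number used to create the sub-image
    display_time: float       # how long the composited sub-image stays shown
    sub_image_size: tuple     # optional composited size of the sub-image

@dataclass
class CutoutParams:
    """Parameters for "cutout synthesis"."""
    cutout_center: tuple      # center coordinates of the cut-out region
    cutout_size: tuple        # size of the region cut out as the sub-image
    sub_image_size: tuple

@dataclass
class LoopParams:
    """Parameters for "loop synthesis" (patterns 1-3 merged for illustration)."""
    scene_center_image: int   # image number at the center time of the scene
    playback_speed: float     # pattern 1
    loop_interval: int        # pattern 2: number of images in the looped scene
    loop_count: int           # pattern 3: repetitions of the scene
    sub_image_size: tuple
```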
- the video storage unit 110 stores the images output from the video input unit 102 in an internal memory. When storing them, it is controlled by the screen compositing parameters from the screen configuration calculation unit 108, as follows.
- when the synthesis type is "still image synthesis", the internal memory is not rewritten if the target sub-image number in the screen compositing parameters differs from the image number of the input image; conversely, if the target sub-image number matches the image number of the input image, the internal memory is rewritten with the input image. This holds while the synthesis type is other than "no synthesis".
- the video storage unit 110 has an internal memory capable of storing a plurality of images, in particular so that it can support "loop synthesis" as the synthesis type; a plurality of images output from the video input unit 102 can be stored in the internal memory.
- the internal memory in addition to the memory area for storing a plurality of images, the internal memory has a storage counter indicating the storage position of the image and a readout counter indicating the readout position of the image.
- the maximum possible value of each counter is the number of images that can be stored in the internal memory. If the value of the counter is updated and exceeds the maximum value, the value of the counter is returned to “1”. That is, the internal memory is structured such that periodic image data can be stored and read out by updating the counter each time the image is stored and read.
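The periodic store/read behavior described above is a ring buffer with 1-based counters that wrap past the maximum. A minimal sketch (the patent gives no concrete implementation; class and method names are assumptions):

```python
class RingImageStore:
    """Fixed-capacity image memory with a storage counter and a readout
    counter, each 1-based and returned to 1 once it exceeds the maximum,
    mirroring the internal memory described in the text."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity
        self.store_counter = 1   # position of the next store (1-based)
        self.read_counter = 1    # position of the next readout (1-based)

    def store(self, image):
        self.slots[self.store_counter - 1] = image
        self.store_counter += 1
        if self.store_counter > self.capacity:   # wrap back to 1
            self.store_counter = 1

    def read(self):
        image = self.slots[self.read_counter - 1]
        self.read_counter += 1
        if self.read_counter > self.capacity:    # wrap back to 1
            self.read_counter = 1
        return image
```

Updating a counter on every store and read gives the cyclic access the loop-synthesis playback relies on.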
- the sub-image creating unit 112 creates a sub-image using the image output from the video storage unit 110 based on the screen combining parameter output from the screen configuration calculating unit 108.
- when the synthesis type of the screen compositing parameters is "still image synthesis", the target sub-image output from the video storage unit 110 is reduced to the sub-image size and output to the image information addition unit 114. The sub-image size is assumed to be predetermined and not to exceed the size of the input image; however, it can be changed according to the content of the video.
- when the synthesis type is "cutout synthesis", a region of the target sub-image output from the video storage unit 110 is cut out, resized, and output to the image information addition unit 114. For example, the cut-out region (see FIG. 7, described later) is the rectangle defined by the cutout size in the horizontal and vertical directions, centered on the cutout center coordinates of the screen compositing parameters.
- as before, the sub-image size is determined in advance, does not exceed the size of the input image, and can be changed according to the content of the video.
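The cut-out area computation can be sketched as below. The clamping rule, which shifts the rectangle so it never leaves the input image, is a plausible reading rather than the patent's exact method (FIG. 7 is not reproduced here):

```python
def cutout_region(center_x, center_y, cut_w, cut_h, img_w, img_h):
    """Rectangle of size (cut_w, cut_h) centered on the cutout center
    coordinates, shifted to stay inside an img_w x img_h input image.
    Returns (left, top, right, bottom); the clamping is an assumption."""
    left = center_x - cut_w // 2
    top = center_y - cut_h // 2
    left = max(0, min(left, img_w - cut_w))  # keep the region inside the image
    top = max(0, min(top, img_h - cut_h))
    return left, top, left + cut_w, top + cut_h
```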
- the image information addition unit 114 changes the color of the outer frame of the sub-image output from the sub-image creation unit 112 according to the synthesis type of the screen compositing parameters output from the screen configuration calculation unit 108. Specifically, for example, the outer frame is changed to red for "still image synthesis", blue for "cutout synthesis", and yellow for "loop synthesis".
- the colors assigned to the synthesis types are not limited to the above example; any colors may be used as long as they indicate whether the sub-image is a still image, a cut-out image, or a loop-playback image.
- the sub-image, with the outer-frame color indicating the synthesis type added, is output to the screen compositing unit 116.
- as a way of expressing the synthesis type, the shape of the sub-image can also be changed, in addition to the color of its outer frame. This method will be described in detail later.
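The frame styling is again a lookup keyed by synthesis type. The colors below are the ones the text gives; the shape values and key names are illustrative assumptions:

```python
def frame_style(synthesis_type):
    """Outer-frame styling that tells the viewer which synthesis produced
    the sub-image: red for still image, blue for cutout, yellow for loop,
    as stated in the text. Shape values are assumptions."""
    styles = {
        "still_image": {"color": "red", "shape": "rectangle"},
        "cutout": {"color": "blue", "shape": "rectangle"},
        "loop": {"color": "yellow", "shape": "rounded"},
    }
    return styles[synthesis_type]
```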
- the screen compositing unit 116 composes the image (main image) output from the screen configuration calculation unit 108 and the sub-image output from the image information addition unit 114 into one screen, and outputs the composited image (composite image) to the video encoding unit 300.
- the position at which the sub-image is composited with the main image is determined in advance, and the sub-image is superimposed on the position at which the sub-image in the main image is composited to create a composite image.
- the composition position of the sub-image can be changed according to the characteristics of the input image, and may be any position.
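The superimposition itself amounts to overwriting a fixed region of the main image with the sub-image. A minimal stand-in using row-major lists of pixel values instead of real frames (the function name and representation are assumptions):

```python
def composite(main_image, sub_image, pos_x, pos_y):
    """Superimpose sub_image onto a copy of main_image with its top-left
    corner at (pos_x, pos_y), producing the composite image sent onward."""
    out = [row[:] for row in main_image]          # copy the main image
    for dy, sub_row in enumerate(sub_image):
        for dx, pixel in enumerate(sub_row):
            out[pos_y + dy][pos_x + dx] = pixel   # overwrite with the sub-image
    return out
```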
- in the following, the video signal generation unit 200 is assumed to consist of one camera and an A/D conversion unit, so that only one video is input to the video compositing apparatus 100.
- the description will be made on the assumption that the present video synthesis apparatus 100 is used as a monitoring system for monitoring the presence or absence of an abnormal state.
- the case where the image at the time the trigger occurs is treated as a still image and composited as a sub-image into a partial area of the input image, that is, the case of performing "still image synthesis", will be described. In this case, the larger the trigger, the longer the display time is set.
- FIG. 3 is a flowchart for explaining the operation of the video compositing apparatus 100 according to the present embodiment.
- step S1000 the video input unit 102 performs video input processing for inputting a video signal.
- specifically, the synchronization signal is detected from the video signal input from the video signal generation unit 200, and the images constituting the video are output, one screen at a time, to the screen configuration calculation unit 108 and the video storage unit 110.
- at this time, an image number that is unique to each image and whose value monotonically increases with time is added.
- in step S2000, it is determined whether a trigger (including the trigger value representing the degree of importance) has been generated in the trigger generation unit 104. This judgment is made, for example, according to whether the signal (trigger) from the trigger generation unit 104 has been input to the synthesis trigger calculation unit 106.
- the trigger is issued by a sensor such as a motion detection sensor, a motion recognition sensor, an image recognition sensor, or a user request.
- if a trigger has been generated (S2000: YES), the procedure proceeds to step S3000; if not (S2000: NO), it proceeds to step S4000.
- in step S3000, the synthesis trigger calculation unit 106 performs the synthesis trigger calculation process, taking the trigger as input. Specifically, the trigger from the trigger generation unit 104 is used to calculate the synthesis trigger (including the trigger type and trigger value), which is output to the screen configuration calculation unit 108.
- the trigger type of the synthesis trigger is decided as one of, for example, (1) important shot, (2) important area, or (3) important scene. For the trigger value of the synthesis trigger, the magnitude of the input trigger is used as it is.
- the trigger type is determined as follows: (1) if the trigger comes from the motion detection sensor, the image at the moment the suspicious intruder appeared is important, so the type is set to "important shot" to prevent it from being missed; (2) if the trigger comes from the image recognition sensor, the pre-registered suspicious object or person is important, so the type is set to "important area" to make it easier to view enlarged; (3) if the trigger comes from the motion recognition sensor, the scene containing the object or person with abnormal motion is important, so the type is set to "important scene".
- in this operation example, the trigger type is determined to be "important shot" because "still image synthesis" is performed.
- in step S4000, the screen configuration calculation unit 108 performs the screen compositing parameter calculation process. Specifically, it first determines whether to perform screen compositing using the synthesis trigger from the synthesis trigger calculation unit 106; if compositing is to be performed, the screen compositing parameters are calculated and output to the video storage unit 110, the sub-image creation unit 112, the image information addition unit 114, and the screen compositing unit 116. The input image from the video input unit 102 is output to the screen compositing unit 116 regardless of the result of this determination.
- more concretely, the synthesis trigger from the synthesis trigger calculation unit 106 is received, and its trigger type and trigger value are stored in the internal memory. If no synthesis trigger has been output, the trigger value is stored as zero (0). The screen compositing determination is then made according to whether the trigger value of the synthesis trigger is zero, that is, whether a synthesis trigger has been input; the screen compositing parameters, such as the synthesis type, are determined based on the trigger type of the synthesis trigger.
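This decision step can be sketched as follows. A simplified illustration only: the dictionary representation of the trigger and the parameter fields are assumptions, and the real parameter derivation (display time, cutout size, and so on) is reduced to the type mapping:

```python
def update_params(prev_params, synthesis_trigger):
    """With no synthesis trigger (trigger value stored as zero), the
    previous screen-compositing parameters are kept; otherwise new ones
    are derived from the trigger type (derivation simplified here)."""
    if synthesis_trigger is None or synthesis_trigger["value"] == 0:
        return prev_params                      # no trigger: keep last parameters
    type_to_synthesis = {
        "important shot": "still_image",
        "important area": "cutout",
        "important scene": "loop",
    }
    return {"synthesis_type": type_to_synthesis[synthesis_trigger["type"]],
            "trigger_value": synthesis_trigger["value"]}
```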
- the target sub-image is, as described above, a parameter indicating the image number of the image used for creating the sub-image, and the sub-image display time is a parameter indicating how long the composited sub-image is displayed continuously.
- FIG. 4 is a flowchart showing the contents of the operation example 1 of the screen combining parameter calculation process in FIG.
- In step S4100, it is determined whether or not the trigger value of the composition trigger is zero, that is, whether or not a composition trigger has been input. As a result of this determination, if the trigger value of the composition trigger is not zero, that is, a composition trigger has been input (S4100: NO), the process proceeds to step S4110; if the trigger value is zero, that is, no composition trigger has been input (S4100: YES), the process proceeds to step S4140.
- In step S4110, since the composition trigger has a non-zero trigger value, that is, a composition trigger has been input, the composition type is determined according to a predetermined criterion. For example, as described above:
- 1. when the composition trigger type is "important shot", "still image synthesis" is selected because an important image is included in the input video at the time the trigger is output;
- 2. when the composition trigger type is "important area", "cut-out synthesis" is selected because the area of the input video where the trigger is output includes an important object or the like; and
- 3. when the composition trigger type is "important scene", "loop synthesis" is selected because an important scene is included around the time the trigger is output in the input video. In this operation example, since the trigger type is "important shot", the composition type is determined to be "still image synthesis".
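The branch logic above can be sketched as follows. This is an illustrative Python sketch, not part of the specification; the function name and the string labels are hypothetical stand-ins for the trigger types and composition types described above.

```python
def select_synthesis_type(trigger_type: str, trigger_value: float) -> str:
    """Map a composition trigger to a composition type (Operation Examples 1-3).

    A trigger value of zero means no trigger was input, in which case the
    previous parameters are kept ("no composition" here stands in for that).
    """
    if trigger_value == 0:
        return "no composition"
    mapping = {
        "important shot": "still image synthesis",   # important image at trigger time
        "important area": "cut-out synthesis",       # important object in a screen area
        "important scene": "loop synthesis",         # important scene around trigger time
    }
    return mapping[trigger_type]
```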
- In step S4120, the target sub-image is determined.
- Specifically, the target sub-image is determined to be the current input image.
- time_disp(t): display time of the sub-image at time t
- MAX_time: set maximum value of the sub-image display time
- (Formula 1) is an example of the calculation method and is not limited to this.
- The method of calculating the sub-image display time may be any method as long as the display time increases as the trigger value increases.
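As a concrete illustration of such a calculation, the following sketch assumes one possible form of (Formula 1) in which the display time grows linearly with the trigger value; the constants MAX_TRIGGER and MAX_TIME are illustrative assumptions, not values taken from the specification.

```python
MAX_TRIGGER = 100.0   # assumed possible maximum of the trigger value
MAX_TIME = 10.0       # assumed set maximum sub-image display time (seconds)

def display_time(trigger_value: float) -> float:
    """One possible form of (Formula 1): display time grows with the trigger value."""
    return min(trigger_value, MAX_TRIGGER) / MAX_TRIGGER * MAX_TIME
```

Any monotonically increasing mapping would equally satisfy the condition stated above.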
- In step S4140, since the composition trigger has a zero trigger value, that is, no composition trigger has been input, the screen composition parameters are set to the parameters at the previous calculation.
- In step S4150, the continuous display time of the sub-image is updated.
- That is, the sub-image display time time_disp(t) is given by the following (Expression 2):
- time_disp(t) = time_disp(t′) − (t − t′) … (Expression 2)
- where t′ is the time of the previous screen composition parameter calculation. That is, the sub-image display time is updated by subtracting, from the display time calculated last time, the time elapsed from the previous calculation to the present.
- In step S4160, the composition type is updated. Specifically, if the sub-image display time becomes equal to or less than zero as a result of the update process in step S4150, the composition type is changed to "no composition". In step S4170, the three screen composition parameters (composition type, target sub-image, sub-image display time) calculated in steps S4100 to S4160 are output to the video storage unit 110, the sub-image generation unit 112, the image information addition unit 114, and the screen combining unit 116, and the input image (the input image from the video input unit 102) is also output to the screen combining unit 116.
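The countdown and type change in steps S4150 and S4160 can be sketched as follows; this is an illustrative Python sketch, and the function names are hypothetical.

```python
def update_display_time(time_disp_prev: float, elapsed: float) -> float:
    """(Expression 2): subtract the time elapsed since the previous calculation."""
    return time_disp_prev - elapsed

def update_synthesis_type(time_disp: float, current_type: str) -> str:
    """Step S4160: fall back to 'no composition' once the display time runs out."""
    return "no composition" if time_disp <= 0 else current_type
```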
- In step S5000, the video storage unit 110 performs video storage processing for storing video. Specifically, the image output from the video input unit 102 is stored in the internal memory. At that time, based on the screen composition parameters from the screen configuration calculation unit 108, it is determined whether to rewrite the internal memory. For example, if the composition type of the screen composition parameters is "no composition", the internal memory is rewritten using the input image.
- If the composition type is "still image synthesis", the internal memory is basically not rewritten; however, even in the case of "still image synthesis", if the number of the target sub-image of the composition parameters is the same as the image number of the input image, the internal memory is rewritten using the input image.
- the image stored in the internal memory is output to the sub image generation unit 112.
- In step S6000, the sub-image generation unit 112 performs a sub-image generation process to create the sub-image used for screen composition. Specifically, based on the screen composition parameters output from the screen configuration calculation unit 108, a sub-image is created using the image output from the video storage unit 110, and the created sub-image is output to the image information addition unit 114.
- In this operation example, the target image of the sub-image output from the video storage unit 110 is reduced to the size of the sub-image, and the result is output to the image information addition unit 114.
- the size of the sub-image shall be determined in advance and shall not exceed the size of the input image.
- the size of the sub image can be changed according to the content of the image.
- step S7000 the image information adding unit 114 performs an image information adding process of adding the image information of the sub image. Specifically, for example, the color of the outer frame of the sub image output from the sub image generation unit 112 is changed according to the synthesis type of the screen synthesis parameter output from the screen configuration calculation unit 108. The sub image with the outer frame color changed is output to the screen synthesizer 116.
- the color of the outer frame of the sub image is changed to red.
- However, the color of the outer frame is not limited to red; any color may be used as long as it can indicate that the sub-image is a still image.
- In step S8000, the screen combining unit 116 performs screen combining processing to combine the images into one screen. Specifically, the image (main image) output from the screen configuration calculation unit 108 and the sub-image output from the image information addition unit 114 are combined into one screen, and the combined image (composite image) is output to the video encoding unit 300.
- In screen composition, as described above, the position where the sub-image is composited into the main image is determined in advance, and the sub-image is superimposed at that position in the main image to create the composite image.
- the composite position of the sub video can be changed according to the characteristics of the input video.
- In step S9000, it is determined whether or not the series of video composition processes in steps S1000 to S8000 is to be ended. This determination is made, for example, based on whether a preset time or number of frames has been exceeded, or whether the user has issued a termination request. As a result, when the preset time or number of frames is exceeded, or when a termination request is issued by the user (S9000: YES), the series of video composition processes is terminated; if not (S9000: NO), the process returns to step S1000.
- FIG. 5 is an explanatory view showing an outline of screen composition by the above-mentioned “still image composition”.
- "401" represents the current input image,
- "403" represents the target sub-image before reduction,
- "405" represents the reduced target sub-image 403,
- "407" is the sub-image representing the image information (composition type) of sub-image 405 by the color of its outer frame, and
- "409" is the composite image obtained by superimposing the sub-image 407, after the frame change, on the input image 401.
- the color of the outer frame can represent the state of the sub-image (type: “still image composition” in this case).
- In this way, the larger the trigger value indicating the importance of the video, the longer the video at the trigger occurrence time is displayed as a still image when the images are combined.
- By encoding the composite video and displaying it on a receiving terminal through a transmission path, the user can view not only the current video but also the video of the important time as a still image at the same time, even on a receiving terminal with only one screen; moreover, the more important the image, the longer it can be viewed.
- In addition, the user can judge the content of the sub-image by the color of the sub-image outer frame without transmitting and receiving information other than the composite video. That is, in the conventional system, a plurality of videos are simply reduced and combined, so the combined image contains no additional information indicating the state of each video; to know such additional information, it must be transmitted and received in addition to the video, which complicates the system. The present invention eliminates this inconvenience.
- Although this example shows the case where the input video is displayed as the main image and the still image as the sub-image, the present invention is not limited to this; it is also possible to display the still image as the main image and the input video as the sub-image.
- Steps S1000 to S3000 are the same as in Operation Example 1, and thus the description thereof is omitted.
- However, when a scene considered to be important is captured in the input video, the trigger generation unit 104 generates a trigger (with a trigger value indicating the degree of importance), as described above.
- the trigger position information representing the trigger generation position in the screen is also output to the synthetic trigger calculation unit 106 along with the trigger.
- the trigger position information input to the combination trigger calculation unit 106 is output to the screen configuration calculation unit 108 along with the combination trigger.
- However, in this operation example, the trigger type is determined to be "important area" in the composition trigger calculation process of step S3000.
- step S4000 the screen synthesis parameter calculation process is performed as in the operation example 1.
- However, in this operation example, the composition trigger and trigger position information from the composition trigger calculation unit 106 are accepted, and the trigger type, trigger value, and trigger position of the composition trigger are stored in the internal memory.
- The "cut-out center coordinates" are, as described above, a parameter indicating the center coordinates, in the input image, of the image cut out as the sub-image.
- The "cut-out size" is, as described above, a parameter indicating the size of the image cut out as the sub-image.
- FIG. 6 is a flowchart showing the contents of Operation Example 2 of the screen composition parameter calculation process in FIG. 3. The process common to Operation Example 1 shown in FIG. 4 will only be briefly described.
- In step S4200, as in Operation Example 1 (see step S4100 in FIG. 4), it is determined whether or not the trigger value of the composition trigger is zero, that is, whether or not a composition trigger has been input. As a result of this determination, if the trigger value is not zero, that is, a composition trigger has been input (S4200: NO), the process proceeds to step S4210; if the trigger value is zero, that is, no composition trigger has been input (S4200: YES), the process proceeds to step S4240.
- In step S4210, as in Operation Example 1 (see step S4110 in FIG. 4), since the trigger value of the composition trigger is not zero, that is, a composition trigger has been input, the composition type is determined according to a predetermined criterion. However, in this operation example, since the trigger type is "important area" and the area of the input video where the trigger is output includes an important object, the composition type is determined to be "cut-out synthesis".
- In step S4220, the cut-out center coordinates are determined.
- Specifically, the cut-out center coordinates are determined to be the trigger position.
- MIN_size_h: set minimum value of the horizontal sub-image cut-out size
- MIN_size_v: set minimum value of the vertical sub-image cut-out size. As shown in (Formula 3) and (Formula 4), the sub-image cut-out size becomes smaller as the trigger value becomes larger. However, the cut-out size does not exceed the size of the input image.
- FIG. 7 is an explanatory view of this cutout area calculation method.
- "503" is the input image targeted for the sub-image,
- "505" is the trigger position (here equal to the cut-out center coordinates), and
- "507" is the cut-out area defined by the cut-out size calculated based on the trigger value.
- (Formula 3) and (Formula 4) are an example of the calculation method and are not limited to this.
- The method of calculating the cut-out size may be any method as long as the size becomes smaller as the trigger value becomes larger.
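One possible linear form satisfying this condition can be sketched as follows; the constants and the exact interpolation are illustrative assumptions, not taken from the specification.

```python
MAX_TRIGGER = 100.0   # assumed possible maximum of the trigger value
MIN_SIZE_H = 64       # assumed set minimum horizontal cut-out size (MIN_size_h)
MIN_SIZE_V = 48       # assumed set minimum vertical cut-out size (MIN_size_v)

def cutout_size(trigger_value: float, input_w: int, input_h: int):
    """One possible form of (Formula 3)/(Formula 4): the cut-out shrinks from
    the full input size toward the set minimum as the trigger value grows, so
    it never exceeds the input size."""
    r = min(trigger_value, MAX_TRIGGER) / MAX_TRIGGER
    w = int(input_w - r * (input_w - MIN_SIZE_H))
    h = int(input_h - r * (input_h - MIN_SIZE_V))
    return w, h
```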
- In step S4240, since the trigger value of the composition trigger is zero, that is, no composition trigger has been input, the screen composition parameters are set to the parameters at the previous calculation.
- In step S4250, the three screen composition parameters (composition type, cut-out center coordinates, cut-out size) calculated in steps S4200 to S4240 are output to the video storage unit 110, the sub-image generation unit 112, the image information addition unit 114, and the screen combining unit 116, and the input image is output to the screen combining unit 116.
- In step S5000, the image output from the video input unit 102 is stored in the internal memory, as in Operation Example 1.
- the image stored in the internal memory is output to the sub-image creating unit 112 in order to perform “cut-out combining”.
- In step S6000, as in Operation Example 1, a sub-image is created using the image output from the video storage unit 110, based on the screen composition parameters output from the screen configuration calculation unit 108, and the created sub-image is output to the image information addition unit 114.
- However, in this operation example, the sub-image is cut out from the target image output from the video storage unit 110 and scaled down, and the result is output to the image information addition unit 114.
- The sub-image cut-out is performed by cutting out, from the target input image 503, the cut-out area 507 defined by the horizontal and vertical cut-out sizes cut_size_h(t) and cut_size_v(t), centered on the cut-out center coordinates (cx, cy) of the screen composition parameters (equal to the trigger position 505).
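The cut-out itself can be sketched as a simple crop around the trigger position. This is an illustrative sketch; clamping the region to the image bounds is an assumption the specification does not spell out, and the image is represented here as a plain list of pixel rows.

```python
def crop_around(image, cx: int, cy: int, cut_w: int, cut_h: int):
    """Cut out a cut_w x cut_h region of `image` (a list of pixel rows),
    centred on the trigger position (cx, cy), clamped to the image bounds."""
    h, w = len(image), len(image[0])
    x0 = max(0, min(cx - cut_w // 2, w - cut_w))   # left edge, kept inside the image
    y0 = max(0, min(cy - cut_h // 2, h - cut_h))   # top edge, kept inside the image
    return [row[x0:x0 + cut_w] for row in image[y0:y0 + cut_h]]
```

The cropped region would then be scaled to the fixed sub-image size before the outer frame is colored.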
- the size of the sub-image shall be determined in advance and shall not exceed the size of the input image.
- the size of the sub image can be changed according to the content of the image.
- In step S7000, as in Operation Example 1, the image information addition unit 114 changes the color of the outer frame of the sub-image output from the sub-image generation unit 112 according to the composition type of the screen composition parameters output from the screen configuration calculation unit 108, and outputs the sub-image with the changed outer frame color to the screen combining unit 116.
- the color of the outer frame of the sub image is changed to blue.
- the color of the outer frame is not limited to blue, and may be any color as long as it can indicate that the sub-image is a cutout image.
- Steps S8000 and S9000 are the same as in Operation Example 1, and thus the description thereof is omitted.
- FIG. 8 is an explanatory view showing an outline of screen composition by the above-mentioned “cut out composition”.
- "501" is the current input image,
- "503" is the target sub-image before cut-out,
- "505" is the trigger position in the image where the trigger occurred,
- "507" is the cut-out area,
- "509" is the sub-image created by cutting out part of the target sub-image 503 and adjusting its size,
- "511" is the sub-image representing the image information (composition type) of sub-image 509 by the color of its outer frame, and
- "513" is the composite image obtained by superimposing the sub-image 511, after the frame change, on the input image 501.
- In the composite image 513, the input image and the image cut out and scaled around the position where the trigger occurred can be displayed at the same time.
- the color of the outer frame can represent the state (type: “extraction combination” in this case) of the sub-image.
- In this way, the larger the trigger value indicating the importance of the video, the smaller the cut-out size of the area centered on the trigger generation position when the cut-out image synthesis is performed.
- By encoding the composite video and displaying it on a receiving terminal through a transmission path, the user can view not only the current video but also the cut-out video of the important position at the same time, even on a receiving terminal with only one screen; moreover, the more important the video, the more enlarged the image of the important position becomes.
- In addition, the user can judge the content of the sub-image by the color of the sub-image outer frame without transmitting and receiving information other than the composite video.
- Furthermore, the cut-out is not limited to one; it is also possible to cut out and combine a plurality of areas.
- Steps S1000 to S3000 are the same as in Operation Example 1, and thus the description thereof is omitted. However, in this operation example, since "loop synthesis" is performed, the trigger type is determined to be "important scene" in the composition trigger calculation process of step S3000.
- In step S4000, as in Operation Example 1, the screen composition determination is performed, and the screen configuration is calculated using the composition trigger to calculate the screen composition parameters.
- However, in this operation example, the three calculated screen composition parameters are the composition type (here, "loop synthesis"), the composition scene center time, and the playback speed.
- The "composition scene center time" is, as described above, a parameter indicating the image number of the image located at the center time of the scene to be combined.
- The "playback speed" is, as described above, a parameter indicating the playback speed of the scene repeatedly played back as the sub-image.
- FIG. 9 is a flowchart showing the contents of Operation Example 3 of the screen composition parameter calculation process in FIG. 3. The process common to Operation Example 1 shown in FIG. 4 will only be briefly described.
- In step S4300, as in Operation Example 1 (see step S4100 in FIG. 4), it is determined whether or not the trigger value of the composition trigger is zero, that is, whether or not a composition trigger has been input. As a result of this determination, if the trigger value is not zero, that is, a composition trigger has been input (S4300: NO), the process proceeds to step S4310; if the trigger value is zero, that is, no composition trigger has been input (S4300: YES), the process proceeds to step S4340. In step S4310, as in Operation Example 1 (see step S4110 in FIG. 4),
- the composition type is determined according to a predetermined standard.
- However, in this operation example, since the trigger type is "important scene" and an important scene is included in the input video before and after the time the trigger is output, the composition type is determined to be "loop synthesis".
- In step S4320, the composition scene center time is determined. Specifically, the composition scene center time is determined to be the image number of the current input frame.
- Next, in step S4330, the playback speed is determined.
- the sub image playback speed is calculated based on the magnitude of the trigger value.
- MIN_fps: set minimum value of the sub-image playback speed
- (Formula 5) is an example of the calculation method, and is not limited to this.
- the method of calculating the reproduction speed may be any method as long as the reproduction speed decreases as the trigger value increases.
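One possible linear form satisfying this condition can be sketched as follows; the constants MAX_TRIGGER and MAX_FPS are illustrative assumptions (only MIN_fps appears in the specification's glossary).

```python
MAX_TRIGGER = 100.0   # assumed possible maximum of the trigger value
MAX_FPS = 30.0        # assumed playback speed of the main image
MIN_FPS = 5.0         # set minimum sub-image playback speed (MIN_fps)

def playback_speed(trigger_value: float) -> float:
    """One possible form of (Formula 5): the sub-image playback speed falls
    toward MIN_FPS as the trigger value grows, so important scenes play slowly."""
    r = min(trigger_value, MAX_TRIGGER) / MAX_TRIGGER
    return MAX_FPS - r * (MAX_FPS - MIN_FPS)
```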
- In step S4340, since the trigger value of the composition trigger is zero, that is, no composition trigger has been input, the screen composition parameters are set to the parameters at the previous calculation.
- In step S4350, the three screen composition parameters (composition type, composition scene center time, playback speed) calculated in steps S4300 to S4340 are output to the video storage unit 110, the sub-image generation unit 112, the image information addition unit 114, and the screen combining unit 116, and the input image (the input image from the video input unit 102) is output to the screen combining unit 116.
- The process then returns to the main flowchart in FIG. 3.
- Next, in step S5000, the video storage unit 110 performs video storage processing.
- Specifically, the video storage unit 110 has an internal memory capable of storing a plurality of images, and stores the image output from the video input unit 102 in this internal memory.
- In addition to the memory area for storing a plurality of images, this internal memory has a storage counter indicating the storage position of an image and a readout counter indicating the readout position of an image; by updating these counters each time an image is stored or read out, image data can be stored and read out cyclically.
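The counter behaviour described above can be sketched as a ring buffer. This is an illustrative Python sketch, not the specification's implementation; the class and member names are hypothetical, and the counters run from 1 and wrap past the capacity as described.

```python
class RingBuffer:
    """Sketch of the internal memory of the video storage unit: a fixed-size
    store with a storage counter and a readout counter that wrap back to 1."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.images = [None] * (capacity + 1)  # index 0 unused; counters run 1..capacity
        self.count_store = 1                   # storage counter
        self.count_read = 1                    # readout counter

    def store(self, image):
        self.images[self.count_store] = image
        self.count_store = self.count_store % self.capacity + 1  # wrap to 1 past the max

    def read(self):
        image = self.images[self.count_read]
        self.count_read = self.count_read % self.capacity + 1
        return image
```

With capacity 3, a fourth stored image overwrites the oldest slot, giving the cyclic storage the text describes.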
- FIG. 10 is a flowchart showing the contents of the operation example 3 of the image storage processing in FIG. 3.
- First, the memory is initialized. Specifically, the composition scene center time of the screen composition parameters is compared with the composition scene center time input last time, and when the two differ, the image data in the internal memory and each counter are initialized. In the initialization, the image data in the internal memory is cleared, the value of each counter is reset to "1", and the current composition scene center time is stored in the internal memory.
- In step S5110, it is determined whether the composition type is "loop synthesis" or "no composition". As a result of the determination, if the composition type is "loop synthesis", the process proceeds to step S5120; if the composition type is "no composition", the process proceeds to step S5170.
- In step S5120, it is determined whether the scene storage is complete. As a result of this determination, if the scene storage is complete (S5120: YES), the process immediately proceeds to step S5150; if the scene storage is not complete (S5120: NO), the process proceeds to step S5130.
- center_position: counter value indicating the position where the image of the composition scene center time is stored in the internal memory
- loop_margin: difference in counter value from the composition scene center time to the last image of the scene to be stored
- That is, the ratio of the number of images belonging to the times before and after the trigger occurrence time is determined in advance: the number of images from the composition scene center time to the end of the scene to be stored is determined in advance, and the size of the internal memory is determined by the number of images of the scene to be stored. Therefore, the size of the internal memory determines the number of images of the scene to be repeatedly played back, that is, the section of the scene.
- In step S5130, the image is stored. Specifically, the input image is stored in the internal memory at the position indicated by the storage counter.
- In step S5140, the storage counter is updated. Specifically, the update is performed by adding one to the value of the storage counter. However, if the storage counter value exceeds its maximum value, the counter value is set to "1".
- In step S5150, the image is read out. Specifically, the image at the position indicated by the readout counter in the internal memory is read out and output to the sub-image generation unit 112.
- In step S5160, the readout counter is updated, specifically using, for example, the following (Expression 7) and (Expression 8):
- count_read(t) = count_read(t−1) + add(·) … (Expression 7), (Expression 8)
- A mod B: remainder when A is divided by B
- The method of updating the readout counter is determined according to the ratio of the playback speed of the sub-image to the playback speed of the main image.
- That is, in (Expression 7), the lower the sub-image playback speed, the lower the frequency at which the readout counter value is incremented, resulting in slow playback; conversely, in (Expression 8), the higher the sub-image playback speed, the higher the increment frequency, resulting in high-speed playback.
- In this way, the playback speed of the sub-image can be changed by controlling the method of updating the readout counter.
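One way to realise such a speed-dependent counter update can be sketched with a fractional accumulator. This is an illustrative sketch of the idea only, not the specification's (Expression 7)/(Expression 8) themselves, whose exact forms are not reproduced here; all names are hypothetical.

```python
def advance_read_counter(count_read, accum, sub_fps, main_fps, capacity):
    """Advance the readout counter at a rate sub_fps / main_fps relative to the
    main image, wrapping with a modulo so the stored scene plays as a loop.

    `accum` carries the fractional progress between main-image frames;
    counters run 1..capacity as in the specification.
    """
    accum += sub_fps / main_fps                 # fractional progress per main frame
    step, accum = int(accum), accum - int(accum)
    count_read = (count_read - 1 + step) % capacity + 1
    return count_read, accum
```

At half the main speed the counter advances only every second frame (slow playback); at double speed it advances by two per frame (high-speed playback).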
- When this readout counter updating process is completed, the process returns to the main flowchart of FIG. 3.
- In step S5170, the image is stored. Specifically, the input image is stored in the internal memory at the position indicated by the storage counter.
- In step S5180, the storage counter is updated. Specifically, the update is performed by adding 1 to the value of the storage counter. However, if the storage counter value exceeds its maximum value, the counter value is set to "1". When this storage counter updating process is completed, the process returns to the main flowchart of FIG. 3.
- In step S6000, as in Operation Example 1, a sub-image is created using the image output from the video storage unit 110, based on the screen composition parameters output from the screen configuration calculation unit 108, and the created sub-image is output to the image information addition unit 114.
- In step S7000, the image information addition unit 114 changes the color of the outer frame of the sub-image output from the sub-image generation unit 112 according to the composition type of the screen composition parameters output from the screen configuration calculation unit 108, and outputs the sub-image with the changed outer frame color to the screen combining unit 116.
- the color of the outer frame of the sub image is changed to yellow.
- the color of the outer frame is not limited to yellow and may be any color as long as it can indicate that the sub image is a loop reproduction image.
- Steps S8000 and S9000 are the same as in Operation Example 1, and thus the description thereof is omitted.
- FIG. 11 is an explanatory diagram showing an outline of screen composition by the above-mentioned "loop synthesis".
- In this way, the larger the trigger value indicating the importance of the video, the lower the playback speed at which the scene composed of the images before and after the trigger occurrence time is repeatedly played back when the images are combined.
- By encoding the composite video and displaying it on a receiving terminal through a transmission path, the user can view not only the current video but also the scene before and after the important time at the same time as a composite screen, even on a receiving terminal with only one screen; moreover, a scene of higher importance can be viewed at a lower playback speed and over a longer time.
- In addition, the user can judge the content of the sub-image by the color of the sub-image outer frame without transmitting and receiving information other than the composite video.
- Although this example shows the case where the input video is displayed as the main image and the important scene as the sub-image, the present invention is not limited to this; it is also possible to display the important scene as the main image and the input video as the sub-image.
- loop synthesis is possible even if there is more than one input video.
- In Operation Example 4, as in Operation Example 3, the case will be described where, when screen composition is performed as the result of the screen composition determination using a composition trigger, a scene consisting of the image group before and after the time when the trigger occurred is repeatedly played back as a sub-image combined into a partial area of the input video, that is, the case where "loop synthesis" is performed. However, in this operation example, the larger the trigger value, the larger the number of images of the scene to be repeatedly played back (pattern 2).
- Steps S1000 to S3000 are the same as in Operation Example 1, and thus the description thereof is omitted. However, in this operation example, since "loop synthesis" is performed, the trigger type is determined to be "important scene" in the composition trigger calculation process of step S3000.
- In step S4000, as in Operation Example 1, the screen composition determination is performed, and the screen configuration is calculated using the composition trigger to calculate the screen composition parameters.
- However, in this operation example, the three calculated screen composition parameters are the composition type (here, "loop synthesis"), the composition scene center time, and the loop section.
- The "composition scene center time" is, as described above, a parameter indicating the image number of the image located at the center time of the scene to be combined.
- The "loop section" is, as described above, a parameter indicating the number of images constituting the scene repeatedly played back as the sub-image.
- FIG. 12 is a flowchart showing the contents of Operation Example 4 of the screen composition parameter calculation process in FIG. 3. The description of the processing common to Operation Example 3 shown in FIG. 9 will be omitted.
- Steps S4300 to S4320 are the same as in Operation Example 3, and thus the description thereof is omitted.
- Next, the loop section is determined. Specifically, the loop section of the sub-image is calculated based on the magnitude of the trigger value.
- That is, the loop section frame_num(t) of the sub-image is given by the following (Equation 9):
- frame_num(t) = (trig(t) / MAX_trigger) × MAX_frame_num … (Equation 9)
- trig(t): trigger value at time t
- MAX_trigger: possible maximum value of the trigger value
- MAX_frame_num: set maximum value of the loop section of the sub-image
- (Equation 9) is an example of the calculation method and is not limited to this.
- The method of calculating the loop section may be any method as long as the loop section becomes larger as the trigger value becomes larger.
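One possible form satisfying this condition can be sketched as follows; the constant values are illustrative assumptions, and the floor of one image is an added safeguard not stated in the specification.

```python
MAX_TRIGGER = 100.0    # possible maximum of the trigger value (MAX_trigger)
MAX_FRAME_NUM = 60     # assumed set maximum loop section, in images (MAX_frame_num)

def loop_section(trigger_value: float) -> int:
    """One possible form of (Equation 9): the loop section (number of images
    in the repeatedly played scene) grows with the trigger value."""
    r = min(trigger_value, MAX_TRIGGER) / MAX_TRIGGER
    return max(1, int(r * MAX_FRAME_NUM))
```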
- Steps S4340 and S4350 are the same as in Operation Example 3, and thus the description thereof is omitted.
- the video storage unit 110 performs video storage processing.
- the video storage unit 110 has an internal memory capable of storing a plurality of images, and stores the image output from the video input unit 102 in the internal memory.
- This internal memory has a storage counter that indicates the storage position of an image and a readout counter that indicates the readout position of an image.
- The maximum possible value of each counter is the number of images that can be stored in the internal memory; if a counter is updated beyond this maximum value, it returns to "1". That is, the internal memory has a structure capable of cyclic storage and readout of image data by updating the counters each time an image is stored or read out. In this operation example, the number of images that can be stored in the internal memory is controlled to be equal to the value indicated by the loop section of the screen composition parameters.
- First, the memory is initialized. Specifically, the composition scene center time of the screen composition parameters is compared with the composition scene center time input last time, and if the two differ, the image data in the internal memory, each counter, and the number of images that can be stored in the internal memory are initialized. In the initialization, the image data in the internal memory is cleared, the value of each counter is reset to "1", and the number of images that can be stored in the internal memory is set to the loop section of the screen composition parameters. In addition, the current composition scene center time is stored in the internal memory.
- Steps S5110 to S5150 are the same as in Operation Example 3, and thus the description thereof is omitted.
- In step S5160, the readout counter is updated. Specifically, unlike Operation Example 3, the update uses, for example, the following (Expression 10).
- That is, the maximum number of images stored in the internal memory is changed according to the screen composition parameters. This makes it possible to control the number of images of the scene combined as the sub-image, that is, the size of the section of the scene: the larger the trigger value, the larger the section of the scene to be loop-played is set, so that scenes in long sections before and after the trigger occurrence time can be played back.
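The variable-capacity internal memory can be sketched as a ring buffer whose capacity is reset from the loop-section parameter at initialization; this is an illustrative sketch with hypothetical names, not the specification's implementation.

```python
class ResizableRing:
    """Sketch of Operation Example 4's internal memory: the number of images
    the memory can hold is reset to the loop section of the screen composition
    parameters when the composition scene changes."""

    def __init__(self, capacity: int):
        self.reset(capacity)

    def reset(self, capacity: int):
        # memory initialization: clear the image data, reset both counters to 1,
        # and take the new capacity from the loop-section parameter
        self.capacity = capacity
        self.images = [None] * (capacity + 1)  # index 0 unused; counters run 1..capacity
        self.count_store = 1
        self.count_read = 1

    def store(self, image):
        self.images[self.count_store] = image
        self.count_store = self.count_store % self.capacity + 1  # wrap to 1 past the max
```

Because the capacity equals the loop section, a larger trigger value directly yields a longer stored (and therefore replayed) scene.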
- Steps S5170 and S5180 are the same as in operation example 3, and thus their description is omitted.
- In step S6000, based on the screen synthesis parameters output from the screen configuration calculation unit 108, the sub image generation unit 112 creates a sub image using the image output from the video storage unit 110, and outputs the created sub image to the image information addition unit 114.
- When pattern 2 of "loop synthesis" is performed as in this operation example, the sub image target image acquired by the readout counter and output from the video storage unit 110 is reduced to create the sub image.
- In step S7000, the image information addition unit 114 changes the color of the outer frame of the sub image output from the sub image generation unit 112 according to the synthesis type of the screen synthesis parameter output from the screen configuration calculation unit 108, and outputs the sub image with the changed outer frame color to the screen combining unit 116.
- Here, the color of the outer frame of the sub image is changed to yellow, as in operation example 3.
- The color of the outer frame is not limited to yellow and may be any color that can indicate that the sub image is a loop reproduction image.
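As a minimal illustration of the outer-frame coloring step, the following sketch draws a border of a given color around a sub image represented as a 2-D pixel grid. The function, the grid representation, and the thickness parameter are illustrative assumptions, not part of the described apparatus:

```python
def add_outer_frame(image, color, thickness=2):
    """Return a copy of a 2-D pixel grid with a colored border drawn on it."""
    h, w = len(image), len(image[0])
    framed = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            on_border = (y < thickness or y >= h - thickness or
                         x < thickness or x >= w - thickness)
            if on_border:
                framed[y][x] = color
    return framed
```

For a loop-reproduction sub image, the color would be "yellow" as stated above; other synthesis types would use other colors.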
- Steps S8000 and S9000 are the same as in operation example 1, and thus their description is omitted.
- As described above, the larger the trigger value indicating the importance of the video, the larger the section of the scene formed by the image group before and after the trigger occurrence time, and image synthesis is performed so that this scene is repeatedly reproduced.
- By encoding the synthesized video and displaying it on the receiving terminal through the transmission path, the user can view not only the current video but also the images before and after the important time simultaneously as a composite screen, even on a receiving terminal having only one screen; moreover, the more important the scene, the larger the section of the scene, so that more images before and after the important time can be seen.
- In addition, the user can judge the content of the sub image from the color of the sub image's outer frame without transmitting or receiving information other than the composite video.
- Operation example 5 describes the case where, as the determination result of screen synthesis using the synthesis trigger, a scene consisting of the image group before and after the time when the trigger occurred is synthesized as a sub image onto a partial area of the input image so as to be repeatedly reproduced, that is, the case where "loop synthesis" is performed.
- In this operation example, the number of loop reproductions of the repeatedly reproduced scene is set larger as the trigger value is larger (pattern 3).
- Steps S1000 to S3000 are the same as in operation example 1, and thus their description is omitted. However, in this operation example, since "loop synthesis" is performed, the trigger type is determined to be "important scene" in the synthesis trigger calculation process of step S3000.
- In step S4000, as in operation example 1, the screen synthesis determination is performed and the screen configuration is calculated using the synthesis trigger to calculate the screen synthesis parameters.
- The screen synthesis parameters are of four kinds: the synthesis type (here, "loop synthesis"), the synthesis scene center time, the loop count, and the frame counter.
- The synthesis scene center time is a parameter indicating the image number of the image located at the center time of the scene to be synthesized, as described above.
- The "loop count" is a parameter indicating the number of times the scene is repeated as a sub image, as described above.
- The "frame counter" is a parameter indicating the number of remaining images to be synthesized as a sub image, as described above.
- FIG. 13 is a flowchart showing operation example 5 of the screen synthesis parameter calculation process in FIG. 3. The description of the processing common to operation example 3 shown in FIG. 9 is omitted.
- Steps S4300 to S4320 are the same as in operation example 3, and thus their description is omitted.
- In step S4330, the number of loops is determined. Specifically, the number of loops of the sub image is calculated based on the magnitude of the trigger value, and the frame counter is set using the loop count.
- The number of loops of the sub image, loop_num(t), is calculated, for example, by the following (Expression 11), where MAX_loop_num is the maximum number of loops set for the sub image.
- (Expression 11) is an example of the calculation method, and the calculation is not limited to this.
- The method of calculating the number of loops may be any method as long as the number of loops increases as the trigger value increases.
- Next, the frame counter is set. The value of the frame counter, frame_count(t), is calculated, for example, by the following (Expression 12), where frame_count(t) is the frame counter value at time t, loop_num(t) is the number of loops of the sub image at time t, and MAX_frame_num is the number of images that can be stored in the internal memory of the video storage unit 110.
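The concrete forms of (Expression 11) and (Expression 12) are not reproduced in this text. The following sketch assumes a linear loop-count mapping and the product form suggested by the variable list (loops times images per loop); the function names and the clamping are assumptions:

```python
def loop_count(trigger_value, max_trigger, max_loop_num):
    """Hypothetical stand-in for (Expression 11): the loop count
    grows with the trigger value, clamped to [1, MAX_loop_num]."""
    n = int(max_loop_num * trigger_value / max_trigger)
    return max(1, min(n, max_loop_num))

def frame_counter(loop_num, max_frame_num):
    """Hypothetical stand-in for (Expression 12): remaining images to
    synthesize = number of loops x images storable in internal memory."""
    return loop_num * max_frame_num
```

As the text notes, any calculation in which the loop count increases with the trigger value would serve.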
- In step S4345, if the trigger value of the synthesis trigger is zero, that is, if no synthesis trigger has been input, the synthesis parameters are updated, unlike operation example 3. Specifically, among the previous screen synthesis parameters, the frame counter and the synthesis type are updated.
- First, the frame counter is updated by subtracting one from its value. However, if the value of the frame counter becomes 0 or less as a result of the update, the value of the frame counter is set to "0".
- Next, the synthesis type is updated according to the value of the updated frame counter, for example, according to the following rule:
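The counter decrement and the type-update rule above can be sketched as follows. The rule text itself is not reproduced here, so switching to a "no synthesis" type once the counter reaches zero is an assumption about what the rule does:

```python
def update_parameters(frame_count, synthesis_type):
    """Update performed when no new synthesis trigger is input."""
    # Decrement the frame counter, clamping at 0 as stated above.
    frame_count = max(0, frame_count - 1)
    # Assumed rule: keep looping while frames remain; stop combining
    # the sub image once the counter reaches zero.
    if frame_count == 0:
        synthesis_type = "no synthesis"
    return frame_count, synthesis_type
```

Repeated calls thus count down the remaining loop frames and end the sub-image display automatically.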
- By this control, the video storage unit 110 can control the number of loop reproductions by storing the image in the internal memory and, at the same time, outputting the image for loop reproduction to the sub image generation unit 112 according to the synthesis type.
- In step S4350, the four screen synthesis parameters (synthesis type, synthesis scene center time, loop count, frame counter) calculated in steps S4300 to S4345 are output to the video storage unit 110, the sub image generation unit 112, the image information addition unit 114, and the screen combining unit 116, and the input video (the input image from the video input unit 102) is output to the screen combining unit 116; the process then returns to the main flowchart in FIG. 3.
- Steps S5000 to S9000 are the same as in operation example 3, and thus their description is omitted.
- As described above, the larger the trigger value indicating the importance of the video, the larger the number of loops in which a scene composed of the image group before and after the trigger occurrence time is repeatedly reproduced.
- Since the synthesized video is encoded and displayed on the receiving terminal through the transmission path, the user can view not only the current video but also the images before and after the important time simultaneously as a composite screen, even on a receiving terminal having only one screen.
- Moreover, the higher the importance of the scene, the larger the number of loops of the scene, so the images before and after the important time can be viewed repeatedly a greater number of times.
- In addition, the user can judge the content of the sub image from the color of the sub image's outer frame without transmitting or receiving information other than the composite video.
- In this operation example, the case where the input video is displayed as the main image and the important scene is displayed as the sub image has been described, but the present invention is not limited to this; it is also possible to display the important scene as the main image and the input video as the sub image.
- Operation example 6 is a case where the size of the sub image is changed according to the magnitude of the trigger.
- Here, the image at the time when the trigger occurred is taken as a still image, and the case where this still image is synthesized as a sub image onto a partial area of the input image, that is, the case where "still image synthesis" is performed, will be described.
- In this operation example, the size of the sub image is set larger as the magnitude of the trigger is larger.
- Changing the sub image size according to the magnitude of the trigger can be applied not only to "still image synthesis" but also to other synthesis types such as "cut-out synthesis" and "loop synthesis". Also in this case, the explanation focuses on the parts where the processing differs from operation example 1, with reference to FIG. 3.
- Steps S1000 to S3000 are the same as in operation example 1, and thus their description is omitted.
- In step S4000, as in operation example 1, the screen synthesis determination is performed and the screen configuration is calculated using the synthesis trigger to calculate the screen synthesis parameters.
- The screen synthesis parameters are of four kinds: the synthesis type (here, "still image synthesis"), the target sub image, the sub image display time, and the sub image size.
- The target sub image is a parameter indicating the image number of the image used for sub image creation, as described above.
- The sub image display time is a parameter indicating the time during which the sub image is synthesized, as described above.
- The sub image size is a parameter indicating the synthesis size of the sub image.
- FIG. 14 is a flowchart showing operation example 6 of the screen synthesis parameter calculation process in FIG. 3. The description of the processing common to operation example 1 shown in FIG. 4 is omitted.
- Steps S4100 to S4130 are the same as in operation example 1, and thus their description is omitted.
- In step S4135, the sub image size is determined. Specifically, the horizontal and vertical sizes of the sub image are calculated, for example, by the following (Expression 14) and (Expression 15):
- MAX_size_h: the set maximum value of the sub image horizontal size
- MAX_size_v: the set maximum value of the sub image vertical size
- (Expression 14) and (Expression 15) are examples of the calculation method, and the present invention is not limited to these.
- The method of calculating the sub image size may be any method as long as the size increases as the trigger value increases.
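(Expression 14) and (Expression 15) are not reproduced in this text. As a minimal sketch, assuming both dimensions scale linearly with the trigger value up to the set maxima — the function name and the linear form are assumptions:

```python
def sub_image_size(trigger_value, max_trigger, max_size_h, max_size_v):
    """Hypothetical stand-in for (Expression 14)/(Expression 15):
    horizontal and vertical sub image sizes grow with the trigger value,
    capped at MAX_size_h and MAX_size_v respectively."""
    ratio = min(1.0, trigger_value / max_trigger)
    return (int(max_size_h * ratio), int(max_size_v * ratio))
```

As stated above, any calculation in which the size increases with the trigger value would serve equally well.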
- Steps S4140 to S4170 are the same as in operation example 1, and thus their description is omitted.
- Since the processing of steps S5000 to S9000 is the same as in operation example 1, the description thereof is omitted.
- In step S6000, the sub image target video output from the video storage unit 110 is reduced to the sub image size output from the screen configuration calculation unit 108 to create the sub image.
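The reduce-and-overlay step can be sketched as follows, using nearest-neighbour reduction on a 2-D pixel grid as a stand-in for the actual reduction method, which the text does not specify; both function names are hypothetical:

```python
def nearest_resize(image, new_w, new_h):
    """Nearest-neighbour resampling of a 2-D pixel grid (a simplified
    stand-in for the sub-image reduction step)."""
    h, w = len(image), len(image[0])
    return [[image[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

def composite(main, sub, left, top):
    """Overlay the reduced sub image onto a partial area of the main image."""
    out = [row[:] for row in main]
    for dy, row in enumerate(sub):
        for dx, px in enumerate(row):
            out[top + dy][left + dx] = px
    return out
```

With the trigger-scaled size from the previous step, a more important still image simply occupies a larger region of the composite screen.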
- As described above, the larger the trigger value indicating the importance of the video, the larger the sub image size with which the image at the trigger occurrence time is synthesized as a still image.
- Therefore, even on a receiving terminal having only one screen, the user can view not only the current video but also the image at the important time simultaneously as a composite screen; and the higher the importance, the larger the size of that image, so the image can be viewed in detail on one screen.
- In addition, the user can judge the content of the sub image from the color of the sub image's outer frame without transmitting or receiving information other than the composite video.
- In this operation example, an example has been described in which the input video is displayed as the main image and the important still image is displayed as the sub image, but the present invention is not limited to this; it is also possible to display the important still image as the main image and the input video as the sub image.
- Furthermore, still images can be synthesized not only in the case of one input video but also in the case of multiple input videos.
- Operation example 7 is a case where the screen configuration is calculated using the synthesis trigger indicating the degree of importance of the video, and the synthesis information is expressed by the shape of the sub image to be synthesized.
- In this operation example, the screen configuration calculation unit 108 calculates the screen synthesis parameters by the method of any of operation examples 1 to 6; the processing of the image information addition unit 114, which is distinctive here, is described below.
- The image information addition unit 114 changes the shape of the sub image output from the sub image generation unit 112 in accordance with the synthesis type of the screen synthesis parameter output from the screen configuration calculation unit 108. For example, if the synthesis type is "still image synthesis", the shape of the sub image is changed to a circle.
- The shape is not limited to a circle and may be any shape that can indicate that the sub image is a still image.
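A circular sub image can be obtained by masking the rectangular sub image to its inscribed circle. The following is an illustrative sketch only; the masking approach, the 2-D grid representation, and the background value are assumptions:

```python
def apply_circle_mask(image, background=None):
    """Replace pixels outside the inscribed circle with a background value,
    so the sub image's shape signals its synthesis type to the viewer."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2   # grid center
    r = min(h, w) / 2                    # inscribed-circle radius
    return [[px if (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2 else background
             for x, px in enumerate(row)]
            for y, row in enumerate(image)]
```

In a real compositor the background value would be made transparent so the main image shows through outside the circle.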
- As described above, since image synthesis is performed according to the trigger, and control is performed to change the shape of the sub image according to the synthesis type of the sub image, the shape of the sub image represents the synthesis type of the sub image.
- Therefore, if the correspondence between the shape of the sub image and the content of the sub image is known, the user can determine the synthesis type of the sub image from its shape without transmitting or receiving information other than the composite video.
- As described above, according to the present invention, a video synthesis apparatus that synthesizes a plurality of videos into one screen can automatically display a video important to the user, and can furthermore synthesize and display the important video in a screen configuration that is visually easy to see.
- The present invention thus has the effect of automatically displaying images important to the user and synthesizing them into a visually easy-to-see screen configuration, and is useful for video synthesis apparatuses that synthesize a plurality of videos onto one screen.
Abstract
Description
Claims
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/514,439 US20060033820A1 (en) | 2003-02-25 | 2004-02-20 | Image combining apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003047354A JP2004266376A (ja) | 2003-02-25 | 2003-02-25 | 映像合成装置 |
JP2003-047354 | 2003-02-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004077821A1 true WO2004077821A1 (ja) | 2004-09-10 |
Family
ID=32923266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/001990 WO2004077821A1 (ja) | 2003-02-25 | 2004-02-20 | 映像合成装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20060033820A1 (ja) |
JP (1) | JP2004266376A (ja) |
CN (1) | CN1698350A (ja) |
WO (1) | WO2004077821A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006099404A (ja) * | 2004-09-29 | 2006-04-13 | Sanyo Electric Co Ltd | 画像表示装置 |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100771119B1 (ko) | 2006-03-06 | 2007-10-29 | 엠텍비젼 주식회사 | 복수의 개별 영상 데이터 합산 방법 및 그 장치 |
JP5021227B2 (ja) * | 2006-03-31 | 2012-09-05 | 株式会社日立国際電気 | 監視映像表示方法 |
US8254626B2 (en) | 2006-12-22 | 2012-08-28 | Fujifilm Corporation | Output apparatus, output method and program for outputting a moving image including a synthesized image by superimposing images |
TWI451358B (zh) * | 2007-02-14 | 2014-09-01 | Photint Venture Group Inc | 香蕉編碼解碼器 |
CN101275831B (zh) * | 2007-03-26 | 2011-06-22 | 鸿富锦精密工业(深圳)有限公司 | 图像离线处理系统及方法 |
JP4459250B2 (ja) * | 2007-04-20 | 2010-04-28 | 富士通株式会社 | 送信方法、画像送信システム、送信装置及びプログラム |
CN101803385A (zh) * | 2007-09-23 | 2010-08-11 | 霍尼韦尔国际公司 | 跨多个关联视频屏动态跟踪闯入者 |
JP5298930B2 (ja) * | 2009-02-23 | 2013-09-25 | カシオ計算機株式会社 | 動画像を記録する動画処理装置、動画処理方法及び動画処理プログラム |
JP5548002B2 (ja) * | 2010-03-25 | 2014-07-16 | 富士通テン株式会社 | 画像生成装置、画像表示システム及び画像生成方法 |
JP5592138B2 (ja) * | 2010-03-31 | 2014-09-17 | 富士通テン株式会社 | 画像生成装置、画像表示システム及び画像生成方法 |
MY177404A (en) * | 2011-12-05 | 2020-09-14 | Mimos Berhad | Method and system for prioritizing displays of surveillance system |
JP5966584B2 (ja) * | 2012-05-11 | 2016-08-10 | ソニー株式会社 | 表示制御装置、表示制御方法およびプログラム |
TW201424380A (zh) * | 2012-12-07 | 2014-06-16 | Ind Tech Res Inst | 影像與訊息編碼系統、編碼方法、解碼系統及解碼方法 |
CN103871186A (zh) * | 2012-12-17 | 2014-06-18 | 博立码杰通讯(深圳)有限公司 | 安防监控系统及相应的报警触发方法 |
WO2014132816A1 (ja) * | 2013-02-27 | 2014-09-04 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
JP6119992B2 (ja) * | 2013-08-23 | 2017-04-26 | ブラザー工業株式会社 | 画像処理装置およびコンピュータプログラム |
GB2557597B (en) * | 2016-12-09 | 2020-08-26 | Canon Kk | A surveillance apparatus and a surveillance method for indicating the detection of motion |
CN108307120B (zh) * | 2018-05-11 | 2020-07-17 | 阿里巴巴(中国)有限公司 | 图像拍摄方法、装置及电子终端 |
JP7271887B2 (ja) * | 2018-09-21 | 2023-05-12 | 富士フイルムビジネスイノベーション株式会社 | 表示制御装置及び表示制御プログラム |
CN110825289A (zh) * | 2019-10-31 | 2020-02-21 | 北京字节跳动网络技术有限公司 | 操作用户界面的方法、装置、电子设备及存储介质 |
CN111080741A (zh) * | 2019-12-30 | 2020-04-28 | 中消云(北京)物联网科技研究院有限公司 | 合成图片的生成方法 |
CN111680688B (zh) * | 2020-06-10 | 2023-08-08 | 创新奇智(成都)科技有限公司 | 字符识别方法及装置、电子设备、存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10290447A (ja) * | 1997-04-16 | 1998-10-27 | Omron Corp | 画像出力制御装置、監視用システム、画像出力制御方法および記憶媒体 |
JP2000069367A (ja) * | 1998-08-21 | 2000-03-03 | Toshiba Corp | 記録密度可変型ビデオスイッチャ |
JP2000295600A (ja) * | 1999-04-08 | 2000-10-20 | Toshiba Corp | 監視装置 |
JP2003256836A (ja) * | 2001-07-10 | 2003-09-12 | Hewlett Packard Co <Hp> | 知的特徴選択およびパンズーム制御 |
JP2003346463A (ja) * | 2002-04-03 | 2003-12-05 | Fuji Xerox Co Ltd | ビデオシーケンスの縮小表示 |
JP2004023373A (ja) * | 2002-06-14 | 2004-01-22 | Canon Inc | 画像処理装置及びその方法、並びにコンピュータプログラム及びコンピュータ可読記憶媒体 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3641266A (en) * | 1969-12-29 | 1972-02-08 | Hughes Aircraft Co | Surveillance and intrusion detecting system |
US5237408A (en) * | 1991-08-02 | 1993-08-17 | Presearch Incorporated | Retrofitting digital video surveillance system |
US5625410A (en) * | 1993-04-21 | 1997-04-29 | Kinywa Washino | Video monitoring and conferencing system |
US20040036718A1 (en) * | 2002-08-26 | 2004-02-26 | Peter Warren | Dynamic data item viewer |
-
2003
- 2003-02-25 JP JP2003047354A patent/JP2004266376A/ja not_active Withdrawn
-
2004
- 2004-02-20 WO PCT/JP2004/001990 patent/WO2004077821A1/ja active Application Filing
- 2004-02-20 CN CN200480000267.5A patent/CN1698350A/zh active Pending
- 2004-02-20 US US10/514,439 patent/US20060033820A1/en not_active Abandoned
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006099404A (ja) * | 2004-09-29 | 2006-04-13 | Sanyo Electric Co Ltd | 画像表示装置 |
JP4578197B2 (ja) * | 2004-09-29 | 2010-11-10 | 三洋電機株式会社 | 画像表示装置 |
Also Published As
Publication number | Publication date |
---|---|
US20060033820A1 (en) | 2006-02-16 |
CN1698350A (zh) | 2005-11-16 |
JP2004266376A (ja) | 2004-09-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref document number: 2006033820 Country of ref document: US Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10514439 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20048002675 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 10514439 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |