CN102508628A - Method and device for eliminating splicing seams of splicing wall as well as image system based on splicing wall - Google Patents

Method and device for eliminating splicing seams of splicing wall as well as image system based on splicing wall

Info

Publication number
CN102508628A
CN102508628A CN2011103040064A CN201110304006A
Authority
CN
China
Prior art keywords
splicing wall
background
image
splicing seam
captured image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103040064A
Other languages
Chinese (zh)
Other versions
CN102508628B (en)
Inventor
江志和
刘伟俭
杨继禹
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Gaohang Intellectual Property Operation Co ltd
Rugao Tianan Electric Technology Co ltd
Original Assignee
Vtron Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vtron Technologies Ltd filed Critical Vtron Technologies Ltd
Priority to CN201110304006.4A priority Critical patent/CN102508628B/en
Publication of CN102508628A publication Critical patent/CN102508628A/en
Application granted granted Critical
Publication of CN102508628B publication Critical patent/CN102508628B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a method and a device for eliminating splicing seams of a splicing wall, as well as an image system based on the splicing wall. The method comprises the following steps: identifying the background splicing wall in a captured image; and processing the captured image to eliminate the splicing seams of the splicing wall in the captured image. According to the technical scheme of the invention, the display edges of the splicing wall can be identified in the image shot by a camera, and the splicing wall in the captured image can be processed to eliminate its splicing seams. The splicing seams in the displayed output picture are thus effectively eliminated without any processing of the spliced display wall itself and without increasing hardware cost, and consistency of the pictures from different viewpoints is achieved without being restricted by changes in the camera's viewing angle.

Description

Method and device for eliminating splicing seams of a splicing wall, and image system based on a splicing wall
Technical field
The present invention relates to a method for eliminating splicing seams of a splicing wall, a device for eliminating splicing seams of a splicing wall, and an image system based on a splicing wall.
Background technology
With the continuous development of LED-light-source DLP display technology, splicing-wall display technology has found increasingly wide application. The wide color gamut and vivid colors of LED light sources have further deepened the application and development of DLP display technology, which is now used more and more in the broadcasting industry, mostly in the form of background display walls based on LED-light-source DLP units with which a presenter can easily interact. In use, the camera shoots the foreground content, including the content displayed on the background splicing wall and the presenter, and the captured picture is output directly for viewers to watch.
However, in current background-display-wall applications the whole display wall is spliced together from individual DLP display units, so gaps inevitably exist between adjacent units. At the present technical level such a gap can be controlled to within 1 millimeter, but television viewers can still clearly see the relatively large gaps between the display screens, and when the camera lens zooms in the gaps become even more obvious.
To solve this seam problem, the scheme currently adopted is edge blending: the picture in the overlap region between two projectors is processed with brightness gradients, feathering and the like to form a seamless picture. Edge blending, however, requires an overlap region between the outputs of two display units; it destroys the closed optical path of cube-type display units and is therefore unsuitable for eliminating the seams of DLP, LCD and similar background display walls. In addition, a broadcast recording site may have several camera positions; different camera positions still cause picture-consistency problems, and the wall may not be shot head-on, which degrades the visual effect for television viewers.
Summary of the invention
In view of the problems in the prior art described above, an object of the present invention is to provide a method for eliminating splicing seams of a splicing wall, a device for eliminating splicing seams of a splicing wall, and an image system based on a splicing wall, which can effectively eliminate the seams in the displayed output picture and achieve consistency of the pictures from different observation points without being restricted by changes in the camera's viewing angle.
To achieve the above object, the present invention adopts the following technical scheme:
A method for eliminating splicing seams of a splicing wall comprises the steps of:
identifying the background splicing wall in a captured image; and
processing the captured image to eliminate the splicing seams of the background splicing wall in the captured image.
A device for eliminating splicing seams of a splicing wall comprises:
a splicing-wall recognition unit connected with a camera, used for identifying the background splicing wall in a captured image; and
a seam-elimination unit, used for processing the captured image to eliminate the splicing seams of the background splicing wall in the captured image.
An image system based on a splicing wall comprises one or more cameras and at least one device for eliminating splicing seams of a splicing wall as described above, the device being connected with at least one of the cameras.
According to the above scheme of the invention, the display edges of the splicing wall are identified in the image shot by the camera, and the captured image is processed to eliminate the splicing seams of the splicing wall in it. The seams in the output picture are thus effectively removed without any processing of the spliced display wall and without increasing hardware cost, and consistency of the pictures from different observation points is achieved without being restricted by changes in the camera's viewing angle.
Description of drawings
Fig. 1 is a schematic flow chart of an embodiment of the method for eliminating splicing seams of the present invention;
Fig. 2 is a schematic flow chart of the method for eliminating splicing seams in specific example 1;
Fig. 3 is a schematic diagram of how the scaling coefficient is determined;
Fig. 4 is a schematic diagram of the picture observed by the human eye during dark-field shooting;
Fig. 5 is a schematic diagram of the picture captured by the camera during dark-field shooting;
Fig. 6 is a schematic flow chart of the method for eliminating splicing seams in specific example 2;
Fig. 7 is a schematic flow chart of the method for eliminating splicing seams in specific example 3;
Fig. 8 is a schematic diagram of one way of identifying the seams of the background splicing wall and performing scaling;
Fig. 9 is a schematic diagram of another way of identifying the seams of the background splicing wall and performing scaling;
Figure 10 is a schematic flow chart of the color-compensation processing in specific example 4;
Figure 11 is a structural diagram of an embodiment of the device for eliminating splicing seams of the present invention;
Figure 12 is a structural diagram of the device for eliminating splicing seams of the present invention in specific example 5;
Figure 13 is a schematic diagram of one application scenario of the device in specific example 5;
Figure 14 is a schematic diagram of another application scenario of the device in specific example 5;
Figure 15 is a structural diagram of the device for eliminating splicing seams of the present invention in specific example 6;
Figure 16 is a schematic diagram of one application scenario of the device in specific example 6;
Figure 17 is a schematic diagram of another application scenario of the device in specific example 6;
Figure 18 is a structural diagram of the device for eliminating splicing seams of the present invention in specific example 7;
Figure 19 is a schematic diagram of one application scenario of the device in specific example 7;
Figure 20 is a structural diagram of the device for eliminating splicing seams of the present invention in specific example 8;
Figure 21 is a schematic diagram of one application scenario of the device in specific example 8.
Embodiment
Fig. 1 shows a schematic flow chart of an embodiment of the method for eliminating splicing seams of the present invention. As shown in Fig. 1, it comprises the steps of:
Step S101: identify the background splicing wall in the captured image;
Step S102: process the captured image to eliminate the splicing seams of the background splicing wall in the captured image.
When the background splicing wall in the captured image is identified in step S101, various implementations are possible. For example, a marker of a special color or material may be added at each of the four edges of the splicing wall, one or two special pixel values may be set at the four corners of the screen, or at least three prominent marks may be added at the screen edge of the spliced display wall. By recognizing these special markers when the camera shoots, the display edges of the background splicing wall, and hence the background splicing wall in the captured image, can be identified automatically; the specific implementations are not repeated here. A simple sketch of such marker-based identification is given below.
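As a minimal sketch of such marker-based identification, assuming markers of a known color (the HSV range, function name and parameters below are illustrative placeholders rather than values from this disclosure), the wall can be taken as the bounding box of the detected marker pixels:

```python
import cv2
import numpy as np

def find_wall_rect(frame_bgr, marker_low=(0, 120, 120), marker_high=(10, 255, 255)):
    """Locate edge markers of an assumed reddish hue (HSV range is a placeholder)
    and return the bounding box (x_min, y_min, x_max, y_max) of the background
    splicing wall in the captured frame, or None if no marker is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv,
                       np.array(marker_low, dtype=np.uint8),
                       np.array(marker_high, dtype=np.uint8))
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```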
When the background splicing wall in the captured image is processed in step S102 to eliminate its splicing seams, several processing modes are possible according to actual needs. In one mode, the original splicing-wall input signal is used: the original input picture signal is scaled and then either replaces the background-splicing-wall part of the captured image or is superposed with the separated, clear foreground picture for output. In another mode, after the seams of the background splicing wall are identified in the captured image, each display unit in the captured image is scaled so as to cover the seam areas of the background splicing wall before output. Specific examples of these implementations are described below.
Specific example 1
Fig. 2 shows a schematic flow chart of the method for eliminating splicing seams in this specific example. In this example, the original splicing-wall input picture signal is scaled and then superposed with the separated, clear foreground picture for output.
As shown in Fig. 2, in this specific example the method of the present invention comprises the steps of:
Step S201: identify the background splicing wall in the captured image, and proceed to step S202;
Step S202: determine the scaling coefficient between the background splicing wall in the captured image and the overall captured picture, and proceed to step S203;
Step S203: shoot during the dark field of the synchronization signal to obtain a real-time image containing the dark-field background splicing wall and a clear foreground picture, and separate out the clear foreground picture from this real-time image, the foreground picture being the part of the captured image other than the background splicing wall; then proceed to step S204;
Step S204: scale the original splicing-wall input picture signal according to the above overall-picture scaling coefficient, place the scaled image at the position of the dark-field background splicing wall in the real-time image as the bottom-layer image, superpose the separated clear foreground picture on it as the top-layer image, and output the result.
When determining the overall-picture scaling coefficient in step S202, the scaling ratios in the horizontal and vertical directions can be calculated separately, based on the fact that the splicing wall is a rectangle and the image output by the camera is usually also a rectangular image. Fig. 3 is a schematic diagram of how the scaling coefficient is determined.
Suppose the coordinates of the image captured from one camera observation point are as shown in Fig. 3: the four corner coordinates of the camera image are (x0, y0), (x3, y0), (x0, y3) and (x3, y3), and the four corner coordinates of the background splicing wall within the camera image are (x1, y1), (x2, y1), (x2, y2) and (x1, y2). From these, the scaling coefficient between the background splicing wall and the whole camera picture can be calculated; it comprises a horizontal scaling coefficient and a vertical scaling coefficient, respectively:
horizontal scaling coefficient = (x2 - x1)/(x3 - x0);
vertical scaling coefficient = (y2 - y1)/(y3 - y0).
After this scaling coefficient is obtained, the position-coordinate range of the background splicing wall in each camera image and the scaling coefficient can be stored for convenient use during subsequent operation.
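As a minimal sketch of this calculation, assuming the frame and wall corners are available as axis-aligned rectangles (the Rect type and the sample coordinates are illustrative only):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle given by two opposite corners."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def scaling_coefficients(frame: Rect, wall: Rect) -> tuple[float, float]:
    """Return the (horizontal, vertical) scaling coefficients between the
    background splicing wall and the whole camera picture, following the
    formulas above: (x2-x1)/(x3-x0) and (y2-y1)/(y3-y0)."""
    sx = (wall.x_max - wall.x_min) / (frame.x_max - frame.x_min)
    sy = (wall.y_max - wall.y_min) / (frame.y_max - frame.y_min)
    return sx, sy

# Example: a 1920x1080 camera frame in which the wall occupies a sub-rectangle.
frame = Rect(0, 0, 1920, 1080)
wall = Rect(320, 180, 1600, 900)
print(scaling_coefficients(frame, wall))  # (0.666..., 0.666...)
```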
In existing shooting practice, pictures are usually shot during the bright-field period of the synchronization signal in order to guarantee the camera's shooting effect and obtain a clear wall picture for the viewer. In the present scheme, however, in order to distinguish the background splicing wall from the foreground picture effectively, shooting is performed during the dark field of the synchronization signal. Using the difference between the bright field and the dark field of the synchronization signal, the clear foreground image and the dark-field background splicing wall can be distinguished, yielding a real-time image that contains the dark-field background splicing wall and a clear foreground picture. To make shooting during the dark field of the synchronization signal more reliable, the dark-field time of the synchronization signal can be appropriately extended relative to the existing synchronization signal.
Because of the visual persistence of the human eye, viewers of a live broadcast cannot perceive the existence of the dark field. Fig. 4 shows the picture observed on site by the human eye at the moment the camera shoots during the dark field of the synchronization signal: the human eye still sees a clear display picture. In fact, however, the picture captured by the camera during the dark field of the synchronization signal differs from what the human eye observes on site. Fig. 5 shows the picture captured by the camera during the dark field of the synchronization signal: by shooting during the dark field, the dark-field background splicing wall in the obtained real-time image can be clearly distinguished from the foreground picture in front of the wall. In practice, the colors of the foreground content (for example the presenter's clothes and hair) can be required to differ from the color of the display screens during the dark field, so as not to interfere with the subsequent processing.
The background-splicing-wall part of the real-time image captured by the camera at the dark-field moment of the synchronization signal will be a dark, saturated key background color; to ensure that this key background color is formed after dark-field shooting, the screens and seams of the splicing wall can be made of the same material.
Subsequently, image separation is applied to the real-time image captured by the camera at the dark-field moment of the synchronization signal, and the clear foreground picture in the real-time image is separated out. In the separation, the color of the dark-field background splicing wall in the real-time image can be set to transparent or to the key background color, to facilitate superposition and compositing later. In implementation, a chroma-key function can be used: the background color of the dark-field background splicing wall is set to transparent by chroma keying, which can be done with existing linear editing equipment that already provides a chroma-key function, so little extra equipment cost is added. Then the original splicing-wall input picture signal is scaled according to the scaling coefficient determined above: the input picture is multiplied by the scaling coefficient to obtain an image of the same size as the display range of the background splicing wall in the camera's real-time image. According to the stored position-coordinate range of the background splicing wall in the captured image, the picture position of the background wall during playback is determined, i.e. the scaled image is placed at the same position as the background splicing wall in the real-time image. The scaled image is used as the bottom layer and the clear foreground picture obtained by image separation is superposed on it as the top layer before output. Since everything in the top-layer image except the clear foreground image (for example the presenter) has been made transparent where the dark-field background wall was, the output image can be obtained by pixel-wise superposition. In this way the wall picture seen from different camera observation points always faces the television viewer squarely, and the consistency of the pictures of the individual display units is also guaranteed.
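As a rough sketch of this keying and compositing step, assuming an OpenCV-based pipeline in which dark pixels inside the wall rectangle are treated as background (the threshold value and all names below are illustrative assumptions):

```python
import cv2

def composite_frame(dark_field_frame, wall_source, wall_rect, dark_threshold=40):
    """Replace the dark-field background wall with the scaled source picture
    while keeping the bright foreground (e.g. the presenter) on top.

    dark_field_frame: BGR frame captured during the dark field of the sync signal
    wall_source:      original picture signal fed to the splicing wall
    wall_rect:        (x1, y1, x2, y2) of the wall inside the captured frame
    dark_threshold:   pixels darker than this inside the wall area are treated
                      as background (an assumed, tunable value)
    """
    x1, y1, x2, y2 = wall_rect
    out = dark_field_frame.copy()

    # Scale the original wall input to the wall's size in the captured frame.
    scaled = cv2.resize(wall_source, (x2 - x1, y2 - y1), interpolation=cv2.INTER_CUBIC)

    # Inside the wall rectangle, dark pixels belong to the dark-field background;
    # brighter pixels belong to the foreground and are left untouched.
    roi = dark_field_frame[y1:y2, x1:x2]
    brightness = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    background_mask = brightness < dark_threshold

    roi_out = out[y1:y2, x1:x2]
    roi_out[background_mask] = scaled[background_mask]
    return out
```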
In one concrete practical application, the procedure can be as follows:
In the early debugging stage, special processing is applied to the four corners of the screen edge of the spliced display wall (which may be a DLP, LCD, LED or other type of spliced display wall) so that the camera can recognize the screen edges. Once the position of each camera is fixed and debugging begins, each camera shoots separately; from each camera's image the four corner coordinates of the background display wall can be observed, from which the scaling coefficient of the background splicing wall relative to that camera's whole picture, and the position-coordinate range of the background splicing wall relative to that camera's whole picture, are determined and stored for use in the subsequent process;
After entering the normal recording and broadcasting state, the presenter can enter the picture and host the program. The bright field and dark field of the synchronization signal are used to distinguish the foreground from the background image; the dark-field time can be appropriately extended, and shooting takes place during the dark field of the synchronization signal, so that a real-time image containing the dark-field background display wall and a clear foreground picture is obtained. To avoid interfering with the separation of the background display wall, the colors of the foreground content (for example the presenter's clothes) can be required to differ from those of the display screens during the dark field;
The color of each pixel of the dark-field background display wall in the captured image is then set to transparent. Since shooting takes place during the dark field of the synchronization signal, the background display wall is uniformly dark, while the presenter may block part of the splicing wall during the program. Therefore, when setting pixels to transparent, only the pixels inside the background-wall range of the captured real-time image whose pixel values reach a certain threshold are set to transparent; this threshold is chosen according to actual needs, so that the pixels belonging to the presenter in the foreground are not also set to transparent;
Subsequently, the original splicing-wall input picture signal is scaled according to the scaling coefficient determined above, so that the scaled image has the same size as the display range of the background splicing wall in the captured real-time image. The scaled image is then placed at the picture position of the background splicing wall in the captured real-time image, and with the scaled image as the bottom layer and the foreground image obtained by image separation as the top layer, the two are superposed and output.
Specific example 2
Fig. 6 shows a schematic flow chart of the method for eliminating splicing seams in this specific example. In this example, the original splicing-wall input picture signal is scaled and the scaled signal directly replaces the background splicing wall in the captured image before output.
As shown in Fig. 6, in this specific example the method of the present invention comprises the steps of:
Step S601: identify the background splicing wall in the captured image, and proceed to step S602;
Step S602: determine the scaling coefficient between the background splicing wall in the captured image and the overall captured picture, and proceed to step S603;
Step S603: shoot during the dark field of the synchronization signal to obtain a real-time image containing the dark-field background splicing wall and a clear foreground picture, and proceed to step S604;
Step S604: scale the original splicing-wall input picture signal according to the above scaling coefficient, replace the dark-field background splicing wall in the captured real-time image with the scaled image, and output the result.
As shown above, in this specific example 2 the original splicing-wall input picture signal is scaled according to the scaling coefficient to obtain an image of the same size as the display range of the background splicing wall in the camera's real-time image; then, according to the stored position-coordinate range of the background splicing wall in the captured image, the scaled image is used as the bottom layer to directly replace the dark-field background splicing wall in the captured real-time image before output.
In actual use, because of differences in camera angle, the presenter may block part of the splicing wall during the program. Therefore, when the scaled image replaces the dark-field background splicing wall, only the pixels inside the background-wall range of the captured real-time image whose pixel values reach a certain threshold are replaced, according to the correspondence between the scaled image and the background display wall; this threshold is chosen according to actual needs, so that the pixels belonging to the presenter in the foreground are not also replaced.
The other technical features of specific example 2 are the same as in specific example 1 above and are not repeated here.
In the explanations of the above two specific examples, only two of the possible ways of using the scaled image to replace the background splicing wall in the captured image have been described. Based on the purpose of the replacement, those skilled in the art can derive various other replacement schemes, and these derived schemes should all fall within the scope of the present invention.
In addition, the working process described above assumes that the scaling coefficient and the position-coordinate range of the background splicing wall are determined once at debugging time and then applied to the processing of the subsequently captured real-time images, i.e. that the position of each camera is fixed and changes caused by the camera lens zooming in and out are not considered. This is because many conventional television programs, for example news programs, only require switching between the images captured by the individual cameras and do not involve zooming or angle changes of a single camera. In real broadcasting practice, however, some programs, for example live broadcasts, may require zooming in and out. In that case an intelligent self-learning mode can be added to the system: every time the lens is refocused, or a change of the camera's shooting angle is detected, the system automatically starts an adjustment mode and recalculates the scaling coefficient of the background splicing wall relative to the whole camera picture and the position-coordinate range of the background splicing wall. After the system enters the normal recording and broadcasting state, the real-time image is separated and then fused, superposed and output. When the scaling coefficient is determined, the part of the splicing wall that may be blocked by the presenter can be taken into account together with the determined wall border and the fact that the splicing wall is rectangular; the detailed processing is not repeated here.
Furthermore, in the explanations of the above two specific examples, the camera shoots during the dark field of the synchronization signal. This is because during dark-field shooting the color of the background splicing wall in the resulting image clearly differs from the color of the foreground image, which makes it easy to identify the background splicing wall, to extract the clear foreground picture, and to replace the background-splicing-wall part. As long as the background-splicing-wall part of the captured image can be replaced without mistakenly replacing the foreground picture, shooting can also take place during the bright field of the synchronization signal, provided the scaled image can still be output in place of the background splicing wall.
Specific example 3
Fig. 7 shows a schematic flow chart of the method for eliminating splicing seams in this specific example. This example is explained with the case in which, after the seams of the background splicing wall are identified in the captured image, each display unit in the captured image is scaled so that it covers the seam areas of the background splicing wall before output.
As shown in Fig. 7, in this specific example the method of the present invention comprises the steps of:
Step S701: identify the background splicing wall in the captured image, and proceed to step S702;
Step S702: calculate the position coordinates of each seam of the background splicing wall in the captured image, and proceed to step S703;
Step S703: according to the position coordinates of each seam, calculate the zoom factor by which each display unit of the background splicing wall in the captured image is to be enlarged to a set position within the adjacent seams, and proceed to step S704;
Step S704: according to the zoom factor of each display unit, scale the image corresponding to each display unit of the background splicing wall in the real-time image captured by the camera.
After the image corresponding to each display unit of the background splicing wall in the real-time image has been scaled in step S704, the scaled image is output, and seamless output of the background-splicing-wall part of the image is achieved.
When the position coordinates of each seam of the background splicing wall in the captured image are calculated in step S702, it can be exploited that all display units in the same splicing wall are of the same size and that the distances between display units (i.e. the seams) are usually also identical. The position coordinates of each seam can therefore be calculated automatically from the corner coordinates of the background splicing wall, the layout of the wall system and the size of the display units. Fig. 8 is a schematic diagram of determining the seam positions of the background splicing wall in this way.
In Fig. 8 it is assumed that the background splicing wall in the captured image is identified automatically from three prominent marks added on the display-unit frames of the splicing wall, and that all display units are of the same size and all seams are of the same width. As shown in Fig. 8, the coordinates of the three prominent marks identified in the captured image are (x1, y1), (x2, y1) and (x5, y3). In the camera's output picture, let Px be the display-unit width, L the width of the horizontal and vertical seam gaps, and Py the display-unit height. Then, from the wall-system layout and the display-unit size: 4Px + 3L = x5 - x1; 3Px + 3L = x5 - x2; 2Py + L = y3 - y1. It follows that: Px = x2 - x1; L = (x5 - x2 - 3Px)/3; Py = (y3 - y1 - (x5 - x2 - 3Px)/3)/2.
The position coordinates of each display unit can then be calculated accordingly.
Taking display unit 0-0 as an example: the coordinate of its upper-left corner is (x1, y1), of its upper-right corner (x2, y1), of its lower-right corner (x2, y2), i.e. (x2, y1+Py), and of its lower-left corner (x1, y2), i.e. (x1, y1+Py).
Taking display unit 0-1 as an example: the coordinate of its upper-left corner is (x2', y1), i.e. (x2+L, y1); of its upper-right corner (x3, y1), i.e. (x2+Px+L, y1); of its lower-right corner (x3, y2), i.e. (x2+Px+L, y1+Py); and of its lower-left corner (x2', y2), i.e. (x2+L, y1+Py).
The coordinates of the other display units can be obtained with the same algorithm.
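As a minimal sketch of this geometry, assuming the 4 x 2 layout implied by the formulas above (the function names, grid-size parameters and index conventions are illustrative assumptions):

```python
def wall_geometry(marks, columns=4, rows=2):
    """Recover the display-unit width Px, unit height Py and seam width L from
    the three prominent marks of Fig. 8, assumed to lie at (x1, y1), (x2, y1)
    and (x5, y3); the wall is assumed to consist of `columns` x `rows` units of
    equal size with equal seam widths."""
    (x1, y1), (x2, _y1), (x5, y3) = marks
    px = x2 - x1                                            # Px = x2 - x1
    seam = (x5 - x2 - (columns - 1) * px) / (columns - 1)   # L = (x5 - x2 - 3*Px)/3 for 4 columns
    py = (y3 - y1 - (rows - 1) * seam) / rows               # Py = (y3 - y1 - L)/2 for 2 rows
    return px, py, seam

def unit_corners(row, col, origin, px, py, seam):
    """Upper-left and lower-right corner of display unit (row, col); for
    example unit 0-1 starts at (x2 + L, y1) as in the text above."""
    x1, y1 = origin
    left = x1 + col * (px + seam)
    top = y1 + row * (py + seam)
    return (left, top), (left + px, top + py)
```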
Subsequently, the zoom factor of each display unit can be determined from its corner coordinates. When each display unit is scaled, it is sufficient that each scaled unit covers part of the seams and that the scaled units together cover the seams of the background splicing wall completely. For convenience of calculation, the border of each scaled display unit can be made to reach the mid-line of the adjacent seam; in the following explanation, this case is used as an example.
Taking display unit 0-0 in Fig. 8 as an example:
To enlarge the border of display unit 0-0 to the mid-line of the adjacent seams, the picture bounded by (x1, y1), (x2, y1), (x2, y2), (x1, y2) must be enlarged to the picture bounded by (x1, y1), ((x2+x2')/2, y1), ((x2+x2')/2, (y2+y2')/2), (x1, (y2+y2')/2), where (x2+x2')/2 is the mid-point of the seam in the horizontal direction and (y2+y2')/2 is the mid-point of the seam in the vertical direction.
The zoom factor of display unit 0-0 is therefore: magnification in the horizontal direction = ((x2+x2')/2 - x1)/(x2 - x1); magnification in the vertical direction = ((y2+y2')/2 - y1)/(y2 - y1). High-order interpolation or similar algorithms can be used to compensate the pixels when the image is enlarged.
Taking display unit 0-1 in Fig. 8 as an example:
To enlarge the border of display unit 0-1 to the mid-line of the adjacent seams, the picture bounded by (x2', y1), (x3, y1), (x3, y2), (x2', y2) must be enlarged to the picture bounded by ((x2+x2')/2, y1), ((x3+x3')/2, y1), ((x3+x3')/2, (y2+y2')/2), ((x2+x2')/2, (y2+y2')/2).
The zoom factor of display unit 0-1 is therefore: magnification in the horizontal direction = ((x3+x3')/2 - (x2+x2')/2)/(x3 - x2'); magnification in the vertical direction = ((y2+y2')/2 - y1)/(y2 - y1). High-order interpolation or similar algorithms can be used to compensate the pixels when the image is enlarged.
The zoom factors and seam offsets of the other display units can be obtained with the same algorithm.
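As a minimal sketch of the mid-seam zoom factors, assuming each unit's rectangle and the seam widths on its four sides are known (a unit at the outer edge of the wall passes a gap of 0 for that side; the helper and its parameters are illustrative assumptions):

```python
def zoom_to_mid_seam(unit, left_gap, right_gap, top_gap, bottom_gap):
    """Horizontal and vertical magnification factors that enlarge one display
    unit so its border reaches the mid-line of each adjacent seam, matching the
    formulas above; `unit` is (x_left, y_top, x_right, y_bottom) in the captured
    image and each *_gap is the seam width on that side."""
    x_left, y_top, x_right, y_bottom = unit
    target_w = (x_right + right_gap / 2) - (x_left - left_gap / 2)
    target_h = (y_bottom + bottom_gap / 2) - (y_top - top_gap / 2)
    return target_w / (x_right - x_left), target_h / (y_bottom - y_top)
```

For display unit 0-0 above, passing left and top gaps of 0 and right and bottom gaps of L reproduces ((x2+x2')/2 - x1)/(x2 - x1) and ((y2+y2')/2 - y1)/(y2 - y1).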
After the zoom factor of each display unit has been calculated, it can be saved in a storage medium and applied to the seam-eliminating scaling processing in the subsequent real-time shooting. Whenever the camera is switched on or its position is moved, the background splicing wall must be identified again, the seams located again, and the zoom range and zoom factor of each display unit recalculated.
When scaling is performed according to the zoom factors, the processing can be done in relative coordinates; in that case the absolute coordinates must first be converted into relative coordinates by subtracting (x1, y1) from each corresponding coordinate value. For ease of understanding, absolute coordinates are still used in the following explanation.
In the example of Fig. 8 above, seam identification and zoom-factor calculation were carried out under the assumption that all display units are of the same size and all seams are of the same width. In an actual installation, because of installation errors, the individual seam widths may differ. In that case the seams can be identified by distinguishing the physical seams from the display content of the display units. Fig. 9 is a schematic diagram of identifying the seams in this way; in Fig. 9, a splicing wall of 3 rows and 3 columns is taken as an example.
To make the physical seams of the splicing wall easier to distinguish, a special color material can be applied to the splicing seams of the display units. This color can be one outside the colors that can be synthesized from the three RGB primaries, so that it differs from the colors the background display wall can show and the physical seams can be conveniently distinguished from the background image content.
In one application mode, the spliced display wall outputs an all-white picture; after the camera has shot it, the seams in the captured image are determined and a seam template is obtained, and the template position coordinates of the captured picture are obtained by binarizing the image.
In the binarization, a pixel is output as 1 if its brightness value is greater than a set threshold Y, and as 0 otherwise. The seam boundary positions of the image shown in Fig. 9 can be determined accordingly: the seam boundary positions in the horizontal direction are x0, x1, x2, x3, x4, x5, ..., and the seam boundary positions in the vertical direction are y0, y1, y2, y3, y4, y5, ....
From these, the zoom range and zoom factor of each display unit can be determined and each display unit scaled accordingly. Taking the case in which each display unit is enlarged to the mid-line of the adjacent seams as an example, in the example shown in Fig. 9:
For display unit 0-0:
the image of display unit 0-0, bounded by (x0, y0), (x0, y1), (x1, y0), (x1, y1), must be enlarged to the picture bounded by (x0, y0), (x0, y2), (x2, y0), (x2, y2); the zoom factor in the horizontal direction is therefore (x2-x0)/(x1-x0) and the zoom factor in the vertical direction is (y2-y0)/(y1-y0); high-order interpolation or similar algorithms can be used to compensate the pixels when the image is enlarged;
For display unit 0-1:
the image of display unit 0-1, bounded by (x3, y0), (x3, y1), (x4, y0), (x4, y1), must be enlarged to the picture bounded by (x2, y0), (x2, y2), (x5, y0), (x5, y2); the zoom factor in the horizontal direction is therefore (x5-x2)/(x4-x3) and the zoom factor in the vertical direction is (y2-y0)/(y1-y0); high-order interpolation or similar algorithms can be used to compensate the pixels when the image is enlarged;
For display unit 0-2:
the image of display unit 0-2, bounded by (x6, y0), (x6, y1), (x7, y0), (x7, y1), must be enlarged to the picture bounded by (x5, y0), (x5, y2), (x7, y0), (x7, y2); the zoom factor in the horizontal direction is therefore (x7-x5)/(x7-x6) and the zoom factor in the vertical direction is (y2-y0)/(y1-y0); high-order interpolation or similar algorithms can be used to compensate the pixels when the image is enlarged.
Based on the same principle, the zoom range, zoom factor and corresponding seam offset of each of the other display units can be obtained.
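As a minimal sketch of this boundary-based calculation, assuming the detected boundary lists follow the index pattern of the Fig. 9 example (that pattern, and the helper below, are illustrative assumptions):

```python
def grid_zoom_factors(xs, ys, cols, rows):
    """Per-unit (horizontal, vertical) zoom factors for a cols x rows wall,
    using the boundary lists of Fig. 9: unit column c occupies xs[3c]..xs[3c+1]
    and is enlarged to xs[3c-1]..xs[3c+2], clamped to the wall edge at the
    first and last column; rows work the same way with ys."""
    def factor(bounds, count, i):
        lo, hi = bounds[3 * i], bounds[3 * i + 1]
        target_lo = lo if i == 0 else bounds[3 * i - 1]
        target_hi = hi if i == count - 1 else bounds[3 * i + 2]
        return (target_hi - target_lo) / (hi - lo)

    return {(r, c): (factor(xs, cols, c), factor(ys, rows, r))
            for r in range(rows) for c in range(cols)}
```

With cols = 3 this reproduces the horizontal factors (x2-x0)/(x1-x0), (x5-x2)/(x4-x3) and (x7-x5)/(x7-x6) given above.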
By scaling each display unit of the background splicing wall in the camera's real-time image according to the zoom ranges and zoom factors determined above and then outputting the result, seamless image output can be achieved. This processing mode needs no image-overlap region, which greatly reduces the complexity of the system processing, and it is particularly suitable for producing seamless pictures of background splicing walls in broadcast applications.
Specific example 4
In this specific example 4, compared with specific example 3 above, the color consistency of the spliced display wall is also considered, and color-compensation processing is applied to the scaled image. Figure 10 shows a schematic flow chart of the color-compensation processing combined with the scaling process of specific example 3.
As shown in Figure 10, the color-compensation flow comprises the steps of:
Step S1001: shoot the spliced display wall while it outputs a full-color test picture to obtain a compensation-coefficient test image, and proceed to step S1002;
Step S1002: scale the image corresponding to each display unit of the background splicing wall in the compensation-coefficient test image according to the zoom factor of each display unit, and proceed to step S1003;
Step S1003: identify the pixel value of each pixel of the background splicing wall in the scaled compensation-coefficient test image, calculate the compensation coefficient of each pixel of the background splicing wall from the pixel value of each pixel and a set pixel target value, and proceed to step S1004;
Step S1004: perform pixel compensation on each pixel of the background splicing wall in the scaled real-time image according to the compensation coefficient of each pixel.
The full-color test picture is a test picture in which every pixel has the same value. It can be, for example, an all-white test picture, in which case the compensation-coefficient test image is an all-white test image; or it can comprise an all-red test picture, an all-green test picture and an all-blue test picture, in which case the compensation-coefficient test image comprises an all-red test image, an all-green test image and an all-blue test image.
When a full-color test picture comprising all-red, all-green and all-blue test pictures is used, the compensation coefficient of each pixel can be determined as follows: the spliced display wall is made to show an all-red, an all-green and an all-blue background picture in turn; the camera shoots the background splicing wall for each of the all-red, all-green and all-blue pictures; and the pixel value of each pixel under this test condition, i.e. its R value, G value and B value, is obtained. The compensation coefficient of each pixel can then be calculated from the set pixel target value.
Suppose the pixel target value of each pixel is (Ro, Go, Bo); the compensation coefficient of each pixel can then be calculated from this target value. If the measured value of a certain pixel is (R, G, B), the compensation coefficients of this pixel are:
R-component compensation coefficient r-gain = Ro/R;
G-component compensation coefficient g-gain = Go/G;
B-component compensation coefficient b-gain = Bo/B.
The real-time image scaled as described above is multiplied by the compensation coefficient of each corresponding pixel before being output, so that the output playback image is corrected in color and brightness.
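As a minimal sketch of computing and applying these gains, assuming RGB test images and an illustrative target value (the names and the target below are assumptions):

```python
import numpy as np

def gain_maps(test_red, test_green, test_blue, target=(235, 235, 235)):
    """Per-pixel compensation coefficients r-gain = Ro/R, g-gain = Go/G,
    b-gain = Bo/B from the scaled all-red / all-green / all-blue test images
    (H x W x 3, RGB); the target (Ro, Go, Bo) shown here is an assumed example."""
    eps = 1e-6  # avoid division by zero on dead pixels
    r_gain = target[0] / (test_red[..., 0].astype(np.float32) + eps)
    g_gain = target[1] / (test_green[..., 1].astype(np.float32) + eps)
    b_gain = target[2] / (test_blue[..., 2].astype(np.float32) + eps)
    return np.stack([r_gain, g_gain, b_gain], axis=-1)

def apply_compensation(frame, gains):
    """Multiply each pixel of the scaled real-time frame by its gain and clip
    back to the 8-bit range."""
    out = frame.astype(np.float32) * gains
    return np.clip(out, 0, 255).astype(np.uint8)
```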
The above way of determining the compensation coefficients is only the simplest one; according to actual needs, other processing can also be taken into account. As long as the compensation coefficient of each pixel can be determined, each pixel can be compensated to achieve consistent color and brightness output.
Combining the seam-elimination method of specific example 3 with the pixel compensation of specific example 4, and taking the output of an all-white test picture during testing as an example, one concrete practical procedure can be as follows:
In the early debugging stage, the seams of the spliced display wall (which may be a DLP, LCD, LED or other type of spliced display wall) are coated with a special material color so that the seams are easy to identify. Once the position of each camera is fixed and debugging begins, an all-white picture is output to the spliced display wall and each camera shoots separately. From the all-white test picture shot by each camera, the display units and seams of the background display wall can be observed and distinguished; by recognizing them, the border of each display unit and the position coordinates of the seams are determined, the zoom range and zoom factor of each display unit are determined, and these are stored for use during subsequent normal operation;
The captured all-white test picture is scaled according to the zoom range and zoom factor of each display unit obtained above, so as to eliminate the seams of the background splicing wall in the all-white test picture. Then the pixel value of each pixel of the scaled all-white test picture is identified, the compensation coefficient of each pixel is determined from the set pixel target value, and the obtained compensation coefficients are stored for use during subsequent normal operation;
After entering the normal recording and broadcasting state, whenever the camera has captured a real-time image containing the background splicing wall, each display unit of the background splicing wall in the real-time image is first scaled according to the zoom range and zoom factor of each display unit determined above, which realizes the seam-elimination processing; the scaled real-time image is then compensated pixel by pixel according to the compensation coefficient of each pixel determined above and output, which realizes consistent color and brightness.
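As a minimal sketch of that normal-operation step, assuming the per-unit zoom ranges and per-pixel gains were stored at debugging time as described (the function and its parameter layout are illustrative assumptions):

```python
import cv2
import numpy as np

def process_live_frame(frame, units, gains):
    """Seam elimination followed by per-pixel compensation for one live frame.

    frame: captured BGR image containing the background splicing wall
    units: list of (src_rect, dst_rect) pairs, each rect = (x1, y1, x2, y2);
           src_rect is a display unit in the frame, dst_rect its enlarged area
           reaching the stored seam positions (from the debugging stage)
    gains: H x W x 3 per-pixel compensation coefficients (from the debugging stage)
    """
    out = frame.copy()
    for (sx1, sy1, sx2, sy2), (dx1, dy1, dx2, dy2) in units:
        unit_img = frame[sy1:sy2, sx1:sx2]
        enlarged = cv2.resize(unit_img, (dx2 - dx1, dy2 - dy1),
                              interpolation=cv2.INTER_CUBIC)  # high-order interpolation
        out[dy1:dy2, dx1:dx2] = enlarged  # scaled unit covers its seam area
    compensated = np.clip(out.astype(np.float32) * gains, 0, 255)
    return compensated.astype(np.uint8)
```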
According to the method for the elimination combination piece of the invention described above, the present invention also provides a kind of device of eliminating the combination piece, and the structural representation of the device embodiment of elimination combination piece of the present invention has been shown among Figure 11, and it includes:
Combination recognition unit 1101 is used for discerning the background combination of captured image, and this combination recognition unit 1101 is connected with video camera;
Disappear and stitch unit 1102, be used for captured image is handled, eliminate the piece of background combination in the captured image.
Wherein, Combination recognition unit 1101 can have various implementation, for example when the background combination of identification in the captured image; Can be through respectively increase the sign of a special color or nailing material at four edges of combination; Perhaps set 1 to 2 special pixel value in four corners of screen, perhaps the screen edge at spliced display wall adds at least three outstanding marks, thereby can be through discerning the demonstration edge that these special markings automatically identify the background combination when video camera is taken; And then identifying the background combination in the captured image, concrete implementation does not repeat them here.
Disappear seam unit 1102 when the piece of background combination in the captured image is handled, eliminated to captured image; According to actual needs; Multiple different processing mode can be arranged; In a kind of therein mode, can be to utilize original combination input signal, to the background combination in the captured image partly replace or with the output that superposes of isolated clear foreground picture; In other a kind of mode, can be through carrying out the laggard line output of piece part of convergent-divergent covering background combination behind the piece of discerning background combination in the captured image, to each display unit in the captured image.Below just describe respectively to the concrete example of these implementations.
Specific example 5
Figure 12 shows a structural diagram of the device for eliminating splicing seams in this specific example 5. In this example, after the background-display-wall part has been separated from the captured image, the original splicing-wall input picture signal is scaled and superposed with the separated image for output.
As shown in Figure 12, in this specific example the seam-elimination unit 1102 comprises:
a scaling-coefficient determination unit 1201 connected with the splicing-wall recognition unit 1101, used for determining the scaling coefficient between the background splicing wall in the captured image and the overall captured picture;
a background-image separation unit 1202 connected with the camera, used for separating the clear foreground picture from the real-time image that the camera captures during the dark field of the synchronization signal; and
a fusion and superposition unit 1203 connected with the scaling-coefficient determination unit 1201 and the background-image separation unit 1202, used for scaling the original splicing-wall input picture signal according to the above scaling coefficient, placing the scaled image at the position of the background splicing wall in the captured real-time image as the bottom-layer image, superposing the separated clear foreground picture on it as the top-layer image, and outputting the result.
In this specific example 5, the way the background splicing wall is recognized, the way the overall-picture scaling coefficient is determined, the dark-field shooting mode, the background-image separation mode and the fusion-and-superposition mode can be the same as in specific example 1 of the method of the invention described above and are not repeated here.
In a specific application, each camera can be connected to its own device for eliminating splicing seams according to the invention, as shown in Figure 13. To save hardware cost, all cameras can instead share the same seam-elimination device; in that case a signal selection unit 1400 can be provided, connected between the cameras and the splicing-wall recognition unit 1101 and the background-image separation unit 1202, and the signal selection unit 1400 selects which camera's image is to be processed, as shown in Figure 14. The signal selection unit 1400's selection of the camera signal can be synchronized with the camera switching performed by the broadcast director; this is not repeated here.
Specific example 6
Figure 15 shows a structural diagram of the device for eliminating splicing seams in this specific example 6. In this example, the original splicing-wall input picture signal is scaled, the scaled signal directly replaces the background splicing wall in the captured image, and the playback image is then output.
As shown in Figure 15, in this specific example the seam-elimination unit 1102 comprises:
a scaling-coefficient determination unit 1501 connected with the splicing-wall recognition unit 1101, used for determining the scaling coefficient between the background splicing wall in the captured image and the overall captured picture; and
a replacement fusion unit 1502 connected with the camera and the scaling-coefficient determination unit 1501, used for scaling the original splicing-wall input picture signal according to the above scaling coefficient, replacing the dark-field background splicing wall in the real-time image that the camera captures during the dark field of the synchronization signal with the scaled image, and outputting the result.
In this specific example 6, the way the background splicing wall is recognized, the way the scaling coefficient is determined and the replacement-fusion mode can be the same as in specific example 2 of the method of the invention described above and are not repeated here.
In a specific application, each camera can be connected to its own device for eliminating splicing seams according to the invention, as shown in Figure 16. To save hardware cost, all cameras can instead share the same seam-elimination device; in that case a signal selection unit 1700 can be provided, connected between the cameras and the splicing-wall recognition unit 1101 and the replacement fusion unit 1502, and the signal selection unit 1700 selects which camera's image is to be processed, as shown in Figure 17. The signal selection unit 1700's selection of the camera signal can be synchronized with the camera switching performed by the broadcast director; this is not repeated here.
Concrete example 7
The structural representation of the device of the elimination combination piece in this concrete example 7 has been shown among Figure 18; In this example, be with behind the piece of discerning background combination in the captured image, each display unit in the captured image is carried out the laggard line output of piece part that convergent-divergent covers the background combination is that example is explained.
Shown in figure 18, in this concrete example, the seam unit 1102 that disappears includes:
It is single 1801 that the piece position is confirmed, is used for calculating the position coordinates of each piece of captured image background combination;
Zoom factor is confirmed unit 1802, calculates zoom ranges and the zoom factor determine each display unit of background combination in the captured image, said zoom ranges be with display unit zoom to adjacent piece the scope of setting position;
Unit for scaling 1803 is used for according to the zoom ranges of each display unit and zoom factor the corresponding image of each display unit of the realtime graphic background combination of shot by camera being carried out convergent-divergent.
In this concrete example 7, the way of recognizing the background splicing wall, the way of determining the seam positions, the way of determining the zoom factors, the way of eliminating seams by scaling, and the way of determining the set positions of adjacent seams can be the same as in concrete example 3 of the method of the invention described above, and are not repeated here.
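For illustration only, the following Python sketch shows the per-display-unit scaling of this concrete example 7: each display unit region is enlarged to a target rectangle that reaches the set positions inside its adjacent seams, so that the enlarged unit covers the seam pixels. Representing the units and their targets as axis-aligned rectangles is an assumption of this sketch.

# Illustrative sketch only: the target rectangle of each display unit extends to the
# set positions of its adjacent seams; the zoom factor is implied by the size ratio.
import cv2
import numpy as np

def cover_seams(frame: np.ndarray,
                unit_rects: list[tuple[int, int, int, int]],
                target_rects: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Scale each display unit region of the background wall up to its
    seam-covering target rectangle (concrete example 7)."""
    out = frame.copy()
    for (ux, uy, uw, uh), (tx, ty, tw, th) in zip(unit_rects, target_rects):
        unit_img = frame[uy:uy + uh, ux:ux + uw]
        enlarged = cv2.resize(unit_img, (tw, th), interpolation=cv2.INTER_LINEAR)
        out[ty:ty + th, tx:tx + tw] = enlarged  # the enlarged unit now hides its seams
    return out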
In specific use, each camera may be connected to its own device for eliminating splicing seams of the invention described above. To save hardware cost, all cameras may instead share a single device for eliminating splicing seams; in this case a signal selection unit 1900 may be provided, connected between the cameras and the splicing wall recognition unit 1101 and the scaling unit 1803, and the image of which camera is to be processed is selected through the signal selection unit 1900, as shown in Figure 19. The selection of camera signals by the signal selection unit 1900 can be synchronized with the lens-switching signal of the broadcast director, which is not repeated here.
Concrete example 8
Figure 20 shows the structural representation of the device for eliminating splicing seams in this concrete example 8. In this example, the color consistency of the spliced display wall is taken into account, and color compensation processing of the scaled image is realized.
As shown in Figure 21, in this concrete example 8, in addition to the units of concrete example 7 above, the seam elimination unit 1102 further includes: a pixel recognition unit 2001, a compensation coefficient determination unit 2002, and a pixel compensation unit 2003.
In this concrete example, the above scaling unit 1803 is also used for scaling the image corresponding to each display unit of the background splicing wall in a compensation coefficient test image according to the zoom factor of each display unit; the compensation coefficient test image here is the image obtained by shooting the spliced display wall while it outputs a full-color test picture, and the full-color test picture here is a test picture in which every pixel has the same pixel value;
wherein the pixel recognition unit 2001 is used for recognizing the pixel value of each pixel of the background splicing wall in the scaled compensation coefficient test image;
the compensation coefficient determination unit 2002 is used for calculating the compensation coefficient of each pixel of the background splicing wall according to the pixel value of that pixel and a set target pixel value;
and the pixel compensation unit 2003 is used for performing pixel compensation, according to the compensation coefficient of each pixel, on each pixel of the background splicing wall in the real-time image after scaling by the scaling unit.
In this concrete example 8, the way of recognizing the background splicing wall, the way of recognizing pixel values, the way of determining the compensation coefficients, and the way of performing pixel compensation can be the same as in concrete example 4 of the method of the invention described above, and are not repeated here.
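For illustration only, the following Python sketch shows one possible form of the per-pixel compensation in this concrete example 8, assuming the compensation coefficient of a pixel is simply the ratio of the target value to the value observed in the scaled full-white test image; since the patent refers the exact determination to concrete example 4 of the method, this ratio form is an assumption of the sketch.

# Illustrative sketch only: coefficients as target/observed ratios per channel; the
# patent does not prescribe this exact formula.
import numpy as np

def compute_compensation(test_wall: np.ndarray, target_value: float = 235.0) -> np.ndarray:
    """Per-pixel, per-channel coefficients from the scaled full-white test image
    of the background wall (debugging stage)."""
    observed = test_wall.astype(np.float32)
    return target_value / np.clip(observed, 1.0, None)  # guard against division by zero

def apply_compensation(wall_image: np.ndarray, coeff: np.ndarray) -> np.ndarray:
    """Apply the stored coefficients to the scaled wall image of a live frame
    (normal operation) to even out color and brightness."""
    compensated = wall_image.astype(np.float32) * coeff
    return np.clip(compensated, 0, 255).astype(np.uint8)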
In specific use, each camera may be connected to its own device for eliminating splicing seams of the invention described above. To save hardware cost, all cameras may instead share a single device for eliminating splicing seams; in this case a signal selection unit 2100 may be provided, connected between the cameras and the splicing wall recognition unit 1101 and the scaling unit 1803, and the image of which camera is to be processed is selected through the signal selection unit 2100, as shown in Figure 21. The selection of camera signals by the signal selection unit 2100 can be synchronized with the lens-switching signal of the broadcast director, which is not repeated here.
In conjunction with the pixel compensation mode in concrete example 4 of the method described above, and taking the output of a full-white test picture during testing as an example, the specific use of the device for eliminating splicing seams in the above concrete example 8 can be as follows:
In the early debugging stage, the seams of the spliced display wall are coated with a special material color so that the seams are easy to recognize; the spliced display wall here can be of any type such as DLP, LCD or LED. Once the position of each camera has been set and debugging has begun, a full-white picture is output to the spliced display wall and each camera takes a shot. The full-white test picture captured by a camera is sent to the splicing wall recognition unit 1101, which can recognize and distinguish each display unit and each seam of the background display wall in the full-white test picture; the seam position determination unit 1801 calculates and determines the position coordinates of each seam, and the zoom factor determination unit 1802 calculates and determines the zoom range and scaling ratio of each display unit. After obtaining the zoom range and scaling ratio of each display unit, the zoom factor determination unit 1802 can store them for use in the subsequent normal operation process;
subsequently, the scaling unit 1803 scales the captured full-white test picture according to the zoom ranges and scaling ratios of the display units obtained by the zoom factor determination unit 1802, so as to eliminate the seams of the background splicing wall in the full-white test picture;
subsequently, the pixel recognition unit 2001 recognizes the pixel value of each pixel in the scaled full-white test picture, and the compensation coefficient determination unit 2002 determines the compensation coefficient of each pixel according to the set target pixel value and stores the obtained compensation coefficient of each pixel for use in the subsequent normal operation process; the pixel compensation unit 2003 can then perform pixel compensation on the scaled full-white test picture according to the compensation coefficient of each pixel and output it for the user to check the test effect;
after entering the normal recording and broadcasting state, once a camera has captured a real-time image containing the background splicing wall, the scaling unit 1803 scales each display unit of the background splicing wall in the real-time image according to the zoom ranges and scaling ratios obtained by the zoom factor determination unit 1802, thereby realizing the seam elimination processing; subsequently, the pixel compensation unit 2003 compensates the pixel values of the scaled real-time image according to the compensation coefficient of each pixel and then outputs the result, thereby realizing consistency processing of color and brightness.
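Taken together, the debugging stage and the normal operation stage described above can be outlined as a small two-phase flow. The Python sketch below reuses the hypothetical cover_seams, compute_compensation and apply_compensation helpers from the earlier sketches; the geometry arguments stand in for the outputs of units 1101, 1801 and 1802, which the patent does not express in code form.

# Illustrative sketch only: parameters are derived once from the full-white test
# picture and stored, then reused for every live frame.
class SeamEliminator:
    def calibrate(self, white_test_frame, unit_rects, target_rects):
        # Debugging stage: store the per-unit zoom geometry and derive per-pixel
        # compensation coefficients from the scaled test picture (a real
        # implementation would restrict the coefficients to the wall region).
        self.unit_rects, self.target_rects = unit_rects, target_rects
        seamless = cover_seams(white_test_frame, unit_rects, target_rects)
        self.coeff = compute_compensation(seamless)

    def process(self, live_frame):
        # Normal operation: seam elimination by per-unit scaling, then the
        # stored color/brightness compensation, then output.
        seamless = cover_seams(live_frame, self.unit_rects, self.target_rects)
        return apply_compensation(seamless, self.coeff)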
According to the device for eliminating splicing seams of the invention described above, the present invention also provides an image system based on a splicing wall, which can specifically be an image shooting and output system based on a splicing wall. This image system comprises one or more cameras and the above device for eliminating splicing seams of the present invention, wherein the device for eliminating splicing seams is connected with each camera and receives the original splicing wall input signal; the image captured by each camera enters the device of the present invention, undergoes seam elimination processing, and is then output, so that the image viewed by the user has no splicing seams of the splicing wall. The concrete framework of this image shooting and output system can be as shown in Figures 13, 14, 16, 17, 19 and 21 above, and is not repeated here.
Taking an image system that comprises the device for eliminating splicing seams in the above concrete example 5 as an example, the concrete working process of the image shooting and output system based on a splicing wall of the present invention can be as follows:
In the early debugging stage, each camera sends its captured image to the device for eliminating splicing seams; the device recognizes the background splicing wall in the image captured by each camera, and then determines, for each camera, the scaling coefficient of the background splicing wall relative to the whole captured picture of that camera and the coordinate parameters of the background splicing wall relative to the whole captured picture of that camera; once obtained, the scaling coefficient and coordinate parameters are stored for convenient use in the subsequent process;
after entering the normal recording and broadcasting state, the host can enter the picture and carry out hosting work; the camera shoots during the dark field of the synchronization signal, so that a real-time image containing a dark-field background display wall picture and a clear foreground picture can be obtained; to avoid interference with the separation of the background image, the tone of the foreground content (for example the host's clothes) can be required to differ from that of the spliced display screen in the dark field;
the real-time image captured by the camera during the dark field of the synchronization signal is sent to the device for eliminating splicing seams; after processing procedures such as background image separation and image overlay fusion, this device outputs an image without splicing seams; the concrete processing procedure is the same as described above and is not repeated here.
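For illustration only, the following Python sketch shows the separation-and-overlay path used by the example 5 device: the dark-field wall pixels of the real-time image are treated as transparent and the scaled original input signal is placed underneath as the bottom layer. Detecting the dark-field wall by a simple brightness threshold is an assumption of this sketch, not the patent's prescribed separation method.

# Illustrative sketch only: threshold-based separation of the dark-field wall and
# overlay of the remaining foreground onto the scaled original input signal.
import cv2
import numpy as np

def separate_and_fuse(live_frame: np.ndarray,
                      original_wall_input: np.ndarray,
                      wall_rect: tuple[int, int, int, int],
                      dark_threshold: int = 40) -> np.ndarray:
    x, y, w, h = wall_rect
    out = live_frame.copy()
    wall_region = live_frame[y:y + h, x:x + w]
    # Bottom layer: the original wall input scaled to the wall's position in the frame.
    bottom = cv2.resize(original_wall_input, (w, h), interpolation=cv2.INTER_LINEAR)
    # Separation: dark-field wall pixels are treated as transparent, while bright
    # foreground pixels (e.g. the host) are kept as the top layer.
    gray = cv2.cvtColor(wall_region, cv2.COLOR_BGR2GRAY)
    foreground_mask = (gray > dark_threshold)[..., None]
    out[y:y + h, x:x + w] = np.where(foreground_mask, wall_region, bottom)
    return out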
The embodiments described above are only detailed descriptions of preferred embodiments of the present invention and do not limit the scope of protection of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included within the scope of protection of the claims of the present invention.

Claims (10)

1. A method of eliminating splicing seams of a splicing wall, characterized by comprising the steps of:
recognizing the background splicing wall in a captured image;
processing the captured image to eliminate the splicing seams of the background splicing wall in the captured image.
2. The method of eliminating splicing seams according to claim 1, characterized in that the process of processing the captured image to eliminate the splicing seams of the background splicing wall in the captured image specifically comprises:
determining the scaling coefficient between the background splicing wall in the captured image and the captured image;
shooting a real-time image, and separating the background splicing wall in the real-time image to obtain a separated image;
scaling the image signal originally input to the splicing wall according to said scaling coefficient, taking the scaled image, corresponding to the position of the background splicing wall in said real-time image, as a bottom layer image, superposing the separated image thereon as a top layer image, and then outputting the result.
3. The method of eliminating splicing seams according to claim 1, characterized in that the process of processing the splicing wall in the captured image to eliminate the splicing seams of the splicing wall in the captured image specifically comprises:
determining the scaling coefficient between the background splicing wall in the captured image and the captured image;
shooting a real-time image;
scaling the image signal originally input to the splicing wall according to said scaling coefficient, replacing the background splicing wall in said real-time image with the scaled image, and then outputting the result.
4. The method of eliminating splicing seams according to claim 2 or 3, characterized in that the shooting is performed during the dark field of a synchronization signal to obtain said real-time image comprising a dark-field background splicing wall and a clear foreground picture.
5. The method of eliminating splicing seams according to claim 2 or 4, characterized in that said separating the background splicing wall comprises: setting each pixel of the background splicing wall in said real-time image to be transparent or to a background color.
6. A device for eliminating splicing seams of a splicing wall, characterized by comprising:
a splicing wall recognition unit connected with a camera, for recognizing the background splicing wall in a captured image;
a seam elimination unit, for processing the background splicing wall in the captured image to eliminate the splicing seams of the background splicing wall in the captured image.
7. The device for eliminating splicing seams according to claim 6, characterized in that said seam elimination unit specifically comprises:
a scaling coefficient determination unit, for determining the scaling coefficient between the background splicing wall in the captured image and the captured image;
a background image separation unit, for separating the background splicing wall from the real-time image captured by the camera to obtain a separated image;
a fusion superposition unit, for scaling the image signal originally input to the splicing wall according to said scaling coefficient, taking the scaled image, corresponding to the position of the background splicing wall in said real-time image, as a bottom layer image, superposing the separated image thereon as a top layer image, and then outputting the result.
8. The device for eliminating splicing seams according to claim 6, characterized in that said seam elimination unit specifically comprises:
a scaling coefficient determination unit, for determining the scaling coefficient between the background splicing wall in the captured image and the captured image;
a replacement fusion unit, for scaling the image signal originally input to the splicing wall according to said scaling coefficient, replacing the background splicing wall in the real-time image captured by the camera with the scaled image, and then outputting the result.
9. The device for eliminating splicing seams according to claim 7 or 8, characterized in that:
said real-time image is a real-time image, comprising a dark-field background splicing wall and a clear foreground picture, captured by the camera during the dark field of a synchronization signal;
And/or
the device further comprises a signal selection unit, and the device for eliminating splicing seams is connected with the camera through the signal selection unit.
10. An image system based on a splicing wall, comprising one or more cameras, characterized by further comprising at least one device for eliminating splicing seams according to any one of claims 6 to 9, the device for eliminating splicing seams being connected with at least one camera.
CN201110304006.4A 2011-10-08 2011-10-08 Method and device for eliminating splicing seams of splicing wall as well as image system based on splicing wall Active CN102508628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110304006.4A CN102508628B (en) 2011-10-08 2011-10-08 Method and device for eliminating splicing seams of splicing wall as well as image system based on splicing wall

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110304006.4A CN102508628B (en) 2011-10-08 2011-10-08 Method and device for eliminating splicing seams of splicing wall as well as image system based on splicing wall

Publications (2)

Publication Number Publication Date
CN102508628A true CN102508628A (en) 2012-06-20
CN102508628B CN102508628B (en) 2014-12-24

Family

ID=46220722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110304006.4A Active CN102508628B (en) 2011-10-08 2011-10-08 Method and device for eliminating splicing seams of splicing wall as well as image system based on splicing wall

Country Status (1)

Country Link
CN (1) CN102508628B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102427504A (en) * 2011-10-08 2012-04-25 广东威创视讯科技股份有限公司 Image processing method, device and system based on background splicing wall
CN102801899A (en) * 2012-08-29 2012-11-28 广东威创视讯科技股份有限公司 Method and device for improving image display quality of spliced screen
CN105304002A (en) * 2015-10-21 2016-02-03 利亚德光电股份有限公司 LED display screen splicing error detection method and device
CN105446692A (en) * 2015-12-28 2016-03-30 浙江宇视科技有限公司 Seam compensation method and device of spliced screen
CN105677280A (en) * 2016-01-05 2016-06-15 广东威创视讯科技股份有限公司 Spliced display screen spliced joint line drawing processing method and device
TWI552600B (en) * 2014-12-25 2016-10-01 晶睿通訊股份有限公司 Image calibrating method for stitching images and related camera and image processing system with image calibrating function
CN106293558A (en) * 2015-05-11 2017-01-04 佰路得信息技术(上海)有限公司 A kind of realize the method and system that multimedia continuously displays
WO2020011249A1 (en) * 2018-07-13 2020-01-16 京东方科技集团股份有限公司 Image processing method and device for tiled screen and tiled screen
CN111553842A (en) * 2020-04-24 2020-08-18 京东方科技集团股份有限公司 Spliced picture display method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923709A (en) * 2009-06-16 2010-12-22 日电(中国)有限公司 Image splicing method and equipment
CN201846426U (en) * 2010-11-10 2011-05-25 北京赛四达科技股份有限公司 Multi-image automatic geometry and edge blending system based on photography

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101923709A (en) * 2009-06-16 2010-12-22 日电(中国)有限公司 Image splicing method and equipment
CN201846426U (en) * 2010-11-10 2011-05-25 北京赛四达科技股份有限公司 Multi-image automatic geometry and edge blending system based on photography

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102427504B (en) * 2011-10-08 2014-12-24 广东威创视讯科技股份有限公司 Image processing method, device and system based on background splicing wall
CN102427504A (en) * 2011-10-08 2012-04-25 广东威创视讯科技股份有限公司 Image processing method, device and system based on background splicing wall
CN102801899A (en) * 2012-08-29 2012-11-28 广东威创视讯科技股份有限公司 Method and device for improving image display quality of spliced screen
TWI552600B (en) * 2014-12-25 2016-10-01 晶睿通訊股份有限公司 Image calibrating method for stitching images and related camera and image processing system with image calibrating function
US9716880B2 (en) 2014-12-25 2017-07-25 Vivotek Inc. Image calibrating method for stitching images and related camera and image processing system with image calibrating function
CN106293558A (en) * 2015-05-11 2017-01-04 佰路得信息技术(上海)有限公司 A kind of realize the method and system that multimedia continuously displays
CN105304002A (en) * 2015-10-21 2016-02-03 利亚德光电股份有限公司 LED display screen splicing error detection method and device
CN105304002B (en) * 2015-10-21 2018-09-28 利亚德光电股份有限公司 The detection method and device of LED display stitching error
CN105446692A (en) * 2015-12-28 2016-03-30 浙江宇视科技有限公司 Seam compensation method and device of spliced screen
CN105677280A (en) * 2016-01-05 2016-06-15 广东威创视讯科技股份有限公司 Spliced display screen spliced joint line drawing processing method and device
WO2020011249A1 (en) * 2018-07-13 2020-01-16 京东方科技集团股份有限公司 Image processing method and device for tiled screen and tiled screen
US11568513B2 (en) 2018-07-13 2023-01-31 Boe Technology Group Co., Ltd. Image processing method and device for spliced panel, and spliced panel
CN111553842A (en) * 2020-04-24 2020-08-18 京东方科技集团股份有限公司 Spliced picture display method and device, electronic equipment and storage medium
CN111553842B (en) * 2020-04-24 2024-03-12 京东方科技集团股份有限公司 Spliced picture display method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN102508628B (en) 2014-12-24

Similar Documents

Publication Publication Date Title
CN102427504B (en) Image processing method, device and system based on background splicing wall
CN102508628B (en) Method and device for eliminating splicing seams of splicing wall as well as image system based on splicing wall
CN103929604B (en) Projector array splicing display method
US8045060B2 (en) Asynchronous camera/projector system for video segmentation
Fiala Automatic projector calibration using self-identifying patterns
CN106605195B (en) Communication apparatus and control method of communication apparatus
US9538067B2 (en) Imaging sensor capable of detecting phase difference of focus
US20070030452A1 (en) Image adaptation system and method
US20190313070A1 (en) Automatic calibration projection system and method
KR101489261B1 (en) Apparatus and method for managing parameter of theater
CN102665031A (en) Video signal processing method and photographic equipment
CN101620846A (en) Multi display system and multi display method
JP2011082798A (en) Projection graphic display device
CN201919121U (en) Projection system for mosaicking of multiple projected images
CN103702096A (en) Optimizing method, device and system for image fusion treatment
US5940140A (en) Backing luminance non-uniformity compensation in real-time compositing systems
MY131918A (en) Visible-invisible background prompter
JP5515988B2 (en) Signal processing apparatus, signal processing method, display apparatus, and program
KR101310216B1 (en) Apparatus and method for converting color of images cinematograph
JP2006074805A (en) Multi-projection video display device
JP2010085563A (en) Image adjusting apparatus, image display system and image adjusting method
JP3757979B2 (en) Video display system
JPS5851676A (en) Shading compensation circuit
US20090167949A1 (en) Method And Apparatus For Performing Edge Blending Using Production Switchers
CN101430483B (en) Image display apparatus and image display method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 510670 No. 233, Kezhu Road, Guangzhou High-tech Industrial Development Zone, Guangzhou, Guangdong Province

Patentee after: VTRON GROUP Co.,Ltd.

Address before: 510663 No. 6, Cai Road, Guangzhou High-tech Industrial Development Zone, Guangdong Province

Patentee before: VTRON TECHNOLOGIES Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20201125

Address after: Unit 2414-2416, main building, no.371, Wushan Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: GUANGDONG GAOHANG INTELLECTUAL PROPERTY OPERATION Co.,Ltd.

Address before: 510670 No. 233, Kezhu Road, Guangzhou High-tech Industrial Development Zone, Guangzhou, Guangdong Province

Patentee before: VTRON GROUP Co.,Ltd.

Effective date of registration: 20201125

Address after: 226500 Group 11, North Street Community, Rugao City, Nantong City, Jiangsu Province

Patentee after: RUGAO TIANAN ELECTRIC TECHNOLOGY Co.,Ltd.

Address before: Unit 2414-2416, main building, no.371, Wushan Road, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: GUANGDONG GAOHANG INTELLECTUAL PROPERTY OPERATION Co.,Ltd.