CN101378463A - Composite image generating apparatus, composite image generating method, and storage medium


Info

Publication number
CN101378463A
CN101378463A (application CN200810214482A)
Authority
CN
China
Prior art keywords
data
image data
unit
composite image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102144825A
Other languages
Chinese (zh)
Other versions
CN101378463B (en)
Inventor
湖城孝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Casio Computer Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casio Computer Co Ltd filed Critical Casio Computer Co Ltd
Publication of CN101378463A publication Critical patent/CN101378463A/en
Application granted granted Critical
Publication of CN101378463B publication Critical patent/CN101378463B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides a composite image generating apparatus, a composite image generating method, and a program product. The apparatus comprises: a first storage unit (14) that stores an action designated as a compositing target in association with first composite image data; an input unit (21, 35) that inputs a plurality of image data; a first determination unit (167) that determines whether the motion of a changed portion across the plurality of image data input by the input unit is substantially the same as the action stored in the first storage unit; and a first compositing unit (169, 171) that, when the first determination unit determines that the motion is substantially the same, reads the first composite image data stored in the first storage unit in association with that action and composites it into the image data containing the changed portion.

Description

Composite image generating apparatus, composite image generating method, and program product
Technical field
The present invention relates to a composite image generating apparatus, a composite image generating method, and a program product for compositing other images into a captured image.
Background art
An image output apparatus is known that extracts a figure image from a photographic image captured by a digital camera, determines the posture of the extracted figure image, and composites an image corresponding to the determined posture onto the figure image for display.
Summary of the invention
An object of the present invention is to provide a composite image generating apparatus, a composite image generating method, and a program product that detect, from a plurality of image data, a portion that changes and composite corresponding image data accordingly.
To achieve the above object, the composite image generating apparatus of the present invention has the following configuration: a first storage unit that stores an action designated as a compositing target in association with first composite image data; an input unit that inputs a plurality of image data; a first determination unit that determines whether the motion of a portion that changes across the plurality of image data input by the input unit is substantially the same as the action stored in the first storage unit; and a first compositing unit that, when the first determination unit determines that the motion is substantially the same, reads the first composite image data stored in the first storage unit in association with that action and composites it into the image data containing the changed portion.
Likewise, to achieve the above object, the composite image generating method of the present invention comprises the following steps: an input step of inputting a plurality of image data; a determination step of determining whether the motion of a portion that changes across the plurality of image data input in the input step is substantially the same as an action set in correspondence with composite image data; and a compositing step of, when the determination step determines that the motion is substantially the same, compositing the composite image data set in correspondence with that action into the image data containing the changed portion.
Likewise, to achieve the above object, the program product of the present invention causes a computer to realize the functions of: an input unit that inputs a plurality of image data; a determination unit (step SD) that determines whether the motion of a portion that changes across the plurality of image data input by the input unit is substantially the same as an action set in correspondence with composite image data; and a compositing unit that, when the determination unit determines that the motion is substantially the same, composites the composite image data set in correspondence with that action into the image data containing the changed portion.
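To make the claimed structure concrete, the following sketch models the first storage unit, the first determination unit, and the first compositing unit in Python. It is purely illustrative: the patent defines no code, and every name, data shape, and tolerance here is an assumption.

    from dataclasses import dataclass

    @dataclass
    class ActionEntry:
        action: list           # registered motion template: feature-point offsets per frame
        composite_image: str   # stands in for the first composite image data

    # First storage unit: registered actions mapped to composite image data.
    action_store = {
        "transform_pose": ActionEntry(action=[(0, 0), (2, 1), (4, 3)],
                                      composite_image="replacement_14Gu1"),
    }

    def substantially_same(observed, registered, tol=1.5):
        # First determination unit: the observed track of the changed portion
        # matches a registered action if every frame stays within the tolerance.
        if len(observed) != len(registered):
            return False
        return all(abs(ox - rx) + abs(oy - ry) <= tol
                   for (ox, oy), (rx, ry) in zip(observed, registered))

    def composite(frames, changed_track):
        # First compositing unit: when the motion matches, attach the associated
        # composite image to each frame containing the changed portion.
        for entry in action_store.values():
            if substantially_same(changed_track, entry.action):
                return [(frame, entry.composite_image) for frame in frames]
        return [(frame, None) for frame in frames]

    print(composite(["G1", "G2", "G3"], [(0, 0), (2, 1), (4, 3)]))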
Description of drawings
Fig. 1 shows a video-shooting scene in the first embodiment.
Fig. 2A shows the state displayed on the display unit 11 in Fig. 1.
Fig. 2B shows the composite image displayed on the display unit 11 after shooting.
Fig. 3 is a block diagram of the circuit configuration of the digital camera 10.
Fig. 4 shows the contents stored in advance in the transformation data memory 14.
Fig. 5 shows the contents stored in advance in the special-effect data memory 23.
Fig. 6 shows the contents stored in advance in the destruction data memory 15.
Fig. 7 shows the contents stored in advance in the transformation character data memory 24.
Fig. 8 is a block diagram of the electronic circuit configuration of the server apparatus 30.
Fig. 9 is a flowchart of the overall composite image output process.
Fig. 10 is a subroutine flowchart of the target-image extraction process.
Fig. 11 is a subroutine flowchart of the process of step SB in Fig. 9.
Fig. 12 is a subroutine flowchart of the process of step SC in Fig. 9.
Fig. 13 is a subroutine flowchart of the process of step SD in Fig. 9.
Fig. 14 is a subroutine flowchart of the process of step SE in Fig. 9.
Fig. 15 is a subroutine flowchart of the process of step SF in Fig. 9.
Fig. 16 is a subroutine flowchart of the process of step SG in Fig. 9.
Fig. 17 is a subroutine flowchart of the process of step SH in Fig. 9.
Fig. 18A shows captured image data G1; Fig. 18B shows marked captured image data Gm1; Fig. 18C shows composite image data GG1.
Fig. 19A shows captured image data G2; Fig. 19B shows marked captured image data Gm2; Fig. 19C shows composite image data GG2.
Fig. 20A shows captured image data G3; Fig. 20B shows marked captured image data Gm3; Fig. 20C shows composite image data GG3.
Fig. 21A shows captured image data G4; Fig. 21B shows marked captured image data Gm4; Fig. 21C shows composite image data GG4.
Fig. 22A shows captured image data G5; Fig. 22B shows marked captured image data Gm5; Fig. 22C shows composite image data GG5.
Fig. 23A shows captured image data G6; Fig. 23B shows marked captured image data Gm6; Fig. 23C shows composite image data GG6.
Fig. 24A shows captured image data G7; Fig. 24B shows marked captured image data Gm7; Fig. 24C shows composite image data GG7.
Fig. 25A shows captured image data G8; Fig. 25B shows marked captured image data Gm8; Fig. 25C shows composite image data GG8.
Fig. 26A shows captured image data G9; Fig. 26B shows marked captured image data Gm9; Fig. 26C shows composite image data GG9.
Embodiment
Fig. 1 shows a shooting scene using the digital camera 10 of the present embodiment; Figs. 2A and 2B show, respectively, the state displayed on the display unit 11 and the composite image displayed on the display unit 11 after shooting.
The digital camera 10 has an LCD display unit 11, an input unit 12, and an audio output unit (loudspeaker) 13 on its back face, and displays captured image data Gn, captured in response to operation of the input unit 12, on the display unit 11 in real time.
The digital camera 10 captures and records as video a scene in which a performer P knocks down an object A and shoots down a flying object B, and in which a performer K performs an action registered in advance.
A plurality of captured image data Gn captured successively in time are displayed on the display unit 11 in real time as shown in Fig. 2A. In parallel, the plurality of captured image data are analyzed: marker data 24Mw, indicating that replacement image data 24Gw is to be composited, is added to the image data corresponding to performer P and displayed, and, for the image data corresponding to performer K, marker data 14Mu, indicating that replacement image data 14Gu is to be composited, is added and displayed when a predetermined action is detected.
Likewise, marker data (pre-contact) 15Ma, indicating that replacement image data (pre-contact) 15Ga is to be composited, is added to the image data corresponding to object A and displayed, and marker data (pre-contact) 15Mb, indicating that replacement image data (pre-contact) 15Gb is to be composited, is added to the image data of object B and displayed.
During shooting, the sound data stored in advance in association with each marker data is read and output from the audio output unit 13.
In the description below, the captured video to which marker data 24Mw, 14Mu, 15Ma, and 15Mb have been added is called the marked captured image Gm.
After shooting, in the marked captured image data Gm, the image data to which marker data 14Mu, 15Ma, 15Mb, and 24Mw were added (the replacement-target image data) are replaced with replacement image data 14Gu, 15Ga, 15Gb, and 24Gw, respectively, as shown in Fig. 2B; then, for replacement image data 24Gw and 14Gu, video that follows the changes of performer K or P is generated, and composite image data GG is generated by compositing them onto background image data BG.
As detailed later, when the digital camera 10 detects by analysis that the replacement-target image data corresponding to object A or object B contacts the replacement-target image data corresponding to performer P or K, it replaces marker data (pre-contact) 15Ma and 15Mb with marker data (post-contact) 15Ma' and 15Mb', respectively, and likewise replaces replacement image data (pre-contact) 15Ga and 15Gb with the corresponding post-contact replacement image data 15Ga' and 15Gb' for compositing.
Fig. 3 is a block diagram of the electronic circuit configuration of the digital camera 10.
The digital camera 10 has a CPU 16 serving as a computer.
The CPU 16 controls the operation of each circuit component according to a system program stored in advance in the memory 17, a camera control program read into the memory 17 from an external recording medium such as a memory card 18 via a recording-medium reading unit 19 such as a card slot, or a camera control program read into the memory 17 from the server apparatus 30 on the communication network N via a transmission control unit 20.
The CPU 16 mainly includes processing units with the following functions:
an extraction processing unit 161 that, among a plurality of captured image data Gn successive in time, recognizes performer K performing a specific action as a changing object and treats it as replacement-target image data, and also recognizes and continuously extracts objects A and B and performer P as replacement-target image data by image analysis;
a first determination processing unit 162 that recognizes features or the shape of the image data and determines whether the replacement-target image data has been registered in advance;
a first marker processing unit 163 that, when the data is determined to be registered in advance, adds marker data to the replacement-target image data in the plurality of captured image data for display;
a second marker processing unit 164 that, when the data is determined to be registered in advance, determines whether the replacement-target image was captured in the optimal-viewing-angle state and adds different marker data for display according to the result;
a second determination processing unit 165 that determines, within the plurality of captured image data, the positional relation between extracted replacement-target image data and other extracted replacement-target image data (more specifically, whether one extracted replacement-target image data contacts or overlaps another);
a third marker processing unit 166 that, when contact or overlap is determined, substitutes different marker data for display;
a third determination processing unit 167 that determines whether the motion (action), across the plurality of captured image data Gn, of the replacement-target image data that has been extracted or marked is substantially the same as an action registered in advance;
a fourth marker processing unit 168 that, when the motion is determined to be a registered action, adds marker data for display;
a replacement processing unit 169 that, for the plurality of captured image data to which marker data has been added, replaces the replacement-target image data with the replacement image data corresponding to the marker data;
a video generation processing unit 170 that captures the changes of the replacement-target image data and generates video data in which the replacement image data changes three-dimensionally; and
a composite video generation processing unit 171 that generates composite video data containing the generated video data.
The processing programs of these processing units 161 to 171 are stored in advance in the image processing program 22 and loaded into the CPU 16 as needed. Monitoring, image recognition, capture, and three-dimensional rendering processes for image data are likewise stored in advance in the image processing program and loaded as appropriate.
Besides these processing programs, the program memory 22 stores a system program controlling the overall operation of the digital camera 10, a camera control program controlling shooting operations, a communication control program controlling communication with the server apparatus 30 and an external PC (Personal Computer) 40 on the communication network N, and a sound output processing program; these are loaded in response to key input signals from the input unit 12, imaging input signals from the imaging unit 21, and input signals from external devices (30, 40) via the transmission control unit 20.
Connected to the CPU 16, in addition to the LCD display unit 11, the input unit 12, the audio output unit 13, the memory 17, the recording-medium reading unit 19, and the transmission control unit 20, is an imaging unit 21 having a solid-state image sensor (CCD), an imaging optical system, a distance sensor, and an illuminance sensor.
The memory 17 includes a transformation data memory (changing motion data memory) 14, a special-effect data memory (special effect graphic data memory) 23, a destruction data memory (replaced graphic data memory) 15, a transformation character data memory (replaced graphic data memory) 24, a captured image data memory 25, a marked captured image data memory 26, a composite image data memory 27, and other working data memories.
In the digital camera 10, the face portion 24T of the replacement-target image data corresponding to performer P is stored in advance in the transformation character data memory 24 in association with replacement image data 24Gw.
Likewise, replacement-target image data 15T is stored in advance in the destruction data memory 15 in association with the respective replacement image data 15Ga.
Fig. 4 shows the contents of the data stored in advance in the transformation data memory 14.
In the transformation data memory 14, action data 14P, replacement image data 14Gu, marker data 14Mu, and sound data 14Su are stored in association with one another in storage areas 14a and 14b.
The storage area for marker data 14Mu stores marker data 14Mu1 and 14Mu2, simplified representations of the respective replacement image data 14Gu1 and 14Gu2 of a plurality of kinds.
The storage area for action data 14P stores action data 14P1a-14P1c and 14P2a-14P2c, each consisting of a series of movements (postures), used to read out replacement image data 14Gu1 and 14Gu2.
Replacement image data 14Gu1 and 14Gu2 are stored as 3D-image generation data expressing various movements, in association with movement data for a figure's limb skeleton and facial feature points.
Action data 14P1a-14P1c and 14P2a-14P2c store the movement data for the limb skeleton and facial feature points of a figure performing the action, together with image data.
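As a concrete picture of these associations, a plausible in-memory layout of the transformation data memory 14 is sketched below; the field names and string identifiers are illustrative inventions, not terms from the patent:

    # Sketch of the transformation data memory 14: each storage area associates
    # a series of action data (postures) with replacement image data, a
    # simplified marker, and sound data.
    transformation_data_memory = {
        "14a": {
            "action_data": ["14P1a", "14P1b", "14P1c"],  # series of postures
            "replacement": "14Gu1",   # 3D-image generation data
            "marker":      "14Mu1",   # simplified representation of 14Gu1
            "sound":       "14Su1",
        },
        "14b": {
            "action_data": ["14P2a", "14P2b", "14P2c"],
            "replacement": "14Gu2",
            "marker":      "14Mu2",
            "sound":       "14Su2",
        },
    }

    print(transformation_data_memory["14a"]["marker"])  # -> 14Mu1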
Fig. 5 shows the contents of the data stored in advance in the special-effect data memory 23.
In the special-effect data memory 23, action data 23P, replacement image data 23Gu, marker data 23Mu, and sound data 23Su are stored in association with one another in storage areas 23a and 23b.
The storage area for marker data 23Mu stores marker data 23Mu1 and 23Mu2, simplified representations of the replacement image data 23Gu1 and 23Gu2 of a plurality of kinds.
The storage area for action data 23P stores, for each of replacement image data 23Gu1 and 23Gu2, action data 23P1a-23P1c and 23P2a-23P2c, each consisting of a series of movements (postures), used for compositing at specified positions of replacement image data 14Gu1 and 14Gu2.
Replacement image data 23Gu1 and 23Gu2 are stored as image data itself or as 3D-image generation data.
Action data 23P1a-23P1c and 23P2a-23P2c store the movement data for the limb skeleton and facial feature points of a figure performing the action, together with image data.
Fig. 6 shows the contents of the data stored in advance in the destruction data memory 15.
The destruction data memory 15 stores replacement-target image data 15T, replacement image data 15G, marker data (pre-contact) 15M, marker data (post-contact) 15M', and sound data 15S in association with one another in storage areas 15a-15c; the storage area for replacement image data 15G stores, for example, replacement image data 15Ga1-15Ga3 for the pre-contact, at-contact, and post-contact states.
For the sound data 15S, sound data 15Sa is set to be output at contact; sound data 15Sb1 is set to be output pre-contact and sound data 15Sb2 at contact; and sound data 15Sc is set to be output post-contact.
In the storage areas for marker data (pre-contact) 15M and marker data (post-contact) 15M', for example in storage area 15a, marker data 15Ma, a simplified representation of replacement image data 15Ga1, and marker data 15Ma', a simplified representation of replacement image data 15Ga3, are stored. In the storage area for replacement-target image data 15T, for example in storage area 15a, the optimal-viewing-angle state of the replacement-target image data (for example, the image data of object A) that is the replacement target of replacement image data 15Ga1-15Ga3 is stored.
Replacement image data 15Ga1-15Ga3, 15Gb1-15Gb3, and 15Gc1-15Gc3 are stored as image data itself or as 3D-image generation data.
Replacement-target image data 15T is stored as image data of the captured object itself or as shape data representing the object's features.
Fig. 7 shows the contents stored in the transformation character data memory 24.
The transformation character data memory 24 stores, in storage areas 24a and 24b, replacement-target image data 24T, replacement image data (1) 24Gw, marker data (1) 24Mw, replacement image data (2) 24Gw', marker data (2) 24Mw', and sound data 24S, which the user can register arbitrarily, in association with one another.
The storage area for replacement image data (1) 24Gw stores, for example in storage area 24a, first-stage replacement image data 24Gw1 of a plurality of kinds.
The storage area for marker data (1) 24Mw stores, for example in storage area 24a, marker data 24Mw1, a simplified representation of replacement image data 24Gw1.
The storage area for replacement image data (2) 24Gw' stores, for example in storage area 24a, second-stage replacement image data 24Gw1' corresponding to the first-stage replacement image data 24Gw1.
The storage area for marker data (2) 24Mw' stores, for example in storage area 24a, marker data 24Mw1', a simplified representation of replacement image data 24Gw1'.
The storage area for replacement-target image data 24T registers the face image data of the performer who is the replacement target of replacement image data 24Gw1 and 24Gw2.
The storage area for sound data 24S stores, for example in storage area 24a, sound data 24Sw1 in association with replacement image data 24Gw1 and marker data 24Mw1, and sound data 24Sw1' in association with replacement image data 24Gw1' and marker data 24Mw1'.
When the marked captured image data Gm is displayed and when the composite image data GG is displayed, this sound data 24S is output from the audio output unit 13.
Replacement image data 24Gw and 24Gw' are stored as image data itself expressing various movements, or as a combination of movement data for the limb skeleton and facial feature points with 3D-image generation data.
Although not described in detail in this embodiment, the transition from displaying (compositing) the first-stage replacement image data 24Gw to the second-stage replacement image data 24Gw' can be set arbitrarily; for example, it can change according to the date or time of shooting, or according to the date or time at which the composite image data is played back.
For the recognition-target image data (the face image portion), face images of multiple expressions may be registered in advance so that, for example, replacement image data 24Gw is read and composited for a calm expression and replacement image data 24Gw' for an angry expression.
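The stage selection just described could be expressed as a small selector function. This sketch is hypothetical: the patent leaves the trigger (shooting date, playback date, or facial expression) open, so the conditions below are illustrative only:

    from datetime import datetime

    def select_replacement(stage1, stage2, expression=None, when=None):
        # Pick first- or second-stage replacement image data. The triggers
        # are assumptions: an "angry" expression or an evening capture or
        # playback time switches to the second stage.
        if expression == "angry":
            return stage2
        when = when or datetime.now()
        return stage2 if when.hour >= 18 else stage1

    print(select_replacement("24Gw1", "24Gw1'", expression="calm",
                             when=datetime(2008, 8, 28, 20, 0)))  # -> 24Gw1'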
The captured image data memory 25 stores the plurality of captured image data Gn captured successively in time by the imaging unit 21.
The marked captured image data memory 26 sequentially stores, for the plurality of captured image data Gn stored in the captured image data memory 25, the marked captured image data Gm (see Fig. 2A) generated in real time during shooting according to the image processing program.
The composite image data memory 27 stores the composite image data GG (see Fig. 2B) generated, according to the image processing program, from the marked captured image data Gm stored in the marked captured image data memory 26.
The digital camera 10 thus configured adds marker data to the plurality of captured image data Gn captured successively by the imaging unit 21, generates and displays the marked captured image data Gm (see Fig. 2A) in real time, and, after shooting, generates and displays the composite image data GG (see Fig. 2B) as video from the recorded marked captured image data Gm, or outputs the generated video to the outside of the digital camera 10.
As shown in Fig. 8, the server apparatus 30 on the communication network N may also be provided with the above functions of the digital camera 10.
The server apparatus 30 may further be given functions to communicate with a digital camera 110 and a PC 40 via the communication network N, providing a service in which the marked captured image data Gm and composite image data GG generated by the server apparatus 30 are delivered back to the digital camera 110 or the PC 40.
In this case, the server apparatus 30 has a CPU 31, that is, a computer.
The CPU 31 has processing units equivalent to the processing units 161 to 171 of the CPU 16 of the digital camera 10, and controls the operation of each circuit component according to a system program stored in advance in the memory 32 or a server control program read into the memory 32 from an external recording medium such as a CD-ROM 33 via a recording-medium reading unit 34 such as an optical disc drive.
Connected to the CPU 31, besides the memory 32 and the recording-medium reading unit 34, are a transmission control unit 35 for data transmission with the digital camera 110 and the PC 40 on the communication network N, an input unit 36 such as a keyboard and mouse, and an LCD display unit 37.
The program memory 32A of the memory 32 stores a system program controlling the overall operation of the server apparatus 30 and a communication control program controlling communication with the digital camera 110 and the PC 40, and stores in advance an image processing program that controls the various functions for generating and outputting (delivering) marked captured image data Gm and composite image data GG; these are generated and output (delivered), in the same way as in the digital camera 10, from the captured video image data Gn sent from the digital camera 110 or the PC 40.
The various programs stored in the program memory 32A are activated by input signals from the input unit 36 or by input signals from the digital camera 110 or the PC 40 received via the transmission control unit 35.
The memory 32 also includes data contents substantially the same as in the digital camera described above, namely a transformation data memory 32B, a special-effect data memory 32C, a destruction data memory 32D, a transformation character data memory 32E, a captured image data memory 32F, a marked captured image data memory 32G, a composite image data memory 32H, and other working data memories.
The server apparatus 30 can thus generate the same marked captured image data Gm and composite image data GG from a plurality of captured image data Gn sent from a digital camera 110 or PC 40 connected to the communication network N, and can provide a service of delivering them back to the digital camera 110 or PC 40 that sent the captured image data Gn.
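In outline, the server-side service is: receive frames, run the same marking and compositing pipeline as the camera, and deliver the results back to the sender. A minimal sketch with the pipeline stubbed out (all names invented):

    def mark(frames):
        # Steps SA-SE, stubbed: attach a marker tag to each frame.
        return [f + "+marker" for f in frames]

    def compose(marked_frames):
        # Steps SF-SH, stubbed: substitute replacement images, add background.
        return [m.replace("+marker", "+replacement") + "+BG"
                for m in marked_frames]

    def handle_request(sender_id, captured_frames):
        # Server apparatus 30, sketched: run the same composite image output
        # process as the camera, then deliver the results back to the sender
        # (digital camera 110 or PC 40).
        marked = mark(captured_frames)
        composite = compose(marked)
        return {"to": sender_id, "marked": marked, "composite": composite}

    print(handle_request("camera-110", ["G1", "G2"]))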
Next, the composite image output process performed by the CPU 16 of the digital camera 10 or the CPU 31 of the server apparatus 30 thus configured is described.
Fig. 9 is a flowchart of the overall composite image output process.
Fig. 10 is a subroutine flowchart of step SA in the flowchart, the extraction process, which extracts replacement-target image data from the captured image data Gn (G1-G9).
Fig. 11 is a subroutine flowchart of step SB in the flowchart, marker process A, which adds marker data 24Mw to the replacement-target image data extracted in step SA for display.
Fig. 12 is a subroutine flowchart of step SC in the flowchart, marker process B, which adds marker data 15M and 15M' to the replacement-target image data extracted in step SA for display and outputs sound data 15S.
Fig. 13 is a subroutine flowchart of step SD in the flowchart, marker process C, which, for the replacement-target image data extracted in step SA, detects the posture (action) across the plurality of captured image data Gn and, when this action is determined to be substantially the same as action data 14P registered in the transformation data memory 14, adds marker data 14Mu to the replacement-target image data for display and outputs the corresponding sound data 14Su.
Fig. 14 is a subroutine flowchart of step SE in the flowchart, marker process D, which, for the replacement-target image data marked in step SD, detects the posture (action) across the plurality of captured image data Gn and, when this action is determined to be the same as an action 23P registered in the special-effect data memory 23, further adds marker data 23Mu to the performer's image data for display and outputs the corresponding sound data 23Su.
Fig. 15 is a subroutine flowchart of step SF in the flowchart, composition process A, which replaces the replacement-target image data to which marker data 24Mw and 14Mu were added during the composite image output process with replacement image data 24Gw and 14Gu for compositing, and sets the output of sound data 24Sw and 14Su.
Fig. 16 is a subroutine flowchart of step SG in the flowchart, composition process B, which replaces the replacement-target image data to which marker data 15M and 15M' were added during the composite image output process with replacement image data 15G for compositing, and sets the output of sound data 15S.
Fig. 17 is a subroutine flowchart of step SH in the flowchart, composition process C, which additionally composites replacement image data 23Gu at the specified position of the replacement-target image data to which marker data 23Mu was added during the composite image output process, and sets the output of sound data 23Su.
Figs. 18-26 show, in order, the image-processing states of the captured image data Gn during the composite image output process; in each figure, A shows the captured image data Gn, B shows the marked captured image data Gm, and C shows the composite image data GG.
The processing of steps S1-S3 in Fig. 9 is triggered by acquisition of the captured image data Gn (G1-G9) shown in Figs. 18A-26A, and sequentially generates and outputs the marked captured image data Gm (Gm1-Gm9) shown in Figs. 18B-26B.
The processing of steps S4-S7 in Fig. 9 is then executed on the marked captured image data Gm generated as shown in Figs. 18B-26B, generating and outputting the composite image data GG (GG1-GG9) shown in Figs. 18C-26C.
For example, when the series of scenes shown in Figs. 18A-26A is captured by the imaging unit 21 of the digital camera 10, the plurality of captured image data Gn (G1-G9) are temporarily stored in turn in the captured image data memory 25 (step S1), and the subroutine of the extraction process shown in Fig. 10 is then executed (step SA).
Extraction process
The CPU 16 transfers the captured image data Gn, temporarily stored in turn in the captured image data memory 25, to a working memory in the CPU 16 (step SA1). The extraction processing unit 161 then recognizes the replacement-target image data 15T and 24T in the captured image data Gn (step SA2), appends the positional information of the recognized replacement-target image data within the captured image data Gn to the captured image data Gn, and stores it in the captured image data memory 25 (step SA3).
Next, the changed image portion between the previously captured image data Gn-1 and the current captured image data Gn is extracted as a block (step SA4), and the positional information of the image data containing this blocked changed portion is appended to the captured image data Gn and stored in the captured image data memory 25 (step SA5).
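Step SA4's block extraction of the changed portion can be pictured as frame differencing followed by a bounding box over the changed pixels. The sketch below uses plain lists of grayscale values and an arbitrary threshold, so it is an illustration rather than the patent's method:

    def changed_block(prev_frame, curr_frame, threshold=30):
        # Return the bounding box (top, left, bottom, right) of pixels whose
        # intensity changed between two equal-sized grayscale frames, or None.
        # Frames are lists of rows of ints; a real implementation would
        # operate on camera buffers.
        rows = [y for y, (pr, cr) in enumerate(zip(prev_frame, curr_frame))
                if any(abs(p - c) > threshold for p, c in zip(pr, cr))]
        cols = [x for x in range(len(curr_frame[0]))
                if any(abs(prev_frame[y][x] - curr_frame[y][x]) > threshold
                       for y in range(len(curr_frame)))]
        if not rows or not cols:
            return None
        return (min(rows), min(cols), max(rows), max(cols))

    prev = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
    curr = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
    print(changed_block(prev, curr))  # -> (1, 1, 1, 1)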
Marker process A
When replacement-target image data present in the captured image data Gn has been extracted by the extraction process (step SA), the first determination processing unit 162 and the first marker processing unit 163 then execute the subroutine of marker process A shown in Fig. 11 (step SB).
In marker process A, known face-image recognition processing is used to determine whether the replacement-target image data extracted by the extraction process has been stored as replacement-target image data 24T in the transformation character data memory 24 (step SB1).
If it is determined to be registered as replacement-target image data 24T (step SB1: Yes), the marker data 24Mw1 stored in the storage area 24a corresponding to that replacement-target image data 24T is read, added at the position of the face image of the corresponding extracted replacement-target image data for display, and temporarily stored in the working memory as the marked captured image data Gm1-Gm9 shown in Figs. 18B-26B (step SB2).
The marker data 24Mw1 may be displayed overlapping the face portion of the extracted replacement-target image data or near the face portion.
It is then determined whether the determination has finished for all replacement-target image data; if not (step SB3: No), the process returns to step SB1, and if so (step SB3: Yes), the process moves to step SB4.
If it is then determined that the digital camera 10 is currently continuing to shoot (step SB4: Yes), the sound data 24Sw1 stored correspondingly in the transformation character data memory 24 is read and output from the audio output unit 13 (step SB5).
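Schematically, marker process A is a registry lookup plus marker placement. In this hedged Python sketch, string identifiers stand in for real face-recognition features, and all names are invented:

    # Transformation character data memory 24, stubbed:
    # registered face -> marker and sound data.
    character_memory = {"face_P": {"marker": "24Mw1", "sound": "24Sw1"}}

    def marker_process_a(extracted_faces, still_shooting=True):
        # For each extracted face that is registered (step SB1), attach the
        # corresponding marker at its position (step SB2); if shooting
        # continues, report the sound data to output (steps SB4-SB5).
        markers, sounds = [], []
        for face_id, position in extracted_faces:
            entry = character_memory.get(face_id)
            if entry:                      # step SB1: Yes
                markers.append((entry["marker"], position))
                if still_shooting:         # step SB4: Yes
                    sounds.append(entry["sound"])
        return markers, sounds

    print(marker_process_a([("face_P", (120, 80)), ("face_X", (10, 10))]))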
The subroutine of marker process B shown in Fig. 12 is then executed (step SC).
Marker process B
In marker process B, the first determination processing unit 162 determines whether the replacement-target image data extracted by the extraction process is registered as replacement-target image data 15T in the destruction data memory 15 (step SC1).
If it is determined in step SC1 to be registered as replacement-target image data 15T (step SC1: Yes), the second marker processing unit 164 determines whether the extracted replacement-target image data was captured in the optimal-viewing-angle state (step SC2). If it was (step SC2: Yes), the marker data (pre-contact) 15M stored in association with that replacement-target image data 15T is read, added at the position of the corresponding extracted replacement-target image data, and displayed (step SC3).
For example, the replacement-target image data extracted for objects A and B shown in Figs. 18A and 19A are in substantially the same state as the optimal-viewing-angle replacement-target image data 15T registered in the destruction data memory 15.
Accordingly, marker data (pre-contact) 15Ma and 15Mb stored in association with these replacement-target image data 15T are read and, as shown, added for display at the positions of the extracted replacement-target image data in captured image data Gm1 and Gm2.
The marker data (pre-contact) 15Ma and 15Mb may be displayed overlapping the extracted replacement-target image data or near it.
If it is then determined that the digital camera 10 is currently continuing to shoot (step SC4: Yes), the sound data 15S stored in the destruction data memory 15 is read and output from the audio output unit 13 (step SC5).
Thereafter, the second determination processing unit 165 monitors the positional relation between the marker-added replacement-target image data and the other extracted replacement-target image data, based on the positional information appended to both (step SC6), and from the result determines whether the replacement-target image data contacts or overlaps the other extracted replacement-target image data (step SC7).
More specifically, this determination finds that, in Fig. 19A, the replacement-target image data corresponding to object A contacts or overlaps the replacement-target image data corresponding to performer P, that, in Fig. 20A, the replacement-target image data corresponding to object B contacts or overlaps the replacement-target image data corresponding to performer P, and so on.
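The contact/overlap determination of steps SC6-SC7 can be pictured as an axis-aligned bounding-box intersection test over the appended positional information; the patent does not specify the geometry, so the following is an assumed formulation:

    def boxes_touch(a, b):
        # True if two bounding boxes (top, left, bottom, right) contact or
        # overlap; touching edges count as contact, matching step SC7.
        a_top, a_left, a_bottom, a_right = a
        b_top, b_left, b_bottom, b_right = b
        return not (a_right < b_left or b_right < a_left or
                    a_bottom < b_top or b_bottom < a_top)

    object_a = (50, 50, 90, 90)       # a replacement target, e.g. object A
    performer_p = (90, 80, 160, 140)  # another target, e.g. performer P
    print(boxes_touch(object_a, performer_p))  # True: the edges touch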
When the positional information of the replacement-target image data is determined to contact or overlap the positional information of another extracted replacement-target image data (step SC7: Yes), it is determined whether that contacting or overlapping other extracted replacement-target image data is part of replacement-target image data to which marker data 24Mw was added in marker process A (step SC8).
For example, this determination checks whether marker data (1) 24Mw1 or the like has been added to the replacement-target image data corresponding to performer P in Fig. 19A.
When step SC8 determines that it is part of other extracted replacement-target image data to which marker data 24Mw was added (step SC8: Yes), the third marker processing unit 166 reads the marker data (post-contact) 15M', adds it in place of the marker data (pre-contact) 15M added in step SC3, temporarily stores it in the working data memory, and displays the marked captured image data Gm (step SC9).
Describing the processing of step SC9 with Fig. 19B: the marker data (post-contact) 15Ma' stored in association with the replacement-target image data 15T is read, added at the position of marker data (pre-contact) 15Ma in place of it, temporarily stored in the working data memory, and displayed on the display unit 11 (step SC9).
Then, if it is determined that the digital camera 10 is currently continuing to shoot (step SC10: Yes), the sound data 15S stored in association with the replacement-target image data 15T is read and output from the audio output unit 13 (step SC11).
If, in step SC2, the extracted replacement-target image data is determined not to have been captured in the optimal-viewing-angle state (step SC2: No), as with the replacement-target image data corresponding to object A shown in Figs. 20A-26A and to object B shown in Figs. 21A-26A, the marker data (post-contact) 15M' stored in association with the replacement-target image data 15T is read, added at the position of the corresponding extracted replacement-target image data, and displayed (step SC12).
Then, if it is determined that the digital camera 10 is currently continuing to shoot (step SC13: Yes), the sound data 15S stored in association with the replacement-target image data 15T is read and output from the audio output unit 13 (step SC14).
The subroutine of marker process C shown in Fig. 13 is then executed (step SD).
Marker process C
In marker process C, for replacement-target image data extracted by the extraction process that has changed across the plurality of captured image data Gn, the third determination processing unit 167 determines whether its motion (action) is substantially the same as the action data 14P stored in advance in the transformation data memory 14 (step SD1).
Step SD1 is described using Figs. 21-23.
As shown in Figs. 21A-23A, it is determined whether the change of the replacement-target image data corresponding to performer K is substantially the same as the action data 14P (14P1a-14P1c) stored in advance in area 14a of the transformation data memory 14.
When the motion is determined to be substantially the same (step SD1: Yes), the fourth marker processing unit 168 reads the marker data 14Mu1 stored in association with that action data 14P, adds it on the face-portion image of the extracted replacement-target image data for display, and temporarily stores it in the working data memory as the marked captured image data Gm6-Gm9 shown in Figs. 23B-26B (step SD2).
The marker data 14Mu1 may be displayed overlapping the face portion of the extracted replacement-target image data or near the face portion.
Then, if it is determined that the digital camera 10 is currently continuing to shoot (step SD3: Yes), the sound data 14Su stored in association with the action data 14P is read and output from the audio output unit 13 (step SD4).
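The "substantially the same" test of step SD1 implies comparing the tracked movement of the changed portion against the registered action data within some tolerance. The distance measure and threshold below are assumptions, not taken from the patent:

    def motion_distance(observed, registered):
        # Mean per-frame displacement between an observed feature-point track
        # and a registered action track, both lists of (x, y) positions.
        pairs = list(zip(observed, registered))
        return sum(((ox - rx) ** 2 + (oy - ry) ** 2) ** 0.5
                   for (ox, oy), (rx, ry) in pairs) / len(pairs)

    def matches_action(observed, action_tracks, tolerance=5.0):
        # Step SD1, sketched: the motion matches if each registered posture
        # track (e.g. 14P1a-14P1c) is reproduced within the tolerance.
        return all(motion_distance(observed, track) <= tolerance
                   for track in action_tracks)

    action_14P1 = [[(0, 0), (3, 1), (6, 2)]]        # one stored posture series
    observed_K = [(0, 1), (3, 2), (6, 2)]           # performer K, as tracked
    print(matches_action(observed_K, action_14P1))  # True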
The subroutine of marker process D shown in Fig. 14 is then executed (step SE).
Marker process D
In marker process D, for replacement-target image data to which marker data was added in marker process C and that has changed further across the plurality of captured image data Gn, the third determination processing unit 167 determines whether its motion (action) is substantially the same as the action data stored in advance in the special-effect data memory 23 (step SE1).
Step SE1 is described using Figs. 24-26.
As shown in Figs. 24B-26B, it is determined whether the change of the replacement-target image data to which marker data 14Mu was added is substantially the same as the action data 23P (23P1a-23P1c) stored in advance in area 23a of the special-effect data memory 23.
When the motion is determined to be substantially the same (step SE1: Yes), the fourth marker processing unit 168 reads the marker data 23Mu1 stored in association with that action data 23P, adds it on the face-portion image of the extracted replacement-target image data for display, and temporarily stores it in the working data memory as the marked captured image data Gm9 shown in Fig. 26B (step SE2).
Then, if it is determined that the digital camera 10 is currently continuing to shoot (step SE3: Yes), the sound data 23Su stored in association with the action data 23P is read and output from the audio output unit 13 (step SE4).
The process then moves to step S2.
The marker data 23Mu1 may be displayed overlapping the face portion of the extracted replacement-target image data or near the face portion.
The successive marked captured image data Gm1-Gm9 generated in the working data memory by the processing of steps SA-SE are stored in the marked captured image data memory 26 (step S2).
Then, when it is determined that the capture of the series of captured image data Gn (G1-G9) from the imaging unit 21 has finished (step S3: Yes), it is determined whether to proceed to the process of generating a composite image based on the series of marked captured image data Gm (Gm1-Gm9) generated and stored in the marked captured image data memory 26 (step S4).
When the CPU 16 detects, through input of an operation signal from the input unit 12, that generation of a composite image has been instructed, or detects the end of the acquisition of captured image data and of the marker processing, it determines that the process proceeds to composite image generation (step S4: Yes).
The marked captured image data Gm (Gm1-Gm9) are then read from the marked captured image data memory 26 into the working memory, and the process moves to composition process A in Fig. 15 (step SF).
Composition process A
The CPU 16 determines whether, among the marker-added replacement-target image data in the marked captured image data Gm (Gm1-Gm9) read from the marked captured image data memory 26 into the working memory, there is replacement-target image data to which marker data was added by marker processes A and C (step SF1).
Describing step SF1 with Figs. 18B-26B: it is determined whether the marked captured image data Gm contains replacement-target image data to which marker data 24Mw1 was added (the image data of performer P) or replacement-target image data to which marker data 14Mu1 was added (the image data of performer K).
When such replacement-target image data is determined to exist (step SF1: Yes), the video generation processing unit 170 captures the change of the replacement-target image data across the captured image data Gm (Gm1-Gm9), based on the positional information appended to the marker-added replacement-target image data (step SF2).
The replacement processing unit 169 then reads replacement image data (1) 24Gw1, corresponding to marker data 24Mw1, from the transformation character data memory 24 and replacement image data 14Gu1, corresponding to marker data 14Mu1, from the transformation data memory 14, and renders the replacement image data in the plurality of postures shown in Figs. 18C to 25C according to the captured change.
The composite video generation processing unit 171 then substitutes this replacement image data according to the positional information of the corresponding replacement-target image data, and generates composite image data GG (GG1-GG9) composited onto a background image BG prepared in advance (step SF3).
At this point, as indicated by arrow x in Fig. 23C, the replacement image data 14Gu1 in the same posture as performer K is rendered with enlargement processing applied.
The CPU 16 then reads the sound data 24Sw1 stored in association with replacement image data 24Gw1 and saves it in association with the composite images GG (GG1-GG9).
It also reads the sound data 14Su1 stored in association with replacement image data 14Gu1 and saves it in association with the composite images GG (GG6-GG9) (step SF4).
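Reduced to essentials, composition process A is per-frame layering: draw the background BG, then place each posed replacement image at the recorded position. A toy sketch, with dictionaries standing in for pixel buffers and all identifiers chosen for illustration:

    def compose_frame(background, placements):
        # Render one composite frame GG: start from the background BG and
        # paste each replacement image at the position recorded for the
        # replacement-target image data it substitutes (step SF3).
        frame = {"background": background, "layers": []}
        for image_id, position, scale in placements:
            # scale > 1.0 models the enlargement shown by arrow x in Fig. 23C
            frame["layers"].append({"image": image_id, "at": position,
                                    "scale": scale})
        return frame

    gg6 = compose_frame("BG", [("24Gw1", (120, 80), 1.0),
                               ("14Gu1", (40, 60), 1.6)])
    print(gg6)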
The subroutine of composition process B shown in Fig. 16 is then executed (step SG).
Composition process B
The CPU 16 determines whether, among the marker-added replacement-target image data in the marked captured image data Gm (Gm1-Gm9) read from the marked captured image data memory 26 into the working memory, there is captured image data to which marker data was added by marker process B (step SG1).
Describing step SG1 with Figs. 18B-26B: it is determined whether the marked captured image data Gm contains replacement-target image data to which marker data 15Ma1 was added (the image data of object A) or replacement-target image data to which marker data 15Mb1 was added (the image data of object B).
When such replacement-target image data is determined to exist (step SG1: Yes), the replacement image data corresponding to the marker data is read from the destruction data memory 15, and the composite video generation processing unit 171 substitutes it for the replacement-target image data according to the positional information of the corresponding replacement-target image data, additionally compositing it into the composite video data GG (GG1-GG9) generated in step SF3 (step SG2).
Describing step SG2 with Figs. 18B-26B: the replacement-target image data to which marker data (pre-contact) 15Mb1 was added (the image data of object B) is replaced with replacement image data 15Gb1 and additionally composited into the composite video data GG generated in step SF3.
The CPU 16 then reads the sound data 15Sb1 stored in association with replacement image data 15Ga1 and saves it in association with the composite images GG (GG1-GG9) (step SG3).
The subroutine of composition process C shown in Fig. 17 is then executed (step SH).
Composition process C
The CPU 16 determines whether, among the marker-added replacement-target image data in the marked captured image data Gm (Gm1-Gm9) read from the marked captured image data memory 26 into the working memory, there is captured image data to which marker data was added by marker process D (step SH1).
Describing step SH1 with Fig. 26B: it is determined whether the marked captured image data Gm contains replacement-target image data to which marker data 23Mu1 was added.
When such replacement-target image data is determined to exist (step SH1: Yes), the replacement image data corresponding to the marker data is read from the special-effect data memory 23, and the composite video generation processing unit 171 substitutes it according to the positional information of the corresponding marker data, additionally compositing it into the composite video data GG (GG1-GG9) generated in step SG2 (step SH2).
Further, positional information is acquired for the additionally composited replacement image data, and the positional relation between the edge portion of this replacement image data and the other replacement image data is determined; when the edge portion of the replacement image data contacts or overlaps other replacement image data, image data is additionally composited at the contact or overlap position (step SH3).
Describing steps SH2 and SH3 with Fig. 26C: replacement image data 23Gu1 is additionally composited at the position where marker data 23Mu1 was added, and image data 23Gu1' is additionally composited at the position where this replacement image data 23Gu1 contacts replacement image data 24Gw1.
The CPU 16 then reads the sound data 23Su1 stored in association with replacement image data 23Gu1 and saves it in association with the composite images GG (GG1-GG9) (step SH4).
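Steps SH2 and SH3 amount to appending the special-effect image and then adding a contact-point image wherever its edge meets another replacement image. Reusing a bounding-box test, one hedged way to express it:

    def append_special_effect(frame_layers, effect, contact_effect):
        # Steps SH2-SH3, sketched: add the special-effect replacement image
        # (e.g. 23Gu1), then, for each existing layer whose box touches it,
        # add a contact-point image (e.g. 23Gu1') at the overlap.
        def touch(a, b):
            return not (a[3] < b[1] or b[3] < a[1] or
                        a[2] < b[0] or b[2] < a[0])
        layers = frame_layers + [effect]
        for layer in frame_layers:
            if touch(layer["box"], effect["box"]):
                layers.append({"image": contact_effect, "box": effect["box"],
                               "note": "contact with " + layer["image"]})
        return layers

    layers = [{"image": "24Gw1", "box": (40, 40, 120, 120)}]
    effect = {"image": "23Gu1", "box": (100, 100, 180, 180)}
    print(append_special_effect(layers, effect, "23Gu1'"))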
Thereafter, as shown in Fig. 9, the composite images GG (GG1-GG9) composited by composition processes A-C are converted into composite video data with the added sound data and stored in the composite image data memory 27 (step S6).
This composite video data is then output to the display unit 11 for playback and display (step S7).
When the above processes are executed by the server apparatus 30, the composite video data read from the composite image data memory 32H is sent back to the sender of the captured image data Gn (G1-G9) received in step S1, that is, the digital camera 110 or the PC 40 (step S7).
Thus, for images captured as video, a composite video conforming to the performer's intent can be generated with ease.
The processes of the composite image output apparatus described in the embodiments can be stored and distributed, as programs executable by a computer, on external storage media 18 (33) such as memory cards (ROM cards, RAM cards, etc.), magnetic disks (floppy disks, hard disks, etc.), optical discs (CD-ROM, DVD, etc.), and semiconductor memories.
The program data realizing each method can also be transmitted in the form of program code over the communication network (Internet) N, and the program data can be acquired from a terminal (program server) connected to the communication network (Internet) N to realize the function of generating and outputting composite video data from the captured image data Gn.
The present invention is not limited to the embodiments; it can be variously modified at the implementation stage without departing from its spirit. The embodiments also encompass inventions at various stages, and various inventions can be extracted by suitable combinations of the disclosed constituent features. For example, even if some constituent features are deleted from all those disclosed in an embodiment, or some are combined in a different form, a configuration from which those constituent features are deleted or in which they are combined can be extracted as an invention, as long as it can solve the problem described in the problem-to-be-solved section and obtain the effects described in the effects section.

Claims (11)

1. A composite image generating apparatus, comprising:
a first storage unit (14) that stores an action designated as a compositing target in association with first composite image data;
an input unit (21, 35) that inputs a plurality of image data;
a first determination unit (167) that determines whether the motion of a portion that has changed across the plurality of image data input by the input unit is substantially the same as the action stored in the first storage unit; and
a first compositing unit (169, 171) that, when the first determination unit determines that the motion is substantially the same, reads the first composite image data stored in the first storage unit in association with the action and composites it into the image data containing the changed portion.
2. The composite image generating apparatus according to claim 1, further comprising:
a second storage unit (15, 24) that stores image data of an extraction target;
a second determination unit (162) that determines whether the image data input by the input unit contains the image data of the extraction target stored in the second storage unit; and
a second compositing unit (169, 171) that, when the second determination unit determines that the image data of the extraction target is contained, additionally composites predetermined image data into the image data composited by the first compositing unit, according to the positional relation between the image data of the extraction target and the image data containing the changed portion.
3. The composite image generating apparatus according to claim 1, wherein
the second storage unit also stores second composite image data in association with the image data of the extraction target,
and the apparatus further comprises:
a third compositing unit (169, 171) that, when the second determination unit determines that the image data of the extraction target is contained, reads the second composite image data stored in association with the image data of the extraction target and additionally composites it at the position of the image data of the extraction target.
4. The image synthesizing apparatus according to claim 1, characterized by further comprising:
a third storage unit (23) which stores an action of the image data having the changed portion in association with third composite image data;
a third judging unit (167) which judges whether the change of the image data having the changed portion among the plurality of image data input by the input unit is substantially identical to the action stored in the third storage unit; and
a fourth synthesizing unit (169, 171) which, when the third judging unit judges that the change is substantially identical, reads the third composite image data stored in the third storage unit in correspondence with the action, and appends it to the image data synthesized by the first synthesizing unit.
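Continuing the hypothetical sketch, the second matching stage of claim 4 amounts to running the action comparison once more against a separate table standing in for the third storage unit (23), and layering its composite data onto the frames already produced by the first synthesizing unit:

    def append_followup(synth_frames, pts, followup_db):
        # Fourth synthesizing unit (169, 171), sketched: a second action match
        # against a separate table (the third storage unit, 23), layered onto
        # frames already synthesized by the first synthesizing unit.
        for entry in followup_db.values():
            if is_substantially_identical(pts, entry["action"]):
                return [synthesize(f, entry["composite"], pts[-1])
                        for f in synth_frames]
        return synth_frames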
5. The image synthesizing apparatus according to claim 1, characterized by further comprising:
an animation data generating unit (170) which generates animation data obtained by changing the first composite image data in accordance with the change of the changed portion in the plurality of image data,
wherein the first synthesizing unit (171) generates synthetic animation data which includes the animation data generated by the animation data generating unit.
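For claim 5, the animation data generating unit (170) can be pictured as stepping through a multi-frame composite animation in time with the subject's motion, so that the inserted character moves as the changed portion moves; composite_frames below is an assumed list of image patches, not something the patent defines.

    def generate_animation(frames, composite_frames, centroids):
        # Animation data generating unit (170), sketched: cycle through the
        # composite animation frames while following the tracked centroids
        # (assumes centroids is non-empty).
        out = []
        for i, frame in enumerate(frames[1:]):
            patch = composite_frames[i % len(composite_frames)]
            center = centroids[min(i, len(centroids) - 1)]
            out.append(synthesize(frame, patch, center))
        return out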
6. The image synthesizing apparatus according to claim 1, characterized in that
the input unit includes an image pickup unit.
7. The image synthesizing apparatus according to claim 1, characterized in that
the input unit includes a communication unit which inputs the plurality of image data from outside the apparatus by communication.
8. The image synthesizing apparatus according to claim 1, characterized by further comprising:
a display unit which displays the image data synthesized by the first synthesizing unit.
9. The image synthesizing apparatus according to claim 1, characterized in that
the first composite image data is a 3D image.
10. An image synthesizing method, characterized by comprising:
an input step (step S1) of inputting a plurality of image data;
a judging step (step SD) of judging whether a change of a changed portion in the plurality of image data input in the input step is substantially identical to an action set in correspondence with composite image data; and
a synthesizing step (step SF) of, when the judging step judges that the change is substantially identical, synthesizing the composite image data set in correspondence with the action into the image data having the changed portion.
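The method of claim 10 then maps onto a single driver whose branches are labelled with the claimed steps; this is the driver promised under claim 1, built from the same hypothetical pieces.

    def image_synthesizing_method(frames):
        # Step S1: input a plurality of image data (frames).
        pts = changed_centroids(frames)
        for entry in ACTION_DB.values():
            # Step SD: judge whether the change of the changed portion is
            # substantially identical to an action set for composite data.
            if is_substantially_identical(pts, entry["action"]):
                # Step SF: synthesize the composite image data set for that
                # action into the image data having the changed portion.
                return [synthesize(f, entry["composite"], pts[-1])
                        for f in frames]
        return frames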
11. A program product storing a program, characterized in that the program causes a computer to realize the functions of:
an input unit which inputs a plurality of image data;
a judging unit which judges whether a change of a changed portion in the plurality of image data input by the input unit is substantially identical to an action set in correspondence with composite image data; and
a synthesizing unit which, when the judging unit judges that the change is substantially identical, synthesizes the composite image data set in correspondence with the action into the image data having the changed portion.
CN2008102144825A 2007-08-29 2008-08-28 Composite image generating apparatus, composite image generating method Active CN101378463B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2007-222595 2007-08-29
JP2007222595 2007-08-29
JP2008-197627 2008-07-31
JP2008197627A JP4973622B2 (en) 2007-08-29 2008-07-31 Image composition apparatus and image composition processing program
JP2008197627 2008-07-31

Publications (2)

Publication Number Publication Date
CN101378463A (en) 2009-03-04
CN101378463B (en) 2010-12-29

Family

ID=40421769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102144825A Active CN101378463B (en) 2007-08-29 2008-08-28 Composite image generating apparatus, composite image generating method

Country Status (3)

Country Link
JP (1) JP4973622B2 (en)
KR (1) KR100981002B1 (en)
CN (1) CN101378463B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10972680B2 (en) * 2011-03-10 2021-04-06 Microsoft Technology Licensing, Llc Theme-based augmentation of photorepresentative view
US9721388B2 (en) 2011-04-20 2017-08-01 Nec Corporation Individual identification character display system, terminal device, individual identification character display method, and computer program
JP5777507B2 (en) 2011-12-27 2015-09-09 Canon Inc. Information processing apparatus, information processing method, and program thereof
JP2013250773A (en) * 2012-05-31 2013-12-12 Sega Corp Stage device and stage facility
KR102192704B1 (en) 2013-10-22 2020-12-17 LG Electronics Inc. Image outputting device
CN105814611B (en) * 2013-12-17 2020-08-18 Sony Corp. Information processing apparatus and method, and non-volatile computer-readable storage medium
CN110460893B (en) * 2018-05-08 2022-06-03 GREE, Inc. Moving image distribution system, method thereof, and recording medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR9906453A (en) * 1998-05-19 2000-09-19 Sony Computer Entertainment Inc Image processing device and method, and distribution medium.
JP3413128B2 (en) * 1999-06-11 2003-06-03 キヤノン株式会社 Mixed reality presentation method
JP4291963B2 (en) * 2000-04-13 2009-07-08 富士フイルム株式会社 Image processing method
JP2002157607A (en) * 2000-11-17 2002-05-31 Canon Inc System and method for image generation, and storage medium
WO2003100703A2 (en) 2002-05-28 2003-12-04 Casio Computer Co., Ltd. Composite image output apparatus and composite image delivery apparatus
JP4253567B2 (en) * 2003-03-28 2009-04-15 オリンパス株式会社 Data authoring processor
JP4161769B2 (en) 2003-03-31 2008-10-08 カシオ計算機株式会社 Image output device, image output method, image output processing program, image distribution server, and image distribution processing program

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102712328B (en) * 2009-11-12 2015-07-15 Mitsubishi Electric Corp. Screen image information delivery display system and screen image information delivery display method
CN104125450A (en) * 2013-04-26 2014-10-29 Sony Computer Entertainment Inc. Image pickup apparatus, information processing system and image data processing method
CN110140152A (en) * 2017-10-20 2019-08-16 Mitsubishi Electric Corp. Data processing device, programmable display and data processing method
CN110140152B (en) * 2017-10-20 2020-10-30 Mitsubishi Electric Corp. Data processing device, programmable display and data processing method

Also Published As

Publication number Publication date
KR20090023242A (en) 2009-03-04
CN101378463B (en) 2010-12-29
KR100981002B1 (en) 2010-09-07
JP2009076060A (en) 2009-04-09
JP4973622B2 (en) 2012-07-11

Similar Documents

Publication Publication Date Title
CN101378463B (en) Composite image generating apparatus, composite image generating method
US7796155B1 (en) Method and apparatus for real-time group interactive augmented-reality area monitoring, suitable for enhancing the enjoyment of entertainment events
KR20060095780A (en) Composite image output apparatus, composite image output method, and recording medium
CN102193772B (en) Information processor and information processing method
EP1717725A2 (en) Key generating method and key generating apparatus
TWI591575B (en) Method and system for enhancing captured data
EP1473731A3 (en) Reproducing apparatus
KR20190076360A (en) Electronic device and method for displaying object for augmented reality
KR20170125618A (en) Method for generating content to be displayed at virtual area via augmented reality platform and electronic device supporting the same
KR20060066597A (en) Information processing apparatus and information processing method
US8189864B2 (en) Composite image generating apparatus, composite image generating method, and storage medium
CN101171625B (en) Data recording device and data file transmission method in the data recording device
KR20180000022A (en) Virtual office implementation method
JP4962219B2 (en) Composite image output apparatus and composite image output processing program
WO2002067067A3 (en) Combined eye tracking information in an augmented reality system
KR101908068B1 (en) System for Authoring and Playing 360° VR Contents
WO2021230181A1 (en) Information processing method, information processing device, program, and information processing system
JP4983494B2 (en) Composite image output apparatus and composite image output processing program
KR20180000024A (en) Virtual office system
JP2009059014A (en) Composite image output device and composite image output processing program
KR100746651B1 (en) An instantly assigned mapping method for an index sticker having digital code and optical learning player
WO2024136041A1 (en) Device and method for video production based on user movement records
KR101883680B1 (en) Mpethod and Apparatus for Authoring and Playing Contents
JP2007006313A (en) Moving picture imaging apparatus and file storage method
CN115315960A (en) Content correction device, content distribution server, content correction method, and recording medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160928

Address after: 100085 Beijing city Haidian District Qinghe Street No. 68 Huarun colorful city shopping center two floor 9 room 01

Patentee after: BEIJING XIAOMI MOBILE SOFTWARE Co.,Ltd.

Address before: Tokyo, Japan

Patentee before: CASIO Computer Co., Ltd.