CN107067457A - Image generation system and image processing method - Google Patents

Image generation system and image processing method

Info

Publication number
CN107067457A
CN107067457A (application CN201710064323.0A)
Authority
CN
China
Prior art keywords
image
subject
role
information
color information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710064323.0A
Other languages
Chinese (zh)
Other versions
CN107067457B (en)
Inventor
江头规雄
长嶋谦一
永谷真之
高屋敷哲雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bandai Namco Entertainment Inc
Original Assignee
Bandai Namco Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bandai Namco Entertainment Inc
Publication of CN107067457A
Application granted
Publication of CN107067457B
Legal status: Active (current); anticipated expiration listed

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T15/00 3D [Three Dimensional] image rendering
                    • G06T15/02 Non-photorealistic rendering
                    • G06T15/08 Volume rendering
                    • G06T15/10 Geometric effects
                        • G06T15/20 Perspective computation
                        • G06T15/30 Clipping
                    • G06T15/50 Lighting effects
                        • G06T15/80 Shading
                • G06T5/70
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20212 Image combination
                            • G06T2207/20221 Image fusion; Image merging
    • A HUMAN NECESSITIES
        • A63 SPORTS; GAMES; AMUSEMENTS
            • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
                • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
                    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
                        • A63F13/65 Generating or modifying game content automatically by game devices or servers from real world data, e.g. measurement in live racing competition
                            • A63F13/655 Generating or modifying game content automatically from real world data by importing photos, e.g. of the player

Abstract

Provided are an image generation system and an image processing method capable of generating an image of a character into which a part image of a subject is composited and in which the subject's skin color information is also reflected. The image generation system includes: an input processing section that acquires a captured image of the subject; an extraction processing section that performs extraction processing of color information; and an image generation section that generates a composite image in which a part image of a designated part of the subject, included in the captured image of the subject, is composited into the image of a character. The extraction processing section performs extraction processing of the subject's skin color information based on the part image of the designated part of the subject. The image generation section sets, based on the extracted skin color information, the color information of the parts of the character other than the designated part, and generates the image of the character.

Description

Image generation system and image processing method
Technical field
The present invention relates to an image generation system, an image processing method, an information storage medium, and the like.
Background art
Image generation systems are conventionally known in which a subject such as a player is photographed and the captured image is reflected in a game character or the like. As prior art for this kind of image generation system, there is, for example, the technique disclosed in Patent Document 1. In this prior art, a character is formed by combining a live-action image from an image pickup section with a drawing image from a drawing storage section, and the formed character is displayed on the screen of a display section.
Citation
Patent document
Patent Document 1: Japanese Unexamined Patent Publication No. 7-192143.
Summary of the invention
Problem to be solved by the invention
According to this prior art, a character composited with the captured image of the player can be displayed on the screen for play. However, while the player's face image is used for the face of the character, drawing images, which are CG images, are used for the parts other than the face. It has therefore been impossible to give the player the sensation that the character is an alter ego of himself or herself.
According to several aspects of the present invention, it is possible to provide an image generation system, an image processing method, an information storage medium, and the like capable of generating an image of a character into which a part image of the subject is composited and in which the subject's skin color information is also reflected.
Means for solving the problem
One aspect of the present invention relates to an image generation system including: an input processing section that acquires a captured image of a subject; an extraction processing section that performs extraction processing of color information; and an image generation section that generates a composite image in which a part image of a designated part of the subject, included in the captured image of the subject, is composited into an image of a character. The extraction processing section performs extraction processing of the subject's skin color information based on the part image of the designated part of the subject, and the image generation section sets, based on the extracted skin color information, the color information of the parts of the character other than the designated part, and generates the image of the character. The present invention also relates to a program that causes a computer to function as each of the above sections, or a computer-readable information storage medium storing the program.
According to this aspect of the invention, the part image of the designated part of the subject included in the captured image of the subject is composited into the image of the character. At this time, skin color information is extracted from the part image of the subject, and based on the extracted skin color information, the color information of the parts of the character other than the designated part is set to generate the image of the character. In this way, the skin color information obtained from the part image of the designated part of the subject can be reflected as the color information of the character's parts other than the designated part. It is therefore possible to generate an image of a character into which the part image of the subject is composited and in which the subject's skin color information is also reflected.
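The claimed flow can be sketched in a few lines of Python. This is an illustrative reading, not the patent's implementation: the pixel format, part names, and the simple averaging rule are all assumptions.

```python
# Sketch of the claimed flow: estimate a skin tone from the designated
# part image, keep the photo for that part, and recolor the rest of the
# character with the extracted tone. All names here are assumptions.

def average_color(pixels):
    """Mean RGB of a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def generate_character(face_pixels, part_names, designated="face"):
    """Return a per-part color map: the designated part keeps the
    captured photo, every other part receives the extracted skin tone."""
    skin = average_color(face_pixels)
    return {name: ("photo" if name == designated else skin)
            for name in part_names}
```

A usage example: with a two-pixel face image, every non-face part of the character receives the averaged tone while the face itself stays marked for photo compositing.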
In this aspect of the invention, the part image may be a face image of the subject; the extraction processing section may perform the extraction processing of the skin color information based on the face image, and the image generation section may set the color information of the parts of the character other than the face based on the skin color information extracted from the face image.
In this way, the face image of the subject can be composited into the character, and the skin color information obtained from the face image of the subject can be reflected as the color information of the parts of the character other than the face.
In this aspect of the invention, the character may be a model object made up of a plurality of objects, and the image generation section may set, based on the skin color information of the subject, the color information of those objects of the model object that correspond to parts other than the designated part, and generate the image of the model object by performing perspective transformation processing on the model object.
In this way, the skin color information extracted from the part image of the designated part of the subject is set as the color information of the objects, among the objects making up the model object, that correspond to parts other than the designated part, and the image of the character as a model object can be generated by performing the perspective transformation processing.
In this aspect of the invention, the image generation section may perform correction processing on the boundary region between the image of the character and the part image.
In this way, an image that would otherwise become unnatural in the boundary region between the image of the character and the part image can be corrected by the correction processing.
In this aspect of the invention, the image generation section may perform, as the correction processing, translucent synthesis processing of the color information of the image of the character and the color information of the part image in the boundary region.
In this way, the image in the boundary region between the image of the character and the part image can be corrected into an image in which the color information of the character's image and the color information of the part image are translucently combined.
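One concrete reading of this translucent synthesis is a per-pixel alpha blend whose weight ramps from the character side to the photo side of the boundary region. The linear ramp and helper names below are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the boundary-region correction: alpha-blend the
# character color and the part-image color, with the photo weight ramping
# linearly across one row of the boundary region.

def translucent_blend(char_rgb, photo_rgb, alpha):
    """Blend character color and part-image color; alpha is the photo weight."""
    return tuple(round((1 - alpha) * c + alpha * p)
                 for c, p in zip(char_rgb, photo_rgb))

def correct_boundary(char_row, photo_row):
    """Blend one row of boundary pixels, ramping alpha from 0 (character
    side) to 1 (photo side)."""
    n = len(char_row)
    denom = max(n - 1, 1)  # guard against a one-pixel row
    return [translucent_blend(c, p, i / denom)
            for i, (c, p) in enumerate(zip(char_row, photo_row))]
```

With this ramp the leftmost boundary pixel keeps the character color, the rightmost keeps the photo color, and intermediate pixels mix the two, which hides a hard seam.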
In this aspect of the invention, the image generation section may perform decoration processing on the boundary region as the correction processing.
In this way, the unnaturalness of the boundary region between the image of the character and the part image can be made inconspicuous by the decoration processing.
In this aspect of the invention, the extraction processing section may extract, from the part image of the subject, pixels whose colors match the skin color, and obtain the skin color information of the subject based on the color information of the extracted pixels.
In this way, the pixels suitable for skin color extraction are selected from among the pixels of the part image of the subject, and the skin color information can be extracted from them.
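The pixel selection and averaging described above might look like the following sketch. The RGB threshold test is a placeholder assumption (real systems often classify skin in HSV or YCbCr space), and the fallback tone is invented for illustration.

```python
# Hypothetical skin-color extraction: keep only pixels that pass a crude
# skin test, then average them. Thresholds and fallback are assumptions.

def is_skin(rgb):
    """Very rough skin test in RGB space (illustrative thresholds only)."""
    r, g, b = rgb
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def extract_skin_color(part_pixels, fallback=(205, 160, 140)):
    """Average the color of the pixels that pass the skin test; fall back
    to an assumed default tone if none match (e.g. the face is occluded)."""
    skin = [p for p in part_pixels if is_skin(p)]
    if not skin:
        return fallback
    n = len(skin)
    return tuple(sum(p[i] for p in skin) // n for i in range(3))
```

Filtering before averaging matters here: without it, eyes, hair, or background pixels caught inside the part image would pull the estimated tone away from the actual skin color.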
This aspect of the invention may further include a subject information acquisition section that acquires, based on sensor information from a sensor, subject information by which the motion of the subject is specified (the computer may be caused to function as the subject information acquisition section), and the image generation section may generate an image of the character that moves in accordance with the motion of the subject specified by the subject information.
In this way, the motion of the subject is specified from the subject information obtained from the sensor information, and an image of a character that moves in accordance with that motion can be generated. Moreover, for such a character that moves in accordance with the subject's motion, the part image of the subject can be appropriately composited, and the subject's skin color information can also be reflected.
In this aspect of the invention, the image generation section may specify the designated part of the subject based on the subject information, cut out the part image from the captured image of the subject, and composite the cut-out part image into the image of the character, and the extraction processing section may perform the extraction processing of the subject's skin color information based on the part image of the designated part specified from the subject information.
In this way, the designated part of the subject is specified using the subject information by which the motion of the subject is specified, and the part image of the specified part can be composited into the image of the character, or the subject's skin color information can be extracted from that part image.
In this aspect of the invention, the subject information acquisition section may acquire skeleton information of the subject as the subject information based on the sensor information, and the extraction processing section may perform the extraction processing of the subject's skin color information based on the part image of the designated part specified from the skeleton information.
In this way, the designated part of the subject is specified using the skeleton information by which the motion of the subject is specified, and the subject's skin color information can be extracted from the part image of the specified part.
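As a hypothetical illustration of deriving the designated part from skeleton information, the crop rectangle for the face could be sized from tracked head and neck joints. The joint names and the scale factor are assumptions, not values from the patent.

```python
# Hypothetical face-region crop from skeleton data: center a square on
# the head joint, sized from the head-neck distance.

def face_crop_rect(joints, scale=1.6):
    """Return (left, top, right, bottom) of a square crop around the
    head joint; the scale factor is an illustrative assumption."""
    hx, hy = joints["head"]
    _, ny = joints["neck"]
    half = int(abs(ny - hy) * scale)
    return (hx - half, hy - half, hx + half, hy + half)
```

Driving the crop from skeleton joints rather than a fixed screen region means the face image stays correctly framed as the subject moves in front of the sensor.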
Another aspect of the present invention relates to an image processing method including: input processing of acquiring a captured image of a subject; extraction processing of extracting color information; and image generation processing of generating a composite image in which a part image of a designated part of the subject, included in the captured image of the subject, is composited into an image of a character. In the extraction processing, the skin color information of the subject is extracted based on the part image of the designated part of the subject, and in the image generation processing, the color information of the parts of the character other than the designated part is set based on the extracted skin color information to generate the image of the character.
Brief description of the drawings
Fig. 1 is a configuration example of the image generation system of the present embodiment.
Fig. 2 is an application example of the image generation system of the present embodiment to an arcade game device.
Figs. 3(A) and 3(B) are application examples of the image generation system of the present embodiment to a home-use game device and a personal computer.
Figs. 4(A) and 4(B) are examples of depth information and body index information.
Fig. 5 is an explanatory diagram of skeleton information.
Fig. 6 is an explanatory diagram of a method of compositing the image of the character with the face image of the player.
Fig. 7(A) is an example of a captured image of the player, and Fig. 7(B) is an example of a composite image of the character and the face image.
Fig. 8 is an example of a game image generated by the present embodiment.
Fig. 9 is an explanatory diagram of the method of the present embodiment.
Fig. 10 is an explanatory diagram of the method of the present embodiment.
Fig. 11 is a flowchart showing an example of the skin color information extraction processing.
Figs. 12(A) and 12(B) are examples of model object information and object information.
Fig. 13 is an explanatory diagram of a method of making the character move in accordance with the motion of the player.
Fig. 14 is an explanatory diagram of a method of making the character move using the skeleton information.
Fig. 15 is an explanatory diagram of the perspective transformation processing of the model object.
Fig. 16 is an explanatory diagram of a method of determining the position of the face image from the skeleton information.
Fig. 17 is an explanatory diagram of correction processing of the boundary region by translucent synthesis processing.
Fig. 18 is an explanatory diagram of a method of performing translucent synthesis processing when compositing the image of the character and the face image.
Fig. 19 is an explanatory diagram of correction processing of the boundary region by decoration processing.
Fig. 20 is a flowchart showing a detailed processing example of the present embodiment.
Description of reference symbols
PL: player (subject); CHP: character
IMF: face image (part image); IM: captured image
NC: chest; AR, AL: hands
LR, LL: legs; BD: boundary region
AC: display object; SC: skeleton
100: processing section; 102: input processing section
110: arithmetic processing section; 111: game processing section
112: object space setting section; 113: character processing section
114: subject information acquisition section; 115: extraction processing section
118: game result computation section; 119: virtual camera control section
120: image generation section; 130: sound generation section
132: print processing section; 140: output processing section
150: dispensing outlet; 152: coin insertion slot
160: operation section; 162: sensor
164: color sensor; 166: depth sensor
170: storage section; 172: object information storage section
178: drawing buffer; 180: information storage medium
190: display section; 192: sound output section
194: I/F section; 195: portable information storage medium
196: communication section; 198: printing section.
Embodiment
The present embodiment will be described below. Note that the present embodiment described below does not unduly limit the content of the present invention set forth in the claims. Moreover, not all of the configurations described in the present embodiment are necessarily indispensable constituent elements of the present invention.
1. Image generation system
Fig. 1 shows a configuration example of the image generation system (image generation device, game system, game device) of the present embodiment. The image generation system includes a processing section 100, an operation section 160, a sensor 162, a storage section 170, a display section 190, a sound output section 192, an I/F section 194, a communication section 196, and a printing section 198. The configuration of the image generation system of the present embodiment is not limited to that of Fig. 1; various modifications are possible, such as omitting some of its constituent elements (sections) or adding other constituent elements.
The processing section 100 performs various kinds of processing such as input processing, arithmetic processing, and output processing based on operation information from the operation section 160, sensor information from the sensor 162, programs, and the like.
Each process (each function) of the present embodiment performed by the sections of the processing section 100 can be implemented by a processor (including a hardware processor). For example, each process of the present embodiment can be implemented by a processor that operates based on information such as a program, and a memory that stores information such as the program. In the processor, the function of each section may be implemented by individual pieces of hardware, or the functions of the sections may be implemented by integrated hardware. For example, the processor includes hardware, and that hardware can include at least one of a circuit that processes digital signals and a circuit that processes analog signals. For example, the processor can be configured by one or more circuit devices (e.g., ICs) mounted on a circuit board, or by one or more circuit elements (e.g., resistors, capacitors). The processor may be, for example, a CPU (Central Processing Unit). However, the processor is not limited to a CPU; various processors such as a GPU (Graphics Processing Unit) or a DSP (Digital Signal Processor) can be used. The processor may also be a hardware circuit such as an ASIC. The processor may include an amplifier circuit, a filter circuit, or the like that processes analog signals. The memory (storage section 170) may be a semiconductor memory such as SRAM or DRAM, or may be a register. It may be a magnetic storage device such as a hard disk drive (HDD), or an optical storage device such as an optical disc device. For example, the memory stores computer-readable instructions, and the processes (functions) of the sections of the processing section 100 are implemented by the processor executing those instructions. The instructions here may be an instruction set constituting a program, or instructions that direct the hardware circuits of the processor to operate.
The processing section 100 includes an input processing section 102, an arithmetic processing section 110, and an output processing section 140. The input processing section 102 performs input processing of various kinds of information. For example, the input processing section 102 performs, as input processing, processing of receiving the operation information input by the player through the operation section 160, for example processing of acquiring the operation information detected by the operation section 160. The input processing section 102 also performs, as input processing, processing of acquiring sensor information from the sensor 162. The input processing section 102 also performs, as input processing, processing of reading information from the storage section 170, for example processing of reading from the storage section 170 the information designated by a read instruction. The input processing section 102 also performs, as input processing, processing of receiving information via the communication section 196, for example processing of receiving information via a network from a device external to the image generation system (another image generation system, a server system, or the like). The reception processing is, for example, processing of instructing the communication section 196 to receive information, or of acquiring the information received by the communication section 196 and writing it to the storage section 170.
The arithmetic processing section 110 performs various kinds of arithmetic processing. For example, the arithmetic processing section 110 performs arithmetic processing such as game processing, object space setting processing, character processing, game result computation processing, virtual camera control processing, image generation processing, and sound generation processing. The arithmetic processing section 110 includes a game processing section 111, an object space setting section 112, a character processing section 113, a subject information acquisition section 114, an extraction processing section 115, a game result computation section 118, a virtual camera control section 119, an image generation section 120, a sound generation section 130, and a print processing section 132.
The game processing is processing such as starting the game when a game start condition is satisfied, advancing the game, and ending the game when a game end condition is satisfied. The game processing is executed by the game processing section 111 (a program module for game processing).
The object space setting processing is processing of setting an object space in which a plurality of objects are arranged, and is executed by the object space setting section 112 (a program module for object space setting processing). For example, the object space setting section 112 arranges and sets, in the object space, the characters that appear in the game (people, robots, animals, monsters, airplanes, ships, fighter planes, tanks, battleships, cars, and the like) and various display objects representing maps (terrain), buildings, courses (roads), trees, walls, water surfaces, and the like (objects made up of primitive surfaces such as polygons, free-form surfaces, or subdivision surfaces). That is, the position and rotation angle (synonymous with orientation or direction) of an object in the world coordinate system are determined, and the object is arranged at that position (X, Y, Z) with that rotation angle (rotation angles about the X, Y, Z axes). Specifically, object information, which is information such as the position, rotation angle, moving speed, and moving direction of an object (part object), is stored in the object information storage section 172 of the storage section 170 in association with an object number. The object space setting section 112 performs, for example, processing of updating this object information every frame.
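The per-object record described above could be held in a structure like this minimal Python sketch; the field names mirror the text, while the container and default values are assumptions.

```python
# Hypothetical shape of the object information kept by the object
# information storage section, keyed by object number.
from dataclasses import dataclass

@dataclass
class ObjectInfo:
    position: tuple                    # (X, Y, Z) in the world coordinate system
    rotation: tuple                    # rotation angles about the X, Y, Z axes
    speed: float = 0.0                 # moving speed (assumed default)
    direction: tuple = (0.0, 0.0, 1.0) # moving direction (assumed default)

objects = {}  # object number -> ObjectInfo

def place(obj_no, pos, rot):
    """Arrange an object in the object space at the given position/rotation."""
    objects[obj_no] = ObjectInfo(position=pos, rotation=rot)
```

Keying the records by object number matches the text's description of storing object information "in association with an object number", and lets the per-frame update step rewrite just the entries that moved.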
The character processing (moving body computation processing) consists of various kinds of computation processing concerning the character (moving body), for example processing of moving the character (a display object appearing in the game) in the object space (a virtual three-dimensional space, a three-dimensional game space), and processing of making the character act. The character processing is executed by the character processing section 113 (a program module for character processing). For example, the character processing section 113 performs control processing of moving the character (model object) in the object space and making the character act (motion, animation) based on the operation information input by the player through the operation section 160, sensor information from the sensor 162, programs (movement and action algorithms), various kinds of data (motion data), and the like. Specifically, it performs simulation processing of sequentially obtaining, every frame (e.g., 1/60 second), the movement information (position, rotation angle, speed, or acceleration) and the action information (the positions and rotation angles of part objects) of the character. A frame is the unit of time in which the character's movement and action processing (simulation processing) and the image generation processing are performed.
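The per-frame simulation step can be illustrated as follows, under the simplifying assumption that only position and a constant velocity are tracked; a real implementation would also update rotation angles, acceleration, and part-object motion.

```python
# Minimal per-frame movement step for a character, assuming a fixed
# 1/60-second frame as in the text. Velocity is taken as constant here.
FRAME_DT = 1 / 60

def step(position, velocity, dt=FRAME_DT):
    """Advance a character's (X, Y, Z) position by one simulation frame."""
    return tuple(p + v * dt for p, v in zip(position, velocity))
```

Running this once per frame reproduces the "sequentially obtaining the movement information every frame" behavior the text describes, with everything beyond position deliberately left out.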
The game result computation processing is processing of computing the player's results in the game, for example processing of computing the points or score acquired by the player in the game, or of computing game results such as in-game currency, medals, or tickets. The game result computation processing is executed by the game result computation section 118 (a program module for game result computation processing).
The virtual camera control processing is processing of controlling a virtual camera (viewpoint, reference virtual camera) for generating an image as seen from a given (arbitrary) viewpoint in the object space. The virtual camera control processing is executed by the virtual camera control section 119 (a program module for virtual camera control processing). Specifically, the virtual camera control section 119 performs processing of controlling the position (X, Y, Z) or the rotation angles (rotation angles about the X, Y, Z axes) of the virtual camera (processing of controlling the viewpoint position, line-of-sight direction, or angle of view).
The image generation processing is processing of generating the image displayed on the display section 190 or the image printed by the printing section 198, and can include various kinds of image synthesis processing, image effect processing, and the like. The sound generation processing is processing of generating the sounds (game sounds) such as BGM, sound effects, or voices output by the sound output section 192, and can include various kinds of sound synthesis processing, sound effect processing, and the like. The image generation processing and the sound generation processing are executed by the image generation section 120 and the sound generation section 130 (program modules for image generation processing and sound generation processing).
For example, the image generation section 120 performs drawing processing based on the results of the various kinds of processing (game processing, simulation processing) performed in the processing section 100, thereby generating an image and outputting it to the display section 190. Specifically, geometric processing such as coordinate transformation (world coordinate transformation, camera coordinate transformation), clipping processing, perspective transformation, or light source processing is performed, and drawing data (position coordinates, texture coordinates, color data, normal vectors, α values, and the like of the vertices of primitive surfaces) is created based on the results of that processing. Then, based on this drawing data (primitive surface data), the object after perspective transformation (after geometric processing) (one or more primitive surfaces) is drawn to the drawing buffer 178 (a buffer, such as a frame buffer or work buffer, that can store image information in pixel units). An image as seen from the viewpoint (virtual camera) in the object space is thereby generated. The drawing processing performed in the image generation section 120 can be implemented by vertex shader processing, pixel shader processing, and the like.
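A deliberately minimal stand-in for the geometric stage just described: translate a world-space point into the camera's frame and apply the perspective divide. A camera with no rotation, looking down -Z, is assumed; real pipelines use full 4x4 matrices and proper clipping against a view frustum.

```python
# Sketch of world -> camera transform plus perspective divide for a
# single vertex. The fixed camera orientation is a simplifying assumption.

def perspective_project(world_pt, cam_pos, focal=1.0):
    """Project a world-space point through a camera at cam_pos looking
    down -Z; returns screen-plane (x, y), or None if behind the camera."""
    x, y, z = (w - c for w, c in zip(world_pt, cam_pos))
    if z >= 0:
        return None  # behind (or on) the camera plane: would be clipped
    return (focal * x / -z, focal * y / -z)
```

This captures the two operations the text names (coordinate transformation, then perspective transformation) for one vertex; repeating it over every vertex of every primitive surface yields the draw data written to the drawing buffer.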
The output processing section 140 performs output processing of various kinds of information. For example, the output processing section 140 performs, as output processing, processing of writing information to the storage section 170, for example processing of writing to the storage section 170 the information designated by a write instruction. The output processing section 140 also performs, as output processing, processing of outputting the information of the generated image to the display section 190, or of outputting the information of the generated sound to the sound output section 192. The output processing section 140 also performs, as output processing, processing of transmitting information via the communication section 196, for example processing of transmitting information via a network to a device external to the image generation system (another image generation system, a server system, or the like). The transmission processing is processing of instructing the communication section 196 to transmit information, or of indicating to the communication section 196 the information to be transmitted. The output processing section 140 also performs, as output processing, processing of transferring the image to be printed on a print medium to the printing section 198.
The operation section 160 (operation device) is a device with which the player (user) inputs operation information; its functions can be implemented by direction keys, operation buttons, an analog stick, a lever, various sensors (an angular velocity sensor, an acceleration sensor, and the like), a microphone, a touch-panel display, or the like.
The sensor 162 detects sensor information for acquiring subject information. The sensor 162 can include, for example, a color sensor 164 (color camera) and a depth sensor 166 (depth camera).
The storage section 170 (memory) serves as the work area of the processing section 100, the communication section 196, and the like; its functions can be implemented by RAM, an SSD, an HDD, or the like. The game program, and the game data necessary for executing the game program, are held in this storage section 170. The storage section 170 includes an object information storage section 172 and a drawing buffer 178.
The information storage medium 180 (a computer-readable medium) stores programs, data, and the like, and its function can be implemented by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (ROM, etc.), or the like. The processing section 100 performs the various processes of the present embodiment in accordance with a program (data) stored in the information storage medium 180. That is, the information storage medium 180 can store a program for causing a computer (a device including an operation section, a processing section, a storage section, and an output section) to function as each section of the present embodiment (a program for causing a computer to execute the processing of each section).
The display section 190 outputs the images generated in the present embodiment, and its function can be implemented by an LCD, an organic EL display, a CRT, an HMD, or the like. The sound output section 192 outputs the sounds generated in the present embodiment, and its function can be implemented by a speaker, headphones, or the like.
The I/F (interface) section 194 performs interface processing with a portable information storage medium 195, and its function can be implemented by an ASIC for I/F processing or the like. The portable information storage medium 195 is a storage medium with which the user saves various kinds of information, and is a storage device that retains that information even when no power is supplied. The portable information storage medium 195 can be implemented by an IC card (memory card), a USB memory, a magnetic card, or the like.
The communication section 196 communicates with external devices (other image generation systems, server systems, and the like) over a network, and its function can be implemented by hardware such as a communication ASIC or a communication processor, or by communication firmware, a communication program, or the like.
The printing section 198 prints an image on a print medium such as printing paper or sticker paper. The printing section 198 can be implemented by, for example, a print head and a transport mechanism for the print medium. Specifically, the print processing section 132 of the processing section 100 (a program module for print processing) performs a process of selecting a print target image and instructs the printing section 198 to print the selected print target image. A printed article on which the print target image is printed is thereby dispensed from a dispensing slot 152 of Fig. 2, described later. The print target image is, for example, a photographed image of the player's character in the game, or an image such as a commemorative photograph of the player character taken together with other characters.
A program (data) for causing a computer to function as each section of the present embodiment may be distributed from an information storage medium of a server system (host device) to the information storage medium 180 (or the storage section 170) via a network and the communication section 196. Use of such an information storage medium of a server system is also intended to be included within the scope of the present invention.
Fig. 2, Fig. 3(A), and Fig. 3(B) show examples of hardware devices to which the image generation system of the present embodiment is applied.
Fig. 2 is an example of an arcade game apparatus to which the image generation system of the present embodiment is applied. This arcade game apparatus includes the operation section 160 implemented by operation buttons and direction keys, the sensor 162 having the color sensor 164 and the depth sensor 166, the display section 190 implemented by an LCD, a CRT, or the like, the sound output section 192 implemented by a speaker, a coin slot 150, and a dispensing slot 152 for printed articles such as photographs. The player PL views the game image displayed on the display section 190 and performs various actions for playing the game. The actions of the player PL (motions of parts such as the hands and feet, movement of position) are detected by the sensor 162. Game processing based on the detection result is then performed, and a game image based on the game processing is displayed on the display section 190.
The hardware device to which the image generation system of the present embodiment is applied is not limited to the arcade game apparatus of Fig. 2; the system is also applicable to various hardware devices such as the consumer game apparatus shown in Fig. 3(A) and the personal computer (information processing device) shown in Fig. 3(B). In Fig. 3(A), the sensor 162 (sensor device) having the color sensor 164 and the depth sensor 166 is connected to the main unit of the consumer game apparatus, and the sensor 162 is placed, for example, near a television serving as the display section. The player's actions are detected by the sensor 162, the main unit of the consumer game apparatus executes various game processes, and the game image is displayed on the screen of the television. In Fig. 3(B), the sensor 162 is connected to the main unit of the personal computer; the player's actions are detected by the sensor 162, and a game image based on the result of the game processing is displayed on a liquid crystal display serving as the display section.
As shown in Fig. 1, the image generation system of the present embodiment (image generation device, game apparatus, game system) includes: the input processing section 102, which acquires a captured image of a subject; the extraction processing section 115, which performs extraction processing of color information; and the image generation section 120, which generates a composite image in which a part image of a designated part of the subject included in the captured image of the subject is composited into a character image.
The subject is, for example, the player PL shown in Fig. 2. The subject may be an animal other than a human, or an object other than an animal. The captured image is an image captured by, for example, the color sensor 164 (color camera) of the sensor 162 (a color image such as an RGB image). The color sensor 164 and the depth sensor 166 may be provided separately. For example, a part image of a designated part of the subject (for example, a face image) is cut out from the captured image taken by the color sensor 164, and a composite image in which the cut-out part image is composited into the image of the character is generated. For example, the part image of the subject is composited at the part of the character corresponding to the designated part.
The extraction processing section 115 performs extraction processing of skin color information of the subject (in a broad sense, representative color information) based on the part image of the designated part of the subject. For example, it performs extraction processing of extracting the skin color information of the subject from the color information of the pixels of the part image. Specifically, the skin color information is obtained from the color information of those pixels of the part image that match a skin color.
The image generation section 120 then sets, based on the extracted skin color information of the subject, color information of the parts of the character other than the designated part, and generates the image of the character. For example, the character has first to N-th parts as the parts other than the designated part (for example, the parts other than the face), and among the first to N-th parts, the i-th to j-th parts (parts such as the hands, feet, and chest) are parts for which a skin color can be set as the base color. In this case, the extracted skin color information of the subject is set as the base color information of these i-th to j-th parts.
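As an illustrative sketch of this part-by-part assignment (the part names and the dictionary layout below are hypothetical, not taken from the embodiment), the extracted skin tone could be applied as the base color of every skin part other than the composited designated part:

```python
def set_base_colors(parts, skin_rgb, designated="face"):
    """Assign the extracted skin tone as the base color of skin parts.

    parts: dict mapping part name -> {"skin": bool, "color": (r, g, b)}.
    The designated part keeps its stored color (it is replaced by the
    photographed part image anyway), and non-skin parts such as clothes
    also keep their own colors; only the i-th to j-th skin parts
    (hands, feet, chest, ...) receive the extracted tone.
    """
    return {
        name: skin_rgb if info["skin"] and name != designated else info["color"]
        for name, info in parts.items()
    }
```

Rendering would then use the returned per-part colors as the base color information when drawing the model object.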
In this way, the skin color information of the subject is reflected not only in the designated part corresponding to the part image composited into the character, but also in the parts other than the designated part (the i-th to j-th parts). It is therefore possible to generate an image of a character in which the part image of the subject is composited and in which the skin color information of the subject is also reflected.
In the present embodiment, the part image is, for example, a face image of the subject, and the extraction processing section 115 performs the extraction processing of the skin color information based on the face image. The image generation section 120 then sets the color information of the parts of the character other than the face based on the skin color information extracted from the face image, and generates the image of the character. A composite image in which the face image is composited into the generated image of the character is then generated.
For example, the face image serving as the part image of the face as the designated part is acquired by cutting out the face portion from the captured image in which the whole subject appears. The position of the face in the captured image can be determined based on subject information (skeleton information or the like) acquired by the subject information acquisition section 114, as described later. The extraction processing section 115 then extracts skin color information corresponding to the skin color of the subject's face from the face image obtained from the captured image of the subject. Based on the extracted skin color information, the color information (base color information) of parts other than the face of the character, such as the hands, feet, and chest, is set. The skin color information extracted from the subject's face can thereby be reflected also in parts other than the face, such as the hands, feet, and chest. Note that the part of the subject from which the skin color information is extracted is not limited to the face; the skin color information may also be extracted from parts other than the face (for example, the hands, feet, or chest). A modification is also possible in which color information other than skin color information is extracted from the part image of the designated part of the subject, and the extracted color information is set as the color information of the parts other than the designated part.
The character (subject character) is, for example, a model object composed of a plurality of objects, for example a three-dimensional model object composed of a plurality of three-dimensional objects (part objects). Each object is composed of, for example, a plurality of polygons (in a broad sense, primitive surfaces).
In this case, the image generation section 120 sets, based on the skin color information of the subject, the color information (base color information) of the objects of the parts other than the designated part among the plurality of objects of the model object, and generates the image of the model object serving as the character by performing rendering processing of the model object.
Specifically, the image generation section 120 performs rendering processing of the model object placed in an object space at a position corresponding to the position of the subject (player or the like), and generates the image of the character. For example, the position of the subject is determined based on the subject information acquired by the subject information acquisition section 114, and the model object serving as the character is placed in the object space at the position corresponding to the determined position of the subject. Rendering processing of the model object is then performed, and an image of the character as viewed from a virtual camera is generated. In this case, the image generation section 120 performs the rendering processing of the model object in accordance with a lighting model and generates the image of the character. In this way, an image shaded by the light source of the lighting model can be generated as the image of the character. A modification in which a two-dimensional image (animation image) is used as the image of the character is also possible.
The image generation section 120 also performs correction processing in a boundary region between the image of the character and the part image. For example, assume that a first part (for example, the face) corresponding to the designated part of the character and a second part (for example, the chest) for which the skin color information extracted from the part image of the subject is set are adjacent parts. In this case, the correction processing is performed in the boundary region between the first part and the second part. That is, the correction processing is performed in the boundary region between the part image (face image) composited at the position of the first part (face) and the image of the second part (chest) of the character (the image generated in accordance with the extracted skin color information). For example, the correction processing is performed based on the color information of the composited part image and the color information of the image of the second part of the character.
For example, the image generation section 120 performs translucent compositing processing (alpha blending) of the color information of the image of the character and the color information of the part image in the boundary region. That is, correction processing for harmonizing the color of the image of the character and the color of the part image is performed. For example, translucent compositing is performed in the boundary region such that the closer to the part image, the higher the blend rate of the color information of the part image, and the closer to the image of the character, the higher the blend rate of the color information of the image of the character. For example, the boundary region is the boundary region between the above-described first part (face) and second part (chest). In this case, the translucent compositing processing is performed in the boundary region such that the closer to the first part, the higher the blend rate of the color information of the part image, and the closer to the second part, the higher the blend rate of the color information of the image of the character.
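A minimal sketch of this boundary blend, assuming a linear blend-rate profile across the boundary region (the profile and the per-pixel formulation are illustrative; an actual implementation would run inside the renderer):

```python
def blend_boundary_pixel(part_rgb, char_rgb, t):
    """Translucent compositing (alpha blending) of one boundary pixel.

    t runs from 0.0 at the part-image (face) side to 1.0 at the character
    (chest) side, so the blend rate of the part image's color information
    falls off linearly while that of the character image rises.
    """
    return tuple(round(p * (1.0 - t) + c * t) for p, c in zip(part_rgb, char_rgb))
```

At t = 0.5 the two colors contribute equally, which is what hides the seam between the composited face image and the adjacent part rendered with the extracted skin tone.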
The image generation section 120 may also apply decoration processing to the boundary region as the correction processing. In this way, the boundary region between the image of the character and the part image can be harmonized by the decoration processing so that it becomes inconspicuous. As the decoration processing, for example, a decorative display object is placed in the boundary region, or various image effects are applied to the boundary region.
The extraction processing section 115 extracts pixels matching the skin color from the part image of the subject, and obtains the skin color information of the subject based on the color information of the extracted pixels. For example, pixels matching the skin color are extracted from the plurality of pixels constituting the part image. Specifically, the color information of the pixels of the designated part is converted from, for example, RGB values into HSV values (hue, saturation, value), and whether the HSV value of each pixel matches the skin color is determined. A specific arithmetic process, such as averaging the color information (for example, RGB values) of the pixels matching the skin color, is then performed, and the skin color information is obtained from the result. In this way, even when the designated part of the subject includes portions that are not skin-colored (for example, the eyes and hair), the influence of the color information of those portions can be removed and the skin color information can be extracted.
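The HSV test and averaging can be sketched with the standard library's `colorsys` module. The hue/saturation/value thresholds below are illustrative guesses, not values specified by the embodiment:

```python
import colorsys

def extract_skin_info(pixels, h_max=0.14, s_range=(0.15, 0.8), v_min=0.3):
    """Average the RGB values of pixels whose HSV falls in a skin range.

    pixels: iterable of (r, g, b) tuples with 0-255 components.
    Returns the averaged (r, g, b) skin color, or None if no pixel matched.
    """
    skin = []
    for r, g, b in pixels:
        # colorsys works on 0.0-1.0 components; h, s, v are each in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if h <= h_max and s_range[0] <= s <= s_range[1] and v >= v_min:
            skin.append((r, g, b))
    if not skin:
        return None  # e.g. the region contained only eyes/hair pixels
    n = len(skin)
    return tuple(sum(p[i] for p in skin) // n for i in range(3))
```

Dark pixels such as hair or pupils fail the value/saturation test and so do not pull the average away from the actual skin tone.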
The subject information acquisition section 114 (a program module for subject information acquisition processing) acquires, based on the sensor information from the sensor 162, subject information for determining the actions of the subject. The image generation section 120 then generates an image of a character that moves in accordance with the actions of the subject determined from the subject information (motions of parts, movement of position, and the like). For example, the subject information acquisition section 114 acquires skeleton information as the subject information (in a broad sense, motion information) based on the sensor information from the sensor 162. The actions of the subject are determined from the skeleton information, and the image generation section 120 generates an image of a character that moves in accordance with the actions of the subject. For example, motion reproduction based on the skeleton information causes the character to move, and the image of the character is generated by performing rendering processing of the character.
For example, the image generation section 120 determines the designated part of the subject based on the subject information, cuts out the part image of the designated part from the captured image of the subject, and composites the cut-out part image into the image of the character. For example, the part image of the designated part is obtained by cutting out a region of a given size from the captured image with reference to the determined position of the designated part, and the cut-out part image is composited into the image of the character. In this case, the extraction processing section 115 performs the extraction processing of the skin color information of the subject based on the part image of the designated part determined from the subject information. That is, the position of the designated part of the subject is determined based on the subject information, and the extraction processing section 115 extracts the skin color information of the subject from the part image, which is the image at the determined position of the designated part. The color information of the parts of the character other than the designated part is then set based on the extracted skin color information.
Specifically, the subject information acquisition section 114 acquires skeleton information of the subject as the subject information based on the sensor information. The extraction processing section 115 then performs the extraction processing of the skin color information of the subject based on the part image of the designated part determined from the skeleton information. For example, the position of the designated part of the subject is determined from the skeleton information of the subject detected by the sensor 162, the part image, which is the image at the determined position of the designated part, is cut out from the captured image of the subject, and the skin color information of the subject is extracted from the color information of the part image. In this way, the skeleton information of the subject detected by the sensor 162 can be used effectively to extract the skin color information of the subject.
2. Method of the present embodiment
Next, the method of the present embodiment will be described in detail. In the following, a case will be described in which the method of the present embodiment is applied to a game in which a character moves in accordance with the actions of a player, but the method of the present embodiment is not limited to this and can be applied to various games (RPGs, music games, fighting games, communication games, robot games, card games, sports games, action games, and the like).
2.1 Description of the game
First, an example of a game implemented by the image generation system of the present embodiment will be described. In the present embodiment, the actions of the player PL (in a broad sense, the subject) are detected by the sensor 162 of Fig. 2, game processing reflecting the detected actions is performed, and a game image is generated.
As shown in Fig. 2, the sensor 162 (image sensor) is installed so that, for example, its imaging direction (optical axis direction) faces the player PL (user, operator). For example, the imaging direction of the sensor 162 may be set at a depression angle with respect to the horizontal direction so that even a very small child is photographed in full. In Fig. 2, the sensor 162 is installed at the side of the display section 190, but the installation position is not limited to this, and the sensor can be installed at an arbitrary position (for example, below or above the display section 190).
The sensor 162 includes the color sensor 164 and the depth sensor 166. The color sensor 164 captures color images (RGB images) and can be implemented by a CMOS sensor, a CCD, or the like. The depth sensor 166 projects light such as infrared light and acquires depth information by detecting the reflection intensity of the projected light or the time it takes for the projected light to return. The depth sensor 166 can be composed of, for example, an infrared projector that projects infrared light and an infrared camera. The depth information is then acquired by, for example, the TOF (Time Of Flight) method. The depth information is information in which a depth value is set at each pixel position. In the depth information, the depth values of the player and the surrounding scenery are set as, for example, grayscale values.
The sensor 162 may be a sensor in which the color sensor 164 and the depth sensor 166 are provided separately, or may be a sensor in which the color sensor 164 and the depth sensor 166 are combined. The depth sensor 166 may also use a method other than the TOF method (for example, a structured light method). Various modifications of the method of acquiring the depth information are possible; for example, the depth information may be acquired using a distance measuring sensor based on ultrasonic waves or the like.
Fig. 4(A) and Fig. 4(B) are examples of the depth information and the body index information, respectively, acquired by the sensor 162. Fig. 4(A) schematically shows the depth information, which represents the distance from the sensor 162 to the subject such as the player. The body index information of Fig. 4(B) is information representing the region of a person. The position and shape of the player's body can be determined from the body index information.
Fig. 5 is an example of the skeleton information acquired based on the sensor information from the sensor 162. In Fig. 5, position information (three-dimensional coordinates) of the bones constituting the skeleton is acquired as the skeleton information, in the form of position information of joints (parts) C0 to C20. C0, C1, C2, C3, C4, C5, and C6 correspond respectively to the waist, spine, center of the shoulders, right shoulder, left shoulder, neck, and head. C7, C8, C9, C10, C11, and C12 correspond respectively to the right elbow, right wrist, right hand, left elbow, left wrist, and left hand. C13, C14, C15, C16, C17, C18, C19, and C20 correspond respectively to the right hip, left hip, right knee, right ankle, right foot, left knee, left ankle, and left foot. Each bone constituting the skeleton corresponds to a part of the player captured by the sensor 162. For example, the subject information acquisition section 114 determines the three-dimensional shape of the player based on the sensor information (color image information, depth information) from the sensor 162. Each part of the player is then estimated using the information of the three-dimensional shape, motion vectors (optical flow) of the image, and the like, and the joint position of each part is estimated. From the two-dimensional coordinates of the pixel position in the depth information corresponding to the determined joint position, together with the depth value set at that pixel position, the three-dimensional coordinate information of the joint positions of the skeleton is obtained, and the skeleton information shown in Fig. 5 is acquired. By using the acquired skeleton information, the actions of the player (motions of parts such as the hands and feet, movement of position) can be determined. The character can thereby be made to move in accordance with the actions of the player. For example, when the player PL of Fig. 2 moves a hand or foot, the hand or foot of the character (player character) displayed on the display section 190 moves in linkage with it. When the player PL moves forward, backward, left, or right, the character moves forward, backward, left, or right in the object space in linkage with it.
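The step that turns a detected joint into a three-dimensional coordinate can be sketched as follows, with the depth information modeled as a plain row-major 2D list of depth values (the data layout is an assumption; the joint indices follow the C0-C20 numbering above):

```python
HEAD, RIGHT_WRIST = 6, 8  # joint indices C6 and C8 from Fig. 5

def joint_to_3d(joint_px, depth_map):
    """Combine a joint's 2D pixel coordinates with the depth value set at
    that pixel position to obtain a three-dimensional (x, y, z) coordinate.
    """
    x, y = joint_px
    return (x, y, depth_map[y][x])
```

A full skeleton would be a list of 21 such coordinates, one per joint C0 to C20, recomputed every frame as the player moves.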
In the present embodiment, a composite image of the captured image of the player PL taken by the color sensor 164 of the sensor 162 and the character (player character) is generated. Specifically, a composite image of the face image cut out from the captured image (in a broad sense, a subject image or part image) and the image of the character is generated.
For example, in Fig. 6, a model object of a character CHP (clothes) is prepared, and the face image IMF of the player PL cut out from the captured image is displayed on a polygon SCI (billboard polygon, screen). The model object of the character CHP is composed of objects of a plurality of parts other than the face (head), and takes the form of a character wearing beautiful clothes.
In Fig. 6, the face image IMF of the polygon SCI is displayed on the extension of the line connecting the viewpoint position of the virtual camera VC and the position of the face of the character CHP. By setting the virtual camera VC, the character CHP, and the face image IMF of the polygon SCI in this positional relationship, a composite image in which the face image IMF of the player is composited at the face portion of the character CHP can be generated.
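The placement of the billboard polygon SCI can be sketched as choosing a point on the ray from the virtual camera VC through the face position; the extension factor of 1.5 below is an arbitrary illustrative value (the embodiment only requires the point to lie on the extension of that line, behind the face as seen from the camera):

```python
def billboard_position(camera_pos, face_pos, extension=1.5):
    """Point on the extension of the line from the virtual camera VC
    through the character's face, where the face-image polygon SCI
    is displayed so it appears at the face portion of the character.
    """
    return tuple(c + (f - c) * extension for c, f in zip(camera_pos, face_pos))
```

Because the polygon sits on the camera-to-face ray, the rendered face image projects exactly onto the face portion of the character regardless of the chosen extension.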
The image compositing technique for the face image IMF of the character CHP is not limited to the method shown in Fig. 6. For example, various image compositing methods can be adopted, such as a configuration in which the face image IMF (subject image, part image) is mapped onto the object constituting the face (designated part) of the character CHP.
Fig. 7(A) is an example of a captured image of the player PL taken by the color sensor 164 of the sensor 162. The face image IMF of the player PL is cut out (trimmed) from this captured image, and by compositing it with the image of the character CHP using the image compositing method of Fig. 6, the composite image shown in Fig. 7(B) can be generated. In Fig. 7(B), the face portion is the face image IMF of the player PL, and the portions other than the face are the CG image of the character CHP. The player PL can thereby play the game with the character CHP serving as his or her alter ego, dressed in the costume of a hero of the fairy-tale or anime world, which can enhance the player's interest in, and enthusiasm for, the game.
Fig. 8 is an example of a game image generated by the image generation system of the present embodiment and displayed on the display section 190 of Fig. 2. The character CHP, into which the face image IMF of the player PL is composited, holds a magic wand ST in its hand, for example. When the player PL of Fig. 2 moves a wrist, the motion of the wrist is detected by the sensor 162, and the motion of the bone of the wrist is acquired as the skeleton information of Fig. 5. The wrist of the character CHP in the game image shown in Fig. 8 thereby moves, and enemy characters EN1, EN2, and EN3 can be attacked by striking them with the magic wand ST. The game results of the player, such as the number of enemies defeated and the points acquired, are then computed.
According to the method for this present embodiment, player PL can make the role CHP that attends to anything else of oneself put on children's stories or dynamic The clothes of unrestrained hero, feel just like as the hero, play Fig. 8 game.Therefore, it is possible to realize the player of children etc. The interesting game that PL can be keen to.
2.2 Extraction and setting of skin color information
As shown in Figs. 6 to 8, in the present embodiment, a composite character image in which the face image IMF of the player is composited into the character CHP is generated, the character CHP moves in accordance with the actions of the player PL, and the player plays the game. In this composite character image (CHP, IMF), the face portion is replaced with the face image IMF of the player, so the player can play the game with the character CHP as a character corresponding to himself or herself.
However, in this composite character image, while the face portion is replaced with the face image IMF of the player, the portions other than the face are generated as so-called CG (computer graphics) images. There is therefore the following problem: the character cannot yet give the player the sense of truly being his or her alter ego.
For example, when a player with fairly light skin plays the game, the color of the face of the character CHP becomes that light skin color because the face image IMF of the player is composited, but the parts other than the face, such as the hands, feet, and chest, retain the colors of the CG image (rendered image) of the character CHP. The color of the face of the character CHP may therefore differ from the colors of the parts such as the hands, feet, and chest, resulting in an unnatural image. Similarly, when a player with somewhat dark skin plays the game, the color of the face of the character CHP becomes that darker skin color through the compositing of the player's face image IMF, but the parts other than the face, such as the hands, feet, and chest, retain the colors of the CG image of the character CHP. Again, the color of the face of the character CHP may differ from the colors of the parts such as the hands, feet, and chest, resulting in an unnatural image. Thus, with a method that merely composites the face image IMF at the face portion of the character CHP, the player character CHP cannot yet give the sense of truly being the player's alter ego, and an unnatural image may result because the tones of the skin colors do not match.
Therefore, the present embodiment adopts the following method: the part image of the player is composited into the character CHP, and the color information of the other parts of the character CHP is set in accordance with the skin color information extracted from the part image.
Specifically, as shown in Fig. 9, the face image IMF (in a broad sense, a part image) included in the captured image IM of the player PL is composited at the face portion (in a broad sense, a designated part) of the character CHP, and skin color information is extracted from the face image IMF. For example, the skin color information is extracted from the color information of the pixels of the skin-colored portions of the face image IMF. The color information of the parts other than the face is then set based on the extracted skin color information. For example, as shown in Fig. 9, the extracted skin color information is set as the color information of the parts (in a broad sense, the parts other than the designated part) such as the chest NC (neck, chest), hands AR and AL, and feet LR and LL. That is, rendering processing of the objects of the parts such as the chest NC, hands AR and AL, and feet LR and LL is performed using this skin color information as the base color information.
More specifically, as shown in Fig. 10, the face image IMF is cut out from the captured image IM, taken by the color sensor 164 of the sensor 162, in which the whole player PL appears. This can be realized by determining the position of the face of the player PL from the skeleton information of the player PL described with reference to Fig. 5, and cutting out (trimming), as the face image IMF, the image of a region of a specific size centered on the position of the face. The face image IMF cut out in this way is composited at the face portion of the character CHP, and the skin color information extracted from the face image IMF is set as the base color information of the parts such as the chest NC, hands AR and AL, and feet LR and LL.
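The trimming step above can be sketched as cutting a fixed-size square centered on the face position obtained from the skeleton information, with the image modeled as a row-major 2D list (the region size and data layout are illustrative assumptions):

```python
def trim_face(image, face_center, size):
    """Cut out a size x size region centered on the face position.

    image: 2D list (rows of pixels); face_center: (x, y) pixel position
    of the head joint from the skeleton information. The sketch assumes
    the region lies fully inside the image.
    """
    cx, cy = face_center
    half = size // 2
    return [row[cx - half:cx + half] for row in image[cy - half:cy + half]]
```

The returned region is the face image IMF; skin color extraction then operates on exactly this cut-out rather than on the whole captured image.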
According to the method for this present embodiment, such as when the player PL colour of skin is whiter, not only synthesize in role CHP Face image IMF the colour of skin, the colour of skin at other positions of chest, hand, pin etc. is also set to the whiter colour of skin.Similarly, for example When the player PL colour of skin is slightly black, not only synthesize in the role CHP face image IMF colour of skin, chest, hand, pin etc. other The colour of skin at position is also set to the slightly black colour of skin.Therefore, according to present embodiment, the not only position of face, the position outside it The player PL colour of skin can be reflected.It is this sensation of really attending to anything else of oneself thereby, it is possible to give player characters CHP.In addition, Due to the colour of skin at the position that can unify the colour of skin of face, the chest outside it, hand, pin etc., it can suppress to produce the colourity of the colour of skin The image for not being all reason and there is not harmony situation.
FIG. 11 is a flowchart showing an example of the extraction processing of the skin color information. First, the face image is cut out from the captured image of the player (step S1). For example, as shown in FIG. 10, the face image portion is cut out from the overall captured image of the player using the skeleton information of FIG. 5 and the like. Next, image processing such as resolution reduction processing or blur processing is applied to the cut-out face image (step S2). For example, the pixel count of the cut-out face image is reduced in order to lighten the processing load, and blur filter processing is applied to the face image in order to extract an approximate base color.
Next, the RGB values of the face image after the image processing are converted into HSV values, and pixels matching the skin color are extracted (step S3). For example, for the hue, saturation, and value of each pixel of the face image, it is judged whether each value falls within a specific skin-tone range of hue, saturation, and value, thereby determining whether the pixel matches the skin color, and the matching pixels are extracted. Then, from the color information (RGB values) of the extracted pixels, the skin color information serving as the base color of the skin is obtained (step S4). For example, when M pixels are extracted as pixels matching the skin color, the average of the color information (RGB values) of these M pixels is computed, and the computed average is taken as the skin color information of the face image. In this way, the portions of the player's face that are not skin-colored (for example, the eyes and hair) are excluded, and the average color of the skin of the player's face is extracted as the skin color information. The extracted skin color information is then set as the base color of the parts such as the chest NC, the hands AR and AL, and the legs LR and LL of FIG. 9 and FIG. 10, and rendering processing is performed on the character CHP, which is a model object, to generate the image of the character CHP. Thus, in the present embodiment, pixels matching a skin color condition are extracted from the face image IMF (part image) of the player (subject), and the skin color information of the player is obtained from the color information of the extracted pixels.
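Steps S3 and S4 above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patent's implementation; in particular, the HSV thresholds for "skin color" are assumed values, since the text only speaks of a particular range of hue, saturation, and value.

```python
import colorsys

# Hypothetical skin-tone thresholds (HSV in [0, 1]); illustrative assumptions.
H_RANGE = (0.0, 0.1)    # hue: reddish-orange band
S_RANGE = (0.15, 0.8)   # saturation
V_RANGE = (0.3, 1.0)    # value (brightness)

def in_range(x, lo_hi):
    lo, hi = lo_hi
    return lo <= x <= hi

def extract_skin_color(face_pixels):
    """face_pixels: list of (r, g, b) tuples in 0-255.
    Returns the average RGB of the pixels whose HSV falls inside the
    skin-tone range (steps S3/S4 of Fig. 11), or None if none match."""
    matched = []
    for r, g, b in face_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        if in_range(h, H_RANGE) and in_range(s, S_RANGE) and in_range(v, V_RANGE):
            matched.append((r, g, b))
    if not matched:
        return None
    n = len(matched)
    return tuple(sum(c[i] for c in matched) // n for i in range(3))

# Example: a tiny "face image" with skin pixels plus dark eye/hair pixels,
# which are rejected by the saturation threshold.
pixels = [(224, 172, 140)] * 6 + [(20, 20, 20)] * 2 + [(230, 180, 150)] * 2
skin = extract_skin_color(pixels)
```

The dark pixels fall outside the assumed HSV range, so only the skin-toned pixels contribute to the average, mirroring the exclusion of eyes and hair described above.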
FIG. 12 (A) is an example of the model object information of the character CHP. The model object information includes information on the position and the orientation of the model object in the object space. It also includes information on the plurality of objects OB1, OB2, OB3, ... constituting the model object.
FIG. 12 (B) is an example of the object information of the objects OB1, OB2, OB3, ... constituting the model object of the character CHP. The object information associates, with each of the objects OB1, OB2, OB3, ..., its base color and the polygon data constituting the object. In the present embodiment, the skin color information extracted by the method of FIG. 11 is set as the base color information of FIG. 12 (B). In other words, rendering processing is performed on each object of the model object with the extracted skin color information as its base color, thereby generating the image of the character CHP.
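The object information of FIG. 12 (A)/(B) can be pictured as a small data structure like the following Python sketch. The field and part names here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class PartObject:
    name: str                      # e.g. "chest", "hand_R" (hypothetical names)
    base_color: tuple              # RGB base color used at render time
    polygons: list = field(default_factory=list)  # polygon data of the object

@dataclass
class ModelObject:
    position: tuple                # position in object space (Fig. 12(A))
    direction: tuple               # orientation in object space
    parts: list = field(default_factory=list)     # objects OB1, OB2, OB3, ...

    def set_skin_base_color(self, skin_rgb, skin_parts):
        # Step of Fig. 12(B): write the extracted skin color into the
        # base color of every part other than the face.
        for p in self.parts:
            if p.name in skin_parts:
                p.base_color = skin_rgb

model = ModelObject((0, 0, 0), (0, 0, 1), [
    PartObject("face", (255, 255, 255)),
    PartObject("chest", (128, 128, 128)),
    PartObject("hand_R", (128, 128, 128)),
])
model.set_skin_base_color((225, 174, 142), {"chest", "hand_R"})
```

Only the non-face parts receive the extracted skin color; the face object keeps its own color, since the face image IMF is composited over it separately.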
In addition, in the present embodiment, the character CHP displayed on the display section 190 in FIG. 13 is a model object composed of a plurality of objects. For example, the image of the character CHP is generated from a three-dimensional CG object composed of primitive surfaces such as polygons. Moreover, as shown in FIG. 13, as the image of the character CHP, an image is generated and displayed in which the face image IMF of the player PL (subject) appears at the face position of the model object. Such a composite character image of the character CHP and the face image IMF can be generated, for example, by the image compositing method described with reference to FIG. 6.
Then, as shown in FIG. 13, when the player PL (subject) moves, a composite image is generated in which the compositing position of the face image IMF (part image) on the compositing target image changes in accordance with the motion of the player PL, and is displayed on the display section 190. That is, when the player PL moves to the right and the face moves to the right, the face image IMF displayed on the display section 190 also moves to the right accordingly. Similarly, when the player PL moves to the left and the face moves to the left, the face image IMF also moves to the left.
Specifically, in the present embodiment, the motion of the character CHP is controlled using the skeleton information described with reference to FIG. 5. For example, motion playback of the character CHP is performed using motion data based on the skeleton information, thereby causing the character CHP to move.
FIG. 14 is a diagram schematically illustrating the skeleton. The skeleton for moving the character CHP consists of three-dimensional bones arranged at positions set in the character CHP, and SK denotes the skeleton projected onto the screen SCS. As described with reference to FIG. 5, the skeleton is composed of a plurality of joints, and the position of each joint is represented by three-dimensional coordinates. The skeleton is linked to the motion of the player PL of FIG. 13, and its bones move. When a bone of the skeleton moves, the corresponding part of the character CHP moves in linkage with it. For example, when the player PL moves a wrist, the wrist bone of the skeleton moves in linkage, and the wrist of the character CHP also moves. When the player PL moves a leg, the leg bone of the skeleton moves, and the leg of the character CHP also moves. Likewise, when the player PL moves the head, the head bone of the skeleton moves, and the head of the character CHP also moves. In FIG. 14, the sensor 162 is arranged, for example, at the viewpoint position of the virtual camera VC, and the imaging direction of the sensor 162 is set to the line-of-sight direction of the virtual camera VC.
In this way, in the present embodiment, the character CHP moves in linkage with the motion of the player PL. Accordingly, when the player PL moves the head, the face image IMF displayed on the display section 190 also moves in linkage with it. Here, in the present embodiment, the position of the face (head) of the player PL is determined using the skeleton information described with reference to FIG. 5 and FIG. 14; based on the determined position, the face image IMF is cut out from the captured image IM as in FIG. 9 and FIG. 10 and composited onto the character CHP, and the skin color information is extracted from the face image IMF.
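The trimming step — cutting out a fixed-size region centered on the head position determined from the skeleton information — might look like the following sketch. The crop size and the clamping at the image borders are illustrative assumptions.

```python
def face_crop_rect(head_xy, crop_size, image_size):
    """head_xy: head joint projected into image coordinates.
    Returns (x0, y0, x1, y1) of a crop_size x crop_size region centered
    on the head, clamped so it stays inside the captured image."""
    cx, cy = head_xy
    w, h = image_size
    half = crop_size // 2
    x0 = min(max(cx - half, 0), w - crop_size)
    y0 = min(max(cy - half, 0), h - crop_size)
    return (x0, y0, x0 + crop_size, y0 + crop_size)

# Head tracked at (320, 80) in a 640x480 color image: centered crop.
rect = face_crop_rect((320, 80), 128, (640, 480))
```

Because the rectangle is recomputed from the tracked head joint every frame, the crop follows the player's face as the player moves, which is exactly the tracking behavior described above.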
In addition, in the present embodiment, a model object composed of a plurality of objects is used as the character CHP. The image of the character CHP is then generated by arranging the model object in the object space (three-dimensional virtual space) at a position corresponding to the position of the player PL (subject) and performing rendering processing. For example, in FIG. 15, the character CHP is arranged in the object space at a position corresponding to the position of the player PL in FIG. 13. Thus, when the player PL moves forward, backward, left, or right, the character CHP also moves correspondingly in the object space.
The rendering of the character CHP as a model object is performed according to an illumination model using the light source LS, as shown in FIG. 15. Specifically, illumination processing (shading processing) based on the illumination model is performed. This illumination processing uses information on the light source LS (light source vector, light source color, brightness, light source type, etc.), the line-of-sight vector of the virtual camera VC, the normal vectors of the objects constituting the character CHP, the material (color, material properties) of the objects, and so on. As illumination models, there are the Lambert diffuse illumination model, which considers only ambient light and diffuse light, and models that further consider specular reflection in addition to ambient and diffuse light, such as the Phong illumination model and the Blinn-Phong illumination model.
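As an illustration of the simplest of the models named above, the following sketch shades a base color (for example, the extracted skin color) with a Lambert ambient-plus-diffuse term; Phong or Blinn-Phong would add a specular term on top. The ambient coefficient is an assumed value.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert_shade(base_color, normal, light_dir,
                  light_color=(1.0, 1.0, 1.0), ambient=0.2):
    """base_color: 0-255 RGB of the object (e.g. the skin base color).
    Returns the shaded 0-255 RGB: ambient term plus an N.L diffuse term
    modulated by the light color; no specular term (pure Lambert)."""
    n = normalize(normal)
    l = normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(
        min(255, int(c * (ambient + (1.0 - ambient) * ndotl * lc)))
        for c, lc in zip(base_color, light_color)
    )

# Surface facing the light: fully lit; facing away: ambient term only.
lit = lambert_shade((225, 174, 142), (0, 0, 1), (0, 0, 1))
dark = lambert_shade((225, 174, 142), (0, 0, 1), (0, 0, -1))
```

This is why the shaded chest color CLC discussed later differs from the raw extracted skin color: the base color is attenuated or tinted by the lighting terms before it reaches the screen.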
By performing illumination processing based on such an illumination model, it is possible to generate a realistic, high-quality image of the character CHP with shading appropriately applied by the light source LS. For example, by performing illumination processing with a spotlight light source, an image of the character CHP (costume) irradiated by the spotlight can be generated, enhancing the player's sense of virtual presence.
Moreover, in the present embodiment, the skin color information extracted from the face image IMF is set as the base color (FIG. 12 (B)) of the objects of the parts other than the face, such as the chest, hands, and legs, and the rendering processing of FIG. 15 is performed. In this way, with the set skin color information as the base color, an image of the character CHP with realistic shading applied can be generated through the illumination processing using the light source LS of FIG. 15 and the like.
As described above, in the present embodiment, a composite image is generated in which the face image IMF (the part image of the designated part) included in the captured image IM of the player (subject) is composited onto the image of the character CHP. At this time, as described with reference to FIG. 9 to FIG. 11, the skin color information is extracted from the face image IMF (the part image of the designated part of the subject). Then, based on the extracted skin color information, the color information of the parts other than the face of the character CHP (the parts other than the designated part) is set, the image of the character CHP is generated, and a composite image in which the face image IMF is composited onto the image of the character CHP is generated.
In this case, as described with reference to FIG. 12 (A) and FIG. 12 (B), the character CHP is a model object composed of a plurality of objects. Based on the skin color information of the player, the color information of the objects of the parts other than the face (the parts other than the designated part) among the plurality of objects of the model object is set. That is, based on the extracted skin color information, the color information of the objects of the parts such as the chest, hands, and legs is set. Then, as shown in FIG. 15, the rendering processing of the model object is performed to generate the image of the character CHP as a model object.
In the present embodiment, subject information for determining the motion of the player PL (subject) is acquired based on sensor information from the sensor 162. Then, as described with reference to FIG. 13, an image of the character CHP is generated that moves in correspondence with the motion of the player PL determined from the subject information.
For example, the position of the face of the player (the designated part of the subject) is determined from the subject information; as shown in FIG. 10, the face image IMF (part image) is cut out from the captured image IM of the player PL, and the cut-out face image IMF is composited onto the image of the character CHP. At this time, the extraction processing of the skin color information of the player PL (subject) is performed based on the face image IMF (the part image of the designated part) determined from the subject information.
More specifically, the skeleton information of the player PL described with reference to FIG. 5 and FIG. 14 is acquired as the subject information based on the sensor information from the sensor 162. Then, as shown in FIG. 16, the extraction processing of the skin color information is performed based on the face image IMF determined from the skeleton information (SK). For example, in FIG. 16, the position (C6) of the head of the player PL can be determined from the skeleton information (SK). That is, as shown in FIG. 13, even when the player PL moves and the position of the head changes, the skeleton information is obtained by detecting the motion of the player PL with the sensor 162, so the position of the moving head can be tracked. Therefore, the position of the head of the player PL is determined using the skeleton information, and with the information on the determined position, the face image IMF is cut out from the captured image IM captured by the color sensor 164 and the skin color information can be extracted. That is, in a game in which the character CHP moves in linkage with the motion of the player PL, the face image IMF of the player PL moving forward, backward, left, and right is tracked, the skin color information is extracted from the face image IMF, and the extracted skin color information can be set as the base color of the skin of the other parts of the moving character CHP.
2.3 Correction processing
In the present embodiment, as shown in FIG. 9 and FIG. 10, the skin color information is extracted from the face image IMF of the player PL and set as the base color of the other parts, such as the chest, hands, and legs, of the character CHP onto which the face image IMF is composited. At this time, a difference in color may arise between the color of the image of the character CHP and the color of the face image IMF. For example, in FIG. 17, the color may change discontinuously in the boundary region BD between the face image IMF and the chest (chest and neck) of the character CHP.
For example, in FIG. 17, let the skin color of the face image IMF be CLF and the skin color of the chest of the character CHP be CLC. These colors CLF and CLC are not identical. The color CLC is set based on the skin color information extracted from the face image IMF, but the extracted skin color and the color CLC do not become strictly the same color. Moreover, the color CLC of the chest of the character CHP, as described with reference to FIG. 15, is a color to which shading has been applied by the illumination processing based on the illumination model. Therefore, the color CLF of the face image and the color CLC of the chest are not identical, and a color difference arises.
Therefore, in the present embodiment, correction processing is performed in the boundary region BD between the image of the character CHP and the face image IMF (part image) so that this color difference is not conspicuous. For example, correction processing (gradation processing) is performed so that the color changes gradually from the color CLF of the face image IMF to the color CLC of the chest.
Specifically, as this correction processing, translucent compositing of the color CLC of the image of the character CHP and the color CLF of the face image IMF (part image) is performed in the boundary region BD. For example, let α be the compositing rate of the color CLC in the translucent compositing, let β be the compositing rate of the color CLF, and let CL be the resulting color. Then, in FIG. 17, the translucent compositing processing expressed by CL = α × CLC + β × CLF is performed as the correction processing. Here, the compositing rate α increases from the position of the face image IMF toward the chest, the compositing rate β increases from the chest toward the position of the face image IMF, and the relation β = 1 − α holds.
In this way, from the position of the face image IMF toward the chest of the character CHP, the color CL of the boundary region BD gradually changes from the color CLF to the color CLC. It is therefore possible to effectively suppress discontinuous color changes and unnatural images in the boundary region BD.
This translucent compositing processing is preferably performed during the compositing of the image of the character CHP and the face image IMF described with reference to FIG. 6. That is, as shown in FIG. 18, in the compositing of the image of the character CHP and the face image IMF, the compositing rate α is set for the character CHP and the compositing rate β is set for the face image IMF. In the region of the character CHP outside the boundary region BD of FIG. 17, α = 1 and β = 0 are set. Similarly, in the region of the face image IMF outside the boundary region BD, β = 1 and α = 0 are set. In the boundary region BD, 0 < α < 1 and 0 < β < 1 are set, and the translucent compositing processing expressed by CL = α × CLC + β × CLF is performed. In this way, the translucent compositing in the boundary region BD can be executed together with the compositing of the image of the character CHP and the face image IMF, improving the efficiency of the processing.
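The compositing-rate scheme above can be sketched as a cross-fade along one column of pixels: β = 1 − α, with α running from 0 on the face-image side to 1 on the character side. The region coordinates and colors below are illustrative.

```python
def blend_column(clc, clf, border_start, border_end, y):
    """Returns the color at vertical position y: pure CLF above the
    boundary region (alpha = 0, beta = 1), pure CLC below it
    (alpha = 1, beta = 0), and CL = alpha*CLC + beta*CLF inside
    [border_start, border_end], with alpha growing linearly."""
    if y <= border_start:
        return clf
    if y >= border_end:
        return clc
    alpha = (y - border_start) / (border_end - border_start)
    beta = 1.0 - alpha
    return tuple(int(alpha * c1 + beta * c2) for c1, c2 in zip(clc, clf))

CLF = (225, 174, 142)   # face-image skin color
CLC = (180, 139, 113)   # shaded chest color after illumination processing
mid = blend_column(CLC, CLF, 100, 120, 110)   # midpoint of the boundary region
```

At the midpoint of the boundary region the result is the halfway mix of the two colors, so no visible seam remains where the face image meets the chest.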
In addition, in the present embodiment, decoration processing may be performed on the boundary region BD as the correction processing.
For example, in FIG. 19, decoration processing is applied to the boundary region BD between the face image IMF and the chest of the character CHP: a decorative display object AC (object, item), such as a scarf or a necklace, is arranged so as to cover the boundary region BD. The decorative display object AC thus hides the boundary region BD. That is, the discontinuous color change in the boundary region BD is concealed by the decorative display object AC, effectively suppressing unnatural images.
The display object used in the decoration processing may be a display object of an item held by the player, or a display object attached in association with the character CHP. As the decoration processing, image effect processing that applies a decorative effect to the boundary region BD may also be performed.
In addition, the method for the correction process as borderline region BD, it is considered to various methods.For example, to borderline region BD's Image carry out blur filter processing, in order to reconcile face image IMF and chest image carry out brightness adjustment processing, carry out color information The various modifications such as handling averagely (smoothing techniques) implement.
In addition, in the present embodiment, correction processing of the extracted skin color information may be performed according to environment information, such as brightness information, on the environment in which the sensor 162 (color sensor 164) captures the image. For example, when the capture environment is dark, the skin color information extracted from the part image such as the face image may become darker than the actual skin color of the player. In that case, the skin color information extracted from the part image is corrected to a brighter skin color. Conversely, when the capture environment is bright, the skin color information extracted from the part image such as the face image may become brighter than the actual skin color of the player. In that case, the skin color information extracted from the part image is corrected to a darker skin color. In this way, the skin color set for the parts such as the hands, legs, and chest of the character CHP can be brought closer to the actual skin color of the player, and a more appropriate character image can be generated.
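One way to realize this brightness-based correction is to scale the extracted color by how far the scene brightness deviates from a reference level: a dark scene scales the color up, a bright scene scales it down. The reference level and gain below are purely illustrative assumptions, not values from the patent.

```python
def correct_for_environment(skin_rgb, env_brightness, reference=0.5, gain=0.6):
    """env_brightness in [0, 1], taken from the sensor's environment
    information. Returns the skin color scaled toward its presumed true
    value: scale > 1 for a dark scene, scale < 1 for a bright one."""
    scale = 1.0 + gain * (reference - env_brightness)
    return tuple(min(255, max(0, int(c * scale))) for c in skin_rgb)

measured = (150, 110, 90)                                   # extracted skin color
dark_scene = correct_for_environment(measured, env_brightness=0.2)    # brightened
bright_scene = correct_for_environment(measured, env_brightness=0.8)  # darkened
```

A real implementation would likely correct in a perceptual space rather than scaling raw RGB, but the direction of the correction matches the description above: dark environment, brighter correction; bright environment, darker correction.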
In addition, in the present embodiment, correction processing may be performed to change, according to the game situation, the skin color information of the part image such as the face image, or the color information of the skin of the other parts set based on that skin color information. For example, according to the game situation, the skin color of the character CHP of the player may be given a better or a worse complexion as a game presentation. In this way, the skin color of the parts (hands, legs, chest, face, etc.) of the character CHP changes variously according to the game situation, preventing the skin color from becoming monotonous.
3. Detailed processing
Next, a detailed processing example of the present embodiment will be described using the flowchart of FIG. 20.
First, the position of the face of the player is determined from the skeleton information of the player, and the face image is cut out from the captured image of the player based on the determined position (step S11). For example, using the skeleton information described with reference to FIG. 5, FIG. 14, and FIG. 16, the position of the face image IMF of the player in the captured image, which is a color image, is determined, and a region of a given size centered on that position is trimmed to cut out the face image IMF.
Next, the skin color information is extracted from the face image (step S12). For example, the skin color information is extracted from the face image by the method described with reference to FIG. 11 and the like. Then, based on the extracted skin color information, the base color of the skin of the parts other than the face of the character is set (step S13). For example, as described with reference to FIG. 9 and FIG. 10, the base color of the skin of the parts other than the face, such as the chest NC, the hands AR and AL, and the legs LR and LL, is set. The base color of the skin is set, for example, for the objects corresponding to the object information of FIG. 12 (B).
Next, the rendering processing of the character is performed according to the illumination model to generate the image of the character (step S14). For example, as described with reference to FIG. 15, the illumination processing using the light source LS is performed together with the rendering processing to generate a rendered image of the character (an image with shading applied). Then, the compositing of the image of the character and the face image is performed, and at this time, correction processing is executed in the boundary region between the image of the character and the face image (step S15). For example, as shown in FIG. 18, the compositing of the image of the character CHP and the face image IMF is performed using the compositing rates α and β, and as shown in FIG. 17, the correction processing (translucent compositing processing) is executed using the compositing rates α and β in the boundary region BD between the image of the character CHP and the face image IMF.
Although the present embodiment has been described in detail above, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matter and effects of the present invention. Accordingly, all such modifications are intended to be included within the scope of the present invention. For example, a term (player, face, face image, etc.) that appears at least once in the specification or drawings together with a different, broader or synonymous term (subject, designated part, part image, etc.) can be replaced by that different term at any place in the specification or drawings. The extraction processing of the skin color information, the image compositing processing, the setting processing of the color information, the correction processing, the rendering processing, the determination of the motion of the subject, and the like are also not limited to those described in the present embodiment, and methods equivalent thereto are also included within the scope of the present invention.

Claims (11)

1. An image generation system, characterized by comprising:
an input processing section that acquires a captured image of a subject;
an extraction processing section that performs extraction processing of color information; and
an image generation section that generates a composite image, the composite image being an image in which a part image of a designated part of the subject included in the captured image of the subject is composited onto a character,
wherein the extraction processing section performs extraction processing of skin color information of the subject based on the part image of the designated part of the subject,
and the image generation section sets, based on the extracted skin color information, color information of a part other than the designated part of the character and generates an image of the character.
2. The image generation system according to claim 1, characterized in that
the part image is a face image of the subject,
the extraction processing section performs the extraction processing of the skin color information based on the face image,
and the image generation section sets, based on the skin color information extracted from the face image, the color information of a part other than the face of the character.
3. The image generation system according to claim 1, characterized in that
the character is a model object composed of a plurality of objects,
and the image generation section sets, based on the skin color information of the subject, the color information of the object of the part other than the designated part among the plurality of objects of the model object, and generates an image of the model object by performing rendering processing of the model object.
4. The image generation system according to any one of claims 1 to 3, characterized in that
the image generation section performs correction processing in a boundary region between the image of the character and the part image.
5. The image generation system according to claim 4, characterized in that
the image generation section performs, in the boundary region, translucent compositing processing of the color information of the image of the character and the color information of the part image as the correction processing.
6. The image generation system according to claim 4, characterized in that
the image generation section performs decoration processing on the boundary region as the correction processing.
7. The image generation system according to any one of claims 1 to 3, characterized in that
the extraction processing section extracts pixels matching a skin color from the part image of the subject, and obtains the skin color information of the subject based on the color information of the extracted pixels.
8. The image generation system according to any one of claims 1 to 3, characterized by further comprising:
a subject information acquisition section that acquires, based on sensor information from a sensor, subject information for determining a motion of the subject,
wherein the image generation section generates an image of the character that moves in correspondence with the motion of the subject determined from the subject information.
9. The image generation system according to claim 8, characterized in that
the image generation section determines the designated part of the subject based on the subject information, cuts out the part image from the captured image of the subject, and generates an image in which the cut-out part image is composited onto the character,
and the extraction processing section performs the extraction processing of the skin color information of the subject based on the part image of the designated part determined based on the subject information.
10. The image generation system according to claim 9, characterized in that
the subject information acquisition section acquires skeleton information of the subject as the subject information based on the sensor information,
and the extraction processing section performs the extraction processing of the skin color information of the subject based on the part image of the designated part determined based on the skeleton information.
11. An image processing method, characterized by performing:
input processing of acquiring a captured image of a subject;
extraction processing of color information; and
image generation processing of generating a composite image, the composite image being an image in which a part image of a designated part of the subject included in the captured image of the subject is composited onto a character,
wherein in the extraction processing, extraction processing of skin color information of the subject is performed based on the part image of the designated part of the subject,
and in the image generation processing, based on the extracted skin color information, color information of a part other than the designated part of the character is set, and an image of the character is generated.
CN201710064323.0A 2016-02-05 2017-02-04 Image generation system and image processing method Active CN107067457B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-021039 2016-02-05
JP2016021039A JP6370820B2 (en) 2016-02-05 2016-02-05 Image generation system, game device, and program.

Publications (2)

Publication Number Publication Date
CN107067457A true CN107067457A (en) 2017-08-18
CN107067457B CN107067457B (en) 2024-04-02

Family

ID=59565021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710064323.0A Active CN107067457B (en) 2016-02-05 2017-02-04 Image generation system and image processing method

Country Status (2)

Country Link
JP (1) JP6370820B2 (en)
CN (1) CN107067457B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108479070A (en) * 2018-03-30 2018-09-04 百度在线网络技术(北京)有限公司 Dummy model generation method and device
CN110579222A (en) * 2018-06-07 2019-12-17 百度在线网络技术(北京)有限公司 Navigation route processing method, device and equipment
CN111210490A (en) * 2020-01-06 2020-05-29 北京百度网讯科技有限公司 Electronic map construction method, device, equipment and medium
CN113413594A (en) * 2021-06-24 2021-09-21 网易(杭州)网络有限公司 Virtual photographing method and device for virtual character, storage medium and computer equipment

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
JPWO2020129115A1 (en) * 2018-12-17 2021-11-04 株式会社ソニー・インタラクティブエンタテインメント Information processing system, information processing method and computer program
JP2020149174A (en) * 2019-03-12 2020-09-17 ソニー株式会社 Image processing apparatus, image processing method, and program
CN110286975B (en) * 2019-05-23 2021-02-23 华为技术有限公司 Display method of foreground elements and electronic equipment

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2001096062A (en) * 2000-06-28 2001-04-10 Kce Japan:Kk Game system
JP2001292305A (en) * 2000-02-02 2001-10-19 Casio Comput Co Ltd Image data synthesizer, image data synthesis system, image data synthesis method and recording medium
JP2011203835A (en) * 2010-03-24 2011-10-13 Konami Digital Entertainment Co Ltd Image generating device, image processing method, and program
CN103127717A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Method and system for control and operation of game
JP2014016886A (en) * 2012-07-10 2014-01-30 Furyu Kk Image processor and image processing method
CN103731601A (en) * 2012-10-12 2014-04-16 卡西欧计算机株式会社 Image processing apparatus and image processing method
JP2015093009A (en) * 2013-11-11 2015-05-18 株式会社バンダイナムコゲームス Program, game device, and game system

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JPH09106419A (en) * 1995-08-04 1997-04-22 Sanyo Electric Co Ltd Clothes fitting simulation method
JP4521101B2 (en) * 2000-07-19 2010-08-11 デジタルファッション株式会社 Display control apparatus and method, and computer-readable recording medium recording display control program
JP2003342820A (en) * 2002-05-22 2003-12-03 B's Japan:Kk Coordinate system, method, program recording medium and program
GB201102794D0 (en) * 2011-02-17 2011-03-30 Metail Ltd Online retail system
JP2013219544A (en) * 2012-04-09 2013-10-24 Ricoh Co Ltd Image processing apparatus, image processing method, and image processing program
JP6018707B2 (en) * 2012-06-21 2016-11-02 マイクロソフト コーポレーション Building an avatar using a depth camera

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001292305A (en) * 2000-02-02 2001-10-19 Casio Comput Co Ltd Image data synthesizer, image data synthesis system, image data synthesis method and recording medium
JP2001096062A (en) * 2000-06-28 2001-04-10 Kce Japan:Kk Game system
JP2011203835A (en) * 2010-03-24 2011-10-13 Konami Digital Entertainment Co Ltd Image generating device, image processing method, and program
CN103127717A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Method and system for control and operation of game
JP2014016886A (en) * 2012-07-10 2014-01-30 Furyu Kk Image processor and image processing method
CN103731601A (en) * 2012-10-12 2014-04-16 卡西欧计算机株式会社 Image processing apparatus and image processing method
JP2015093009A (en) * 2013-11-11 2015-05-18 Bandai Namco Games Inc. Program, game device, and game system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108479070A (en) * 2018-03-30 2018-09-04 百度在线网络技术(北京)有限公司 Dummy model generation method and device
CN110579222A (en) * 2018-06-07 2019-12-17 百度在线网络技术(北京)有限公司 Navigation route processing method, device and equipment
CN111210490A (en) * 2020-01-06 2020-05-29 北京百度网讯科技有限公司 Electronic map construction method, device, equipment and medium
CN111210490B (en) * 2020-01-06 2023-09-19 北京百度网讯科技有限公司 Electronic map construction method, device, equipment and medium
CN113413594A (en) * 2021-06-24 2021-09-21 网易(杭州)网络有限公司 Virtual photographing method and device for virtual character, storage medium and computer equipment

Also Published As

Publication number Publication date
CN107067457B (en) 2024-04-02
JP6370820B2 (en) 2018-08-08
JP2017138913A (en) 2017-08-10

Similar Documents

Publication Publication Date Title
CN107067457A (en) Image generation system and image processing method
JP6362634B2 (en) Image generation system, game device, and program
JP6340017B2 (en) An imaging system that synthesizes a subject and a three-dimensional virtual space in real time
JP5756198B2 (en) Interactive user-controlled avatar animation
US20080215974A1 (en) Interactive user controlled avatar animations
JP5671349B2 (en) Image processing program, image processing apparatus, image processing system, and image processing method
JP5128276B2 (en) GAME DEVICE, GAME PROGRAM, COMPUTER-READABLE INFORMATION STORAGE MEDIUM, GAME SYSTEM, AND GAME PROCESSING METHOD
CN104011788B (en) For strengthening and the system and method for virtual reality
GB2556347A (en) Virtual reality
JP2017138915A (en) Image generation system and program
CN107376349A (en) The virtual image being blocked is shown
WO2019155889A1 (en) Simulation system, processing method, and information storage medium
TW201143866A (en) Tracking groups of users in motion capture system
US11823316B2 (en) Photoreal character configurations for spatial computing
CN102918489A (en) Limiting avatar gesture display
CN109716397A (en) Simulation system, processing method and information storage medium
WO2008106197A1 (en) Interactive user controlled avatar animations
JP2019152899A (en) Simulation system and program
JP2012234441A (en) Program, information storage medium, image generation system and server system
JP5995304B2 (en) Program, information storage medium, terminal and server
JP4804122B2 (en) Program, texture data structure, information storage medium, and image generation system
JP6732463B2 (en) Image generation system and program
JP2008027064A (en) Program, information recording medium, and image forming system
JP2007226572A (en) Program, information storage medium and image creation system
JP7104539B2 (en) Simulation system and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant