Detailed Description

The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein serve only to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the disclosure and the features of those embodiments may be combined with one another. The disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method for processing an image, or of the apparatus for processing an image, of the disclosure may be applied.

As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102 and 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.

A user may use the terminal devices 101, 102 and 103 to interact with the server 105 via the network 104, for example to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102 and 103, such as image-processing applications, web-browser applications, shopping applications, search applications, instant-messaging tools, e-mail clients and social-platform software.
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices equipped with a camera, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers and desktop computers. When the terminal devices 101, 102 and 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example an image-processing server that processes target object illumination images captured by the terminal devices 101, 102 and 103. The image-processing server may analyze and otherwise process received data such as the target object illumination image, and obtain a processing result (for example, a result image). In practice, the server may also feed the obtained processing result back to the terminal device.
It should be noted that the method for processing an image provided by the embodiments of the disclosure may be executed by the server 105 or by the terminal devices 101, 102 and 103; correspondingly, the apparatus for processing an image may be provided in the server 105 or in the terminal devices 101, 102 and 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs. When the data used in generating the result image need not be obtained remotely, the above system architecture may omit the network and include only a terminal device or a server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for processing an image according to the disclosure is shown. The method for processing an image includes the following steps:
Step 201: acquiring a target object illumination image and a target virtual object image.
In the present embodiment, the execution body of the method for processing an image (for example, the server 105 shown in Fig. 1) may acquire the target object illumination image and the target virtual object image remotely or locally through a wired or wireless connection. Here, the target object illumination image is the image to be processed. The target object illumination image includes an object image and the shadow image corresponding to the object image. Specifically, the target object illumination image may be an image obtained by photographing an object in an illuminated scene. The light source of the illuminated scene in which the target object illumination image is captured is parallel light or sunlight. It will be appreciated that in an illuminated scene, a shadow is produced when the object blocks the light source.
In the present embodiment, the target virtual object image is an image used for processing the target object illumination image. The target virtual object image may be an image predetermined according to the shape of a virtual object. Specifically, it may be a pre-rendered image, or an image extracted in advance from an existing image along the contour of an object. It should be noted that, here, the "virtual" in "target virtual object image" is relative to the target object illumination image: the virtual object corresponding to the target virtual object image is essentially not present in the real scene photographed to obtain the target object illumination image.
Step 202: inputting the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information.
In the present embodiment, based on the target object illumination image acquired in step 201, the execution body may input the target object illumination image into the pre-trained shadow extraction model to obtain the result shadow image including distance information. Here, the result shadow image may be a shadow image that is extracted from the target object illumination image and annotated with distance information. The distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image. Specifically, because a point on the object blocks the light source, a shadow point is produced on a projection plane (such as the ground, a wall or a desktop). Here, the point on the object that produces a shadow point may be taken as the object point corresponding to that shadow point. In turn, an object point on the object corresponds to a pixel in the object image, and a shadow point in the shadow corresponds to a pixel in the shadow image, so the pixel corresponding to the object point that produces a given shadow point may be taken as the pixel corresponding to that pixel in the shadow image.
In the present embodiment, the distance information may be presented in the result shadow image in various forms. As an example, the distance information may be recorded in the result shadow image in numerical form. Specifically, each pixel in the result shadow image may correspond to a number, and the number may be the distance between that pixel and the corresponding pixel in the object image.
In some optional implementations of the present embodiment, the distance information may be the pixel values of the pixels in the result shadow image. Specifically, the pixel values may characterize distance in various ways. As an example, a larger pixel value may indicate a greater distance; alternatively, a smaller pixel value may indicate a greater distance.
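As a rough illustration of this pixel-value encoding (the array contents and the "larger value = greater distance" convention are hypothetical choices for the sketch, not requirements of the embodiment), the distance information can be normalized into an 8-bit grayscale result shadow image:

```python
import numpy as np

# Hypothetical 4x4 result shadow image: False marks background,
# True marks shadow pixels.
shadow_mask = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
], dtype=bool)

# Assumed per-pixel distances to the corresponding object pixels.
distances = np.array([
    [0, 0, 0, 0],
    [0, 2, 3, 0],
    [0, 4, 5, 6],
    [0, 0, 0, 0],
], dtype=float)

# Encode distance into pixel values: larger value = greater distance,
# normalized to the 0-255 range of an 8-bit grayscale image.
result = np.zeros_like(distances)
d_max = distances[shadow_mask].max()
result[shadow_mask] = distances[shadow_mask] / d_max * 255
result = result.astype(np.uint8)
```

Background pixels stay at 0, and the farthest shadow pixel receives the maximum value 255.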
In the present embodiment, the shadow extraction model may characterize the correspondence between object illumination images and result shadow images. Specifically, as an example, the shadow extraction model may be a correspondence table pre-established by a technician based on statistics of a large number of object illumination images and the result shadow images corresponding to them, storing multiple object illumination images and the corresponding result shadow images; or it may be a model obtained by training an initial model (for example, a neural network) with a machine-learning method based on preset training samples.
In some optional implementations of the present embodiment, the shadow extraction model may be obtained by the execution body or another electronic device through training as follows:
First, a preset set of training samples is acquired, where a training sample includes a sample object illumination image and a sample result shadow image predetermined for the sample object illumination image.

Here, the sample object illumination image may be an image obtained by photographing a sample object in an illuminated scene. The sample object illumination image may include a sample object image and a sample shadow image. The sample result shadow image may be an image obtained by extracting the sample shadow image from the sample object illumination image and adding sample distance information to the extracted sample shadow image.
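A sketch of how such a sample result shadow image might be assembled from masks (the masks are hypothetical, and the distance to the nearest object pixel is used here as a simplified stand-in for the distance to the corresponding object pixel described in the embodiment):

```python
import numpy as np

# Hypothetical sample: the object occupies the top rows and its
# shadow the bottom rows of a small illumination image.
object_mask = np.zeros((6, 6), dtype=bool)
object_mask[0:2, 2:4] = True
shadow_mask = np.zeros((6, 6), dtype=bool)
shadow_mask[4:6, 2:4] = True

# Extract the sample shadow image, then annotate each shadow pixel
# with a distance (nearest object pixel, as a simplification).
obj_pts = np.argwhere(object_mask)
sample_result = np.zeros((6, 6), dtype=float)
for r, c in np.argwhere(shadow_mask):
    d = np.sqrt(((obj_pts - (r, c)) ** 2).sum(axis=1)).min()
    sample_result[r, c] = d
```

The resulting array is zero outside the shadow and stores a distance at every shadow pixel, which is the shape of supervision the training step below assumes.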
Then, a pre-established generative adversarial network is acquired, where the generative adversarial network includes a generative network and a discriminative network. The generative network is used to identify the input object illumination image and output a result shadow image, and the discriminative network is used to determine whether an input image was output by the generative network.

Here, the generative adversarial network may be of various structures. For example, it may be a deep convolutional generative adversarial network (DCGAN). It should be noted that the generative adversarial network may be an untrained generative adversarial network with initialized parameters, or one that has already been trained.
Specifically, the generative network may be a convolutional neural network for image processing (for example, a convolutional neural network of various structures including convolutional layers, pooling layers, unpooling layers and deconvolutional layers). The discriminative network may also be a convolutional neural network (for example, one of various structures including a fully connected layer, where the fully connected layer may implement a classification function). In addition, the discriminative network may be another model for implementing classification, such as a support vector machine (SVM). Here, if the discriminative network determines that the image input to it was output by the generative network, it may output 1 (or 0); if it determines the image was not output by the generative network, it may output 0 (or 1). It should be noted that the discriminative network may also output other preset information to characterize the discrimination result, and is not limited to the values 1 and 0.
Finally, based on a machine-learning method, the sample object illumination image included in a training sample from the training-sample set is used as the input of the generative network; the result shadow image output by the generative network and the sample result shadow image corresponding to the input sample object illumination image are used as the input of the discriminative network; the generative network and the discriminative network are trained; and the trained generative network is determined as the shadow extraction model.
Specifically, the parameters of either of the generative network and the discriminative network (referred to as the first network) may first be fixed while the network whose parameters are not fixed (referred to as the second network) is optimized; then the parameters of the second network are fixed and the first network is improved. The above iteration is carried out continuously until the discriminative network cannot distinguish whether an input image was output by the generative network. At this point, the result shadow images generated by the generative network are close to the sample result shadow images, and the discriminative network cannot accurately distinguish real data from generated data (i.e., its accuracy is 50%); the generative network at this point may be determined as the shadow extraction model.
It should be noted that the execution body or the other electronic device may train the generative network and the discriminative network using existing back-propagation and gradient-descent algorithms. The parameters of the generative network and the discriminative network are adjusted after each round of training, and the networks obtained after each parameter adjustment are used as the generative network and the discriminative network for the next round of training.
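The alternating schedule can be illustrated with one-dimensional stand-ins for the two networks (a single-weight "generator" and "discriminator"; real implementations use deep convolutional networks, so this is only a sketch of the fix-one-network, update-the-other loop with hand-derived gradients):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-dimensional stand-ins: "generator" g(x) = wg * x and
# "discriminator" d(y) = sigmoid(wd * y).
wg, wd = 0.5, 1.0   # generator / discriminator parameters
lr = 0.1
x, real = 1.0, 2.0  # fixed input and a "real" sample

for _ in range(100):
    # Phase 1: fix the generator's parameter, update the discriminator.
    fake = wg * x                                    # wg held constant
    d_real, d_fake = sigmoid(wd * real), sigmoid(wd * fake)
    # gradient of -log d(real) - log(1 - d(fake)) with respect to wd
    grad_wd = -(1.0 - d_real) * real + d_fake * fake
    wd -= lr * grad_wd

    # Phase 2: fix the discriminator's parameter, update the generator.
    d_fake = sigmoid(wd * wg * x)
    # gradient of -log d(fake) with respect to wg
    grad_wg = -(1.0 - d_fake) * wd * x
    wg -= lr * grad_wg
```

After the alternating updates, the generator's output wg * x drifts toward the "real" value while the discriminator's weight shrinks toward indifference, mirroring the 50%-accuracy stopping condition described above.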
Step 203: generating, based on the result shadow image, illumination direction information corresponding to the target object illumination image.
In the present embodiment, based on the result shadow image obtained in step 202, the execution body may generate the illumination direction information corresponding to the target object illumination image. Here, the illumination direction information may be used to indicate the illumination direction, and may include but is not limited to at least one of: text, numbers, symbols and images. Specifically, as an example, the illumination direction information may be an arrow marked in the result shadow image, where the direction of the arrow may be the illumination direction; alternatively, the illumination direction information may be a two-dimensional vector, where the direction corresponding to the two-dimensional vector may be the illumination direction.

It should be noted that, in the present embodiment, the illumination direction indicated by the illumination direction information is the projection of the actual illumination direction in a three-dimensional coordinate system onto the projection plane on which the shadow lies. It will be appreciated that, in practice, the illumination direction (i.e., the projection of the actual illumination direction onto the projection plane) is generally consistent with the extending direction of the shadow. Accordingly, the execution body may determine the extending direction of the shadow based on the pixels in the result shadow image and the distance information corresponding to those pixels, and then determine the illumination direction. Specifically, as an example, the execution body may select from the result shadow image the pixel whose corresponding distance information characterizes the smallest distance as a first pixel and the pixel whose corresponding distance information characterizes the largest distance as a second pixel, and may then determine the direction pointing from the first pixel to the second pixel as the illumination direction.
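The nearest/farthest-pixel heuristic can be sketched as follows (the result shadow image below is hypothetical, with pixel values encoding distance as larger = farther):

```python
import numpy as np

# Hypothetical result shadow image: 0 = background, nonzero = shadow
# pixel whose value encodes the distance to its object pixel.
result_shadow = np.array([
    [0,  0,  0,  0, 0],
    [0, 10, 20, 30, 0],
    [0,  0,  0, 40, 0],
], dtype=float)

shadow_pts = np.argwhere(result_shadow > 0)   # (row, col) coordinates
values = result_shadow[result_shadow > 0]

# First pixel: smallest encoded distance; second pixel: largest.
first = shadow_pts[values.argmin()]
second = shadow_pts[values.argmax()]

# The unit vector from the first to the second pixel approximates the
# illumination direction projected onto the shadow's plane.
direction = (second - first).astype(float)
direction /= np.linalg.norm(direction)
```

Here the shadow extends down and to the right, so the recovered direction points from (1, 1) toward (2, 3) in (row, col) coordinates.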
Step 204: generating, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image.
In the present embodiment, based on the illumination direction information obtained in step 203, the execution body may generate the virtual object illumination image corresponding to the target virtual object image. Here, the virtual object illumination image includes the target virtual object image and the virtual shadow image corresponding to the target virtual object image. The illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information. Here, "matches" means that the angular deviation of the illumination direction corresponding to the virtual shadow image from the illumination direction indicated by the illumination direction information is less than or equal to a preset angle.
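The angular-deviation test for "matching" directions can be sketched as follows (the 10-degree preset angle is an illustrative value, not specified by the embodiment):

```python
import numpy as np

def directions_match(v1, v2, max_angle_deg=10.0):
    """Return True if the angle between 2-D direction vectors v1 and
    v2 is at most max_angle_deg degrees (the preset angle)."""
    v1 = np.asarray(v1, dtype=float) / np.linalg.norm(v1)
    v2 = np.asarray(v2, dtype=float) / np.linalg.norm(v2)
    # Clip to guard against floating-point drift outside [-1, 1].
    cos_angle = np.clip(np.dot(v1, v2), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= max_angle_deg
```

For example, two directions 5 degrees apart match under a 10-degree preset angle, while perpendicular directions do not.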
Specifically, the execution body may generate the virtual object illumination image corresponding to the target virtual object image based on the illumination direction information using various methods.
As an example, a light source may be constructed in a rendering engine based on the illumination direction indicated by the illumination direction information, and the target virtual object image may then be rendered based on the constructed light source to obtain the virtual object illumination image. It should be noted that, because the illumination direction indicated by the illumination direction information is the projection of the actual illumination direction onto the plane on which the shadow lies, during construction of the light source the actual illumination direction needs to be determined first based on the illumination direction information, and the light source then constructed based on the actual illumination direction. It should be noted that, in practice, the actual illumination direction may be determined from the illumination direction on the shadow's projection plane together with the illumination direction on a projection plane perpendicular to the shadow's projection plane; in the present embodiment, the illumination direction on the projection plane perpendicular to the shadow's projection plane may be predetermined.
As another example, the execution body may store in advance an initial virtual shadow image corresponding to the target virtual object image. The execution body may then adjust the initial virtual shadow image based on the illumination direction information to obtain the virtual shadow image, and then combine the virtual shadow image with the target virtual object image to generate the virtual object illumination image.
It should be noted that, because the light source corresponding to the target object illumination image is parallel light or sunlight, it may be considered here that the illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information regardless of the position at which the virtual object illumination image is added to the target object illumination image; the influence of that position on the illumination direction corresponding to the virtual shadow image need not be considered.
Step 205: fusing the virtual object illumination image with the target object illumination image to add the virtual object illumination image to the target object illumination image and obtain a result image.
In the present embodiment, based on the virtual object illumination image obtained in step 204, the execution body may fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain the result image. Here, the result image is the target object illumination image to which the virtual object illumination image has been added.

Here, the position at which the virtual object illumination image is added to the target object illumination image may be predetermined (for example, the center of the image), or may be determined after recognizing the target object illumination image (for example, after recognizing the object image and the shadow image in the target object illumination image, a region of the target object illumination image that includes neither the object image nor the shadow image may be determined as the position for adding the virtual object illumination image).
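The fusion step can be sketched as a simple masked paste at a chosen position (the grayscale images, patch size, alpha mask and paste position below are all hypothetical):

```python
import numpy as np

# Hypothetical 8x8 grayscale target illumination image and a 3x3
# virtual-object illumination patch with a boolean alpha mask.
target = np.full((8, 8), 200, dtype=np.uint8)
patch = np.full((3, 3), 50, dtype=np.uint8)
alpha = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)

# Paste position: e.g. a predetermined location such as the image
# centre, or a region found to contain neither object nor shadow.
top, left = 3, 3
region = target[top:top + 3, left:left + 3]
result = target.copy()
result[top:top + 3, left:left + 3] = np.where(alpha, patch, region)
```

Pixels where the mask is True take the patch value; everywhere else the original target pixels are preserved, and the original image itself is left untouched.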
In some optional implementations of the present embodiment, after obtaining the result image, the execution body may display the obtained result image.

In some optional implementations of the present embodiment, the execution body may also send the obtained result image to a communicatively connected user terminal and control the user terminal to display the result image. Here, the user terminal is a terminal used by a user and communicatively connected to the execution body. Specifically, the execution body may send a control signal to the user terminal and thereby control the user terminal to display the result image.
Here, because the virtual object image in the result image has a corresponding virtual shadow image, and the illumination direction of that virtual shadow image matches the illumination direction of the shadow image corresponding to the real object image, this implementation enables the user terminal to display a more realistic result image, thereby improving the display effect of the image.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for processing an image according to the present embodiment. In the application scenario of Fig. 3, the server 301 first acquires an illumination image 302 of a cat (the target object illumination image) and a football image 303 (the target virtual object image), where the illumination image 302 of the cat includes an image of the cat (the object image) and a shadow image of the cat (the shadow image). Then, the server 301 may input the illumination image 302 of the cat into a pre-trained shadow extraction model 304 to obtain a shadow image 305 of the cat including distance information (the result shadow image), where the distance information characterizes, in the illumination image 302 of the cat, the distance between a pixel of the cat's shadow image and the corresponding pixel of the cat's image. Next, the server 301 may generate, based on the shadow image 305 of the cat including the distance information, illumination direction information 306 corresponding to the illumination image 302 of the cat. Then, based on the illumination direction information 306, the server 301 may generate a football illumination image 307 (the virtual object illumination image) corresponding to the football image 303, where the illumination direction corresponding to the football shadow image (the virtual shadow image) in the football illumination image 307 matches the illumination direction indicated by the illumination direction information 306. Finally, the server 301 may fuse the football illumination image 307 with the illumination image 302 of the cat, adding the football illumination image 307 to the illumination image 302 of the cat to obtain a result image 308.
At present, when an object in an illuminated scene is photographed, the shadow of the object in the scene is usually captured as well, whereas a virtual object image to be added to a real-scene image usually does not include a shadow image; adding such a virtual object image to a real-scene image therefore reduces the realism of the image and impairs its display effect. The method provided by the above embodiment of the disclosure can generate the virtual object illumination image corresponding to the virtual object image, thereby adding a corresponding virtual shadow image to the virtual object image, so that after the virtual object illumination image and the target object illumination image are fused, the realism of the generated result image can be improved. In addition, the disclosure can determine the illumination direction of the virtual shadow image corresponding to the virtual object image based on the illumination direction of the shadow image in the target object illumination image, so that the virtual object image is better fused into the target object illumination image, further improving the realism of the result image and helping to improve its display effect.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for processing an image is shown. The flow 400 of the method for processing an image includes the following steps:
Step 401: acquiring a target object illumination image and a target virtual object image.
In the present embodiment, the execution body of the method for processing an image (for example, the server 105 shown in Fig. 1) may acquire the target object illumination image and the target virtual object image remotely or locally through a wired or wireless connection. Here, the target object illumination image is the image to be processed. The target object illumination image includes an object image and the shadow image corresponding to the object image. The target virtual object image is an image used for processing the target object illumination image. The target virtual object image may be an image predetermined according to the shape of a virtual object.
Step 402: inputting the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information.
In the present embodiment, based on the target object illumination image acquired in step 401, the execution body may input the target object illumination image into the pre-trained shadow extraction model to obtain the result shadow image including distance information. Here, the result shadow image may be a shadow image that is extracted from the target object illumination image and annotated with distance information. The distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image. The shadow extraction model may characterize the correspondence between object illumination images and result shadow images.
Step 403: inputting the result shadow image into a pre-trained illumination direction identification model to obtain illumination direction information.
In the present embodiment, based on the result shadow image obtained in step 402, the execution body may input the result shadow image into a pre-trained illumination direction identification model to obtain the illumination direction information. Here, the illumination direction information may be used to indicate the illumination direction, and may include but is not limited to at least one of: text, numbers, symbols and images.

In the present embodiment, the illumination direction identification model may characterize the correspondence between result shadow images and illumination direction information. Specifically, as an example, the illumination direction identification model may be a correspondence table pre-established by a technician based on statistics of a large number of result shadow images and the illumination direction information corresponding to them, storing multiple result shadow images and the corresponding illumination direction information; or it may be a model obtained by training an initial model (for example, a neural network) with a machine-learning method based on preset training samples.
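As a toy illustration of the correspondence-table variant (the table keys, the coarse feature used for lookup and the textual direction labels are all hypothetical, not taken from the embodiment), a result shadow image can be reduced to a quantized direction and looked up:

```python
import numpy as np

# Toy correspondence table: a quantized angle of the shadow's
# extending direction maps to stored illumination direction
# information (here, simple text labels).
table = {0: "east", 90: "north", 180: "west", 270: "south"}

def identify_direction(result_shadow):
    """Look up illumination direction information for a result shadow
    image whose nonzero pixel values encode distance (larger = farther)."""
    shadow_pts = np.argwhere(result_shadow > 0)
    values = result_shadow[result_shadow > 0]
    near = shadow_pts[values.argmin()]
    far = shadow_pts[values.argmax()]
    dy, dx = (far - near).astype(float)
    # Image rows grow downward, so negate dy for a conventional angle.
    angle = np.degrees(np.arctan2(-dy, dx)) % 360
    # Pick the table key with the smallest circular angular difference.
    key = min(table, key=lambda k: min(abs(angle - k), 360 - abs(angle - k)))
    return table[key]
```

A trained neural network would replace this hand-built lookup, but the input/output contract (result shadow image in, illumination direction information out) is the same.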
Step 404: generating, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image.
In the present embodiment, based on the illumination direction information obtained in step 403, the execution body may generate the virtual object illumination image corresponding to the target virtual object image. Here, the virtual object illumination image includes the target virtual object image and the virtual shadow image corresponding to the target virtual object image. The illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information. Here, "matches" means that the angular deviation of the illumination direction corresponding to the virtual shadow image from the illumination direction indicated by the illumination direction information is less than or equal to a preset angle.
Step 405: fusing the virtual object illumination image with the target object illumination image to add the virtual object illumination image to the target object illumination image and obtain a result image.
In the present embodiment, based on the virtual object illumination image obtained in step 404, the execution body may fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain the result image. Here, the result image is the target object illumination image to which the virtual object illumination image has been added.
The above steps 401, 402, 404 and 405 may be performed in manners similar to steps 201, 202, 204 and 205 of the foregoing embodiment, respectively; the above descriptions of steps 201, 202, 204 and 205 also apply to steps 401, 402, 404 and 405 and are not repeated here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for processing an image in the present embodiment highlights the step of generating the illumination direction information using the illumination direction identification model. The scheme described in the present embodiment can therefore determine the illumination direction corresponding to the target object illumination image more conveniently, and in turn generate the result image more rapidly, improving the efficiency of image processing.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the disclosure provides an embodiment of an apparatus for processing an image. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for processing an image of the present embodiment includes: an image acquisition unit 501, an image input unit 502, an information generation unit 503, an image generation unit 504 and an image fusion unit 505. The image acquisition unit 501 is configured to acquire a target object illumination image and a target virtual object image, where the target object illumination image includes an object image and the shadow image corresponding to the object image. The image input unit 502 is configured to input the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, where the distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image. The information generation unit 503 is configured to generate, based on the result shadow image, illumination direction information corresponding to the target object illumination image. The image generation unit 504 is configured to generate, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image, where the illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information. The image fusion unit 505 is configured to fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image.
In the present embodiment, the image acquisition unit 501 of the apparatus 500 for processing an image may acquire the target object illumination image and the target virtual object image remotely or locally through a wired or wireless connection. Here, the target object illumination image is the image to be processed. The target object illumination image includes the object image and the shadow image corresponding to the object image. The target virtual object image is an image used for processing the target object illumination image. The target virtual object image may be an image predetermined according to the shape of a virtual object.
In this embodiment, based on the target object light image acquired by the image acquisition unit 501, the image input unit 502 may input the target object light image into a pre-trained shadow extraction model to obtain a result shadow image including distance information. The result shadow image may be a shadow image that is extracted from the target object light image and to which distance information has been added. The distance information characterizes, within the target object light image, the distance between a pixel of the shadow image and the corresponding pixel of the object image, and may be presented in the result shadow image in various forms. The shadow extraction model may be used to characterize the correspondence between object light images and result shadow images.
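As an illustrative sketch only (not the disclosed model itself, whose architecture is left open), the mapping from an object light image to a result shadow image carrying distance information can be mimicked with a rule-based stand-in: dark pixels outside the object are treated as shadow, and each shadow pixel is annotated with its distance to the nearest object pixel. All names and the threshold below are hypothetical.

```python
import math

def extract_shadow_with_distance(image, object_mask, shadow_threshold=0.3):
    """Toy stand-in for a trained shadow extraction model.

    image: 2-D grid of brightness values in [0, 1].
    object_mask: 2-D grid, True where the object itself is.
    Returns a grid where non-shadow pixels are None and each shadow
    pixel holds its Euclidean distance to the nearest object pixel.
    """
    h, w = len(image), len(image[0])
    object_pixels = [(r, c) for r in range(h) for c in range(w)
                     if object_mask[r][c]]
    result = [[None] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # A pixel counts as shadow if it is dark but not part of the object.
            if image[r][c] < shadow_threshold and not object_mask[r][c]:
                result[r][c] = min(
                    math.hypot(r - orow, c - ocol)
                    for orow, ocol in object_pixels
                )
    return result
```

A learned model would replace this hand-written rule, but the input/output contract (light image in, distance-annotated shadow image out) is the same one the unit 502 relies on.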
In this embodiment, based on the result shadow image obtained by the image input unit 502, the information generation unit 503 may generate the illumination direction information corresponding to the target object light image. The illumination direction information may be used to indicate an illumination direction, and may include, but is not limited to, at least one of the following: text, numbers, symbols, images.
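One simple way to derive an illumination direction from shadow geometry, offered here only as an illustrative assumption (the disclosure leaves the exact procedure to the model), is to note that a shadow is cast on the side of the object opposite the light source, so the direction from the object's base toward the shadow's far tip approximates the in-plane illumination direction. The function name and coordinate convention are hypothetical.

```python
import math

def illumination_angle(object_base, shadow_tip):
    """Approximate in-plane illumination direction, in degrees.

    object_base: (row, col) where the object meets the ground.
    shadow_tip:  (row, col) of the farthest shadow pixel.
    The shadow extends away from the light source, so light travels
    in the base-to-tip direction.
    """
    dr = shadow_tip[0] - object_base[0]
    dc = shadow_tip[1] - object_base[1]
    return math.degrees(math.atan2(dr, dc))
```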
In this embodiment, based on the illumination direction information obtained by the information generation unit 503, the image generation unit 504 generates a virtual object light image corresponding to the target virtual object image. The virtual object light image includes the above target virtual object image and a virtual shadow image corresponding to the target virtual object image. The illumination direction corresponding to the virtual shadow image in the virtual object light image matches the illumination direction indicated by the illumination direction information. Here, "matches" means that the angular deviation of the illumination direction corresponding to the virtual shadow image, relative to the illumination direction indicated by the illumination direction information, is less than or equal to a predetermined angle.
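The matching criterion just described (angular deviation no greater than a predetermined angle) can be sketched directly. The function name and the default tolerance are illustrative assumptions, not values given by the disclosure.

```python
def directions_match(angle_a_deg, angle_b_deg, max_deviation_deg=10.0):
    """True when two illumination directions differ by at most
    max_deviation_deg, accounting for wrap-around at 360 degrees."""
    diff = abs(angle_a_deg - angle_b_deg) % 360.0
    # The deviation between two directions is never more than 180 degrees.
    diff = min(diff, 360.0 - diff)
    return diff <= max_deviation_deg
```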
In this embodiment, based on the virtual object light image obtained by the image generation unit 504, the image fusion unit 505 may fuse the virtual object light image with the target object light image, adding the virtual object light image to the target object light image to obtain a result image. The result image is the target object light image to which the virtual object light image has been added.
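Fusion can take many forms; a minimal sketch, assuming the virtual object light image carries an alpha mask marking its opaque pixels, is per-pixel alpha compositing at a chosen offset. The names and the mask convention are assumptions, not part of the disclosure.

```python
def fuse(target, virtual, alpha, top, left):
    """Composite `virtual` over `target` at offset (top, left).

    target, virtual: 2-D grids of brightness values.
    alpha: 2-D grid in [0, 1]; 1 means the virtual pixel fully
    replaces the target pixel, 0 leaves the target unchanged.
    Returns a new grid; the inputs are not modified.
    """
    result = [row[:] for row in target]
    for r in range(len(virtual)):
        for c in range(len(virtual[0])):
            a = alpha[r][c]
            result[top + r][left + c] = (
                a * virtual[r][c] + (1.0 - a) * result[top + r][left + c]
            )
    return result
```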
In some optional implementations of this embodiment, the information generation unit 503 may be further configured to input the result shadow image into a pre-trained illumination direction identification model to obtain the illumination direction information.
In some optional implementations of this embodiment, the distance information is the pixel value of a pixel in the result shadow image.
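Storing the distance directly as the pixel value, as this implementation describes, amounts to a simple encoding. The 8-bit range and the maximum representable distance below are illustrative assumptions only.

```python
def encode_distance(distance, max_distance=255.0):
    """Map a distance to an 8-bit grayscale pixel value."""
    clipped = min(max(distance, 0.0), max_distance)
    return round(clipped / max_distance * 255)

def decode_distance(pixel_value, max_distance=255.0):
    """Recover the (quantised) distance from the pixel value."""
    return pixel_value / 255.0 * max_distance
```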
In some optional implementations of this embodiment, the shadow extraction model may be obtained by training through the following steps: acquiring a preset training sample set, where a training sample includes a sample object light image and a sample result shadow image predetermined for the sample object light image; acquiring a pre-established generative adversarial network, where the generative adversarial network includes a generative network and a discriminative network, the generative network being used to identify the input object light image and output a result shadow image, and the discriminative network being used to determine whether an input image is an image output by the generative network; and, based on a machine learning method, training the generative network and the discriminative network, with the sample object light image included in a training sample from the training sample set as the input of the generative network, and with the result shadow image output by the generative network together with the sample result shadow image corresponding to the input sample object light image as the input of the discriminative network, and determining the trained generative network as the shadow extraction model.
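The adversarial training steps above can be illustrated with a deliberately tiny stand-in: the "images" are single brightness values, the generative network is one learnable parameter, and the discriminative network is a one-input logistic unit. This sketches only the alternating-update scheme, not the disclosed architecture; every name and hyperparameter here is an assumption.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(real_value=0.8, steps=100, lr=0.05):
    """Alternately update a one-parameter generator and a logistic
    discriminator, mirroring the generative/discriminative roles in
    the training procedure described above."""
    w = 0.2           # generator output (stand-in for a result shadow image)
    a, b = 1.0, 0.0   # discriminator parameters
    for _ in range(steps):
        d_real = sigmoid(a * real_value + b)
        d_fake = sigmoid(a * w + b)
        # Discriminator step: ascend log d(real) + log(1 - d(fake)),
        # i.e. learn to tell real samples from generated ones.
        a += lr * ((1.0 - d_real) * real_value - d_fake * w)
        b += lr * ((1.0 - d_real) - d_fake)
        # Generator step: ascend log d(fake), making its output look real.
        d_fake = sigmoid(a * w + b)
        w += lr * (1.0 - d_fake) * a
    return w, a, b
```

After training, the generator alone is kept, just as the trained generative network is determined as the shadow extraction model.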
In some optional implementations of this embodiment, the apparatus 500 may further include an image display unit (not shown in the figure), configured to display the obtained result image.
In some optional implementations of this embodiment, the apparatus 500 may further include an image transmission unit (not shown in the figure), configured to send the obtained result image to a user terminal with which a communication connection is established, and to control the user terminal to display the result image.
It may be understood that the units recorded in the apparatus 500 correspond to the respective steps in the method described with reference to Fig. 2. Accordingly, the operations, features, and beneficial effects described above for the method are equally applicable to the apparatus 500 and the units included therein, and are not repeated here.
The apparatus 500 provided by the above embodiment of the present disclosure can generate the virtual object light image corresponding to the virtual object image, thereby adding a corresponding virtual shadow image to the virtual object image, so that after the virtual object light image is fused with the target object light image, the authenticity of the generated result image is improved. In addition, the present disclosure can determine the illumination direction of the virtual shadow image corresponding to the virtual object image based on the illumination direction of the shadow image in the target object light image, so that the virtual object image is better fused into the target object light image, further improving the authenticity of the result image and helping to improve its display effect.
Referring now to Fig. 6, it shows a structural schematic diagram of an electronic device 600 (e.g., the terminal device or server shown in Fig. 1) suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing apparatus 601 (e.g., a central processing unit, a graphics processor, etc.), which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 600 with various apparatuses, it should be understood that implementing or possessing all of the apparatuses shown is not required; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a target object light image and a target virtual object image, where the target object light image includes an object image and a shadow image corresponding to the object image; input the target object light image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, where the distance information characterizes, within the target object light image, the distance between a pixel of the shadow image and the corresponding pixel of the object image; generate, based on the result shadow image, illumination direction information corresponding to the target object light image; generate, based on the illumination direction information, a virtual object light image corresponding to the target virtual object image, where the illumination direction corresponding to the virtual shadow image in the virtual object light image matches the illumination direction indicated by the illumination direction information; and fuse the virtual object light image with the target object light image, adding the virtual object light image to the target object light image to obtain a result image.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet by using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present disclosure. In this regard, each box in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be realized by means of software, or by means of hardware. The names of the units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the image acquisition unit may also be described as "a unit for acquiring a target object light image and a virtual object image".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combinations of the above technical features, and should also cover, without departing from the above disclosed concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.