CN110033423A - Method and apparatus for handling image - Google Patents


Info

Publication number
CN110033423A
CN110033423A (application CN201910302471.0A)
Authority
CN
China
Prior art keywords
image
object light
shadow
light image
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910302471.0A
Other languages
Chinese (zh)
Other versions
CN110033423B (en)
Inventor
王光伟 (Wang Guangwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910302471.0A
Publication of CN110033423A
Priority to PCT/CN2020/078582 (published as WO2020211573A1)
Application granted
Publication of CN110033423B
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

Embodiments of the present disclosure disclose a method and apparatus for processing an image. One specific embodiment of the method includes: acquiring a target object illumination image and a target virtual object image, wherein the target object illumination image includes an object image and a shadow image corresponding to the object image; inputting the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information; generating, based on the result shadow image, illumination direction information corresponding to the target object illumination image; generating, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image; and fusing the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image. This embodiment enables the virtual object image to be better fused into the target object illumination image, improves the realism of the result image, and helps improve the display effect of the image.

Description

Method and apparatus for processing an image
Technical field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method and apparatus for processing an image.
Background
With the development of image processing technology, people can add virtual object images to captured images in order to enhance their display effect.
At present, a virtual object image to be added to a real-scene image is usually an image preset by a technician according to the shape of the virtual object.
Summary of the invention
Embodiments of the present disclosure propose a method and apparatus for processing an image.
In a first aspect, embodiments of the present disclosure provide a method for processing an image, the method comprising: acquiring a target object illumination image and a target virtual object image, wherein the target object illumination image includes an object image and a shadow image corresponding to the object image; inputting the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, wherein the distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image; generating, based on the result shadow image, illumination direction information corresponding to the target object illumination image; generating, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image, wherein the illumination direction corresponding to a virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information; and fusing the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image.
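For orientation, the sketch below strings the five steps of the first aspect together in Python. It is a minimal illustration under stated assumptions: the model objects and the fuse callable are caller-supplied placeholders, and none of the names are taken from the patent itself.

```python
# A minimal sketch of the first-aspect pipeline. All names here
# (shadow_model, direction_model, renderer, fuse) are illustrative
# placeholders, not names from the patent.
def process_image(target_light_image, virtual_object_image,
                  shadow_model, direction_model, renderer, fuse):
    # Step 1: extract a result shadow image carrying distance information.
    result_shadow = shadow_model(target_light_image)
    # Step 2: derive illumination direction information from the shadow.
    light_direction = direction_model(result_shadow)
    # Step 3: render the virtual object together with a virtual shadow
    # whose direction matches the estimated illumination direction.
    virtual_light_image = renderer(virtual_object_image, light_direction)
    # Step 4: fuse, adding the virtual object illumination image to the
    # target object illumination image to obtain the result image.
    return fuse(target_light_image, virtual_light_image)
```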
In some embodiments, generating, based on the result shadow image, the illumination direction information corresponding to the target object illumination image includes: inputting the result shadow image into a pre-trained illumination direction identification model to obtain the illumination direction information.
In some embodiments, the distance information is the pixel values of pixels in the result shadow image.
In some embodiments, the shadow extraction model is obtained through the following training steps: acquiring a preset training sample set, wherein a training sample includes a sample object illumination image and a sample result shadow image predetermined for the sample object illumination image; acquiring a pre-established generative adversarial network, wherein the generative adversarial network includes a generation network and a discrimination network, the generation network being used to recognize an input object illumination image and output a result shadow image, and the discrimination network being used to determine whether an input image was output by the generation network; and, based on a machine learning method, taking the sample object illumination image included in a training sample of the training sample set as the input of the generation network, taking the result shadow image output by the generation network and the sample result shadow image corresponding to the input sample object illumination image as the input of the discrimination network, training the generation network and the discrimination network, and determining the trained generation network as the shadow extraction model.
In some embodiments, the method further includes: displaying the obtained result image.
In some embodiments, the method further includes: sending the obtained result image to a user terminal in communication connection, and controlling the user terminal to display the result image.
In a second aspect, embodiments of the present disclosure provide an apparatus for processing an image, the apparatus comprising: an image acquisition unit configured to acquire a target object illumination image and a target virtual object image, wherein the target object illumination image includes an object image and a shadow image corresponding to the object image; an image input unit configured to input the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, wherein the distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image; an information generation unit configured to generate, based on the result shadow image, illumination direction information corresponding to the target object illumination image; an image generation unit configured to generate, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image, wherein the illumination direction corresponding to a virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information; and an image fusion unit configured to fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image.
In some embodiments, the information generation unit is further configured to: input the result shadow image into a pre-trained illumination direction identification model to obtain the illumination direction information.
In some embodiments, the distance information is the pixel values of pixels in the result shadow image.
In some embodiments, the shadow extraction model is obtained through the following training steps: acquiring a preset training sample set, wherein a training sample includes a sample object illumination image and a sample result shadow image predetermined for the sample object illumination image; acquiring a pre-established generative adversarial network, wherein the generative adversarial network includes a generation network and a discrimination network, the generation network being used to recognize an input object illumination image and output a result shadow image, and the discrimination network being used to determine whether an input image was output by the generation network; and, based on a machine learning method, taking the sample object illumination image included in a training sample of the training sample set as the input of the generation network, taking the result shadow image output by the generation network and the sample result shadow image corresponding to the input sample object illumination image as the input of the discrimination network, training the generation network and the discrimination network, and determining the trained generation network as the shadow extraction model.
In some embodiments, the apparatus further includes: an image display unit configured to display the obtained result image.
In some embodiments, the apparatus further includes: an image sending unit configured to send the obtained result image to a user terminal in communication connection, and to control the user terminal to display the result image.
In a third aspect, embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method of any embodiment of the above method for processing an image.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the method of any embodiment of the above method for processing an image.
According to the method and apparatus for processing an image provided by embodiments of the present disclosure, a target object illumination image and a target virtual object image are acquired, wherein the target object illumination image includes an object image and a shadow image corresponding to the object image; the target object illumination image is then input into a pre-trained shadow extraction model to obtain a result shadow image including distance information, wherein the distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image; illumination direction information corresponding to the target object illumination image is then generated based on the result shadow image; a virtual object illumination image corresponding to the target virtual object image is then generated based on the illumination direction information, wherein the illumination direction corresponding to a virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information; and finally the virtual object illumination image is fused with the target object illumination image, the virtual object illumination image being added to the target object illumination image to obtain a result image. In this way, when a virtual object image is to be added to the target object illumination image, a shadow image corresponding to the virtual object image can be generated based on the determined illumination direction, so that the virtual object image is better fused into the target object illumination image, the realism of the result image is improved, and the display effect of the image is enhanced.
Brief description of the drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-restrictive embodiments, taken in conjunction with the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the method for processing an image according to the present disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for processing an image according to an embodiment of the present disclosure;
Fig. 4 is a flowchart of another embodiment of the method for processing an image according to the present disclosure;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for processing an image according to the present disclosure;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing an electronic device of an embodiment of the present disclosure.
Detailed description of embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely used to explain the relevant invention, rather than to limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, as long as there is no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for processing an image or the apparatus for processing an image of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102 and 103, such as image processing applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients and social platform software.
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices equipped with cameras, including but not limited to smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers and the like. When the terminal devices 101, 102 and 103 are software, they may be installed in the electronic devices listed above; they may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example an image processing server that processes target object illumination images captured by the terminal devices 101, 102 and 103. The image processing server may analyze and otherwise process received data such as the object illumination image and obtain a processing result (for example a result image). In practice, the server may also feed the obtained processing result back to a terminal device.
It should be noted that the method for processing an image provided by embodiments of the present disclosure may be executed by the server 105 or by the terminal devices 101, 102, 103; correspondingly, the apparatus for processing an image may be arranged in the server 105 or in the terminal devices 101, 102, 103.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs. In the case where the data used in the process of generating the result image does not need to be obtained remotely, the above system architecture may include no network, but only a terminal device or a server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for processing an image according to the present disclosure is shown. The method for processing an image includes the following steps:
Step 201: acquire a target object illumination image and a target virtual object image.
In the present embodiment, the executing body of the method for processing an image (for example the server 105 shown in Fig. 1) may acquire the target object illumination image and the target virtual object image remotely or locally through a wired or wireless connection. Here, the target object illumination image is the image to be processed. The target object illumination image includes an object image and a shadow image corresponding to the object image. Specifically, the target object illumination image may be an image obtained by photographing an object in an illuminated scene. The light source in the illuminated scene in which the target object illumination image is captured is parallel light or sunlight. It will be appreciated that, in an illuminated scene, a shadow is produced when the object blocks the light source.
In the present embodiment, the target virtual object image is an image used to process the target object illumination image. The target virtual object image may be an image predetermined according to the shape of a virtual object. Specifically, it may be a pre-rendered image, or an image extracted in advance from an existing image along the outline of an object. It should be noted that, here, the target virtual object image is "virtual" relative to the target object illumination image: the virtual object corresponding to the target virtual object image is essentially absent from the real scene photographed to obtain the target object illumination image.
Step 202: input the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information.
In the present embodiment, based on the target object illumination image obtained in step 201, the executing body may input the target object illumination image into the pre-trained shadow extraction model to obtain a result shadow image including distance information. Here, the result shadow image may be a shadow image which is extracted from the target object illumination image and to which distance information has been added. The distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image. Specifically, because a point on the object blocks the light source, it produces a shadow point on a projection plane (for example the ground, a wall or a desktop); the point on the object that produces the shadow point can thus be regarded as the object point corresponding to that shadow point. An object point on the object corresponds to a pixel in the object image, and a shadow point in the shadow corresponds to a pixel in the shadow image; consequently, the pixel corresponding to the object point that produces the shadow point of a given pixel in the shadow image can be taken as the pixel corresponding to that pixel of the shadow image.
In the present embodiment, the distance information may appear in the result shadow image in various forms. As an example, the distance information may be recorded in the result shadow image in numerical form. Specifically, each pixel in the result shadow image may correspond to a number, and the number may be the distance between that pixel and the corresponding pixel in the object image.
In some optional implementations of the present embodiment, the distance information may be the pixel values of pixels in the result shadow image. Specifically, pixel values may characterize distance in various ways. As an example, a convention may be adopted in which a larger pixel value indicates a larger distance; alternatively, a convention may be adopted in which a smaller pixel value indicates a larger distance. A sketch of such an encoding follows below.
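As a concrete illustration of the larger-value, larger-distance convention, the following NumPy sketch builds a distance-encoded result shadow image; the shadow mask and the pixel correspondences are assumed to be given, which the patent does not prescribe.

```python
import numpy as np

# Illustrative only: encode, for each shadow pixel, its distance to the
# corresponding object pixel as the pixel value of the result shadow
# image (larger value = larger distance). The correspondences mapping is
# an assumed input, not something the patent specifies how to compute.
def encode_distance_map(shadow_mask, correspondences):
    """shadow_mask: HxW bool array marking shadow pixels.
    correspondences: dict mapping the (row, col) of a shadow pixel to the
    (row, col) of its corresponding object pixel."""
    result = np.zeros(shadow_mask.shape, dtype=np.float32)
    for (r, c), (ro, co) in correspondences.items():
        if shadow_mask[r, c]:
            result[r, c] = np.hypot(r - ro, c - co)  # Euclidean distance
    if result.max() > 0:
        result = result / result.max() * 255.0       # scale to 8-bit range
    return result.astype(np.uint8)
```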
In the present embodiment, the shadow extraction model may be used to characterize the correspondence between an object illumination image and a result shadow image. Specifically, as an example, the shadow extraction model may be a correspondence table pre-established by a technician based on statistics of a large number of object illumination images and their corresponding result shadow images, storing multiple object illumination images and their corresponding result shadow images; or it may be a model obtained by training an initial model (for example a neural network) with a machine learning method based on preset training samples.
In some optional implementations of the present embodiment, the shadow extraction model may be obtained by the executing body or another electronic device through the following training steps:
First, a preset training sample set is acquired, wherein a training sample includes a sample object illumination image and a sample result shadow image predetermined for the sample object illumination image.
Here, the sample object illumination image may be an image obtained by photographing a sample object in an illuminated scene. The sample object illumination image may include a sample object image and a sample shadow image. The sample result shadow image may be an image obtained by extracting the sample shadow image from the sample object illumination image and adding sample distance information to the extracted sample shadow image.
Then, a pre-established generative adversarial network is acquired, wherein the generative adversarial network includes a generation network and a discrimination network; the generation network is used to recognize an input object illumination image and output a result shadow image, and the discrimination network is used to determine whether an input image was output by the generation network.
Here, the above generative adversarial network may be a generative adversarial network of various structures, for example a Deep Convolutional Generative Adversarial Network (DCGAN). It should be noted that the generative adversarial network may be an untrained generative adversarial network with initialized parameters, or one that has already been trained.
Specifically, the generation network may be a convolutional neural network used for image processing (for example a convolutional neural network of various structures including convolutional layers, pooling layers, unpooling layers and deconvolution layers). The discrimination network may also be a convolutional neural network (for example a convolutional neural network of various structures including a fully connected layer, where the fully connected layer may implement a classification function). In addition, the discrimination network may also be another model for implementing a classification function, such as a Support Vector Machine (SVM). Here, if the discrimination network determines that the input image was output by the generation network, it may output 1 (or 0); if it determines that the image was not output by the generation network, it may output 0 (or 1). It should be noted that the discrimination network may also output other preset information to characterize the discrimination result, and is not limited to the values 1 and 0.
Finally, based on a machine learning method, the sample object illumination image included in a training sample of the training sample set is taken as the input of the generation network, the result shadow image output by the generation network and the sample result shadow image corresponding to the input sample object illumination image are taken as the input of the discrimination network, the generation network and the discrimination network are trained, and the trained generation network is determined as the shadow extraction model.
Specifically, the parameters of either of the generation network and the discrimination network (called the first network) may first be fixed while the network whose parameters are not fixed (called the second network) is optimized; the parameters of the second network are then fixed and the first network is improved. This iteration is carried out continuously until the discrimination network cannot distinguish whether an input image was output by the generation network. At this point, the result shadow images generated by the generation network are close to the sample result shadow images, and the discrimination network cannot accurately distinguish real data from generated data (i.e. its accuracy is 50%); the generation network at this time may be determined as the shadow extraction model.
It should be noted that the executing body or other electronic device may train the generation network and the discrimination network using existing back-propagation and gradient descent algorithms. The parameters of the generation network and the discrimination network are adjusted after each round of training, and the generation network and the discrimination network obtained after each parameter adjustment are used as the generation network and the discrimination network in the next round of training. A condensed sketch of this alternating procedure follows.
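The following PyTorch sketch condenses this alternating scheme, assuming a caller-supplied generator and discriminator; the losses, optimizers and hyperparameters are illustrative choices, not part of the patent.

```python
import torch
import torch.nn as nn

# A condensed sketch of the alternating GAN training described above.
# gen maps a sample object illumination image to a result shadow image;
# disc outputs a logit saying whether its input looks like a real sample
# result shadow image. Architectures and hyperparameters are assumptions.
def train_shadow_extractor(gen, disc, loader, epochs=10, device="cpu"):
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    for _ in range(epochs):
        for sample_light, sample_shadow in loader:
            sample_light = sample_light.to(device)
            sample_shadow = sample_shadow.to(device)
            # Update the discrimination network with the generator fixed.
            fake = gen(sample_light).detach()
            real_logits, fake_logits = disc(sample_shadow), disc(fake)
            loss_d = (bce(real_logits, torch.ones_like(real_logits))
                      + bce(fake_logits, torch.zeros_like(fake_logits)))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()
            # Update the generation network with the discriminator fixed.
            fake_logits = disc(gen(sample_light))
            loss_g = bce(fake_logits, torch.ones_like(fake_logits))
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()
    # The trained generation network is the shadow extraction model.
    return gen
```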
Step 203: generate, based on the result shadow image, illumination direction information corresponding to the target object illumination image.
In the present embodiment, based on the result shadow image obtained in step 202, the executing body may generate the illumination direction information corresponding to the target object illumination image. Here, the illumination direction information may be used to indicate an illumination direction, and may include but is not limited to at least one of the following: text, numbers, symbols, images. Specifically, as an example, the illumination direction information may be an arrow marked in the result shadow image, where the direction of the arrow is the illumination direction; alternatively, the illumination direction information may be a two-dimensional vector, where the direction corresponding to the vector is the illumination direction.
It should be noted that, in the present embodiment, the illumination direction indicated by the illumination direction information is the projection of the actual illumination direction in the three-dimensional coordinate system onto the projection plane on which the shadow lies. It will be appreciated that, in practice, this illumination direction (i.e. the projection of the actual illumination direction on the projection plane) is generally consistent with the extension direction of the shadow. The executing body may therefore determine the extension direction of the shadow from the pixels in the result shadow image and their corresponding distance information, and thereby determine the illumination direction. Specifically, as an example, the executing body may select from the result shadow image the pixel whose corresponding distance information characterizes the smallest distance as a first pixel and the pixel whose corresponding distance information characterizes the largest distance as a second pixel, and may then determine the direction from the first pixel to the second pixel as the illumination direction, as in the sketch below.
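A NumPy sketch of this nearest/farthest strategy, assuming the larger-pixel-value, larger-distance convention from step 202 and a direction expressed as a two-dimensional unit vector:

```python
import numpy as np

# Pick the shadow pixel with the smallest encoded distance as the first
# pixel and the one with the largest as the second; the illumination
# direction is the vector pointing from the first to the second.
def estimate_light_direction(result_shadow):
    ys, xs = np.nonzero(result_shadow)               # shadow pixels only
    dists = result_shadow[ys, xs].astype(np.float32)
    first, second = np.argmin(dists), np.argmax(dists)
    direction = np.array([xs[second] - xs[first],
                          ys[second] - ys[first]], dtype=np.float32)
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction  # 2-D unit vector
```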
Step 204: generate, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image.
In the present embodiment, based on the illumination direction information obtained in step 203, the executing body may generate the virtual object illumination image corresponding to the target virtual object image. Here, the virtual object illumination image includes the above target virtual object image and a virtual shadow image corresponding to the target virtual object image. The illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information. Here, "matches" means that the angular deviation of the illumination direction corresponding to the virtual shadow image from the illumination direction indicated by the illumination direction information is less than or equal to a preset angle.
Specifically, the executing body may generate the virtual object illumination image corresponding to the target virtual object image using various methods based on the illumination direction information.
As an example, a light source may be constructed in a rendering engine based on the illumination direction indicated by the illumination direction information, and the target virtual object image may then be rendered based on the constructed light source to obtain the virtual object illumination image. It should be noted that, since the illumination direction indicated by the illumination direction information is the projection of the actual illumination direction onto the projection plane on which the shadow lies, the actual illumination direction must first be determined from the illumination direction information when the light source is constructed, and the light source is then constructed based on the actual illumination direction. In practice, the actual illumination direction can be determined from the illumination direction of the shadow on the projection plane together with the illumination direction on a projection plane perpendicular to the plane on which the shadow lies; in the present embodiment, the illumination direction on the perpendicular projection plane may be predetermined, as in the sketch below.
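Under the assumption of a predetermined elevation angle, the in-plane projection can be lifted to an actual three-dimensional light direction before the light source is constructed, as in the sketch below; the axis conventions (z up, light travelling downward) are illustrative.

```python
import numpy as np

# Lift the 2-D projection of the illumination direction back to 3-D using
# a predetermined elevation angle, as the passage above assumes.
def actual_light_direction(in_plane_dir, elevation_deg=45.0):
    dx, dy = np.asarray(in_plane_dir, dtype=np.float32)
    n = np.hypot(dx, dy)
    dx, dy = dx / n, dy / n
    dz = -np.tan(np.radians(elevation_deg))  # downward component, z up
    v = np.array([dx, dy, dz], dtype=np.float32)
    return v / np.linalg.norm(v)             # unit light-travel direction
```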
As another example, the executing body may have pre-stored an initial virtual shadow image corresponding to the target virtual object image. The executing body may then adjust the initial virtual shadow image based on the illumination direction information to obtain the above virtual shadow image, and then combine the virtual shadow image with the target virtual object image to generate the virtual object illumination image.
It should be noted that, since the light source corresponding to the target object illumination image is parallel light or sunlight, it can be considered here that the illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the above illumination direction information, without considering the influence of the position at which the virtual object illumination image is added in the target object illumination image on the illumination direction corresponding to the virtual shadow image.
Step 205: fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image.
In the present embodiment, based on the virtual object illumination image obtained in step 204, the executing body may fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image. Here, the result image is the target object illumination image to which the virtual object illumination image has been added.
Here, the position at which the virtual object illumination image is added in the target object illumination image may be predetermined (for example, it may be the center of the image), or determined after recognizing the target object illumination image (for example, after recognizing the object image and the shadow image in the target object illumination image, a region of the target object illumination image that includes neither the object image nor the shadow image may be determined as the position for adding the virtual object illumination image), as illustrated below.
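A simple alpha-compositing sketch of this fusion step, with the predetermined position defaulting to the image center; treating the virtual object illumination image as RGBA is an assumption made for illustration.

```python
import numpy as np

# Paste the virtual object illumination image (RGBA) into the target
# object illumination image (RGB) at top_left, using the alpha channel
# as the blending mask.
def fuse(target, virtual_rgba, top_left=None):
    h, w = virtual_rgba.shape[:2]
    if top_left is None:                      # default: center of image
        top_left = ((target.shape[0] - h) // 2, (target.shape[1] - w) // 2)
    r, c = top_left
    region = target[r:r + h, c:c + w].astype(np.float32)
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * virtual_rgba[..., :3] + (1.0 - alpha) * region
    result = target.copy()
    result[r:r + h, c:c + w] = blended.astype(target.dtype)
    return result
```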
In some optional implementations of the present embodiment, after obtaining the result image, the executing body may display the obtained result image.
In some optional implementations of the present embodiment, the executing body may also send the obtained result image to a user terminal in communication connection with it, and control the user terminal to display the result image. Here, the user terminal is a terminal used by a user and in communication connection with the executing body. Specifically, the executing body may send a control signal to the user terminal and thereby control the user terminal to display the result image.
Here, since the virtual object image in the result image corresponds to a virtual shadow image, and the illumination direction of the corresponding virtual shadow image matches the illumination direction of the shadow image corresponding to the real object image, this implementation can control the user terminal to display a more realistic result image, thereby improving the display effect of the image.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for processing an image according to the present embodiment. In the application scenario of Fig. 3, the server 301 first acquires an illumination image 302 of a cat (the target object illumination image) and a football image 303 (the target virtual object image), wherein the illumination image 302 of the cat includes an image of the cat (the object image) and a shadow image of the cat (the shadow image). Then the server 301 may input the illumination image 302 of the cat into a pre-trained shadow extraction model 304 to obtain a shadow image 305 of the cat including distance information (the result shadow image), wherein the distance information characterizes, in the illumination image 302 of the cat, the distance between a pixel of the shadow image of the cat and the corresponding pixel of the image of the cat. Next, the server 301 may generate, based on the shadow image 305 of the cat including the distance information, illumination direction information 306 corresponding to the illumination image 302 of the cat. Then the server 301 may generate, based on the illumination direction information 306, a football illumination image 307 corresponding to the football image 303 (the virtual object illumination image), wherein the illumination direction corresponding to the football shadow image in the football illumination image 307 (the virtual shadow image) matches the illumination direction indicated by the illumination direction information 306. Finally, the server 301 may fuse the football illumination image 307 with the illumination image 302 of the cat, adding the football illumination image 307 to the illumination image 302 of the cat to obtain a result image 308.
At present, when an object in an illuminated scene is photographed, the shadow of the object in the scene is usually captured as well, while the virtual object image to be added to the real-scene image usually does not include a shadow image; adding such a virtual object image to the real-scene image therefore reduces the realism of the image and affects its display effect. The method provided by the above embodiment of the present disclosure can generate the virtual object illumination image corresponding to the virtual object image, thereby adding a corresponding virtual shadow image to the virtual object image, so that after the virtual object illumination image and the target object illumination image are fused, the realism of the generated result image can be improved. In addition, the present disclosure can determine the illumination direction of the virtual shadow image corresponding to the virtual object image based on the illumination direction of the shadow image in the target object illumination image, so that the virtual object image is better fused into the target object illumination image, further improving the realism of the result image and helping to improve its display effect.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for processing an image is shown. The flow 400 of the method for processing an image includes the following steps:
Step 401: acquire a target object illumination image and a target virtual object image.
In the present embodiment, the executing body of the method for processing an image (for example the server 105 shown in Fig. 1) may acquire the target object illumination image and the target virtual object image remotely or locally through a wired or wireless connection. Here, the target object illumination image is the image to be processed and includes an object image and a shadow image corresponding to the object image. The target virtual object image is an image used to process the target object illumination image and may be an image predetermined according to the shape of a virtual object.
Step 402: input the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information.
In the present embodiment, based on the target object illumination image obtained in step 401, the executing body may input the target object illumination image into the pre-trained shadow extraction model to obtain a result shadow image including distance information. Here, the result shadow image may be a shadow image which is extracted from the target object illumination image and to which distance information has been added. The distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image. The shadow extraction model may be used to characterize the correspondence between an object illumination image and a result shadow image.
Step 403: input the result shadow image into a pre-trained illumination direction identification model to obtain illumination direction information.
In the present embodiment, based on the result shadow image obtained in step 402, the executing body may input the result shadow image into the pre-trained illumination direction identification model to obtain the illumination direction information. Here, the illumination direction information may be used to indicate an illumination direction and may include but is not limited to at least one of the following: text, numbers, symbols, images.
In the present embodiment, the illumination direction identification model may be used to characterize the correspondence between a result shadow image and illumination direction information. Specifically, as an example, the illumination direction identification model may be a correspondence table pre-established by a technician based on statistics of a large number of result shadow images and the illumination direction information corresponding to them, storing multiple result shadow images and their corresponding illumination direction information; or it may be a model obtained by training an initial model (for example a neural network) with a machine learning method based on preset training samples, as in the sketch below.
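As one illustration of the neural-network variant, the sketch below defines a small convolutional regressor that maps a single-channel result shadow image to a two-dimensional direction vector; the architecture is an assumption, not something specified by the patent.

```python
import torch.nn as nn

# A plausible form of the illumination direction identification model: a
# small CNN regressing a (dx, dy) illumination direction from a 1-channel
# result shadow image. Layer sizes are illustrative.
class DirectionIdentifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # (dx, dy) of the illumination direction

    def forward(self, shadow_image):
        x = self.features(shadow_image)
        return self.head(x.flatten(1))
```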
Step 404: generate, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image.
In the present embodiment, based on the illumination direction information obtained in step 403, the executing body may generate the virtual object illumination image corresponding to the target virtual object image. Here, the virtual object illumination image includes the above target virtual object image and a virtual shadow image corresponding to the target virtual object image. The illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information. Here, "matches" means that the angular deviation of the illumination direction corresponding to the virtual shadow image from the illumination direction indicated by the illumination direction information is less than or equal to a preset angle.
Step 405: fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image.
In the present embodiment, based on the virtual object illumination image obtained in step 404, the executing body may fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image. Here, the result image is the target object illumination image to which the virtual object illumination image has been added.
The above steps 401, 402, 404 and 405 may be carried out in a manner similar to steps 201, 202, 204 and 205 in the previous embodiment, respectively; the above descriptions of steps 201, 202, 204 and 205 also apply to steps 401, 402, 404 and 405 and are not repeated here.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for processing an image in the present embodiment highlights the step of generating the illumination direction information using the illumination direction identification model. The solution described in the present embodiment can therefore determine the illumination direction corresponding to the target object illumination image more conveniently, generate the result image more quickly, and improve the efficiency of image processing.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides one embodiment of an apparatus for processing an image. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for processing an image of the present embodiment includes: an image acquisition unit 501, an image input unit 502, an information generation unit 503, an image generation unit 504 and an image fusion unit 505. The image acquisition unit 501 is configured to acquire a target object illumination image and a target virtual object image, wherein the target object illumination image includes an object image and a shadow image corresponding to the object image; the image input unit 502 is configured to input the target object illumination image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, wherein the distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image; the information generation unit 503 is configured to generate, based on the result shadow image, illumination direction information corresponding to the target object illumination image; the image generation unit 504 is configured to generate, based on the illumination direction information, a virtual object illumination image corresponding to the target virtual object image, wherein the illumination direction corresponding to a virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information; and the image fusion unit 505 is configured to fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image.
In the present embodiment, the image acquisition unit 501 of the apparatus 500 for processing an image may acquire the target object illumination image and the target virtual object image remotely or locally through a wired or wireless connection. Here, the target object illumination image is the image to be processed and includes an object image and a shadow image corresponding to the object image. The target virtual object image is an image used to process the target object illumination image and may be an image predetermined according to the shape of a virtual object.
In the present embodiment, based on the target object illumination image obtained by the image acquisition unit 501, the image input unit 502 may input the target object illumination image into the pre-trained shadow extraction model to obtain a result shadow image including distance information. Here, the result shadow image may be a shadow image which is extracted from the target object illumination image and to which distance information has been added. The distance information characterizes, in the target object illumination image, the distance between a pixel of the shadow image and the corresponding pixel of the object image. The distance information may appear in the result shadow image in various forms. The shadow extraction model may be used to characterize the correspondence between an object illumination image and a result shadow image.
In the present embodiment, based on the result shadow image obtained by the image input unit 502, the information generation unit 503 may generate the illumination direction information corresponding to the target object illumination image. Here, the illumination direction information may be used to indicate an illumination direction and may include but is not limited to at least one of the following: text, numbers, symbols, images.
In the present embodiment, based on the illumination direction information obtained by the information generation unit 503, the image generation unit 504 generates the virtual object illumination image corresponding to the target virtual object image. Here, the virtual object illumination image includes the above target virtual object image and a virtual shadow image corresponding to the target virtual object image. The illumination direction corresponding to the virtual shadow image in the virtual object illumination image matches the illumination direction indicated by the illumination direction information. Here, "matches" means that the angular deviation of the illumination direction corresponding to the virtual shadow image from the illumination direction indicated by the illumination direction information is less than or equal to a preset angle.
In the present embodiment, based on the virtual object illumination image obtained by the image generation unit 504, the image fusion unit 505 may fuse the virtual object illumination image with the target object illumination image, adding the virtual object illumination image to the target object illumination image to obtain a result image. Here, the result image is the target object illumination image to which the virtual object illumination image has been added.
In some optional implementations of the present embodiment, the information generation unit 503 may be further configured to: input the result shadow image into a pre-trained illumination direction identification model to obtain the illumination direction information.
In some optional implementations of the present embodiment, the distance information is the pixel values of pixels in the result shadow image.
In some optional implementations of the present embodiment, the shadow extraction model may be obtained through the following training steps: acquiring a preset training sample set, wherein a training sample includes a sample object illumination image and a sample result shadow image predetermined for the sample object illumination image; acquiring a pre-established generative adversarial network, wherein the generative adversarial network includes a generation network and a discrimination network, the generation network being used to recognize an input object illumination image and output a result shadow image, and the discrimination network being used to determine whether an input image was output by the generation network; and, based on a machine learning method, taking the sample object illumination image included in a training sample of the training sample set as the input of the generation network, taking the result shadow image output by the generation network and the sample result shadow image corresponding to the input sample object illumination image as the input of the discrimination network, training the generation network and the discrimination network, and determining the trained generation network as the shadow extraction model.
In some optional implementations of the present embodiment, the apparatus 500 may further include: an image display unit (not shown in the figure) configured to display the obtained result image.
In some optional implementations of the present embodiment, the apparatus 500 may further include: an image sending unit (not shown in the figure) configured to send the obtained result image to a user terminal in communication connection, and to control the user terminal to display the result image.
It can be understood that the units recorded in the apparatus 500 correspond to the steps of the method described with reference to Fig. 2. Accordingly, the operations, features and beneficial effects described above for the method also apply to the apparatus 500 and the units contained therein, and are not repeated here.
The apparatus 500 provided by the above embodiment of the present disclosure can generate the virtual object illumination image corresponding to the virtual object image, thereby adding a corresponding virtual shadow image to the virtual object image, so that after the virtual object illumination image and the target object illumination image are fused, the realism of the generated result image can be improved. In addition, the present disclosure can determine the illumination direction of the virtual shadow image corresponding to the virtual object image based on the illumination direction of the shadow image in the target object illumination image, so that the virtual object image is better fused into the target object illumination image, further improving the realism of the result image and helping to improve its display effect.
Referring now to Fig. 6, a structural schematic diagram of an electronic device 600 (for example the terminal device or server of Fig. 1) suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include but are not limited to mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (for example vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 6 is merely an example and should not impose any limitation on the functions and scope of use of embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing device (for example a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the electronic device 600 are also stored in the RAM 603. The processing device 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope and the like; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator and the like; storage devices 608 including, for example, a magnetic tape, a hard disk and the like; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above functions defined in the method of the embodiment of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency) and the like, or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or it may exist separately without being assembled into the electronic device. The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a target object light image and a target virtual object image, wherein the target object light image includes an object image and a shadow image corresponding to the object image; input the target object light image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, wherein the distance information characterizes, in the target object light image, the distance between a pixel of the shadow image and the corresponding pixel of the object image; generate, based on the result shadow image, illumination direction information corresponding to the target object light image; generate, based on the illumination direction information, a virtual object light image corresponding to the target virtual object image, wherein the illumination direction corresponding to the virtual shadow image in the virtual object light image matches the illumination direction indicated by the illumination direction information; and fuse the virtual object light image and the target object light image so as to add the virtual object light image into the target object light image, obtaining a result image.
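By way of illustration only, the steps carried by the above program may be sketched as follows in PyTorch. This is a minimal, non-authoritative sketch: the disclosure does not specify model architectures, tensor layouts, or a rendering or fusion API, so shadow_model, direction_model, renderer, and alpha_blend below are hypothetical stand-ins for the pre-trained shadow extraction model, the pre-trained illumination direction identification model, the virtual-object rendering step, and the fusion step, respectively.

import torch
import torch.nn as nn

def alpha_blend(background: torch.Tensor,
                foreground: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
    # Fuse: where the mask is 1, keep the virtual object light image
    # (the virtual object and its virtual shadow); elsewhere keep the target.
    return mask * foreground + (1.0 - mask) * background

def process_image(target_light_img: torch.Tensor,   # (C, H, W) target object light image
                  virtual_obj_img: torch.Tensor,    # (C, H, W) target virtual object image
                  shadow_model: nn.Module,          # hypothetical pre-trained shadow extraction model
                  direction_model: nn.Module,       # hypothetical pre-trained direction identification model
                  renderer) -> torch.Tensor:
    with torch.no_grad():
        # Step 1: result shadow image; its pixel values encode the distance
        # between each shadow pixel and the corresponding object pixel.
        result_shadow = shadow_model(target_light_img.unsqueeze(0))
        # Step 2: illumination direction information from the result shadow image.
        light_direction = direction_model(result_shadow)
    # Step 3: virtual object light image whose virtual shadow matches the
    # estimated illumination direction; renderer is a hypothetical callable
    # returning the rendered image and its alpha mask, both (C, H, W).
    virtual_light_img, virtual_mask = renderer(virtual_obj_img, light_direction)
    # Step 4: add the virtual object light image into the target object light
    # image to obtain the result image.
    return alpha_blend(target_light_img, virtual_light_img, virtual_mask)

The result image could then be displayed locally or sent to a user terminal for display, matching the optional steps described elsewhere in the disclosure.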
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user computer, partly on a user computer, as a stand-alone software package, partly on a user computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The names of the units do not, under certain circumstances, limit the units themselves; for example, the image acquisition unit may also be described as "a unit for acquiring a target object light image and a virtual object image".
The above description is merely a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the disclosed concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (14)

1. A method for processing an image, comprising:
acquiring a target object light image and a target virtual object image, wherein the target object light image includes an object image and a shadow image corresponding to the object image;
inputting the target object light image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, wherein the distance information characterizes, in the target object light image, the distance between a pixel of the shadow image and the corresponding pixel of the object image;
generating, based on the result shadow image, illumination direction information corresponding to the target object light image;
generating, based on the illumination direction information, a virtual object light image corresponding to the target virtual object image, wherein the illumination direction corresponding to a virtual shadow image in the virtual object light image matches the illumination direction indicated by the illumination direction information;
fusing the virtual object light image and the target object light image to add the virtual object light image into the target object light image, obtaining a result image.
2. The method according to claim 1, wherein the generating, based on the result shadow image, illumination direction information corresponding to the target object light image comprises:
inputting the result shadow image into a pre-trained illumination direction identification model to obtain the illumination direction information.
3. The method according to claim 1, wherein the distance information is the pixel values of the pixels in the result shadow image.
4. The method according to claim 1, wherein the shadow extraction model is trained through the following steps (a minimal training sketch, for illustration only, follows the claims):
acquiring a preset training sample set, wherein a training sample includes a sample object light image and a sample result shadow image predetermined for the sample object light image;
acquiring a pre-established generative adversarial network, wherein the generative adversarial network includes a generative network and a discriminative network, the generative network is used to identify and output a result shadow image for an input object light image, and the discriminative network is used to determine whether an input image is an image output by the generative network;
training, based on a machine learning method, the generative network and the discriminative network by taking the sample object light image included in a training sample of the training sample set as the input of the generative network, and taking the result shadow image output by the generative network, together with the sample result shadow image corresponding to the input sample object light image, as the input of the discriminative network; and determining the trained generative network as the shadow extraction model.
5. The method according to any one of claims 1-4, wherein the method further comprises:
displaying the obtained result image.
6. The method according to any one of claims 1-4, wherein the method further comprises:
sending the obtained result image to a communicatively connected user terminal, and controlling the user terminal to display the result image.
7. An apparatus for processing an image, comprising:
an image acquisition unit, configured to acquire a target object light image and a target virtual object image, wherein the target object light image includes an object image and a shadow image corresponding to the object image;
an image input unit, configured to input the target object light image into a pre-trained shadow extraction model to obtain a result shadow image including distance information, wherein the distance information characterizes, in the target object light image, the distance between a pixel of the shadow image and the corresponding pixel of the object image;
an information generation unit, configured to generate, based on the result shadow image, illumination direction information corresponding to the target object light image;
an image generation unit, configured to generate, based on the illumination direction information, a virtual object light image corresponding to the target virtual object image, wherein the illumination direction corresponding to a virtual shadow image in the virtual object light image matches the illumination direction indicated by the illumination direction information;
an image fusion unit, configured to fuse the virtual object light image and the target object light image to add the virtual object light image into the target object light image, obtaining a result image.
8. The apparatus according to claim 7, wherein the information generation unit is further configured to:
input the result shadow image into a pre-trained illumination direction identification model to obtain the illumination direction information.
9. The apparatus according to claim 7, wherein the distance information is the pixel values of the pixels in the result shadow image.
10. The apparatus according to claim 7, wherein the shadow extraction model is trained through the following steps:
acquiring a preset training sample set, wherein a training sample includes a sample object light image and a sample result shadow image predetermined for the sample object light image;
acquiring a pre-established generative adversarial network, wherein the generative adversarial network includes a generative network and a discriminative network, the generative network is used to identify and output a result shadow image for an input object light image, and the discriminative network is used to determine whether an input image is an image output by the generative network;
training, based on a machine learning method, the generative network and the discriminative network by taking the sample object light image included in a training sample of the training sample set as the input of the generative network, and taking the result shadow image output by the generative network, together with the sample result shadow image corresponding to the input sample object light image, as the input of the discriminative network; and determining the trained generative network as the shadow extraction model.
11. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
an image display unit, configured to display the obtained result image.
12. The apparatus according to any one of claims 7-10, wherein the apparatus further comprises:
an image sending unit, configured to send the obtained result image to a communicatively connected user terminal, and to control the user terminal to display the result image.
13. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
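By way of illustration only, the adversarial training procedure recited in claims 4 and 10 may be sketched as follows in PyTorch. This is a minimal, non-authoritative sketch under common GAN conventions: the network architectures, loss, optimizer settings, and the sample_pairs loader are assumptions, and the discriminative network is taken to output a probability in [0, 1]; the claims themselves fix none of these details.

import torch
import torch.nn as nn

def train_shadow_extraction_model(generator: nn.Module,      # generative network
                                  discriminator: nn.Module,  # discriminative network
                                  sample_pairs,  # iterable of (sample object light image, sample result shadow image) batches
                                  epochs: int = 10,
                                  lr: float = 2e-4) -> nn.Module:
    bce = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=lr)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=lr)

    for _ in range(epochs):
        for light_img, real_shadow in sample_pairs:
            # The generative network maps a sample object light image
            # to a result shadow image.
            fake_shadow = generator(light_img)

            # Discriminative network step: predetermined sample result shadow
            # images are labeled real (1); generator outputs are labeled fake (0).
            d_real = discriminator(real_shadow)
            d_fake = discriminator(fake_shadow.detach())
            d_loss = (bce(d_real, torch.ones_like(d_real)) +
                      bce(d_fake, torch.zeros_like(d_fake)))
            d_opt.zero_grad()
            d_loss.backward()
            d_opt.step()

            # Generative network step: try to make the discriminator
            # classify generated shadow images as real.
            d_fake = discriminator(fake_shadow)
            g_loss = bce(d_fake, torch.ones_like(d_fake))
            g_opt.zero_grad()
            g_loss.backward()
            g_opt.step()

    # Per the claims, the trained generative network serves as the
    # shadow extraction model.
    return generator

A conditional variant, in which the discriminator also sees the input object light image alongside each shadow image, would follow the claims' pairing of generator output with the corresponding sample result shadow image more closely; the unconditional form above is kept for brevity.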
CN201910302471.0A 2019-04-16 2019-04-16 Method and apparatus for processing image Active CN110033423B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910302471.0A CN110033423B (en) 2019-04-16 2019-04-16 Method and apparatus for processing image
PCT/CN2020/078582 WO2020211573A1 (en) 2019-04-16 2020-03-10 Method and device for processing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910302471.0A CN110033423B (en) 2019-04-16 2019-04-16 Method and apparatus for processing image

Publications (2)

Publication Number Publication Date
CN110033423A (en) 2019-07-19
CN110033423B (en) 2020-08-28

Family

ID=67238554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910302471.0A Active CN110033423B (en) 2019-04-16 2019-04-16 Method and apparatus for processing image

Country Status (2)

Country Link
CN (1) CN110033423B (en)
WO (1) WO2020211573A1 (en)



Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110234631A1 (en) * 2010-03-25 2011-09-29 Bizmodeline Co., Ltd. Augmented reality systems
CN104913784B (en) * 2015-06-19 2017-10-10 北京理工大学 A kind of autonomous extracting method of planetary surface navigation characteristic
KR20170034727A (en) * 2015-09-21 2017-03-29 삼성전자주식회사 Shadow information storing method and apparatus, 3d rendering method and apparatus
CN107808409B (en) * 2016-09-07 2022-04-12 中兴通讯股份有限公司 Method and device for performing illumination rendering in augmented reality and mobile terminal
US10846934B2 (en) * 2017-10-03 2020-11-24 ExtendView Inc. Camera-based object tracking and monitoring
CN110033423B (en) * 2019-04-16 2020-08-28 北京字节跳动网络技术有限公司 Method and apparatus for processing image

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246600A (en) * 2008-03-03 2008-08-20 北京航空航天大学 Method for real-time generating reinforced reality surroundings by spherical surface panoramic camera
US20090318800A1 (en) * 2008-06-18 2009-12-24 Lutz Gundel Method and visualization module for visualizing bumps of the inner surface of a hollow organ, image processing device and tomographic system
CN101520904A (en) * 2009-03-24 2009-09-02 上海水晶石信息技术有限公司 Reality augmenting method with real environment estimation and reality augmenting system
CN102426695A (en) * 2011-09-30 2012-04-25 北京航空航天大学 Virtual-real illumination fusion method of single image scene
CN104766270A (en) * 2015-03-20 2015-07-08 北京理工大学 Virtual and real lighting fusion method based on fish-eye lens
CN108986199A (en) * 2018-06-14 2018-12-11 北京小米移动软件有限公司 Dummy model processing method, device, electronic equipment and storage medium
CN109214351A (en) * 2018-09-20 2019-01-15 太平洋未来科技(深圳)有限公司 A kind of AR imaging method, device and electronic equipment
CN109523617A (en) * 2018-10-15 2019-03-26 中山大学 A kind of illumination estimation method based on monocular-camera

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020211573A1 (en) * 2019-04-16 2020-10-22 北京字节跳动网络技术有限公司 Method and device for processing image
CN111144491A (en) * 2019-12-26 2020-05-12 南京旷云科技有限公司 Image processing method, device and electronic system
CN111292408A (en) * 2020-01-21 2020-06-16 武汉大学 Shadow generation method based on attention mechanism
CN111667420A (en) * 2020-05-21 2020-09-15 维沃移动通信有限公司 Image processing method and device
WO2021233215A1 (en) * 2020-05-21 2021-11-25 维沃移动通信有限公司 Image processing method and apparatus
CN111667420B (en) * 2020-05-21 2023-10-24 维沃移动通信有限公司 Image processing method and device
CN111915642A (en) * 2020-09-14 2020-11-10 北京百度网讯科技有限公司 Image sample generation method, device, equipment and readable storage medium
CN112686988A (en) * 2020-12-31 2021-04-20 北京北信源软件股份有限公司 Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2020211573A1 (en) 2020-10-22
CN110033423B (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CN110033423A (en) Method and apparatus for handling image
CN109816589A (en) Method and apparatus for generating cartoon style transformation model
CN109858445A (en) Method and apparatus for generating model
CN108898185A (en) Method and apparatus for generating image recognition model
CN109191514A (en) Method and apparatus for generating depth detection model
CN108446387A (en) Method and apparatus for updating face registration library
CN108133201B (en) Face character recognition methods and device
CN109410253B (en) For generating method, apparatus, electronic equipment and the computer-readable medium of information
CN109255830A (en) Three-dimensional facial reconstruction method and device
CN108985257A (en) Method and apparatus for generating information
CN108363995A (en) Method and apparatus for generating data
CN110188719A (en) Method for tracking target and device
CN108629823A (en) The generation method and device of multi-view image
CN108154547A (en) Image generating method and device
CN109359170A (en) Method and apparatus for generating information
CN109815365A (en) Method and apparatus for handling video
CN108280413A (en) Face identification method and device
CN109800730A (en) The method and apparatus for generating model for generating head portrait
CN109215121A (en) Method and apparatus for generating information
CN109754464B (en) Method and apparatus for generating information
CN110516678A (en) Image processing method and device
CN108171206A (en) information generating method and device
CN108460366A (en) Identity identifying method and device
CN108510466A (en) Method and apparatus for verifying face
CN110032978A (en) Method and apparatus for handling video

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
CP01: Change in the name or title of a patent holder
Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Tiktok vision (Beijing) Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.
CP01: Change in the name or title of a patent holder
Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee after: Douyin Vision Co.,Ltd.
Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.
Patentee before: Tiktok vision (Beijing) Co.,Ltd.