CN108491809A - Method and apparatus for generating a near-infrared image generation model - Google Patents
Method and apparatus for generating a near-infrared image generation model
- Publication number
- CN108491809A (application number CN201810263904.1A)
- Authority
- CN
- China
- Prior art keywords
- network
- training sample
- differentiation
- infrared image
- generation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
Embodiments of the present application disclose a method and apparatus for generating a near-infrared image generation model. One specific implementation of the method includes: obtaining a training sample set, where a training sample includes a visible-light image or a near-infrared image; obtaining a pre-established first generative adversarial network and second generative adversarial network; using a machine learning method, training a first generation network, a first discrimination network, a second generation network and a second discrimination network based on the training sample set; and determining the trained first generation network as the near-infrared image generation model. This embodiment realizes the generation of a near-infrared image generation model.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating a near-infrared image generation model.
Background
Near-infrared face recognition is a solution proposed to address the illumination problem in face recognition. Near-infrared face recognition consists of two parts: an active near-infrared face imaging device and a corresponding illumination-invariant face recognition algorithm. The specific approach is: use a frontal near-infrared light source whose intensity exceeds the ambient light for imaging, combined with an optical filter for the corresponding waveband, to obtain face images that are independent of ambient light. The resulting face image varies only monotonically with the distance between the person and the camera. Near-infrared face recognition, however, faces the problem that near-infrared images are scarce.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating a near-infrared image generation model.
In a first aspect, an embodiment of the present application provides a method for generating a near-infrared image generation model, the method including: obtaining a training sample set, where a training sample includes a visible-light image or a near-infrared image; obtaining a pre-established first generative adversarial network and second generative adversarial network, where the first generative adversarial network includes a first generation network and a first discrimination network, the second generative adversarial network includes a second generation network and a second discrimination network, the first generation network is used to characterize the correspondence between visible-light images and near-infrared images, the first discrimination network is used to determine whether an input image is a generated near-infrared image or a real near-infrared image, the second generation network is used to characterize the correspondence between near-infrared images and visible-light images, and the second discrimination network is used to determine whether an input image is a generated visible-light image or a real visible-light image; and using a machine learning method, training the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model.
In some embodiments, using a machine learning method to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model includes: for a training sample in the training sample set that includes a visible-light image, fixing the parameters of the first generation network, taking the training sample as the input of the first generation network, taking the image output by the first generation network as the input of the first discrimination network, and obtaining a discrimination result corresponding to the training sample; and training the first discrimination network using a machine learning method based on the difference between the obtained discrimination result and a first negative-sample label, where the first negative-sample label characterizes that the input image of the first discrimination network is a generated near-infrared image; and/or for a training sample in the training sample set that includes a near-infrared image, taking the training sample as the input of the first discrimination network and obtaining a discrimination result corresponding to the training sample; and training the first discrimination network using a machine learning method based on the difference between the obtained discrimination result and a first positive-sample label, where the first positive-sample label characterizes that the input image of the first discrimination network is a real near-infrared image.
In some embodiments, using a machine learning method to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model further includes: for a training sample in the training sample set that includes a near-infrared image, fixing the parameters of the second generation network, taking the training sample as the input of the second generation network, taking the image output by the second generation network as the input of the second discrimination network, and obtaining a discrimination result corresponding to the training sample; and training the second discrimination network using a machine learning method based on the difference between the obtained discrimination result and a second negative-sample label, where the second negative-sample label characterizes that the input image of the second discrimination network is a generated visible-light image; and/or for a training sample in the training sample set that includes a visible-light image, taking the training sample as the input of the second discrimination network and obtaining a discrimination result corresponding to the training sample; and training the second discrimination network using a machine learning method based on the difference between the obtained discrimination result and a second positive-sample label, where the second positive-sample label characterizes that the input image of the second discrimination network is a real visible-light image.
In some embodiments, using a machine learning method to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model further includes: for a training sample in the training sample set that includes a visible-light image, inputting the training sample into the first generation network and taking the image output by the first generation network as the input of the second generation network; and training the first generation network and the second generation network using a machine learning method based on the difference between the image output by the second generation network and the training sample.
In some embodiments, using a machine learning method to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model further includes: for a training sample in the training sample set that includes a near-infrared image, inputting the training sample into the second generation network and taking the image output by the second generation network as the input of the first generation network; and training the first generation network and the second generation network using a machine learning method based on the difference between the image output by the first generation network and the training sample.
In some embodiments, the method further includes: obtaining a to-be-processed visible-light image; and inputting the to-be-processed visible-light image into the near-infrared image generation model to obtain the corresponding near-infrared image.
In a second aspect, an embodiment of the present application provides an apparatus for generating a near-infrared image generation model, including: a sample set obtaining unit, configured to obtain a training sample set, where a training sample includes a visible-light image or a near-infrared image; a generative adversarial network obtaining unit, configured to obtain a pre-established first generative adversarial network and second generative adversarial network, where the first generative adversarial network includes a first generation network and a first discrimination network, the second generative adversarial network includes a second generation network and a second discrimination network, the first generation network is used to characterize the correspondence between visible-light images and near-infrared images, the first discrimination network is used to determine whether an input image is a generated near-infrared image or a real near-infrared image, the second generation network is used to characterize the correspondence between near-infrared images and visible-light images, and the second discrimination network is used to determine whether an input image is a generated visible-light image or a real visible-light image; and a training unit, configured to, using a machine learning method, train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set, and determine the trained first generation network as the near-infrared image generation model.
In some embodiments, using a machine learning method to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model includes: for a training sample in the training sample set that includes a visible-light image, fixing the parameters of the first generation network, taking the training sample as the input of the first generation network, taking the image output by the first generation network as the input of the first discrimination network, and obtaining a discrimination result corresponding to the training sample; and training the first discrimination network using a machine learning method based on the difference between the obtained discrimination result and a first negative-sample label, where the first negative-sample label characterizes that the input image of the first discrimination network is a generated near-infrared image; and/or for a training sample in the training sample set that includes a near-infrared image, taking the training sample as the input of the first discrimination network and obtaining a discrimination result corresponding to the training sample; and training the first discrimination network using a machine learning method based on the difference between the obtained discrimination result and a first positive-sample label, where the first positive-sample label characterizes that the input image of the first discrimination network is a real near-infrared image.
In some embodiments, using a machine learning method to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model further includes: for a training sample in the training sample set that includes a near-infrared image, fixing the parameters of the second generation network, taking the training sample as the input of the second generation network, taking the image output by the second generation network as the input of the second discrimination network, and obtaining a discrimination result corresponding to the training sample; and training the second discrimination network using a machine learning method based on the difference between the obtained discrimination result and a second negative-sample label, where the second negative-sample label characterizes that the input image of the second discrimination network is a generated visible-light image; and/or for a training sample in the training sample set that includes a visible-light image, taking the training sample as the input of the second discrimination network and obtaining a discrimination result corresponding to the training sample; and training the second discrimination network using a machine learning method based on the difference between the obtained discrimination result and a second positive-sample label, where the second positive-sample label characterizes that the input image of the second discrimination network is a real visible-light image.
In some embodiments, training the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model further includes: for a training sample in the training sample set that includes a visible-light image, inputting the training sample into the first generation network and taking the image output by the first generation network as the input of the second generation network; and training the first generation network and the second generation network using a machine learning method based on the difference between the image output by the second generation network and the training sample.
In some embodiments, using a machine learning method to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set and determining the trained first generation network as the near-infrared image generation model further includes: for a training sample in the training sample set that includes a near-infrared image, inputting the training sample into the second generation network and taking the image output by the second generation network as the input of the first generation network; and training the first generation network and the second generation network using a machine learning method based on the difference between the image output by the first generation network and the training sample.
In some embodiments, the apparatus further includes: an image obtaining unit, configured to obtain a to-be-processed visible-light image; and an image generation unit, configured to input the to-be-processed visible-light image into the near-infrared image generation model to obtain the corresponding near-infrared image.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus for storing one or more programs, where when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for generating a near-infrared image generation model provided by the embodiments of the present application first obtain a training sample set; then obtain a pre-established first generative adversarial network and second generative adversarial network; and finally train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set, determining the trained first generation network as the near-infrared image generation model. This process realizes the generation of a near-infrared image generation model.
Description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating a near-infrared image generation model according to the present application;
Fig. 3 is a diagram of an application scenario of one embodiment of the method for generating a near-infrared image generation model according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating a near-infrared image generation model according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating a near-infrared image generation model according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the related invention, rather than to limit the invention. It should also be noted that, for convenience of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that in the absence of conflict, the features in the embodiments and the embodiments of the present application can phase
Mutually combination.The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method or apparatus for generating a near-infrared image generation model of the embodiments of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 provides a medium of communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
The terminal devices 101, 102, 103 interact with the server 105 through the network 104, for example to send captured images to the server. Various photographing applications, image processing applications and the like may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be devices that have a camera and can capture visible-light or near-infrared images, including but not limited to: near-infrared imaging devices, cameras, mobile phones with a camera function, tablet computers with a camera function, portable computers with a camera function, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, to provide a photographing service), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server that provides various services, for example a server that generates a near-infrared image generation model using the images sent by the terminal devices 101, 102, 103.
It should be noted that the method for generating a near-infrared image generation model provided by the embodiments of the present application may be executed by the server 105, and may also be executed by a terminal device. Correspondingly, the apparatus for generating a near-infrared image generation model may be set in the server 105, and may also be set in a terminal device.
It should be noted that a near-infrared image generation model may also be generated in the terminal devices 101, 102, 103. In this case, the method for generating a near-infrared image generation model may also be executed by the terminal devices 101, 102, 103, and correspondingly the apparatus for generating a near-infrared image generation model may also be set in the terminal devices 101, 102, 103. In this case, the server 105 and the network 104 may be absent from the exemplary system architecture 100.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating a near-infrared image generation model according to the present application is shown. The method for generating a near-infrared image generation model includes the following steps:
Step 201: obtain a training sample set.
In the present embodiment, the execution body of the method for generating a near-infrared image generation model (for example, the server 105 shown in Fig. 1) may receive a training sample set from a terminal through a wired or wireless connection. Here, a training sample may include a visible-light image or a near-infrared image. In addition, the training sample set may be stored locally on the execution body, in which case the execution body may also obtain the training sample set locally. As an example, the training sample set may be an open-source image set, or an image set composed of captured images or images downloaded from the Internet.
Here, visible light is electromagnetic radiation perceivable by the human eye. The wavelength of visible light has no precise range; generally, the wavelength of electromagnetic waves perceivable by the human eye is in the range of 400 nm to 760 nm. An image formed by visible light emitted or reflected by a target object is a visible-light image, and a visible-light image can generally be perceived by the human eye. As an example, a visible-light image may be an RGB image, in which images of all kinds of colors are obtained by varying the red (R), green (G) and blue (B) color channels and superimposing them on each other; RGB represents the colors of the red, green and blue channels. In addition, a near-infrared image may be an image obtained using an active near-infrared imaging device. The specific method is: use a frontal near-infrared light source whose intensity exceeds the ambient light for imaging, combined with an optical filter for the corresponding waveband, to obtain near-infrared images that are independent of ambient light.
Step 202: obtain a pre-established first generative adversarial network and second generative adversarial network.
In the present embodiment, the execution body may obtain a pre-established first generative adversarial network and second generative adversarial network. The first generative adversarial network may include a first generation network and a first discrimination network, and the second generative adversarial network may include a second generation network and a second discrimination network. The first generation network may be used to characterize the correspondence between visible-light images and near-infrared images; the first discrimination network may be used to determine whether an input image is a generated near-infrared image or a real near-infrared image; the second generation network may be used to characterize the correspondence between near-infrared images and visible-light images; and the second discrimination network may be used to determine whether an input image is a generated visible-light image or a real visible-light image. The first generative adversarial network and the second generative adversarial network may be various types of generative adversarial networks (Generative Adversarial Nets, GAN), for example deep convolutional generative adversarial networks (Deep Convolutional Generative Adversarial Network, DCGAN). For example, the first generation network and the second generation network may be convolutional neural networks for image processing (for example, various convolutional neural network structures containing convolutional layers, pooling layers, unpooling layers and deconvolution layers, which can perform downsampling and then upsampling). The structures of the first generation network and the second generation network may be the same or different; no limitation is made here. For example, the first discrimination network and the second discrimination network may be convolutional neural networks (for example, various convolutional neural network structures containing fully connected layers, where the fully connected layers can realize a classification function). In addition, the first discrimination network and the second discrimination network may also be other model structures that can realize a classification function, for example a support vector machine (Support Vector Machine, SVM).
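As a concrete illustration of the downsample-then-upsample structure such a generation network may take, the following sketch (Python, for illustration only) traces the spatial side length of a feature map through a hypothetical encoder-decoder: three stride-2 convolutions followed by three stride-2 transposed convolutions. The kernel size 4, stride 2 and padding 1 are assumed values, not taken from the patent.

```python
def conv_out(size, kernel, stride, pad):
    # standard convolution output-size formula:
    # floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel, stride, pad):
    # transposed-convolution output-size formula:
    # (size - 1)*stride - 2*pad + kernel
    return (size - 1) * stride - 2 * pad + kernel

def generator_shape_trace(input_size=64, depth=3):
    """Trace the feature-map side length through an assumed
    encoder-decoder generator: downsample by 2 `depth` times,
    then upsample by 2 `depth` times back to the input size."""
    sizes = [input_size]
    for _ in range(depth):                       # encoder half
        sizes.append(conv_out(sizes[-1], 4, 2, 1))
    for _ in range(depth):                       # decoder half
        sizes.append(deconv_out(sizes[-1], 4, 2, 1))
    return sizes
```

For a 64x64 input this yields side lengths 64, 32, 16, 8 on the way down and 16, 32, 64 on the way back up, so the generated image matches the input resolution, as an image-to-image generation network requires.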
Step 203: using a machine learning method, train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set, and determine the trained first generation network as the near-infrared image generation model.
In the present embodiment, the execution body may train the first generation network, the first discrimination network, the second generation network and the second discrimination network in various manners (for example, supervised training or unsupervised training) based on the training sample set obtained in step 201.
As an example, the execution body may train the first generation network, the first discrimination network, the second generation network and the second discrimination network using a supervised training method. Specifically, the following training steps may be executed:
The first step obtains multiple visible images and corresponding multiple near-infrared images.Wherein, for multiple visible light figures
Visible images as in, there are corresponding near-infrared images, and vice versa.Corresponding visible images and near-infrared image
Can use camera and near-infrared image forming apparatus, for being clapped with identical shooting angle and distance same person
It takes the photograph, obtained visible images and near-infrared image.
Second step: take a visible-light image as the input of the first generation network and the corresponding near-infrared image as the desired output of the first generation network, and train the first generation network using a machine learning method based on the difference between the image output by the first generation network and the desired output; take a near-infrared image as the input of the second generation network and the corresponding visible-light image as the desired output of the second generation network, and train the second generation network using a machine learning method based on the difference between the image output by the second generation network and the desired output.
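The paired supervision of this second step can be sketched as follows; the flat-list image representation, the stand-in generator functions and the use of mean absolute (L1) difference are all illustrative assumptions, since the text leaves the concrete form of the "difference" open.

```python
def l1_distance(a, b):
    # mean absolute difference between two equally sized images,
    # represented here as flat lists of pixel values
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def supervised_losses(g1, g2, visible_img, nir_img):
    """Second-step losses for one paired sample.

    g1: first generation network  (visible-light -> near-infrared)
    g2: second generation network (near-infrared -> visible-light)
    nir_img is the desired output for g1(visible_img), and
    visible_img is the desired output for g2(nir_img).
    """
    loss_g1 = l1_distance(g1(visible_img), nir_img)
    loss_g2 = l1_distance(g2(nir_img), visible_img)
    return loss_g1, loss_g2
```

Each loss would then drive a gradient update of the corresponding generation network.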
Third step: take a visible-light image as the input of the first generation network, take the image output by the first generation network as the input of the second generation network, and train the first generation network and the second generation network using a machine learning method based on the difference between the image output by the second generation network and the visible-light image input to the first generation network.
In some optional implementations of this embodiment, step 203 may include: for a training sample in the training sample set that contains a visible-light image, fixing the parameters of the first generation network, taking the training sample as the input of the first generation network, taking the image output by the first generation network as the input of the first discrimination network, and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a first negative-sample label, training the first discrimination network using a machine learning method, where the first negative-sample label indicates that the input image of the first discrimination network is a generated near-infrared image. And/or: for a training sample in the training sample set that contains a near-infrared image, taking the training sample as the input of the first discrimination network and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a first positive-sample label, training the first discrimination network using a machine learning method, where the first positive-sample label indicates that the input image of the first discrimination network is a real near-infrared image.
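The "difference between the discrimination result and the sample label" can, for example, be measured with a binary cross-entropy loss, using the 0/1 label convention the text gives as an example. A minimal sketch under that assumption (the scores 0.1 and 0.9 are illustrative discriminator outputs, not values from the patent):

```python
import math

def bce_loss(prediction, label):
    """Binary cross-entropy between one discriminator score in (0, 1)
    and its 0/1 sample label."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(prediction + eps)
             + (1 - label) * math.log(1 - prediction + eps))

NEGATIVE_LABEL = 0  # input was a generated near-infrared image
POSITIVE_LABEL = 1  # input was a real near-infrared image

# A well-trained first discrimination network scores fakes low and reals
# high, so both losses below are small.
loss_fake = bce_loss(0.1, NEGATIVE_LABEL)
loss_real = bce_loss(0.9, POSITIVE_LABEL)
```

Training the discrimination network minimizes this loss over generated and real samples while the generation network's parameters stay fixed, as the step above describes.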
In some optional implementations of this embodiment, step 203 may also include: for a training sample in the training sample set that contains a near-infrared image, fixing the parameters of the second generation network, taking the training sample as the input of the second generation network, taking the image output by the second generation network as the input of the second discrimination network, and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a second negative-sample label, training the second discrimination network using a machine learning method, where the second negative-sample label indicates that the input image of the second discrimination network is a generated visible-light image. And/or: for a training sample in the training sample set that contains a visible-light image, taking the training sample as the input of the second discrimination network and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a second positive-sample label, training the second discrimination network using a machine learning method, where the second positive-sample label indicates that the input image of the second discrimination network is a real visible-light image.
As an example, the negative-sample label may be 0 and the positive-sample label may be 1. The negative-sample and positive-sample labels may also be set to other preset values and are not limited to 0 and 1.
In some optional implementations of this embodiment, step 203 may also include: for a training sample in the training sample set that contains a visible-light image, inputting the training sample into the first generation network and taking the image output by the first generation network as the input of the second generation network; then, based on the difference between the image output by the second generation network and the training sample input into the first generation network, training the first generation network and the second generation network using a machine learning method.
In some optional implementations of this embodiment, step 203 may also include: for a training sample in the training sample set that contains a near-infrared image, inputting the training sample into the second generation network and taking the image output by the second generation network as the input of the first generation network; then, based on the difference between the image output by the first generation network and the training sample input into the second generation network, training the first generation network and the second generation network using a machine learning method.
As an example, a loss function may be determined based on the difference between the image output by the second generation network and its training sample, and the difference between the image output by the first generation network and its training sample. The loss function may be, for example, the L1 loss (mean absolute error) or the L2 loss (squared error); both are common loss functions in machine learning. Afterwards, based on the determined loss function, the loss may, as an example, be propagated backward using the back-propagation algorithm to train the first generation network and the second generation network.
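The two candidate losses differ only in how they weight errors: the L2 loss penalizes large per-pixel errors more heavily than the L1 loss. A pure-Python sketch over flat pixel lists (the values are illustrative, not from the patent):

```python
def l1_loss(pred, target):
    """L1 loss: mean absolute error."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    """L2 loss: mean squared error."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

pred, target = [0.0, 1.0, 2.0], [0.0, 0.0, 0.0]
l1 = l1_loss(pred, target)  # (0 + 1 + 2) / 3 = 1.0
l2 = l2_loss(pred, target)  # (0 + 1 + 4) / 3; the largest error dominates
```

Once the combined loss is computed, back-propagation distributes its gradient through both generation networks, as described above.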
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a near-infrared image generation model according to this embodiment. In the application scenario 300 of Fig. 3, the executing body is a server 301. The server 301 may first obtain an open-source training sample set 302 stored locally. Then, based on the training sample set 302, the first generation network, the first discrimination network, the second generation network, and the second discrimination network are trained. In this application scenario, the first generation network, the first discrimination network, the second generation network, and the second discrimination network are, respectively, a first convolutional neural network 303, a first support vector machine 305, a second convolutional neural network 304, and a second support vector machine 306. Finally, the trained first convolutional neural network is determined as the near-infrared image generation model.
With further reference to Fig. 4, it illustrates the flow 400 of another embodiment of the method for generating a near-infrared image generation model according to the present application. The flow 400 includes the following steps:
Step 401: obtain a training sample set.
Step 402: obtain a pre-established first generative adversarial network and second generative adversarial network.
Step 403: using a machine learning method, train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determine the trained first generation network as the near-infrared image generation model.
In this embodiment, for the specific processing of steps 401, 402, and 403 and their technical effects, reference may be made to the related descriptions of steps 201, 202, and 203 in the embodiment corresponding to Fig. 2; details are not repeated here.
Step 404: obtain a visible-light image to be processed.
In this embodiment, the executing body may obtain the visible-light image to be processed, through a wired or wireless connection, from other electronic devices connected to it over a network (for example, the terminal devices shown in Fig. 1). Alternatively, the visible-light image to be processed may be stored locally on the executing body, in which case the executing body may obtain it directly from local storage.
Step 405: input the visible-light image to be processed into the near-infrared image generation model to obtain the corresponding near-infrared image.
In this embodiment, the executing body may input the visible-light image to be processed into the near-infrared image generation model determined in step 403 to obtain the corresponding near-infrared image. This embodiment thus obtains, based on the determined near-infrared image generation model, the near-infrared image corresponding to the visible-light image to be processed.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating a near-infrared image generation model. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating a near-infrared image generation model of this embodiment includes: a sample set acquisition unit 501, a generative adversarial network acquisition unit 502, and a training unit 503. The sample set acquisition unit 501 is configured to obtain a training sample set, where a training sample includes a visible-light image or a near-infrared image. The generative adversarial network acquisition unit 502 is configured to obtain a pre-established first generative adversarial network and second generative adversarial network. The training unit 503 is configured to use a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and to determine the trained first generation network as the near-infrared image generation model.
In this embodiment, the first generative adversarial network includes a first generation network and a first discrimination network, and the second generative adversarial network includes a second generation network and a second discrimination network. The first generation network is used to transform an input visible-light image and output a near-infrared image; the first discrimination network is used to determine whether an input image was output by the first generation network. The second generation network is used to transform an input near-infrared image and output a visible-light image; the second discrimination network is used to determine whether an input image was output by the second generation network.
In this embodiment, for the specific processing of the sample set acquisition unit 501, the generative adversarial network acquisition unit 502, and the training unit 503 in the apparatus 500 for generating a near-infrared image generation model, and the technical effects they produce, reference may be made to the related descriptions of steps 201, 202, and 203 in the embodiment corresponding to Fig. 2; details are not repeated here.
In some optional implementations of this embodiment, using a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, may include: for a training sample in the training sample set that contains a visible-light image, fixing the parameters of the first generation network, taking the training sample as the input of the first generation network, taking the image output by the first generation network as the input of the first discrimination network, and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a first negative-sample label, training the first discrimination network using a machine learning method, where the first negative-sample label indicates that the input image of the first discrimination network is a generated near-infrared image. And/or: for a training sample in the training sample set that contains a near-infrared image, taking the training sample as the input of the first discrimination network and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a first positive-sample label, training the first discrimination network using a machine learning method, where the first positive-sample label indicates that the input image of the first discrimination network is a real near-infrared image.
In some optional implementations of this embodiment, using a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, may also include: for a training sample in the training sample set that contains a near-infrared image, fixing the parameters of the second generation network, taking the training sample as the input of the second generation network, taking the image output by the second generation network as the input of the second discrimination network, and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a second negative-sample label, training the second discrimination network using a machine learning method, where the second negative-sample label indicates that the input image of the second discrimination network is a generated visible-light image. And/or: for a training sample in the training sample set that contains a visible-light image, taking the training sample as the input of the second discrimination network and obtaining a discrimination result corresponding to the training sample; then, based on the difference between the obtained discrimination result and a second positive-sample label, training the second discrimination network using a machine learning method, where the second positive-sample label indicates that the input image of the second discrimination network is a real visible-light image.
In some optional implementations of this embodiment, training the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, may also include: for a training sample in the training sample set that contains a visible-light image, inputting the training sample into the first generation network and taking the image output by the first generation network as the input of the second generation network; then, based on the difference between the image output by the second generation network and the training sample, training the first generation network and the second generation network using a machine learning method.
In some optional implementations of this embodiment, using a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, may also include: for a training sample in the training sample set that contains a near-infrared image, inputting the training sample into the second generation network and taking the image output by the second generation network as the input of the first generation network; then, based on the difference between the image output by the first generation network and the training sample, training the first generation network and the second generation network using a machine learning method.
In some optional implementations of this embodiment, the apparatus may further include an image acquisition unit (not shown) and an image generation unit (not shown). The image acquisition unit is configured to obtain a visible-light image to be processed. The image generation unit is configured to input the visible-light image to be processed into the near-infrared image generation model to obtain the corresponding near-infrared image.
In the apparatus provided by the above embodiment of the present application, the training unit 503 trains, based on the sample set obtained by the sample set acquisition unit 501, the networks obtained by the generative adversarial network acquisition unit 502, thereby generating the near-infrared image generation model.
Referring now to Fig. 6, it shows a structural schematic diagram of a computer system 600 suitable for implementing the electronic device of the embodiments of the present application. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by, or in connection with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that shown in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a sample set acquisition unit, a generative adversarial network acquisition unit, and a training unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the sample set acquisition unit may also be described as "a unit for obtaining a training sample set".
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain a training sample set, where a training sample includes a visible-light image or a near-infrared image; obtain a pre-established first generative adversarial network and second generative adversarial network; and, using a machine learning method, train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determine the trained first generation network as the near-infrared image generation model.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (14)
1. A method for generating a near-infrared image generation model, comprising:
obtaining a training sample set, wherein a training sample comprises a visible-light image or a near-infrared image;
obtaining a pre-established first generative adversarial network and second generative adversarial network, wherein the first generative adversarial network comprises a first generation network and a first discrimination network, the second generative adversarial network comprises a second generation network and a second discrimination network, the first generation network is used to characterize a correspondence between visible-light images and near-infrared images, the first discrimination network is used to determine whether an input image is a generated near-infrared image or a real near-infrared image, the second generation network is used to characterize a correspondence between near-infrared images and visible-light images, and the second discrimination network is used to determine whether an input image is a generated visible-light image or a real visible-light image; and
using a machine learning method, training the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model.
2. The method according to claim 1, wherein using a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, comprises:
for a training sample in the training sample set that contains a visible-light image, fixing the parameters of the first generation network, taking the training sample as the input of the first generation network, taking the image output by the first generation network as the input of the first discrimination network, and obtaining a discrimination result corresponding to the training sample; and, based on a difference between the obtained discrimination result and a first negative-sample label, training the first discrimination network using a machine learning method, wherein the first negative-sample label indicates that the input image of the first discrimination network is a generated near-infrared image; and/or
for a training sample in the training sample set that contains a near-infrared image, taking the training sample as the input of the first discrimination network and obtaining a discrimination result corresponding to the training sample; and, based on a difference between the obtained discrimination result and a first positive-sample label, training the first discrimination network using a machine learning method, wherein the first positive-sample label indicates that the input image of the first discrimination network is a real near-infrared image.
3. The method according to claim 2, wherein using a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, further comprises:
for a training sample in the training sample set that contains a near-infrared image, fixing the parameters of the second generation network, taking the training sample as the input of the second generation network, taking the image output by the second generation network as the input of the second discrimination network, and obtaining a discrimination result corresponding to the training sample; and, based on a difference between the obtained discrimination result and a second negative-sample label, training the second discrimination network using a machine learning method, wherein the second negative-sample label indicates that the input image of the second discrimination network is a generated visible-light image; and/or
for a training sample in the training sample set that contains a visible-light image, taking the training sample as the input of the second discrimination network and obtaining a discrimination result corresponding to the training sample; and, based on a difference between the obtained discrimination result and a second positive-sample label, training the second discrimination network using a machine learning method, wherein the second positive-sample label indicates that the input image of the second discrimination network is a real visible-light image.
4. The method according to claim 3, wherein using a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, further comprises:
for a training sample in the training sample set that contains a visible-light image, inputting the training sample into the first generation network and taking the image output by the first generation network as the input of the second generation network; and, based on a difference between the image output by the second generation network and the training sample, training the first generation network and the second generation network using a machine learning method.
5. The method according to claim 4, wherein using a machine learning method to train the first generation network, the first discrimination network, the second generation network, and the second discrimination network based on the training sample set, and determining the trained first generation network as the near-infrared image generation model, further comprises:
for a training sample in the training sample set that contains a near-infrared image, inputting the training sample into the second generation network and taking the image output by the second generation network as the input of the first generation network; and, based on a difference between the image output by the first generation network and the training sample, training the first generation network and the second generation network using a machine learning method.
6. The method according to any one of claims 1-5, further comprising:
obtaining a visible-light image to be processed; and
inputting the visible-light image to be processed into the near-infrared image generation model to obtain a corresponding near-infrared image.
7. An apparatus for generating a near-infrared image generation model, comprising:
a sample set acquiring unit configured to acquire a training sample set, wherein each training sample includes a visible-light image or a near-infrared image;
a generative adversarial network acquiring unit configured to acquire a pre-established first generative adversarial network and a pre-established second generative adversarial network, wherein the first generative adversarial network includes a first generation network and a first discrimination network, the second generative adversarial network includes a second generation network and a second discrimination network, the first generation network is used to characterize the correspondence between visible-light images and near-infrared images, the first discrimination network is used to determine whether an input image is a generated near-infrared image or a real near-infrared image, the second generation network is used to characterize the correspondence between near-infrared images and visible-light images, and the second discrimination network is used to determine whether an input image is a generated visible-light image or a real visible-light image; and
a training unit configured to train the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set using a machine learning method, and determine the trained first generation network as the near-infrared image generation model.
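The four networks of claim 7 pair up into two generative adversarial networks. A minimal structural sketch, assuming a hypothetical `GAN` container and tiny linear networks in place of the patent's convolutional ones:

```python
from dataclasses import dataclass
from typing import Callable

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical container mirroring the claim: each generative adversarial
# network pairs a generation network with a discrimination network.
@dataclass
class GAN:
    generate: Callable      # generation network: image -> image
    discriminate: Callable  # discrimination network: image -> P(real)

def linear_net(out_dim, in_dim=4):
    w = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: x @ w

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d1, d2 = linear_net(1), linear_net(1)
first_gan = GAN(   # visible -> NIR; "is this NIR image real or generated?"
    generate=linear_net(4),
    discriminate=lambda x: sigmoid(d1(x)),
)
second_gan = GAN(  # NIR -> visible; "is this visible image real or generated?"
    generate=linear_net(4),
    discriminate=lambda x: sigmoid(d2(x)),
)

visible = np.zeros((1, 4))
fake_nir = first_gan.generate(visible)
p = first_gan.discriminate(fake_nir)   # untrained: maximally uncertain
```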
8. The apparatus according to claim 7, wherein said training the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set using the machine learning method, and determining the trained first generation network as the near-infrared image generation model, comprises:
for a training sample in the training sample set that includes a visible-light image: fixing the parameters of the first generation network, using the training sample as the input of the first generation network and the image output by the first generation network as the input of the first discrimination network, and obtaining a discrimination result corresponding to the training sample; and training the first discrimination network using the machine learning method based on the difference between the obtained discrimination result and a first negative-sample label, wherein the first negative-sample label indicates that the input image of the first discrimination network is a generated near-infrared image; and/or
for a training sample in the training sample set that includes a near-infrared image: using the training sample as the input of the first discrimination network and obtaining a discrimination result corresponding to the training sample; and training the first discrimination network using the machine learning method based on the difference between the obtained discrimination result and a first positive-sample label, wherein the first positive-sample label indicates that the input image of the first discrimination network is a real near-infrared image.
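The discriminator update of claim 8 can be sketched with a logistic classifier standing in for the first discrimination network, the generator's parameters held fixed, and labels 0 (first negative-sample label, generated NIR) and 1 (first positive-sample label, real NIR). Data shapes, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: the first generation network is a fixed matrix G,
# the first discrimination network is a logistic classifier with weights w.
G = rng.normal(size=(4, 4)) * 0.1     # generator parameters stay fixed here
w = np.zeros(4)

def discriminate(x, w):
    """P(input is a real NIR image) under the toy discrimination network."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def bce(p, y):
    """Cross-entropy: the 'difference' between result and sample label."""
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

visible = rng.normal(size=(16, 4))            # samples with visible images
real_nir = rng.normal(loc=1.0, size=(16, 4))  # samples with real NIR images

fake_nir = visible @ G                        # output of the fixed generator
x = np.vstack([fake_nir, real_nir])
# 0 = first negative-sample label, 1 = first positive-sample label.
y = np.concatenate([np.zeros(16), np.ones(16)])

loss_before = bce(discriminate(x, w), y)
for _ in range(200):                          # gradient descent on the loss
    p = discriminate(x, w)
    w -= 0.5 * x.T @ (p - y) / len(y)
loss_after = bce(discriminate(x, w), y)
```

After training, the discrimination network assigns higher "real" probability to real NIR samples than to generated ones, which is exactly what the adversarial setup requires of it.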
9. The apparatus according to claim 8, wherein said training the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set using the machine learning method, and determining the trained first generation network as the near-infrared image generation model, further comprises:
for a training sample in the training sample set that includes a near-infrared image: fixing the parameters of the second generation network, using the training sample as the input of the second generation network and the image output by the second generation network as the input of the second discrimination network, and obtaining a discrimination result corresponding to the training sample; and training the second discrimination network using the machine learning method based on the difference between the obtained discrimination result and a second negative-sample label, wherein the second negative-sample label indicates that the input image of the second discrimination network is a generated visible-light image; and/or
for a training sample in the training sample set that includes a visible-light image: using the training sample as the input of the second discrimination network and obtaining a discrimination result corresponding to the training sample; and training the second discrimination network using the machine learning method based on the difference between the obtained discrimination result and a second positive-sample label, wherein the second positive-sample label indicates that the input image of the second discrimination network is a real visible-light image.
10. The apparatus according to claim 9, wherein said training the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set using the machine learning method, and determining the trained first generation network as the near-infrared image generation model, further comprises:
for a training sample in the training sample set that includes a visible-light image, inputting the training sample into the first generation network, and using the image output by the first generation network as the input of the second generation network; and training the first generation network and the second generation network using the machine learning method based on the difference between the image output by the second generation network and the training sample.
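Taken together, the discriminator updates and the two cycle constraints amount to a CycleGAN-style generator objective. A hedged sketch of that combined loss, where the weight `lam` and all network stand-ins are hypothetical choices not specified by the patent:

```python
import numpy as np

def generator_objective(vis, nir, g1, g2, d1, d2, lam=10.0):
    """Combined loss implied by the claims: adversarial terms push generated
    images to fool the discrimination networks, and cycle terms keep
    visible->NIR->visible and NIR->visible->NIR consistent. `lam` is a
    hypothetical cycle-loss weight, not from the patent."""
    eps = 1e-9
    adv = (-np.mean(np.log(d1(g1(vis)) + eps))
           - np.mean(np.log(d2(g2(nir)) + eps)))
    cycle = (np.abs(g2(g1(vis)) - vis).mean()
             + np.abs(g1(g2(nir)) - nir).mean())
    return adv + lam * cycle

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) * 0.1     # toy first generation network weights
B = rng.normal(size=(4, 4)) * 0.1     # toy second generation network weights
g1 = lambda x: x @ A                  # visible -> NIR
g2 = lambda x: x @ B                  # NIR -> visible
d1 = lambda x: np.full(len(x), 0.5)   # untrained discriminators: always 0.5
d2 = d1

vis = rng.normal(size=(8, 4))
nir = rng.normal(size=(8, 4))
loss = generator_objective(vis, nir, g1, g2, d1, d2)
```

Minimizing this objective over the two generation networks, while the discrimination networks are trained adversarially against it, yields the first generation network that the claims determine as the near-infrared image generation model.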
11. The apparatus according to claim 10, wherein said training the first generation network, the first discrimination network, the second generation network and the second discrimination network based on the training sample set using the machine learning method, and determining the trained first generation network as the near-infrared image generation model, further comprises:
for a training sample in the training sample set that includes a near-infrared image, inputting the training sample into the second generation network, and using the image output by the second generation network as the input of the first generation network; and training the first generation network and the second generation network using the machine learning method based on the difference between the image output by the first generation network and the training sample.
12. The apparatus according to any one of claims 7-11, further comprising:
an image acquiring unit configured to acquire a to-be-processed visible-light image;
an image generating unit configured to input the to-be-processed visible-light image into the near-infrared image generation model to obtain a corresponding near-infrared image.
13. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810263904.1A CN108491809B (en) | 2018-03-28 | 2018-03-28 | Method and apparatus for generating near infrared image generation model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108491809A true CN108491809A (en) | 2018-09-04 |
CN108491809B CN108491809B (en) | 2023-09-22 |
Family
ID=63316492
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810263904.1A Active CN108491809B (en) | 2018-03-28 | 2018-03-28 | Method and apparatus for generating near infrared image generation model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108491809B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109508647A (en) * | 2018-10-22 | 2019-03-22 | 北京理工大学 | Spectral database expansion method based on generative adversarial network |
CN109800730A (en) * | 2019-01-30 | 2019-05-24 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating an avatar generation model |
CN109816589A (en) * | 2019-01-30 | 2019-05-28 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a cartoon style conversion model |
CN109840467A (en) * | 2018-12-13 | 2019-06-04 | 北京飞搜科技有限公司 | Living-body detection method and system |
CN109840926A (en) * | 2018-12-29 | 2019-06-04 | 中国电子科技集团公司信息科学研究院 | Image generation method, apparatus and device |
CN110021052A (en) * | 2019-04-11 | 2019-07-16 | 北京百度网讯科技有限公司 | Method and apparatus for generating a fundus image generation model |
CN110119685A (en) * | 2019-04-12 | 2019-08-13 | 天津大学 | Infrared face image conversion method based on DCGAN |
CN111259814A (en) * | 2020-01-17 | 2020-06-09 | 杭州涂鸦信息技术有限公司 | Living body detection method and system |
CN111292234A (en) * | 2018-12-07 | 2020-06-16 | 大唐移动通信设备有限公司 | Panoramic image generation method and device |
CN111297388A (en) * | 2020-04-03 | 2020-06-19 | 中山大学 | Fractional flow reserve measurement method and device |
CN111489289A (en) * | 2019-04-02 | 2020-08-04 | 同观科技(深圳)有限公司 | Image processing method, image processing device and terminal equipment |
CN111915566A (en) * | 2020-07-03 | 2020-11-10 | 天津大学 | Infrared sample target detection method based on cyclic consistency countermeasure network |
CN112912896A (en) * | 2018-12-14 | 2021-06-04 | 苹果公司 | Machine learning assisted image prediction |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106250877A (en) * | 2016-08-19 | 2016-12-21 | 深圳市赛为智能股份有限公司 | Near-infrared face recognition method and device |
CN106485274A (en) * | 2016-10-09 | 2017-03-08 | 湖南穗富眼电子科技有限公司 | Object classification method based on target characteristic map |
US20170132450A1 (en) * | 2014-06-16 | 2017-05-11 | Siemens Healthcare Diagnostics Inc. | Analyzing Digital Holographic Microscopy Data for Hematology Applications |
CN107491771A (en) * | 2017-09-21 | 2017-12-19 | 百度在线网络技术(北京)有限公司 | Face detection method and apparatus |
2018-03-28: Application CN201810263904.1A filed in China; granted as CN108491809B (status: Active)
Non-Patent Citations (2)
Title |
---|
LINGXIAO SONG: "Adversarial Discriminative Heterogeneous Face Recognition", arXiv:1709.03675v1, 12 September 2017 (2017-09-12), pages 1-7 *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109508647A (en) * | 2018-10-22 | 2019-03-22 | 北京理工大学 | Spectral database expansion method based on generative adversarial network |
CN111292234A (en) * | 2018-12-07 | 2020-06-16 | 大唐移动通信设备有限公司 | Panoramic image generation method and device |
CN109840467A (en) * | 2018-12-13 | 2019-06-04 | 北京飞搜科技有限公司 | Living-body detection method and system |
US11915460B2 (en) | 2018-12-14 | 2024-02-27 | Apple Inc. | Machine learning assisted image prediction |
CN112912896A (en) * | 2018-12-14 | 2021-06-04 | 苹果公司 | Machine learning assisted image prediction |
CN109840926A (en) * | 2018-12-29 | 2019-06-04 | 中国电子科技集团公司信息科学研究院 | Image generation method, apparatus and device |
CN109840926B (en) * | 2018-12-29 | 2023-06-20 | 中国电子科技集团公司信息科学研究院 | Image generation method, device and equipment |
CN109816589B (en) * | 2019-01-30 | 2020-07-17 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating cartoon style conversion model |
CN109800730A (en) * | 2019-01-30 | 2019-05-24 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating an avatar generation model |
CN109816589A (en) * | 2019-01-30 | 2019-05-28 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a cartoon style conversion model |
CN111489289B (en) * | 2019-04-02 | 2023-09-12 | 长信智控网络科技有限公司 | Image processing method, image processing device and terminal equipment |
CN111489289A (en) * | 2019-04-02 | 2020-08-04 | 同观科技(深圳)有限公司 | Image processing method, image processing device and terminal equipment |
CN110021052A (en) * | 2019-04-11 | 2019-07-16 | 北京百度网讯科技有限公司 | Method and apparatus for generating a fundus image generation model |
CN110119685A (en) * | 2019-04-12 | 2019-08-13 | 天津大学 | Infrared face image conversion method based on DCGAN |
CN111259814A (en) * | 2020-01-17 | 2020-06-09 | 杭州涂鸦信息技术有限公司 | Living body detection method and system |
CN111259814B (en) * | 2020-01-17 | 2023-10-31 | 杭州涂鸦信息技术有限公司 | Living body detection method and system |
CN111297388A (en) * | 2020-04-03 | 2020-06-19 | 中山大学 | Fractional flow reserve measurement method and device |
CN111915566B (en) * | 2020-07-03 | 2022-03-15 | 天津大学 | Infrared sample target detection method based on cyclic consistency countermeasure network |
CN111915566A (en) * | 2020-07-03 | 2020-11-10 | 天津大学 | Infrared sample target detection method based on cyclic consistency countermeasure network |
Also Published As
Publication number | Publication date |
---|---|
CN108491809B (en) | 2023-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108491809A (en) | The method and apparatus for generating model for generating near-infrared image | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
US20190102603A1 (en) | Method and apparatus for determining image quality | |
CN108133201B (en) | Face character recognition methods and device | |
US20230081645A1 (en) | Detecting forged facial images using frequency domain information and local correlation | |
CN108446651A (en) | Face identification method and device | |
CN108363995A (en) | Method and apparatus for generating data | |
CN109816589A (en) | Method and apparatus for generating cartoon style transformation model | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN108388878A (en) | The method and apparatus of face for identification | |
CN108154547B (en) | Image generating method and device | |
CN108491823A (en) | Method and apparatus for generating eye recognition model | |
CN108960090A (en) | Method of video image processing and device, computer-readable medium and electronic equipment | |
CN108280413A (en) | Face identification method and device | |
CN108171206B (en) | Information generating method and device | |
CN110175555A (en) | Facial image clustering method and device | |
CN108229419A (en) | For clustering the method and apparatus of image | |
CN109344752A (en) | Method and apparatus for handling mouth image | |
CN109308681A (en) | Image processing method and device | |
CN108171204B (en) | Detection method and device | |
CN109087238A (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN107507153A (en) | Image de-noising method and device | |
CN108460366A (en) | Identity identifying method and device | |
CN108364029A (en) | Method and apparatus for generating model | |
CN109344762A (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||