CN109741250A - Image processing method and device, storage medium and electronic equipment - Google Patents
- Publication number: CN109741250A (application CN201910008670.0A)
- Authority: CN (China)
- Prior art keywords: image, network, confidence, barrel, value
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses an image processing method and apparatus, a storage medium, and an electronic device, and relates to the technical field of image processing. The image processing method includes: determining multiple groups of images as training samples, where each group of images includes an original image and a barrel-shaped image corresponding to the original image; training the generator network and the discriminator network of an adversarial neural network with the training samples to determine a trained generator network; inputting an image to be processed into the trained generator network to determine the barrel-shaped image corresponding to the image to be processed; and displaying the barrel-shaped image corresponding to the image to be processed on the screen of a virtual reality device. The disclosure thereby implements pre-distortion processing of images on a virtual reality device.
Description
Technical field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method, an image processing apparatus, a storage medium, and an electronic device.
Background technique
With the continuous progress of image processing technology, virtual reality (VR) technology has gradually entered people's field of vision and has developed rapidly, especially in gaming.
In order to give the user a true sense of visual immersion, a virtual reality device (for example, a head-mounted stereoscopic display, or HMD for short) should cover the visual range of the human eye as much as possible. A magnifying lens with a specific spherical curvature therefore needs to be arranged on the virtual reality device. In this case, when a conventional image is projected into the human eye through the magnifying lens, the image is distorted.
Currently, to handle such distortion, the distortion parameters of the lenses in the virtual reality device usually need to be calculated so that the image can be corrected. In addition, in a software implementation, interpolation operations are required to address the image warping. This approach has several drawbacks: first, errors may occur when solving for the distortion parameters, causing the image received by the human eye to be abnormal; second, during scene rendering, the on-screen coordinates of the image must be determined continuously, which is time-consuming; third, errors may be introduced when the lenses are mounted in the virtual reality device, so the calculated screen coordinate points may also be in error.
It should be noted that the information disclosed in the background section above is only intended to enhance understanding of the background of the present disclosure, and may therefore include information that does not constitute prior art known to a person of ordinary skill in the art.
Summary of the invention
An object of the present disclosure is to provide an image processing method, an image processing apparatus, a storage medium, and an electronic device, thereby overcoming, at least to a certain extent, the problems of abnormal image display on the screen of a virtual reality device and slow image processing caused by the limitations and defects of the related art.
According to one aspect of the present disclosure, an image processing method is provided, including: determining multiple groups of images as training samples, where each group of images includes an original image and a barrel-shaped image corresponding to the original image; training the generator network and the discriminator network of an adversarial neural network with the training samples to determine a trained generator network; inputting an image to be processed into the trained generator network to determine a barrel-shaped image corresponding to the image to be processed; and displaying the barrel-shaped image corresponding to the image to be processed on the screen of a virtual reality device.
Optionally, determining multiple groups of images as training samples includes: determining an original image; calculating the lens distortion parameters of the virtual reality device; and converting the original image into a barrel-shaped image corresponding to the original image using the lens distortion parameters, where the original image and the barrel-shaped image together serve as one group of images in the training samples.
Optionally, training the generator network and the discriminator network of the adversarial neural network with the training samples includes: inputting an original image from the multiple groups of images into the generator network of the adversarial neural network to determine an intermediate image corresponding to the original image; inputting the barrel-shaped image grouped with the original image into the discriminator network of the adversarial neural network to determine a first confidence value; inputting the intermediate image into the discriminator network to determine a second confidence value; and determining the loss of the generator network using the second confidence value and the loss of the discriminator network using the first confidence value and the second confidence value, so as to train the generator network and the discriminator network of the adversarial neural network.
Optionally, determining the loss of the generator network using the second confidence value includes: determining the loss of the generator network based on the cross entropy between the second confidence value and 1.
Optionally, determining the loss of the discriminator network using the first confidence value and the second confidence value includes: determining the loss of the discriminator network based on the cross entropy between the first confidence value and 1 and the cross entropy between the second confidence value and 0.
Optionally, the image processing method further includes: saving the trained generator network as a model file in a predetermined format, where, when an image to be processed is obtained, the model file is loaded so as to determine the barrel-shaped image corresponding to the image to be processed.
Optionally, saving the trained generator network as a model file in a predetermined format includes: verifying the trained generator network using verification images; and if the verification result satisfies a preset condition, saving the trained generator network as a model file in the predetermined format.
According to one aspect of the present disclosure, an image processing apparatus is provided, including a training sample determination module, a network training module, an image determination module, and an image display module.
Specifically, the training sample determination module is configured to determine multiple groups of images as training samples, where each group of images includes an original image and a barrel-shaped image corresponding to the original image; the network training module is configured to train the generator network and the discriminator network of an adversarial neural network with the training samples to determine a trained generator network; the image determination module is configured to input an image to be processed into the trained generator network to determine a barrel-shaped image corresponding to the image to be processed; and the image display module is configured to display the barrel-shaped image corresponding to the image to be processed on the screen of a virtual reality device.
Optionally, the training sample determination module includes an original image determination unit, a parameter calculation unit, and a barrel-shaped image determination unit.
Specifically, the original image determination unit is configured to determine an original image; the parameter calculation unit is configured to calculate the lens distortion parameters of the virtual reality device; and the barrel-shaped image determination unit is configured to convert the original image into a barrel-shaped image corresponding to the original image using the lens distortion parameters, where the original image and the barrel-shaped image together serve as one group of images in the training samples.
Optionally, the network training module includes an intermediate image determination unit, a first confidence value determination unit, a second confidence value determination unit, and a network training unit.
Specifically, the intermediate image determination unit is configured to input an original image from the multiple groups of images into the generator network of the adversarial neural network to determine an intermediate image corresponding to the original image; the first confidence value determination unit is configured to input the barrel-shaped image grouped with the original image into the discriminator network of the adversarial neural network to determine a first confidence value; the second confidence value determination unit is configured to input the intermediate image into the discriminator network to determine a second confidence value; and the network training unit is configured to determine the loss of the generator network using the second confidence value and the loss of the discriminator network using the first confidence value and the second confidence value, so as to train the generator network and the discriminator network of the adversarial neural network.
Optionally, the network training unit includes a first loss determination unit.
Specifically, the first loss determination unit is configured to determine the loss of the generator network based on the cross entropy between the second confidence value and 1.
Optionally, the network training unit includes a second loss determination unit.
Specifically, the second loss determination unit is configured to determine the loss of the discriminator network based on the cross entropy between the first confidence value and 1 and the cross entropy between the second confidence value and 0.
Optionally, the image processing apparatus further includes a network saving module.
Specifically, the network saving module is configured to save the trained generator network as a model file in a predetermined format, where, when an image to be processed is obtained, the model file is loaded so as to determine the barrel-shaped image corresponding to the image to be processed.
Optionally, the network saving module includes a network verification unit and a network saving unit.
Specifically, the network verification unit is configured to verify the trained generator network using verification images; and the network saving unit is configured to save the trained generator network as a model file in a predetermined format if the verification result satisfies a preset condition.
According to one aspect of the present disclosure, a storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the image processing method described in any one of the above.
According to one aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor, where the processor is configured to perform the image processing method described in any one of the above by executing the executable instructions.
In the technical solutions provided by some embodiments of the present disclosure, an adversarial neural network is used to generate the barrel-shaped image corresponding to an image, and that barrel-shaped image is displayed on the screen of a virtual reality device. On the one hand, the disclosed scheme neither calculates the distortion parameters of the virtual reality device's lenses nor performs interpolation operations, which avoids the errors introduced by those calculations and, compared with the related art, greatly improves rendering speed; on the other hand, the disclosure can be applied to different virtual reality devices and is therefore universal.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, and serve, together with the specification, to explain the principles of the disclosure. Obviously, the drawings in the following description are only some embodiments of the present disclosure; a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort. In the drawings:
Fig. 1 shows a schematic diagram of a related-art scheme for processing image distortion;
Fig. 2 schematically shows a flowchart of an image processing method according to an exemplary embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of an original image and a barrel-shaped image in a training sample according to an exemplary embodiment of the present disclosure;
Fig. 4 shows a schematic diagram of a scheme for acquiring training samples according to an exemplary embodiment of the present disclosure;
Fig. 5 schematically shows an acquired original image and its corresponding barrel-shaped image according to an exemplary embodiment of the present disclosure;
Fig. 6 schematically shows the effect of generating a barrel-shaped image with a trained generator network of the present disclosure according to an exemplary embodiment;
Fig. 7 shows a schematic diagram of an adversarial neural network for implementing an image processing method according to an exemplary embodiment of the present disclosure;
Fig. 8 schematically shows a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure;
Fig. 9 schematically shows a block diagram of a training sample determination module according to an exemplary embodiment of the present disclosure;
Fig. 10 schematically shows a block diagram of a network training module according to an exemplary embodiment of the present disclosure;
Fig. 11 schematically shows a block diagram of a network training unit according to an exemplary embodiment of the present disclosure;
Fig. 12 schematically shows a block diagram of a network training unit according to another exemplary embodiment of the present disclosure;
Fig. 13 schematically shows a block diagram of an image processing apparatus according to another exemplary embodiment of the present disclosure;
Fig. 14 schematically shows a block diagram of a network saving module according to an exemplary embodiment of the present disclosure;
Fig. 15 shows a schematic diagram of a storage medium according to an exemplary embodiment of the present disclosure; and
Fig. 16 schematically shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed description of embodiments
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in a variety of forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be more thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, many specific details are provided to give a full understanding of the embodiments of the present disclosure. Those skilled in the art will appreciate, however, that the technical solutions of the disclosure may be practiced while omitting one or more of the specific details, or with other methods, components, devices, steps, and the like. In other cases, well-known solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.
In addition, the accompanying drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, so repeated description of them is omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are merely illustrative and do not necessarily include all steps. For example, some steps may be decomposed, while others may be merged or partially merged, so the actual order of execution may change according to the actual situation.
When the related art performs distortion processing on an image, whether it solves for the distortion parameters of the lens or for the correspondence between points on the screen and points on the lens-imaged image, the problem ultimately reduces to solving for the coordinates of a point on the screen and the coordinates of its corresponding point after the lens. Referring to Fig. 1, the coordinates (x, y) of a point on the screen and the corresponding coordinate point FOV after the lens need to be calculated. During rendering, the camera finds the coordinate point FOV in the scene, determines the corresponding (x, y) from it, and thereby determines the pixel at which a point in the scene needs to be displayed on the screen.
However, this approach is prone to calculation errors and requires the coordinates of the image to be determined continuously, which is time-consuming.
In view of this, the present disclosure provides an image processing method and apparatus to solve the above problems. It should be noted that the image processing method of the present disclosure can be implemented by a processing device; the processing device can be, for example, a virtual environment processor deployed in a virtual reality device, in which case the image processing apparatus of the present disclosure can be located in the processing device. It is not limited thereto, however; the image processing method of the present disclosure can also be implemented by a remote server.
Fig. 2 schematically shows a flowchart of an image processing method according to an exemplary embodiment of the present disclosure. Referring to Fig. 2, the image processing method may include the following steps:
S22. Determine multiple groups of images as training samples, where each group of images includes an original image and a barrel-shaped image corresponding to the original image.
In an exemplary embodiment of the present disclosure, the lens distortion parameters of the virtual reality device can be used to determine the training samples of the model used in this disclosure.
First, an original image can be determined. The original image can be, for example, a frame of a game scene; the present disclosure places no specific limits on the source, resolution, size, color temperature, or the like of the original image. Next, the lens distortion parameters of the virtual reality device can be calculated. Specifically, related-art methods for determining lens distortion parameters can be used; for example, the lens distortion parameters can be determined using the cross-ratio invariance principle of perspective projection imaging. Then, the calculated lens distortion parameters can be used to convert the original image into the barrel-shaped image corresponding to the original image.
Fig. 3 shows a schematic diagram of a determined original image 31 and a barrel-shaped image 32 corresponding to the original image 31.
The determined original image and barrel-shaped image together serve as one group of images in the training samples. By continually obtaining original images and determining the corresponding barrel-shaped images, multiple groups of images can be determined and used as the training set of the model used in this disclosure, so that the model can be trained.
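As an illustration, the conversion from an original image to a barrel-shaped image via lens distortion parameters can be sketched with a simple radial distortion polynomial. This is a minimal sketch, not the patent's actual procedure: the coefficients `k1` and `k2` are hypothetical stand-ins for a device's measured lens distortion parameters, and nearest-neighbour sampling replaces the interpolation a real pipeline would use.

```python
import numpy as np

def barrel_predistort(img, k1=0.22, k2=0.24):
    """Warp an image with a radial distortion model r' = r * (1 + k1*r^2 + k2*r^4)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # normalized coordinates, roughly in [-1, 1], centered on the image
    x = (xs - w / 2) / (w / 2)
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    f = 1 + k1 * r2 + k2 * r2 * r2          # radial distortion polynomial
    # for each destination pixel, sample the source at the scaled radius
    sx = np.clip((x * f * (w / 2) + w / 2).round().astype(int), 0, w - 1)
    sy = np.clip((y * f * (h / 2) + h / 2).round().astype(int), 0, h - 1)
    return img[sy, sx]
```

With `k1 = k2 = 0` the mapping is the identity; positive coefficients pull the periphery of the image inward, producing the barrel shape that the lens then stretches back out.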
The training samples can also be acquired from an existing VR device whose distortion is already well corrected: for the same frame, the picture captured by the camera (the original image) and the image shown on the screen (the barrel-shaped image) are saved as a training sample. Fig. 4 shows such a scheme for obtaining training samples. A barrel-shaped image can be displayed on the screen 41, and the camera 43 can capture the image of that barrel-shaped image as seen through the lens 42 (the original image). By continually replacing the image on the screen and capturing the corresponding image with the camera, the training set used in this disclosure can be determined.
Referring to Fig. 5, image 51 shows a real determined original image, and image 52 shows the barrel-shaped image corresponding to image 51. Image 51 and image 52 form one group of images as described above.
S24. Train the generator network and the discriminator network of an adversarial neural network with the training samples to determine a trained generator network.
First, on the one hand, the barrel-shaped image corresponding to an original image in the training samples can be input into the discriminator network of an adversarial neural network to determine a first confidence value; on the other hand, the original image can be input into the generator network of the adversarial neural network to determine an intermediate image corresponding to the original image, and the intermediate image can then be input into the discriminator network to determine a second confidence value. The terms "first" and "second" are used merely to distinguish the confidence values and should not be taken as limiting the disclosure.
Next, the first confidence value and the second confidence value can be used to adjust the parameters of the adversarial neural network, thereby training the generator network and the discriminator network of the adversarial neural network.
The second confidence value can be used to determine the loss of the generator network, and the first and second confidence values can be used to determine the loss of the discriminator network. Specifically, the loss of the generator network can be determined based on the cross entropy between the second confidence value and 1, and the loss of the discriminator network can be determined based on the cross entropy between the first confidence value and 1 and the cross entropy between the second confidence value and 0.
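The two losses described above can be sketched as follows, assuming the confidence values are scalars in (0, 1) output by the discriminator. The function names are illustrative, not from the patent:

```python
import numpy as np

def cross_entropy(confidence, target):
    # binary cross-entropy between a confidence value in (0, 1) and a 0/1 target
    p = np.clip(confidence, 1e-7, 1 - 1e-7)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)))

def generator_loss(second_confidence):
    # cross entropy between the second confidence value and 1
    return cross_entropy(second_confidence, 1.0)

def discriminator_loss(first_confidence, second_confidence):
    # cross entropy between the first confidence value and 1,
    # plus cross entropy between the second confidence value and 0
    return cross_entropy(first_confidence, 1.0) + cross_entropy(second_confidence, 0.0)
```

The generator loss shrinks as the discriminator is fooled into rating the intermediate image close to 1, while the discriminator loss shrinks as it rates the true barrel-shaped image near 1 and the generated one near 0 — the usual adversarial pull in opposite directions.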
For example, the original image can be a 3-channel image of size 960 × 960, and each input node of the generator network can take 50 such images. After the original image passes through the generator network, the output intermediate image has 3 channels and can have a size of 960 × 1080.
The image input to the discriminator network can have the same size as the image output by the generator network, and the discriminator network has a single output node that characterizes the confidence level.
The generator network can be a 17-layer convolutional neural network activated with the ReLU activation function, and gradient descent optimization can be performed with an Adam-type optimizer. The discriminator network can be a 12-layer convolutional neural network whose fully connected part can be optimized with Dropout layers, and the last layer of the discriminator network can be activated with the Sigmoid activation function.
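The hyper-parameters quoted above can be collected into a configuration sketch. These dictionaries merely restate the figures given in the text in one place; they are a summary, not an executable network definition, and the key names are my own:

```python
# Hyper-parameters of the two networks as stated above (a summary, not a model).
GENERATOR_CONFIG = {
    "kind": "convolutional",
    "layers": 17,
    "activation": "relu",            # ReLU activation throughout
    "optimizer": "adam",             # Adam-type optimizer for gradient descent
    "batch_size": 50,                # 50 images per input node
    "input_shape": (960, 960, 3),    # 3-channel 960 x 960 original image
    "output_shape": (960, 1080, 3),  # 3-channel 960 x 1080 intermediate image
}

DISCRIMINATOR_CONFIG = {
    "kind": "convolutional",
    "layers": 12,
    "fc_regularization": "dropout",  # Dropout on the fully connected part
    "last_activation": "sigmoid",    # Sigmoid on the last layer
    "output_nodes": 1,               # single confidence output
}
```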
According to some embodiments of the present disclosure, the trained generator network can be saved as a model file in a predetermined format (for example, the .pb format) so that it can be loaded and called.
According to other embodiments of the present disclosure, after the generator network has been trained with the training set, it can be verified using verification images; that is, the generator network after training can be verified with a verification set. If the verification result satisfies a preset condition, the trained generator network is saved as a model file in the predetermined format. For example, with confidence values ranging from 0 to 1, the preset condition can be that the confidence value obtained on the verification set is greater than 0.95.
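The verification gate described above can be sketched as follows. The 0.95 threshold comes from the example in the text; the function name and the use of a mean over the verification-set confidences are my assumptions:

```python
import numpy as np

def passes_verification(confidences, threshold=0.95):
    # confidences: discriminator outputs in [0, 1] obtained for the trained
    # generator on the verification images; save the model only if the mean
    # confidence clears the threshold.
    return float(np.mean(confidences)) > threshold
```

Only when this returns `True` would the generator be exported as the predetermined-format model file (e.g. a .pb file as mentioned above); otherwise training would continue.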
S26. Input the image to be processed into the trained generator network to determine the barrel-shaped image corresponding to the image to be processed.
In an exemplary embodiment of the present disclosure, the image to be processed can be the image that currently needs to be displayed. In some scenarios, the image to be processed may, for example, be the image that needs to be displayed while the user is playing a VR game. It should be understood, however, that the image to be processed in the present disclosure can also be any image that needs to be converted into a barrel-shaped image.
Referring to Fig. 6, the image 61 to be processed is input into the generator network 60, and the result output by the generator network 60 is the corresponding barrel-shaped image 62.
S28. Display the barrel-shaped image corresponding to the image to be processed on the screen of the virtual reality device.
After the barrel-shaped image corresponding to the image to be processed is determined in step S26, the barrel-shaped image can be displayed on the screen of the virtual reality device so that the user perceives the current virtual scene. For example, in a VR skiing scene, the user can perform the corresponding skiing movements based on the current virtual scene perceived by the eye.
The adversarial neural network of the present disclosure is described below with reference to Fig. 7. The adversarial neural network includes a generator network (G network) and a discriminator network (D network).
Image A (the original image above) is input into the G network to obtain image A_out; image A_out is input into the D network to obtain the confidence value D_out_A. Image B is the barrel-shaped image corresponding to image A, and image B is input into the D network to obtain the confidence value D_out_B. In this case, the loss of the G network can be determined from the cross entropy between D_out_A and 1, and the loss of the D network can be determined from the cross entropy between D_out_B and 1 and the cross entropy between D_out_A and 0.
The above image A and image B are only one group of images used to train the adversarial neural network. Through the training process over multiple groups of images, the trained generator network is determined. The trained generator network can then be used to convert the image to be processed into the corresponding barrel-shaped image.
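One iteration of the data flow of Fig. 7 over a single (A, B) pair can be sketched end-to-end. The placeholder `G` and `D` below are trivial stand-ins for the real networks, chosen only so that the flow from images to losses is executable; a real implementation would backpropagate these losses through actual network parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def bce(p, t):
    # binary cross-entropy between a confidence p in (0, 1) and a 0/1 target t
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)))

def G(a):
    # placeholder generator: identity mapping instead of the convolutional G network
    return a

def D(img):
    # placeholder discriminator: mean intensity clipped into (0, 1) as a "confidence"
    return float(np.clip(img.mean(), 0.01, 0.99))

a = rng.random((4, 4))        # image A (the original image)
b = np.sqrt(a)                # image B (stand-in for its barrel-shaped counterpart)

a_out = G(a)                  # A -> G network -> A_out
d_out_a = D(a_out)            # A_out -> D network -> D_out_A
d_out_b = D(b)                # B -> D network -> D_out_B

g_loss = bce(d_out_a, 1.0)                       # cross entropy(D_out_A, 1)
d_loss = bce(d_out_b, 1.0) + bce(d_out_a, 0.0)   # (D_out_B, 1) + (D_out_A, 0)
```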
In addition, the numbers of nodes and the connection relationships of the G network and the D network in Fig. 7 are only examples; the present disclosure places no specific limits on the numbers of nodes, the node connection relationships, or the numbers of layers of the generator network and the discriminator network of the adversarial neural network.
In conclusion according to the image processing method of the disclosure, on the one hand, do not need to calculate virtual reality device lens
Distortion parameter does not need to carry out interpolation arithmetic yet, and solving the problems, such as calculating process, there are errors, and relative to the relevant technologies,
The speed of rendering can be greatly improved;On the other hand, the disclosure can be used for different virtual reality devices, have universality.
It should be noted that although the steps of the method in the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be executed in that particular order, or that all of the illustrated steps must be executed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be merged into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Further, this example embodiment also provides an image processing apparatus.
Fig. 8 schematically shows a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure. Referring to Fig. 8, the image processing apparatus 8 may include a training sample determination module 81, a network training module 83, an image determination module 85, and an image display module 87.
Specifically, the training sample determination module 81 can be used to determine multiple groups of images as training samples, where each group of images includes an original image and a barrel-shaped image corresponding to the original image; the network training module 83 can be used to train the generator network and the discriminator network of an adversarial neural network with the training samples to determine a trained generator network; the image determination module 85 can be used to input an image to be processed into the trained generator network to determine a barrel-shaped image corresponding to the image to be processed; and the image display module 87 can be used to display the barrel-shaped image corresponding to the image to be processed on the screen of a virtual reality device.
According to the image processing apparatus of the present disclosure, on the one hand, neither the distortion parameters of the virtual reality device's lenses need to be calculated nor interpolation operations performed, which avoids the errors of those calculations and, compared with the related art, greatly improves rendering speed; on the other hand, the disclosure can be applied to different virtual reality devices and is therefore universal.
According to an exemplary embodiment of the disclosure, with reference to Fig. 9, the training sample determining module 81 may include an original image determination unit 901, a parameter calculation unit 903, and a barrel-shaped image determination unit 905.
Specifically, the original image determination unit 901 may be configured to determine an original image; the parameter calculation unit 903 may be configured to calculate the lens distortion parameters of the virtual reality device; and the barrel-shaped image determination unit 905 may be configured to convert the original image into a barrel-shaped image corresponding to the original image using the lens distortion parameters, wherein the original image and the barrel-shaped image together serve as one group of images used as a training sample.
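As an illustration of how the barrel-shaped image determination unit 905 might convert an original image using lens distortion parameters, the following is a minimal sketch assuming a simple two-coefficient radial distortion model; the coefficients `k1` and `k2` and the inverse-warping approach are illustrative assumptions, not the patent's actual parameterization.

```python
import numpy as np

def barrel_distort(img, k1=0.2, k2=0.05):
    """Produce a barrel-shaped version of `img` using a simple radial
    model r_src = r_dst * (1 + k1*r^2 + k2*r^4), sampled by inverse
    warping (each output pixel looks up its source pixel, avoiding holes).
    k1 and k2 stand in for the device's lens distortion parameters."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Normalized coordinates centered at the image midpoint.
    x = (xs - w / 2) / (w / 2)
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    # Radially scaled source positions, clipped to the image bounds.
    src_x = np.clip((x * scale * (w / 2) + w / 2).astype(int), 0, w - 1)
    src_y = np.clip((y * scale * (h / 2) + h / 2).astype(int), 0, h - 1)
    return img[src_y, src_x]
```

In a training-data pipeline of the kind described above, each original image and its `barrel_distort` output would be paired as one group of images.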
According to an exemplary embodiment of the disclosure, with reference to Fig. 10, the network training module 83 may include an intermediate image determination unit 101, a first confidence value determination unit 103, a second confidence value determination unit 105, and a network training unit 107.
Specifically, the intermediate image determination unit 101 may be configured to input an original image in the multiple groups of images into the generation network of the generative adversarial neural network, to determine an intermediate image corresponding to the original image; the first confidence value determination unit 103 may be configured to input the barrel-shaped image in the same group as the original image into the discrimination network of the generative adversarial neural network, to determine a first confidence value; the second confidence value determination unit 105 may be configured to input the intermediate image into the discrimination network, to determine a second confidence value; and the network training unit 107 may be configured to determine the loss of the generation network using the second confidence value, and to determine the loss of the discrimination network using the first confidence value and the second confidence value, so as to train the generation network and the discrimination network in the generative adversarial neural network.
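The flow handled by units 101 through 107 can be sketched as a single forward pass. Below, one-layer stand-ins play the roles of the generation and discrimination networks; the weights `W_g` and `w_d` and the 16-element flattened "images" are hypothetical toys (a real implementation would use convolutional networks on actual image tensors).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(confidence, target):
    # Binary cross entropy between a confidence value and a 0/1 target.
    eps = 1e-12
    return -(target * np.log(confidence + eps)
             + (1 - target) * np.log(1 - confidence + eps))

rng = np.random.default_rng(0)
W_g = rng.normal(size=(16, 16)) * 0.1   # toy generation network weights
w_d = rng.normal(size=16) * 0.1          # toy discrimination network weights

def generator(x):
    return np.tanh(W_g @ x)              # intermediate image

def discriminator(x):
    return sigmoid(w_d @ x)              # confidence value in (0, 1)

original = rng.normal(size=16)           # flattened original image
barrel = rng.normal(size=16)             # ground-truth barrel-shaped image

intermediate = generator(original)       # unit 101: G(original)
c1 = discriminator(barrel)               # unit 103: first confidence value
c2 = discriminator(intermediate)         # unit 105: second confidence value

# Unit 107: losses per the embodiment's cross-entropy formulation.
loss_g = bce(c2, 1)                      # generator wants D fooled (c2 -> 1)
loss_d = bce(c1, 1) + bce(c2, 0)         # discriminator wants c1 -> 1, c2 -> 0
```

Gradient updates would then alternate between the two networks; they are omitted here since the patent only specifies the loss construction.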
According to an exemplary embodiment of the disclosure, with reference to Fig. 11, the network training unit 107 may include a first loss determination unit 111. Specifically, the first loss determination unit 111 may be configured to determine the loss of the generation network based on the cross entropy between the second confidence value and 1.
According to an exemplary embodiment of the disclosure, with reference to Fig. 12, the network training unit 107 may further include a second loss determination unit 121. Specifically, the second loss determination unit 121 may be configured to determine the loss of the discrimination network based on the cross entropy between the first confidence value and 1 and the cross entropy between the second confidence value and 0.
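Writing the cross entropy between a confidence value $c$ and target 1 as $-\log c$, and between $c$ and target 0 as $-\log(1 - c)$, the two losses described above take the standard adversarial form:

```latex
\mathcal{L}_{G} = -\log c_{2}, \qquad
\mathcal{L}_{D} = -\log c_{1} - \log\left(1 - c_{2}\right)
```

where $c_1$ is the first confidence value (the discrimination network's output on the ground-truth barrel-shaped image) and $c_2$ is the second confidence value (its output on the generated intermediate image).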
According to an exemplary embodiment of the disclosure, with reference to Fig. 13, compared with the image processing apparatus 8, the image processing apparatus 13 may further include a network saving module 131.
Specifically, the network saving module 131 may be configured to save the trained generation network as a model file of a predetermined format, wherein, when a to-be-processed image is acquired, the model file is loaded so as to determine the barrel-shaped image corresponding to the to-be-processed image.
According to an exemplary embodiment of the disclosure, with reference to Fig. 14, the network saving module 131 may include a network verification unit 141 and a network storage unit 143.
Specifically, the network verification unit 141 may be configured to verify the trained generation network using verification images, and the network storage unit 143 may be configured to save the trained generation network as a model file of the predetermined format if the verification result meets a preset condition.
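The verify-then-save behavior of units 141 and 143 might look like the following sketch. The JSON format, the `validate` callback, and the 0.9 threshold are all hypothetical stand-ins for the patent's unspecified "predetermined format" and "preset condition".

```python
import json
import os
import tempfile

def save_if_valid(generator_weights, validate, threshold=0.9,
                  path="generator_model.json"):
    """Verify a trained generation network before persisting it.

    `validate` scores the network on held-out verification images; the
    network is saved as a model file only when the score meets the
    preset condition (here: score >= threshold)."""
    score = validate(generator_weights)
    if score < threshold:
        return False          # verification failed: do not save
    with open(path, "w") as f:
        json.dump({"weights": generator_weights, "score": score}, f)
    return True

# Usage sketch with a dummy weight list and validator.
weights = [0.1, -0.4, 0.7]
path = os.path.join(tempfile.mkdtemp(), "generator_model.json")
saved = save_if_valid(weights, validate=lambda w: 0.95, path=path)
```

At inference time, the saved model file would be loaded when a to-be-processed image is acquired, matching the loading behavior described for the network saving module 131.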
Since each functional module of the image processing apparatus of this embodiment of the present invention corresponds exactly to the above method embodiment of the invention, details are not described herein again.
In an exemplary embodiment of the disclosure, a computer-readable storage medium is further provided, on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, various aspects of the invention may also be implemented in the form of a program product comprising program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the invention described in the "Exemplary Methods" section of this specification.
With reference to Fig. 15, a program product 1500 for implementing the above method according to an embodiment of the present invention is described; it may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the invention is not limited thereto. In this document, a readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection with one or more conducting wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on the readable medium may be transmitted over any suitable medium, including but not limited to wireless, wired, optical cable, RF, or any suitable combination of the foregoing.
The program code for performing the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In an exemplary embodiment of the disclosure, an electronic device capable of implementing the above method is further provided.
Those of ordinary skill in the art will appreciate that various aspects of the present invention may be implemented as a system, a method, or a program product. Therefore, various aspects of the invention may be embodied in the following forms: a complete hardware embodiment, a complete software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may be collectively referred to herein as a "circuit," a "module," or a "system."
An electronic device 1600 according to this embodiment of the present invention is described below with reference to Fig. 16. The electronic device 1600 shown in Fig. 16 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in Fig. 16, the electronic device 1600 takes the form of a general-purpose computing device. Components of the electronic device 1600 may include, but are not limited to: at least one processing unit 1610, at least one storage unit 1620, a bus 1630 connecting different system components (including the storage unit 1620 and the processing unit 1610), and a display unit 1640.
The storage unit stores program code, and the program code may be executed by the processing unit 1610, so that the processing unit 1610 performs the steps according to the various exemplary embodiments of the present invention described in the "Exemplary Methods" section of this specification. For example, the processing unit 1610 may perform steps S22 to S28 as shown in Fig. 2.
The storage unit 1620 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 16201 and/or a cache storage unit 16202, and may further include a read-only storage unit (ROM) 16203.
The storage unit 1620 may also include a program/utility 16204 having a set of (at least one) program modules 16205. Such program modules 16205 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 1630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 1600 may also communicate with one or more external devices 1700 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1600, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 1600 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 1650. Moreover, the electronic device 1600 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 1660. As shown, the network adapter 1660 communicates with other modules of the electronic device 1600 through the bus 1630. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 1600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to an embodiment of the disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to perform the method according to the embodiments of the disclosure.
In addition, the above-mentioned drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiments of the present invention, and are not intended to be limiting. It is easy to understand that the processing shown in the drawings does not indicate or limit the chronological order of these processes. It is also easy to understand that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of the device for performing actions are mentioned in the above detailed description, such division is not mandatory. In fact, according to the embodiments of the disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
Other embodiments of the disclosure will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be regarded as illustrative only, with the true scope and spirit of the disclosure being indicated by the claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (10)
1. An image processing method, characterized by comprising:
determining multiple groups of images as training samples, wherein each group of images in the multiple groups of images includes an original image and a barrel-shaped image corresponding to the original image;
training a generation network and a discrimination network in a generative adversarial neural network using the training samples, to determine a trained generation network;
inputting a to-be-processed image into the trained generation network, to determine a barrel-shaped image corresponding to the to-be-processed image; and
displaying the barrel-shaped image corresponding to the to-be-processed image on a screen of a virtual reality device.
2. The image processing method according to claim 1, characterized in that determining multiple groups of images as training samples comprises:
determining an original image;
calculating lens distortion parameters of the virtual reality device; and
converting the original image into a barrel-shaped image corresponding to the original image using the lens distortion parameters,
wherein the original image and the barrel-shaped image together serve as one group of images used as a training sample.
3. The image processing method according to claim 1, characterized in that training the generation network and the discrimination network in the generative adversarial neural network using the training samples comprises:
inputting an original image in the multiple groups of images into the generation network of the generative adversarial neural network, to determine an intermediate image corresponding to the original image;
inputting the barrel-shaped image in the same group as the original image into the discrimination network of the generative adversarial neural network, to determine a first confidence value;
inputting the intermediate image into the discrimination network, to determine a second confidence value; and
determining a loss of the generation network using the second confidence value, and determining a loss of the discrimination network using the first confidence value and the second confidence value, so as to train the generation network and the discrimination network in the generative adversarial neural network.
4. The image processing method according to claim 3, characterized in that determining the loss of the generation network using the second confidence value comprises:
determining the loss of the generation network based on the cross entropy between the second confidence value and 1.
5. The image processing method according to claim 4, characterized in that determining the loss of the discrimination network using the first confidence value and the second confidence value comprises:
determining the loss of the discrimination network based on the cross entropy between the first confidence value and 1 and the cross entropy between the second confidence value and 0.
6. The image processing method according to claim 1, characterized in that the image processing method further comprises:
saving the trained generation network as a model file of a predetermined format,
wherein, when a to-be-processed image is acquired, the model file is loaded so as to determine the barrel-shaped image corresponding to the to-be-processed image.
7. The image processing method according to claim 6, characterized in that saving the trained generation network as a model file of a predetermined format comprises:
verifying the trained generation network using verification images; and
saving the trained generation network as a model file of the predetermined format if the verification result meets a preset condition.
8. An image processing apparatus, characterized by comprising:
a training sample determining module, configured to determine multiple groups of images as training samples, wherein each group of images in the multiple groups of images includes an original image and a barrel-shaped image corresponding to the original image;
a network training module, configured to train a generation network and a discrimination network in a generative adversarial neural network using the training samples, to determine a trained generation network;
an image determining module, configured to input a to-be-processed image into the trained generation network, to determine a barrel-shaped image corresponding to the to-be-processed image; and
an image display module, configured to display the barrel-shaped image corresponding to the to-be-processed image on a screen of a virtual reality device.
9. A storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 7.
10. An electronic device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor,
wherein the processor is configured to perform, via execution of the executable instructions, the image processing method according to any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910008670.0A (CN109741250B) | 2019-01-04 | 2019-01-04 | Image processing method and device, storage medium and electronic equipment
Publications (2)

Publication Number | Publication Date
---|---
CN109741250A | 2019-05-10
CN109741250B | 2023-06-16
Family

ID=66363519

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201910008670.0A (Active, granted as CN109741250B) | Image processing method and device, storage medium and electronic equipment | 2019-01-04 | 2019-01-04

Country Status (1)

Country | Link
---|---
CN | CN109741250B
Citations (5)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO2017107524A1 | 2015-12-21 | 2017-06-29 | 乐视控股(北京)有限公司 | Imaging distortion test method and apparatus for virtual reality helmet
CN107154027A | 2017-04-17 | 2017-09-12 | 深圳大学 | Compensation method and device for restoring a distorted image
CN107451965A | 2017-07-24 | 2017-12-08 | 深圳市智美达科技股份有限公司 | Distorted face image correction method and device, computer equipment, and storage medium
CN107945133A | 2017-11-30 | 2018-04-20 | 北京小米移动软件有限公司 | Image processing method and device
CN109120854A | 2018-10-09 | 2019-01-01 | 北京旷视科技有限公司 | Image processing method and device, electronic equipment, and storage medium
Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN110347305A | 2019-05-30 | 2019-10-18 | 华为技术有限公司 | VR multi-screen display method and electronic device
US11829521B2 | 2019-05-30 | 2023-11-28 | Huawei Technologies Co., Ltd. | VR multi-screen display method and electronic device
Also Published As

Publication number | Publication date
---|---
CN109741250B | 2023-06-16
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |