CN109933320A - Image generation method and server - Google Patents

Image generation method and server

Info

Publication number
CN109933320A
Authority
CN
China
Prior art keywords
information
target
displayed
image
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811629005.5A
Other languages
Chinese (zh)
Other versions
CN109933320B (en)
Inventor
郝瑞祥 (Hao Ruixiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201811629005.5A
Publication of CN109933320A
Application granted
Publication of CN109933320B
Legal status: Active (current)
Anticipated expiration: legal status

Landscapes

  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses an image generation method. The method includes: monitoring whether target information in a target region of a page changes, to obtain a monitoring result; if the monitoring result indicates that the target information has changed, obtaining information to be displayed; determining a target image based on the obtained information to be displayed, the target image being used to characterize the information to be displayed; and outputting the target image, so that the target image can be displayed in the target region. The invention also discloses a server.

Description

Image generation method and server
Technical field
The present invention relates to image generation technology, and in particular to an image generation method and a server.
Background art
In the prior art, generating an icon corresponding to text in a web page can only be done by a designer. When the text in the web page changes, the designer usually has to redesign the corresponding icon to fit the changed text, which takes a long time and is unfavorable to keeping the text and the images in the web page synchronized.
Summary of the invention
Embodiments of the present invention provide an image generation method and a server. When target text is obtained, the components of a target image can be determined based on the target information; the determined components of the target image are then combined to form the target image to be output, where the target image is used to characterize the text content of the target text information.
The technical solution of the embodiments of the present invention is implemented as follows:
The present invention provides an image generation method, the method comprising:
monitoring whether target information in a target region of a page changes, to obtain a monitoring result;
if the monitoring result indicates that the target information has changed, obtaining information to be displayed;
determining a target image based on the obtained information to be displayed, the target image being used to characterize the information to be displayed;
outputting the target image, so that the target image can be displayed in the target region.
In the above scheme, the method further comprises:
when the quantity of the information to be displayed is at least two,
determining a display style according to the association relationship between the at least two pieces of information to be displayed;
determining, based on the determined display style, a target image corresponding to each piece of information to be displayed.
In the above scheme, determining a target image based on the obtained information to be displayed comprises:
decomposing the full text of the information to be displayed by an algorithm adapted to a deconvolution neural network model, and determining the display style corresponding to the information to be displayed.
In the above scheme, determining a target image based on the obtained information to be displayed comprises:
processing the full text of the information to be displayed by a deconvolution neural network model to confirm the corresponding target information;
performing graphical processing on the determined target information to form the target image.
In the above scheme, processing the full text of the information to be displayed by the deconvolution neural network model to confirm the corresponding target information comprises:
decomposing the target information by a first decoder model that performs sentence-level decoding in the deconvolution neural network model;
decoding the processing result of the first decoder model by a second decoder model that performs word-level decoding in the deconvolution neural network model, to determine the keywords in the target information.
In the above scheme, the method further comprises:
obtaining the components of the target image based on the keywords in the determined target information.
In the above scheme, performing graphical processing on the determined target information to form the target image comprises:
processing the intersection of the components of the image corresponding to the target information through the deconvolution layers and unpooling layers of the deconvolution neural network model, to obtain a downsampled result of the components of the target image;
processing the downsampled result through the unpooling layer of the deconvolution neural network model to form the target image to be output.
In the above scheme, the method further comprises:
determining, based on characteristic information of the target region, the pixels of the target image through the deconvolution layers of the deconvolution neural network model, so that the target image to be output is adapted to the target region.
In the above scheme, the method further comprises:
training the deconvolution neural network model that generates target images based on information, using image samples and the classification labels of the image samples and the information.
In the above scheme, training the deconvolution neural network model that generates target images based on information, using image samples and the classification labels of the image samples and the information, comprises:
training the first decoder model that performs sentence-level decoding in the deconvolution neural network model, based on sentence samples in the information and the corresponding decoding results.
In the above scheme, the method further comprises:
training the second decoder model that performs word-level decoding in the deconvolution neural network model, based on word samples in the information and the corresponding decoding results.
In the above scheme, the method further comprises:
updating the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the training results of the deconvolution neural network model that generates target images based on information;
iteratively training the deconvolution neural network model based on the updated adaptation algorithm and/or model parameters.
The present invention also provides a server, the server comprising:
an information obtaining module, configured to monitor whether target information in a target region of a page changes and obtain a monitoring result;
the information obtaining module, further configured to obtain information to be displayed;
an information processing module, configured to determine a target image based on the obtained information to be displayed, the target image being used to characterize the information to be displayed;
an information output module, configured to output the target image, so that the target image can be displayed in the target region.
In the above scheme,
the information processing module is configured to, when the quantity of the information to be displayed is at least two, determine a display style according to the association relationship between the at least two pieces of information to be displayed;
the information processing module is configured to determine, based on the determined display style, a target image corresponding to each piece of information to be displayed.
In the above scheme,
the information processing module is configured to decompose the full text of the information to be displayed by an algorithm adapted to the deconvolution neural network model, and to determine the display style corresponding to the information to be displayed.
In the above scheme,
the information processing module is configured to process the full text of the information to be displayed by the deconvolution neural network model to confirm the corresponding target information;
the information processing module is configured to perform graphical processing on the determined target information to form the target image.
In the above scheme,
the information processing module is configured to decompose the target information by a first decoder model that performs sentence-level decoding in the deconvolution neural network model;
and to decode the processing result of the first decoder model by a second decoder model that performs word-level decoding in the deconvolution neural network model, to determine the keywords in the target information.
In the above scheme,
the information processing module is configured to obtain the components of the target image based on the keywords in the determined target information.
In the above scheme,
the information processing module is configured to process the intersection of the components of the image corresponding to the target information through the deconvolution layers and unpooling layers of the deconvolution neural network model, to obtain a downsampled result of the components of the target image;
the information processing module is configured to process the downsampled result through the unpooling layer of the deconvolution neural network model to form the target image to be output.
In the above scheme,
the information processing module is configured to determine, based on characteristic information of the target region, the pixels of the target image through the deconvolution layers of the deconvolution neural network model, so that the target image to be output is adapted to the target region.
In the above scheme, the server further comprises:
a training module, configured to train the deconvolution neural network model that generates target images based on information, using image samples and the classification labels of the image samples and the information.
In the above scheme,
the training module is configured to train the first decoder model that performs sentence-level decoding in the deconvolution neural network model, based on sentence samples in the information and the corresponding decoding results.
In the above scheme,
the training module is configured to train the second decoder model that performs word-level decoding in the deconvolution neural network model, based on word samples in the information and the corresponding decoding results.
In the above scheme,
the training module is configured to update the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the training results of the deconvolution neural network model that generates target images based on information;
the training module is configured to iteratively train the deconvolution neural network model based on the updated adaptation algorithm and/or model parameters.
The present invention also provides a server, the server comprising:
a memory, configured to store executable instructions;
a processor, configured to execute the image generation method provided by the present invention when running the executable instructions stored in the memory.
In the embodiments of the present invention, target information is obtained, the components of the target image are determined based on the target information, and the determined components of the target image are combined. This avoids the need for a designer to manually update the image corresponding to changed text information whenever the text information in a web page changes, and achieves image generation that flexibly adapts to changes of the information in the web page.
Brief description of the drawings
Fig. 1 is an optional flow diagram of the image generation method provided by an embodiment of the present invention;
Fig. 2 is an optional structural schematic diagram of the server provided by an embodiment of the present invention;
Fig. 3 is an optional structural schematic diagram of the server provided by an embodiment of the present invention;
Fig. 4A is a schematic diagram of an optional usage scenario of the image generation method provided by an embodiment of the present invention;
Fig. 4B is a schematic diagram of an optional usage scenario of the image generation method provided by an embodiment of the present invention;
Fig. 5 is an optional structural schematic diagram of the server provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present invention, and all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used herein are intended only to describe the embodiments of the present invention and are not intended to limit the present invention.
Before the embodiments of the present invention are further elaborated, the nouns and terms involved in the embodiments of the present invention are explained; the nouns and terms involved in the embodiments of the present invention apply to the following explanations.
1) Display style: used to characterize display features in a web page that are diverse yet consistent in identity.
Fig. 1 is an optional flow diagram of the image generation method provided by an embodiment of the present invention. As shown in Fig. 1, the steps of an optional flow of the image generation method provided by an embodiment of the present invention are described. The method shown in Fig. 1 is applied to a server, and the server can process the pages of a web page.
Step 101: monitor the target information in a target region of a page, and obtain a monitoring result.
Step 102: determine whether the target information in the target region of the page has changed; if so, execute step 103; otherwise, execute step 104.
Step 103: obtain the information to be displayed.
Step 104: continue monitoring the target information in the target region of the page.
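As a rough illustration of steps 101 to 104, the Python sketch below polls the target region and reports the new text only when it changes; the read_region callable, the polling interval, and the polling approach itself are assumptions, since the embodiment does not specify how the region is monitored.

```python
import time
from typing import Callable, Iterator

def monitor_region(read_region: Callable[[], str], interval_s: float = 5.0) -> Iterator[str]:
    """Poll the target region of the page; whenever its text differs from the last
    snapshot (the monitoring result), yield the new text as the information to be displayed."""
    last_snapshot = read_region()
    while True:
        time.sleep(interval_s)
        current = read_region()
        if current != last_snapshot:   # the target information has changed
            last_snapshot = current
            yield current              # treated as the information to be displayed
```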
Step 105: determine a target image based on the obtained information to be displayed.
The target image is used to characterize the information to be displayed.
In an embodiment of the present invention, when the quantity of the information to be displayed is at least two, a display style is determined according to the association relationship between the at least two pieces of information to be displayed, and a target image corresponding to each piece of information to be displayed is determined based on the determined display style. With the technical solution shown in this embodiment, when at least two pieces of information to be displayed are obtained and an association relationship is determined to exist between them, the same display style can be selected, and the target image corresponding to each piece of information to be displayed is determined according to that display style. For example, pieces of information to be displayed that all belong to the children's category can be rendered with a display style that likewise uses children's elements, and pieces of information to be displayed that all belong to the current-affairs category can be rendered with a display style of red font on a gold background.
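A minimal sketch of this style selection follows; the category names, the style table, and the way association is detected (a shared category) are all assumptions made for illustration, not part of the patent's disclosure.

```python
from typing import Dict, List

# Hypothetical style table: one shared display style per category of associated information.
STYLE_BY_CATEGORY = {
    "children": {"palette": "bright", "elements": "children"},
    "current_affairs": {"font_color": "red", "background": "gold"},
}

def pick_display_style(items: List[Dict[str, str]]) -> Dict[str, str]:
    """If all pieces of information to be displayed share one category (i.e. they are
    associated), reuse that category's display style; otherwise fall back to a neutral one."""
    categories = {item["category"] for item in items}
    if len(categories) == 1:
        return STYLE_BY_CATEGORY.get(categories.pop(), {"palette": "neutral"})
    return {"palette": "neutral"}

print(pick_display_style([
    {"text": "policy briefing A", "category": "current_affairs"},
    {"text": "policy briefing B", "category": "current_affairs"},
]))  # shared red-on-gold style
```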
In an embodiment of the present invention, determining a target image based on the obtained information to be displayed comprises:
decomposing the full text of the information to be displayed by an algorithm adapted to the deconvolution neural network model, and determining the display style corresponding to the information to be displayed. With the technical solution shown in this embodiment, when the full text of the information to be displayed is long, decomposing the full text by an algorithm adapted to the deconvolution neural network model makes it possible to confirm the display style corresponding to the full text, so that an image in the corresponding display style accurately reflects the information to be displayed, and the user can quickly grasp the content of the information to be displayed.
In an embodiment of the present invention, determining a target image based on the obtained information to be displayed comprises:
processing the full text of the information to be displayed by the deconvolution neural network model to confirm the corresponding target information, and performing graphical processing on the determined target information to form the target image. With the technical solution shown in this embodiment, the deconvolution neural network model can process the full text of the information to be displayed, extract the target information from the full text, and determine the target image through graphical processing of the target information. The resulting target image fits the full text of the information to be displayed more closely, and the user can intuitively recognize, from the determined target image, the content corresponding to the full text of the information to be displayed.
In an embodiment of the present invention, processing the full text of the information to be displayed by the deconvolution neural network model to confirm the corresponding target information comprises:
decomposing the target information by a first decoder model that performs sentence-level decoding in the deconvolution neural network model, and decoding the processing result of the first decoder model by a second decoder model that performs word-level decoding in the deconvolution neural network model, to determine the keywords in the target information. Because the full text of the information to be displayed usually contains a large amount of text, is typically composed of different sentences, and each sentence can further be decomposed into different words, the technical solution shown in this embodiment first decomposes the target information with the first, sentence-level decoder model and then decodes the result with the second, word-level decoder model; after the two successive decoding passes, the keywords in the corresponding full text can be determined.
In an embodiment of the present invention, the method further comprises:
obtaining the components of the target image based on the keywords in the determined target information. Because the image generation method is applied in a server, the components of images can be stored in the server, and corresponding target images can be formed quickly by combining different image components. With the technical solution shown in this embodiment, after the successive processing by the first decoder model and the second decoder model in the deconvolution neural network model, the keywords in the corresponding full text are obtained; by looking up the image components corresponding to those keywords, the target image can be generated quickly through the deconvolution neural network model, which reduces the waiting time for generating and outputting the target image.
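A minimal sketch of the keyword-to-component lookup, assuming the server keeps a keyword-indexed component store; the store contents and file names are hypothetical, while the keywords reuse the "sport" and "basketball" example from the Fig. 4A scenario.

```python
from typing import List

# Hypothetical component store kept on the server, indexed by keyword.
COMPONENT_STORE = {
    "basketball": "components/basketball.png",
    "sport": "components/stadium.png",
}

def components_for_keywords(keywords: List[str]) -> List[str]:
    """Look up the stored image components that correspond to the decoded keywords."""
    return [COMPONENT_STORE[k] for k in keywords if k in COMPONENT_STORE]

print(components_for_keywords(["sport", "basketball"]))
# ['components/stadium.png', 'components/basketball.png']
```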
In an embodiment of the present invention, performing graphical processing on the determined target information to form the target image comprises:
processing the intersection of the components of the image corresponding to the target information through the deconvolution layers and unpooling layers of the deconvolution neural network model, to obtain a downsampled result of the components of the target image, and processing the downsampled result through the unpooling layer of the deconvolution neural network model to form the target image to be output. Specifically, a bidirectional long short-term memory recurrent neural network (Bi-directional LSTM RNN) can be used to perform word-level decoding and sentence-level decoding, respectively, on the text information of at least two types for the picture, where the word-level decoding and the sentence-level decoding of the text information of the at least two types can use the same decoder model. When the first decoder model is a sentence decoder, the second decoder model is a long short-term memory (LSTM, Long Short-Term Memory) network.
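The sketch below shows, in PyTorch, the kind of unpooling ("anti-pooling") and transposed-convolution ("deconvolution") operations this embodiment refers to; the layer sizes, channel counts, and the dummy fused component feature map are assumptions, not the patent's actual network.

```python
import torch
import torch.nn as nn

# Terms mapped to PyTorch layers: "anti-pooling layer" -> MaxUnpool2d,
# "deconvolution layer" -> ConvTranspose2d. All sizes here are illustrative.
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
deconv = nn.ConvTranspose2d(8, 3, kernel_size=3, padding=1)

fused_components = torch.randn(1, 8, 32, 32)    # fused feature map of the image components (dummy)
downsampled, indices = pool(fused_components)    # "downsampled result" of the components
restored = unpool(downsampled, indices)          # anti-pooling restores the spatial size
target_image = torch.sigmoid(deconv(restored))   # (1, 3, 32, 32) image to be output
```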
In an embodiment of the present invention, the method further comprises:
determining, based on the characteristic information of the target region, the pixels of the target image through the deconvolution layers of the deconvolution neural network model, so that the target image to be output is adapted to the target region. Because the image generation method is applied to a server and the target region varies with the type of terminal, the technical solution shown in this embodiment determines the pixels of the target image through the deconvolution layers of the deconvolution neural network model, so that the length, width, and height of the resulting target image to be output fit the corresponding target display region.
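In the patent the pixel dimensions come out of the deconvolution layers themselves; as a simpler stand-in, the sketch below resizes an already generated image to an assumed target-region size with bilinear interpolation. The region size and the use of interpolation are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

target_image = torch.rand(1, 3, 32, 32)             # image produced by the model (dummy)
region_height, region_width = 120, 240              # characteristic size of the target region (assumed)
fitted = F.interpolate(target_image, size=(region_height, region_width),
                       mode="bilinear", align_corners=False)
print(fitted.shape)                                 # torch.Size([1, 3, 120, 240])
```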
Step 106: output the target image, so that the target image can be displayed in the target region.
In an embodiment of the present invention, the method further comprises:
training the deconvolution neural network model that generates target images based on information, using image samples and the classification labels of the image samples and the information.
In an embodiment of the present invention, training the deconvolution neural network model that generates target images based on information, using image samples and the classification labels of the image samples and the information, comprises:
training the first decoder model that performs sentence-level decoding in the deconvolution neural network model, based on sentence samples in the information and the corresponding decoding results. Further, the method further comprises:
training the second decoder model that performs word-level decoding in the deconvolution neural network model, based on word samples in the information and the corresponding decoding results. With the technical solution shown in this embodiment, dedicated training of the neural network model and of the different decoders can be achieved, so that the adaptation parameters of the different decoders can be adjusted in time.
In an embodiment of the present invention, the method further comprises:
updating the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the training results of the deconvolution neural network model that generates target images based on information, and iteratively training the deconvolution neural network model based on the updated adaptation algorithm and/or model parameters. Because the neural network model and the different decoders may make sporadic mistakes during decoding, the technical solution shown in this embodiment updates the adaptation algorithm and/or model parameters of the deconvolution neural network model and performs iterative training, which reduces the probability of triggering sporadic mistakes, so that the target image generated by the deconvolution neural network model matches the information to be displayed more closely.
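A minimal sketch of such an iterative training loop in PyTorch follows. The stand-in model, loss, optimizer, batch size, and dummy labeled samples are all assumptions; the patent's actual model is the deconvolution network with its two decoders, and its adaptation algorithm is not specified.

```python
import torch
import torch.nn as nn

# Stand-in model mapping encoded information to keyword classes (illustrative only).
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 5000))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(3):                                 # three iterative (re)training rounds
    features = torch.randn(16, 256)                # encoded information samples (dummy)
    labels = torch.randint(0, 5000, (16,))         # classification labels of the samples (dummy)
    loss = criterion(model(features), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                               # updated parameters feed the next round
```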
Fig. 2 is an optional structural schematic diagram of the server 200 provided by an embodiment of the present invention. As shown in Fig. 2, an optional structure of the server 200 provided by an embodiment of the present invention includes:
an information obtaining module 201, configured to monitor whether the target information in a target region of a page changes and obtain a monitoring result;
the information obtaining module 201, further configured to obtain the information to be displayed;
an information processing module 202, configured to determine a target image based on the obtained information to be displayed, the target image being used to characterize the information to be displayed;
an information output module 203, configured to output the target image, so that the target image can be displayed in the target region.
In an embodiment of the present invention, the information processing module 202 is configured to, when the quantity of the information to be displayed is at least two, determine a display style according to the association relationship between the at least two pieces of information to be displayed, and to determine, based on the determined display style, a target image corresponding to each piece of information to be displayed. With the technical solution shown in this embodiment, when at least two pieces of information to be displayed are obtained and an association relationship is determined to exist between them, the same display style can be selected, and the target image corresponding to each piece of information to be displayed is determined according to that display style; for example, pieces of information to be displayed that all belong to the children's category can be rendered with a display style that likewise uses children's elements, and pieces of information to be displayed that all belong to the current-affairs category can be rendered with a display style of red font on a gold background.
In an embodiment of the present invention, the information processing module 202 is configured to decompose the full text of the information to be displayed by an algorithm adapted to the deconvolution neural network model and to determine the display style corresponding to the information to be displayed. With the technical solution shown in this embodiment, when the full text of the information to be displayed is long, decomposing it by an algorithm adapted to the deconvolution neural network model makes it possible to confirm the display style corresponding to the full text, so that an image in the corresponding display style accurately reflects the information to be displayed and the user can quickly grasp its content.
In an embodiment of the present invention, the information processing module 202 is configured to process the full text of the information to be displayed by the deconvolution neural network model to confirm the corresponding target information, and to perform graphical processing on the determined target information to form the target image. With the technical solution shown in this embodiment, the deconvolution neural network model can process the full text of the information to be displayed, extract the target information from the full text, and determine the target image through graphical processing of the target information; the resulting target image fits the full text more closely, and the user can intuitively recognize from it the content corresponding to the full text of the information to be displayed.
In an embodiment of the present invention, the information processing module 202 is configured to decompose the target information by the first decoder model that performs sentence-level decoding in the deconvolution neural network model, and to decode the processing result of the first decoder model by the second decoder model that performs word-level decoding in the deconvolution neural network model, to determine the keywords in the target information. Because the full text of the information to be displayed usually contains a large amount of text, is typically composed of different sentences, and each sentence can further be decomposed into different words, the two successive decoding passes allow the keywords in the corresponding full text to be determined.
In an embodiment of the present invention, the information processing module 202 is configured to obtain the components of the target image based on the keywords in the determined target information. Because the image generation method is applied in a server, the components of images can be stored in the server, and corresponding target images can be formed quickly by combining different image components; after the successive processing by the first and second decoder models, the keywords in the full text are obtained, and by looking up the image components corresponding to those keywords, the target image can be generated quickly through the deconvolution neural network model, which reduces the waiting time for generating and outputting the target image.
In an embodiment of the present invention, the information processing module 202 is configured to process the intersection of the components of the image corresponding to the target information through the deconvolution layers and unpooling layers of the deconvolution neural network model to obtain a downsampled result of the components of the target image, and to process the downsampled result through the unpooling layer of the deconvolution neural network model to form the target image to be output. Specifically, a bidirectional long short-term memory recurrent neural network (Bi-directional LSTM RNN) can be used to perform word-level decoding and sentence-level decoding, respectively, on the text information of at least two types for the picture, where the word-level decoding and the sentence-level decoding can use the same decoder model; when the first decoder model is a sentence decoder, the second decoder model is a long short-term memory (LSTM) network.
In an embodiment of the present invention, the information processing module 202 is configured to determine, based on the characteristic information of the target region, the pixels of the target image through the deconvolution layers of the deconvolution neural network model, so that the target image to be output is adapted to the target region. Because the image generation method is applied to a server and the target region varies with the type of terminal, determining the pixels of the target image through the deconvolution layers ensures that the length, width, and height of the resulting target image to be output fit the corresponding target display region.
In an embodiment of the present invention, the server further includes:
a training module (not shown in the figure), configured to train the deconvolution neural network model that generates target images based on information, using image samples and the classification labels of the image samples and the information.
In an embodiment of the present invention, the training module is configured to train the first decoder model that performs sentence-level decoding in the deconvolution neural network model, based on sentence samples in the information and the corresponding decoding results; further, the training module is configured to train the second decoder model that performs word-level decoding in the deconvolution neural network model, based on word samples in the information and the corresponding decoding results. With the technical solution shown in this embodiment, dedicated training of the neural network model and of the different decoders can be achieved, so that the adaptation parameters of the different decoders can be adjusted in time.
In an embodiment of the present invention, the training module is configured to update the adaptation algorithm and/or model parameters of the deconvolution neural network model according to the training results of the deconvolution neural network model that generates target images based on information, and to iteratively train the deconvolution neural network model based on the updated adaptation algorithm and/or model parameters. Because the neural network model and the different decoders may make sporadic mistakes during decoding, updating the adaptation algorithm and/or model parameters and performing iterative training reduces the probability of triggering sporadic mistakes, so that the target image generated by the deconvolution neural network model matches the information to be displayed more closely.
Fig. 3 is an optional structural schematic diagram of the server provided by an embodiment of the present invention. As shown in Fig. 3, an optional structure diagram of the server provided by an embodiment of the present invention is given; the modules involved in Fig. 3 are described below.
Image encoder 301, configured to process the image intersection through the deconvolution layers and the max-unpooling layer of the deconvolution neural network model to obtain a downsampled result of the image, and to process the downsampled result through the average-unpooling layer of the deconvolution neural network model to obtain the target image corresponding to the information to be displayed. Specifically, a bidirectional long short-term memory recurrent neural network (Bi-directional LSTM RNN) can be used to perform word-level decoding and sentence-level decoding, respectively, on the text information of at least two types for the picture, where the word-level decoding and the sentence-level decoding can use the same decoder model; when the first decoder model is a sentence decoder, the second decoder model is a long short-term memory (LSTM, Long Short-Term Memory) network.
Text decoder 302, configured to decompose the target information by the first decoder model that performs sentence-level decoding in the deconvolution neural network model, to form a sentence-level decoding result.
Text decoder 303, configured to decode the processing result of the first decoder model and determine the keywords in the target information.
Fig. 4 A is an optional usage scenario schematic diagram of image generating method provided in an embodiment of the present invention, such as Fig. 4 A Shown, server info obtains module, for monitoring whether target information in page target region changes, obtains a monitoring knot Fruit, and then can be used for obtaining two information to be displayed;The message processing module is according at least two information to be displayed Incidence relation, determine display styles be sports display styles.Message processing module passes through the deconvolution neural network Decoded first decoder model of sentence rank is carried out in model, and two target informations are decomposed;Pass through the deconvolution Decoded second decoder model of word rank is carried out in neural network model, to the processing result of first decoder model into Row decoding determines that the keyword in the target information is sport and basketball, based on the pass in the identified target information Keyword basketball and movement obtain the component of target image, and to corresponding component graphical treatment, form target figure Picture.Message output module exports the target image, so as to show the target image in the target area.
Fig. 4 B is an optional usage scenario schematic diagram of image generating method provided in an embodiment of the present invention, with Fig. 4 A Shown in unlike usage scenario, server image generated is needed to export to the browser of mobile phone terminal and be shown in Fig. 4 B In interface, therefore, on the basis of the treatment process shown in Fig. 4 A, need based on the target area characteristic information, by anti- The warp lamination of convolutional neural networks model determines the pixel of the target image, to realize the target image to be output and institute Target area is stated to be adapted.
Fig. 5 is an optional structural schematic diagram of the server provided by an embodiment of the present invention. As shown in Fig. 5, the server 500 may be a mobile phone with an image generation function, a computer, a digital broadcast terminal, an information transceiving device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like. The server 500 shown in Fig. 5 includes: at least one processor 501, a memory 502, at least one network interface 504, and a user interface 503. The components in the server 500 are coupled together by a bus system 505. It can be understood that the bus system 505 is used to implement connection and communication between these components. In addition to a data bus, the bus system 505 also includes a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are all labeled as the bus system 505 in Fig. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch pad, a touch screen, or the like.
It can be understood that the memory 502 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM, Read Only Memory), a programmable read-only memory (PROM, Programmable Read-Only Memory), an erasable programmable read-only memory (EPROM, Erasable Programmable Read-Only Memory), an electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read-Only Memory), a ferromagnetic random access memory (FRAM, ferromagnetic random access memory), a flash memory (Flash Memory), a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM, Compact Disc Read-Only Memory); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a random access memory (RAM, Random Access Memory), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM, Static Random Access Memory), synchronous static random access memory (SSRAM, Synchronous Static Random Access Memory), dynamic random access memory (DRAM, Dynamic Random Access Memory), synchronous dynamic random access memory (SDRAM, Synchronous Dynamic Random Access Memory), double data rate synchronous dynamic random access memory (DDRSDRAM, Double Data Rate Synchronous Dynamic Random Access Memory), enhanced synchronous dynamic random access memory (ESDRAM, Enhanced Synchronous Dynamic Random Access Memory), synclink dynamic random access memory (SLDRAM, SyncLink Dynamic Random Access Memory), and direct rambus random access memory (DRRAM, Direct Rambus Random Access Memory). The memory 502 described in the embodiments of the present invention is intended to include these and any other suitable types of memory.
The memory 502 in the embodiments of the present invention includes, but is not limited to, ternary content-addressable memory and static random access memory, and can store image data, text data, image generation program data, and other types of data to support the operation of the server 500. Examples of such data include any computer program used to operate on the server 500, such as an operating system 5021 and application programs 5022, image data, text data, an image generation program, and so on. The operating system 5021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, used to implement various basic services and to handle hardware-based tasks. The application programs 5022 may include various application programs, such as a client or application program with an image generation function, used to implement various application services, including obtaining image information and first text information and generating second text information based on the image information and the first text information. A program implementing the method of the embodiments of the present invention may be included in the application programs 5022.
The methods disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 501 may implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium is located in the memory 502, and the processor 501 reads the information in the memory 502 and completes the steps of the foregoing methods in combination with its hardware.
In an exemplary embodiment, the server 500 may be implemented by one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), general-purpose processors, controllers, microcontrollers (MCU, Micro Controller Unit), microprocessors (Microprocessor), or other electronic components, used to execute the foregoing method.
In an exemplary embodiment, an embodiment of the present invention also provides a computer-readable storage medium, for example a memory 502 containing a computer program, where the computer program can be executed by the processor 501 of the server 500 to complete the steps of the foregoing method. The computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disc, or CD-ROM; it may also be a device that includes one of the above memories or any combination thereof, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it executes:
monitoring whether target information in a target region of a page changes, to obtain a monitoring result;
if the monitoring result indicates that the target information has changed, obtaining information to be displayed;
determining a target image based on the obtained information to be displayed, the target image being used to characterize the information to be displayed;
outputting the target image, so that the target image can be displayed in the target region.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the embodiments of the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce a computer-implemented process, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. An image generation method, characterized in that the method comprises:
monitoring whether target information in a target region of a page changes, to obtain a monitoring result;
if the monitoring result indicates that the target information has changed, obtaining information to be displayed;
determining a target image based on the obtained information to be displayed, the target image being used to characterize the information to be displayed;
outputting the target image, so that the target image can be displayed in the target region.
2. The method according to claim 1, characterized in that the method further comprises:
when the quantity of the information to be displayed is at least two,
determining a display style according to the association relationship between the at least two pieces of information to be displayed;
determining, based on the determined display style, a target image corresponding to each piece of information to be displayed.
3. The method according to claim 1, characterized in that determining a target image based on the obtained information to be displayed comprises:
decomposing the full text of the information to be displayed by an algorithm adapted to a deconvolution neural network model, and determining the display style corresponding to the information to be displayed.
4. The method according to claim 1, characterized in that determining a target image based on the obtained information to be displayed comprises:
processing the full text of the information to be displayed by a deconvolution neural network model to confirm the corresponding target information;
performing graphical processing on the determined target information to form the target image.
5. The method according to claim 4, characterized in that processing the full text of the information to be displayed by the deconvolution neural network model to confirm the corresponding target information comprises:
decomposing the target information by a first decoder model that performs sentence-level decoding in the deconvolution neural network model;
decoding the processing result of the first decoder model by a second decoder model that performs word-level decoding in the deconvolution neural network model, to determine the keywords in the target information.
6. The method according to claim 4, characterized in that performing graphical processing on the determined target information to form the target image comprises:
processing the intersection of the components of the image corresponding to the target information through the deconvolution layers and unpooling layers of the deconvolution neural network model, to obtain a downsampled result of the components of the target image;
processing the downsampled result through the unpooling layer of the deconvolution neural network model to form the target image to be output.
7. The method according to claim 1, characterized in that the method further comprises:
training a deconvolution neural network model that generates target images based on information, using image samples and the classification labels of the image samples and the information.
8. The method according to claim 1, characterized in that the method further comprises:
determining, based on characteristic information of the target region, the pixels of the target image through the deconvolution layers of a deconvolution neural network model, so that the target image to be output is adapted to the target region.
9. A server, characterized in that the server comprises:
an information obtaining module, configured to monitor whether target information in a target region of a page changes and obtain a monitoring result;
the information obtaining module, further configured to obtain information to be displayed;
an information processing module, configured to determine a target image based on the obtained information to be displayed, the target image being used to characterize the information to be displayed;
an information output module, configured to output the target image, so that the target image can be displayed in the target region.
10. A server, characterized in that the server comprises:
a memory, configured to store executable instructions;
a processor, configured to execute the image generation method according to any one of claims 1 to 8 when running the executable instructions stored in the memory.
CN201811629005.5A 2018-12-28 2018-12-28 Image generation method and server Active CN109933320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811629005.5A CN109933320B (en) 2018-12-28 2018-12-28 Image generation method and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811629005.5A CN109933320B (en) 2018-12-28 2018-12-28 Image generation method and server

Publications (2)

Publication Number Publication Date
CN109933320A true CN109933320A (en) 2019-06-25
CN109933320B CN109933320B (en) 2021-05-18

Family

ID=66984888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811629005.5A Active CN109933320B (en) 2018-12-28 2018-12-28 Image generation method and server

Country Status (1)

Country Link
CN (1) CN109933320B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014045314A1 (en) * 2012-09-18 2014-03-27 株式会社ソニー・コンピュータエンタテインメント (Sony Computer Entertainment Inc.) Information processing device and information processing method
CN104615764A (en) * 2015-02-13 2015-05-13 北京搜狗科技发展有限公司 (Beijing Sogou Technology Development Co., Ltd.) Display method and electronic device
CN108182016A (en) * 2016-12-08 2018-06-19 Lg电子株式会社 (LG Electronics Inc.) Mobile terminal and control method thereof
US20180204121A1 (en) * 2017-01-17 2018-07-19 Baidu Online Network Technology (Beijing) Co., Ltd Audio processing method and apparatus based on artificial intelligence
CN108959322A (en) * 2017-05-25 2018-12-07 富士通株式会社 (Fujitsu Ltd.) Information processing method and device for generating images based on text
CN108549850A (en) * 2018-03-27 2018-09-18 联想(北京)有限公司 (Lenovo (Beijing) Co., Ltd.) Image recognition method and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MOHAMMED EL AMIN LARABI et al.: "Convolutional neural network features based change detection in satellite images", First International Workshop on Pattern Recognition *
吕恩辉 (Lü Enhui): "Deep convolutional neural network learning based on deconvolution feature extraction" (基于反卷积特征提取的深度卷积神经网络学习), Control and Decision (《控制与决策》) *

Also Published As

Publication number Publication date
CN109933320B (en) 2021-05-18

Similar Documents

Publication number Title
CN108549850A (en) Image recognition method and electronic device
CN110334357A (en) Named entity recognition method, apparatus, storage medium and electronic device
CN109300008A (en) Information recommendation method and device
CN109087380A (en) Comic animated image generation method, device and storage medium
CN108096833B (en) Motion sensing game control method and device based on cascade neural network, and computing device
CN108875769A (en) Data annotation method, device and system, and storage medium
CN109725948A (en) Animation resource configuration method and device
CN109213932A (en) Information pushing method and device
CN110162191A (en) Expression recommendation method, device and storage medium
CN109271587A (en) Page generation method and device
CN107734352A (en) Information determination method, apparatus and storage medium
US20200410967A1 Method for display triggered by audio, computer apparatus and storage medium
CN108874336A (en) Information processing method and electronic device
CN107391535A (en) Method and device for searching documents in a document application
CN110716767B (en) Model component calling and generating method, device and storage medium
CN107454470A (en) Information recommendation method, device and storage medium
CN113010785B (en) User recommendation method and device
CN107368495A (en) Method and device for determining a user's sentiment toward an internet object
CN109871205A (en) GUI code adjustment method, device, computer apparatus and storage medium
CN108241404A (en) Method, apparatus and electronic device for obtaining offline operation time
CN112818219A (en) Method, system, electronic device and readable storage medium for explaining recommendation effect
CN109933320A (en) Image generation method and server
CN112799658B (en) Model training method, model training platform, electronic device and storage medium
CN116091741A (en) Digital collection generation method, device, storage medium and system
CN112241453B (en) Emotion attribute determination method and device, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant