CN110189246A - Image stylization generation method, device and electronic equipment - Google Patents

Image stylization generation method, device and electronic equipment

Info

Publication number
CN110189246A
Authority
CN
China
Prior art keywords
image
target object
interactive interface
processing parameter
stylized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910403860.2A
Other languages
Chinese (zh)
Other versions
CN110189246B (en)
Inventor
李华夏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910403860.2A priority Critical patent/CN110189246B/en
Publication of CN110189246A publication Critical patent/CN110189246A/en
Priority to PCT/CN2020/079205 priority patent/WO2020228406A1/en
Application granted granted Critical
Publication of CN110189246B publication Critical patent/CN110189246B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T3/04
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T7/00 — Image analysis
    • G06T7/20 — Analysis of motion

Abstract

Embodiments of the present disclosure provide an image stylization generation method, an apparatus and an electronic device, belonging to the technical field of data processing. The method comprises: obtaining multiple images containing a target object displayed on an interactive interface, the target object forming a first graphics region in the multiple images; determining, based on a variation trend of the first graphics region over a time series, whether the target object is in a static state; in response to the target object being in a static state, selecting one group of image processing parameters from multiple groups of image processing parameters stored in a preset lightweight model, to form first image processing parameters; and using the first image processing parameters and the lightweight model to convert, in real time during a first time period, the image to be presented in the current interactive interface into a first stylized image corresponding to the target object. With the processing scheme of the present disclosure, a stylization effect can be set for an image at random.

Description

Image stylization generation method, device and electronic equipment
Technical field
The present disclosure relates to the technical field of data processing, and more particularly to an image stylization generation method, an apparatus and an electronic device.
Background
With the continuous development and progress of society, electronic products have become widely used in people's lives. In recent years in particular, these products have not only spread quickly but have also been updated at a surprising pace. Software built on such electronic devices has developed rapidly as well, and more and more users use electronic devices such as smartphones for social networking and other network activities. In the course of these activities, an increasing number of people want the videos they shoot or record to have special stylized features.
Stylizing an image usually requires a large amount of computation on the photo taken or the video recorded by the user, which places high demands on the electronic device used for shooting, namely that the device have a high computing speed. Electronic devices currently on the market vary considerably in performance, which creates an obstacle to implementing stylization.
In addition, before the user takes a picture or records a video, the system usually should be able to present different types of stylization in a friendlier manner, so as to further improve the user experience.
Summary of the invention
In view of this, embodiments of the present disclosure provide an image stylization generation method, an apparatus and an electronic device, to at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides an image stylization generation method, comprising:
obtaining multiple images containing a target object displayed on an interactive interface, the target object forming a first graphics region in the multiple images;
determining, based on a variation trend of the first graphics region over a time series, whether the target object is in a static state;
in response to determining that the target object is in a static state, selecting one group of image processing parameters from multiple groups of image processing parameters stored in a preset lightweight model, to form first image processing parameters;
using the first image processing parameters and the lightweight model to convert, in real time during a first time period, an image to be presented in the current interactive interface into a first stylized image corresponding to the target object.
According to a specific implementation of an embodiment of the present disclosure, after the image to be presented in the current interactive interface is converted in real time into the first stylized image corresponding to the target object during the first time period, the method further comprises:
during a second time period after the first time period, displaying in real time a transition image of the target object in the interactive interface.
According to a specific implementation of an embodiment of the present disclosure, after the transition image of the target object is displayed in real time in the interactive interface, the method further comprises:
during a third time period after the second time period, displaying in real time an original image of the target object in the interactive interface, the original image being an image that has not undergone stylization processing.
According to a specific implementation of an embodiment of the present disclosure, displaying in real time the transition image of the target object in the interactive interface comprises:
obtaining n stylized images displayed during the second time period and the n original images corresponding to the n stylized images, the original images being images that have not undergone stylization processing;
setting a first transparency (n-i)/n for the i-th stylized image among the n stylized images, and setting a second transparency i/n for the i-th original image among the n original images;
displaying the stylized image with the first transparency superimposed on the original image with the second transparency.
According to a specific implementation of an embodiment of the present disclosure, after the original image of the target object is displayed in real time in the interactive interface, the method further comprises:
during a fourth time period after the third time period, selecting one group of image processing parameters from the multiple groups of image processing parameters stored in the preset lightweight model, to form second image processing parameters;
based on the second image processing parameters, converting the image to be presented in the current interactive interface in real time into a second stylized image corresponding to the target object during the fourth time period.
According to a specific implementation of an embodiment of the present disclosure, obtaining the multiple images containing the target object displayed on the interactive interface comprises:
acquiring video content in the interactive interface, to obtain a video file containing multiple video frames;
selecting multiple video frames from the video file, to form the multiple images containing the target object.
According to a specific implementation of an embodiment of the present disclosure, selecting multiple video frames from the video file to form the multiple images containing the target object comprises:
performing target object detection on the video frames in the video file, to obtain an image sequence containing the target object;
in the image sequence, judging whether the first graphics region in a current video frame is identical to the first graphics region in a previous video frame;
in response to the first graphics region in the current video frame being identical to the first graphics region in the previous video frame, deleting the current video frame from the image sequence.
According to a specific implementation of an embodiment of the present disclosure, after obtaining the multiple images containing the target object displayed on the interactive interface, the method further comprises:
selecting multiple structuring elements of different orientations;
performing detail description on the multiple images using each of the multiple structuring elements, to obtain a filtered image;
determining gray-scale edges of the filtered image, to obtain the number of pixels present at each of multiple gray levels in the filtered image;
weighting the number of pixels at each gray level, wherein the weighted average gray value is used as a threshold;
performing binarization on the filtered image based on the threshold;
using the binarized image as an edge image of the target object.
According to a specific implementation of an embodiment of the present disclosure, using the image processing parameters and the lightweight model to convert the image to be presented in the current interactive interface in real time into a stylized image corresponding to the target object comprises:
selecting multiple convolutional layers and pooling layers in the lightweight model, wherein the pooling layers use average pooling;
setting feature representations of the image to be presented and of the stylized image at the convolutional layers and pooling layers;
constructing a minimization loss function based on the feature representations;
generating the stylized image corresponding to the target object based on the minimization loss function.
In a second aspect, an embodiment of the present disclosure provides an image stylization generation apparatus, comprising:
an obtaining module, configured to obtain multiple images containing a target object displayed on an interactive interface, the target object forming a first graphics region in the multiple images;
a determining module, configured to determine, based on a variation trend of the first graphics region over a time series, whether the target object is in a static state;
a selecting module, configured to, in response to determining that the target object is in a static state, select one group of image processing parameters from multiple groups of image processing parameters stored in a preset lightweight model, to form first image processing parameters;
a conversion module, configured to use the first image processing parameters and the lightweight model to convert, in real time during a first time period, an image to be presented in the current interactive interface into a first stylized image corresponding to the target object.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the image stylization generation method in the first aspect or any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform the image stylization generation method in the first aspect or any implementation of the first aspect.
In a fifth aspect, an embodiment of the present disclosure further provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the image stylization generation method in the first aspect or any implementation of the first aspect.
The image stylization generation scheme in the embodiments of the present disclosure comprises: obtaining multiple images containing a target object displayed on an interactive interface, the target object forming a first graphics region in the multiple images; determining, based on a variation trend of the first graphics region over a time series, whether the target object is in a static state; in response to determining that the target object is in a static state, selecting one group of image processing parameters from multiple groups of image processing parameters stored in a preset lightweight model, to form first image processing parameters; and using the first image processing parameters and the lightweight model to convert, in real time during a first time period, an image to be presented in the current interactive interface into a first stylized image corresponding to the target object. With the scheme of the present disclosure, a stylization effect can be set for an image at random while reducing the computing load of the electronic device, improving the user experience.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image stylization generation method provided by an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a neural network model provided by an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart of another image stylization generation method provided by an embodiment of the present disclosure;
Fig. 4 is a schematic flowchart of another image stylization generation method provided by an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an image stylization generation apparatus provided by an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed description of the embodiments
The embodiments of the present disclosure are described in detail below with reference to the drawings.
The embodiments of the present disclosure are illustrated below by specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. The present disclosure can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments can be combined with one another. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein can be implemented independently of any other aspect, and that two or more of these aspects can be combined in various ways. For example, a device can be implemented and/or a method can be practiced using any number of the aspects set forth herein. In addition, such a device can be implemented and/or such a method can be practiced using other structures and/or functionality in addition to one or more of the aspects set forth herein.
It should also be noted that the drawings provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic manner. The drawings show only the components related to the present disclosure, rather than being drawn according to the number, shape and size of the components in actual implementation; in actual implementation, the form, quantity and proportion of each component can be changed arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects can be practiced without these specific details.
An embodiment of the present disclosure provides an image stylization generation method. The image stylization generation method provided by this embodiment can be executed by a computing device, which can be implemented as software or as a combination of software and hardware, and which can be integrated in a server, a terminal device, or the like.
Referring to Fig. 1, an image stylization generation method provided by an embodiment of the present disclosure includes the following steps:
S101: obtaining multiple images containing a target object displayed on an interactive interface, the target object forming a first graphics region in the multiple images.
The scheme of the embodiments of the present disclosure can be applied to an electronic device having a data processing function. The electronic device includes hardware and software installed on it, and can also run various applications, for example image processing applications, video playback applications, social applications, and so on.
The interactive interface is a window running in an application, and an image or video containing the target object is displayed on the interactive interface. The target object is a specific object defined in the present disclosure; it has a certain shape, and different commands can be formed by changing the shape of the target object. For example, the target object can be a human figure, whose limbs form different postures that constitute different posture commands. Alternatively, the target object can be various hand gestures, such as a "thumbs up", which express different gesture instructions.
The target object occupies a certain position and area in the interactive interface. Correspondingly, the projection of the target object on the interactive interface constitutes the first graphics region, which can be displayed in the multiple images formed in the interaction area.
The electronic device can obtain, from a remote source or locally, through a wired or wireless connection, the multiple images (a target image sequence) played on the interactive interface and captured of the target object. The interactive interface can be an interface for displaying images captured of the target object; for example, it can be the interface of an application installed on the executing device for shooting images. The target object can be a person being photographed, for example a user taking a selfie with the executing device. The multiple images can also be an image sequence used for moving-object detection. Generally, the multiple images can be all or part of the image sequence obtained by shooting the target object, and include the image currently displayed on the interactive interface. As one case, the multiple images can include a preset number of images, including the image currently displayed on the interactive interface.
S102: determining, based on a variation trend of the first graphics region over a time series, whether the target object is in a static state.
Moving-object detection can be performed on the multiple images to determine the motion information corresponding to each image. Since the multiple images usually carry certain time information when they are formed (for example, the capture time or formation time of each image), the times in the multiple images can be extracted to form a time series. Based on the time series, the multiple images can be arranged in chronological order, so that the action information (for example, action commands) contained in the multiple images can be judged in the time dimension.
The motion information characterizes the action states of the target object generated sequentially in the time series; an action state can be a motion state or a static state. For an image among the multiple images, its corresponding action state can be determined according to the moving distance of the region composed of moved pixels in the target interface, relative to an image preceding it (which can be the adjacent image, or an image separated from it by a preset number of images); the moving distance can be, for example, the maximum moving distance among the pixels of that region, or the average of the moving distances of those pixels. For example, if the moving distance is greater than or equal to a preset distance threshold, the action state corresponding to the image is determined to be a motion state. Alternatively, a movement speed is determined according to the moving distance and the play-time difference between the image and the target image, and if the movement speed is greater than or equal to a preset speed threshold, the action state corresponding to the image is determined to be a motion state.
Generally, only the target-object shape instruction represented by a first graphics region formed in a static state is an action command that the user really intends to express; shapes formed while the target object is in a motion state are usually intermediate, temporary shapes preceding the action command. Therefore, it is necessary to judge, based on the multiple images, which image's first graphics region relates to the user instruction.
Specifically, the judgment can be made based on the state changes of the multiple images over the time series. When it is detected that the state of the target object in the multiple images changes from a motion state to a static state, the graphics command represented by the first graphics region in the static state is resolved into an operation instruction of the target object. The operation instruction can be expressed in various ways, including but not limited to at least one of the following: a number, text, a symbol, a level signal, and so on.
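For illustration only, the following is a minimal sketch of how the static/motion decision described above might be implemented over a time-ordered sequence of binary object masks; the mask representation, the centroid-based distance measure and the `distance_threshold` value are assumptions and are not taken from the patent.

```python
import numpy as np

def object_motion_distance(prev_mask: np.ndarray, curr_mask: np.ndarray) -> float:
    """Rough per-frame motion measure: displacement of the target-object
    region between two binary masks, approximated by the centroid shift."""
    if not (prev_mask != curr_mask).any():
        return 0.0                                   # no pixel of the region moved
    prev_c = np.argwhere(prev_mask).mean(axis=0)
    curr_c = np.argwhere(curr_mask).mean(axis=0)
    return float(np.linalg.norm(curr_c - prev_c))

def is_static(mask_sequence, distance_threshold=2.0) -> bool:
    """The target object is treated as static if the motion measure stays
    below the threshold across consecutive frames of the time series."""
    return all(
        object_motion_distance(a, b) < distance_threshold
        for a, b in zip(mask_sequence, mask_sequence[1:])
    )
```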
S103: in response to determining that the target object is in a static state, selecting one group of image processing parameters from multiple groups of image processing parameters stored in a preset lightweight model, to form first image processing parameters.
A lightweight model is provided inside the electronic device and is used to perform stylization processing on images received by the electronic device. In order to reduce the resource consumption of the electronic device (for example, a mobile phone), so that it can still stylize the input image effectively with a small resource footprint, the present disclosure designs a dedicated lightweight model. Referring to Fig. 2, the lightweight model is designed as a neural network model, which includes convolutional layers, pooling layers and sampling layers. In order to improve the computational efficiency of the neural network and reduce the computational complexity on the electronic device, no fully connected layer is provided in the scheme of the present disclosure.
The main parameters of a convolutional layer include the size of the convolution kernel and the number of input feature maps. Each convolutional layer can contain several feature maps of the same size; feature values in the same layer share weights, and the convolution kernels in each layer have the same size. The convolutional layer performs convolution on the input image and extracts its spatial layout features.
A sampling layer can be connected after the feature-extraction convolutional layer. The sampling layer computes local averages of the input image and performs further feature extraction; connecting the sampling layer to the convolutional layer helps ensure that the neural network model is robust to the input image.
In order to speed up training of the neural network model, a pooling layer is further provided after the convolutional layer. The pooling layer processes the output of the convolutional layer using average pooling, which can improve the gradient flow of the neural network and yield more expressive results.
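As a purely illustrative sketch (the layer counts, channel widths and kernel sizes are assumptions, not values from the patent), a lightweight stylization network with only convolutional, pooling and upsampling layers and no fully connected layer might look like this:

```python
import torch.nn as nn

class LightweightStylizer(nn.Module):
    """Small fully-convolutional stylization network: convolution for feature
    extraction, average pooling for downsampling, upsampling to restore
    resolution; no fully connected layers are used."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AvgPool2d(2),                                 # average pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AvgPool2d(2),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),     # sampling back up
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.decode(self.encode(x))
```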
The lightweight model contains different internal parameters, and different artistic styles can be produced by setting these parameters. Specifically, when it is determined that the target object is in a static state, one group of image processing parameters can be selected, randomly or in a specified manner, from the multiple groups of image processing parameters stored in the preset lightweight model, to form the first image processing parameters.
S104: using the first image processing parameters and the lightweight model to convert, in real time during a first time period, the image to be presented in the current interactive interface into a first stylized image corresponding to the target object.
After the first image processing parameters are obtained, the type of stylization can be set in the lightweight model based on these parameters, so that the image to be presented in the current interactive interface can be converted in real time into the first stylized image corresponding to the target object. The image to be presented can be one or more images selected by the user in the current interactive interface, or one or more video frames in a video to be presented. Since the first image processing parameters are generated randomly, the stylization type of the first stylized image is also random, so that one of multiple stylization effects can be shown at random, improving the user experience.
Besides generating the first stylized image, second image processing parameters can also be generated at random in a preset manner after a preset time period, and a second stylized image, different from the first stylized image, is generated using the second image processing parameters.
As one case, in order to improve the user experience, during a second time period after the first time period, a transition image of the target object is displayed in real time in the interactive interface. The transition image is an image that transitions smoothly between the stylized image and the original image.
After the transition image has been displayed in the second time period, the original image of the target object is displayed in real time in the interactive interface during a third time period after the second time period. In this way, switching between the stylized image, the transition image and the original image can further improve the user's experience of stylization.
Referring to Fig. 3, according to a specific implementation of an embodiment of the present disclosure, displaying in real time the transition image of the target object in the interactive interface can include the following steps:
S301: obtaining n stylized images displayed during the second time period and the n original images corresponding to the n stylized images, the original images being images that have not undergone stylization processing.
S302: setting a first transparency (n-i)/n for the i-th stylized image among the n stylized images, and setting a second transparency i/n for the i-th original image among the n original images, where i and n are natural numbers and i is less than or equal to n.
S303: displaying the stylized image with the first transparency superimposed on the original image with the second transparency.
Through steps S301 to S303, the image displayed on the interactive interface can transition smoothly between the stylized image and the original image, as sketched below.
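As an illustration only, the cross-fade described in S301–S303 can be read as a per-frame alpha blend; here "transparency" is treated directly as the blend weight, which is an assumption about the patent's intent, and the array shapes are assumptions as well.

```python
import numpy as np

def transition_frames(stylized, original):
    """stylized, original: lists of n H x W x 3 float arrays in [0, 1].
    Frame i blends the stylized image with weight (n - i) / n and the
    original image with weight i / n, giving a smooth hand-over from the
    stylized result back to the original image."""
    n = len(stylized)
    frames = []
    for i in range(n):
        w_stylized = (n - i) / n
        w_original = i / n
        frames.append(w_stylized * stylized[i] + w_original * original[i])
    return frames
```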
As an optional embodiment, in the process of obtaining the multiple images containing the target object displayed on the interactive interface, when the content on the interactive interface is video content, the video content in the interactive interface can be acquired to obtain a video file containing multiple video frames. Based on actual needs, one or more video frames are selected from the video file to form the multiple images containing the target object.
In order to reduce the consumption of electronic-device resources when selecting the multiple images, according to an optional implementation of an embodiment of the present disclosure, target object detection can first be performed on the video frames in the video file to obtain an image sequence containing the target object; frames that do not contain the target object are not processed further, saving the resources of the electronic device.
To further reduce the resource consumption of the electronic device, within the image sequence containing the target object it can be judged whether the first graphics region in the current video frame is identical to the first graphics region in the previous video frame; if so, the current video frame is deleted from the image sequence. In this way, the resource usage of the electronic device can be further optimized.
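A minimal sketch of this frame-filtering step follows; the detector interface `detect_object_mask` and the exact equality test (here a simple mask comparison) are illustrative assumptions, not the patent's implementation.

```python
def filter_frames(frames, detect_object_mask):
    """Keep only frames that contain the target object and whose object
    region differs from the previous kept frame's region."""
    kept, prev_mask = [], None
    for frame in frames:
        mask = detect_object_mask(frame)          # assumed to return None if absent
        if mask is None:
            continue                              # skip frames without the object
        if prev_mask is not None and (mask == prev_mask).all():
            continue                              # drop frames whose region is unchanged
        kept.append(frame)
        prev_mask = mask
    return kept
```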
To facilitate target object recognition in the obtained multiple images, referring to Fig. 4, according to a specific implementation of an embodiment of the present disclosure, after obtaining the multiple images containing the target object displayed on the interactive interface, the method further comprises:
S401: selecting multiple structuring elements of different orientations.
The target object can be detected with an edge-detection operator. If the edge-detection operator uses only one structuring element, the output image contains only one kind of geometric information, which is unfavorable to preserving image detail. To ensure the accuracy of image detection, an edge-detection operator containing multiple structuring elements is selected.
S402: performing detail description on the multiple images using each of the multiple structuring elements, to obtain a filtered image.
By using multiple structuring elements of different orientations and matching image details with each structuring element as one scale, the various details of the image can be preserved while noise of different types and sizes is filtered out.
S403: determining the gray-scale edges of the filtered image, to obtain the number of pixels present at each of multiple gray levels in the filtered image.
After the image is filtered, in order to further reduce the amount of computation, the filtered image can be converted into a grayscale image, and by setting multiple gray levels for the grayscale image, the number of pixels present at each gray level in the image can be calculated.
S404: weighting the number of pixels at each gray level, and using the weighted average gray value as a threshold.
Based on the number of pixels at different gray levels, the gray levels can be weighted by pixel count; for example, gray values with more pixels are given larger weights, and gray values with fewer pixels are given smaller weights. By averaging the weighted gray values, a weighted average gray value is obtained as the threshold, so that the grayscale image can be binarized based on this average gray value.
S405: performing binarization on the filtered image based on the threshold.
Based on the threshold, the filtered image can be binarized; for example, pixels greater than the threshold are binarized to 1, and pixels less than the threshold are binarized to 0.
S406: using the binarized image as the edge image of the target object.
By assigning colors to the binarized data, the edge image of the target object is obtained; for example, pixels binarized to 1 are assigned black, and pixels binarized to 0 are assigned white.
Through steps S401 to S406, the accuracy of target object detection is improved while reducing the system resource consumption of the electronic device.
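The following sketch strings steps S401–S406 together using morphological gradients with a few oriented structuring elements; the element shapes, the use of OpenCV, and the weighting scheme are assumptions for illustration rather than the patent's exact operator.

```python
import cv2
import numpy as np

def edge_image(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)

    # S401/S402: oriented structuring elements, each used as one "scale".
    elements = [
        np.array([[1, 1, 1]], np.uint8),            # horizontal
        np.array([[1], [1], [1]], np.uint8),        # vertical
        np.eye(3, dtype=np.uint8),                  # diagonal
        np.fliplr(np.eye(3, dtype=np.uint8)),       # anti-diagonal
    ]
    gradients = [cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, e) for e in elements]
    filtered = np.mean(gradients, axis=0).astype(np.uint8)

    # S403/S404: gray-level histogram; threshold = pixel-count-weighted mean gray value.
    hist = np.bincount(filtered.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256)
    threshold = (levels * hist).sum() / hist.sum()

    # S405/S406: binarize and render edge pixels black on a white background.
    binary = (filtered > threshold).astype(np.uint8)
    return np.where(binary == 1, 0, 255).astype(np.uint8)
```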
Before the first image processing parameters are determined, a scaling factor and a shift factor corresponding to the operation instruction can be looked up in a predefined mapping table; by setting the scaling factor and shift factor, stylization effects of different styles can be formed. To this end, condition input layers can be provided in the lightweight model, each containing a scaling factor and a shift factor. After the specific image processing parameters are obtained, the scaling factor and shift factor corresponding to the operation instruction are used as input factors to configure all condition input layers in the lightweight model, so that the lightweight model can be configured simply and effectively. Condition input layers can be provided in one or more convolutional layers, pooling layers or sampling layers as actually needed. The parameters with which all condition input layers have been configured serve as the image processing parameters of the lightweight model, so that different types of stylization models can be obtained.
According to an optional implementation of an embodiment of the present disclosure, generating the stylized image corresponding to the target object based on the multiple convolutional layers and pooling layers can include steps S501 to S503:
S501: setting the feature representations of the image to be presented and of the stylized image at the convolutional layers and pooling layers.
The image to be presented and the stylized image in the training samples are sampled at the convolutional layers and pooling layers of the lightweight network; the sampled data in each layer constitute the feature representations of the image to be presented and the stylized image at the convolutional layers and pooling layers. For example, at the i-th layer of the lightweight model, the feature representations of the image to be presented and the stylized image can be denoted Pi and Fi, respectively.
S502: constructing a minimization loss function based on the feature representations.
Based on Pi and Fi, a squared-error loss function can be defined over these two feature representations and set as the minimization loss function L; the minimization loss function L at the i-th layer can then be expressed accordingly, where k and j are natural numbers less than or equal to i.
S503: generating the stylized image corresponding to the target object based on the minimization loss function.
By computing the minimization function so that the value of L is minimized, the stylized image corresponding to the target object can be obtained.
Through the feature representations and the minimization function, the accuracy of the generated stylized image is improved.
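The formula itself is not reproduced in this text. A common squared-error form consistent with the description, offered here only as an assumption, is L_i = ½ Σ_{k,j} (F_i[k,j] − P_i[k,j])², which could be computed at a single layer as follows:

```python
import torch

def layer_style_loss(F_i: torch.Tensor, P_i: torch.Tensor) -> torch.Tensor:
    """Squared-error loss between the layer-i feature maps of the stylized
    image (F_i) and of the image to be presented (P_i); minimizing it drives
    the generated image toward the target feature representation."""
    return 0.5 * torch.sum((F_i - P_i) ** 2)
```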
Corresponding to the above method embodiments, referring to Fig. 5, the present disclosure further provides an image stylization generation apparatus 50, comprising:
an obtaining module 501, configured to obtain multiple images containing a target object displayed on an interactive interface, the target object forming a first graphics region in the multiple images;
a determining module 502, configured to determine, based on a variation trend of the first graphics region over a time series, whether the target object is in a static state;
a selecting module 503, configured to, in response to determining that the target object is in a static state, select one group of image processing parameters from multiple groups of image processing parameters stored in a preset lightweight model, to form first image processing parameters;
a conversion module 504, configured to use the first image processing parameters and the lightweight model to convert, in real time during a first time period, an image to be presented in the current interactive interface into a first stylized image corresponding to the target object.
The apparatus shown in Fig. 5 can correspondingly perform the content in the above method embodiments; for the parts not described in detail in this embodiment, refer to the content recorded in the above method embodiments, which is not repeated here.
Referring to Fig. 6, an embodiment of the present disclosure further provides an electronic device 60, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the image stylization generation method in the foregoing method embodiments.
An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform the method in the foregoing method embodiments.
An embodiment of the present disclosure further provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the image stylization generation method in the foregoing method embodiments.
Referring now to Fig. 6, it shows a schematic structural diagram of an electronic device 60 suitable for implementing embodiments of the present disclosure. The electronic device in the embodiments of the present disclosure can include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and vehicle-mounted terminals (such as vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 60 can include a processing apparatus (such as a central processing unit or a graphics processor) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the electronic device 60 are also stored in the RAM 603. The processing apparatus 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following apparatuses can be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer and a gyroscope; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; a storage apparatus 608 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 can allow the electronic device 60 to communicate wirelessly or by wire with other devices to exchange data. Although the figure shows the electronic device 60 with various apparatuses, it should be understood that it is not required to implement or provide all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the method of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium of the present disclosure can be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium can include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium can include a data signal propagated in a baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal can take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained in the computer-readable medium can be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The computer-readable medium can be included in the electronic device, or can exist alone without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain at least two internet protocol addresses; send to a node evaluation device a node evaluation request containing the at least two internet protocol addresses, wherein the node evaluation device selects an internet protocol address from the at least two internet protocol addresses and returns it; and receive the internet protocol address returned by the node evaluation device; wherein the obtained internet protocol address indicates an edge node in a content distribution network.
Alternatively, the computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a node evaluation request containing at least two internet protocol addresses; select an internet protocol address from the at least two internet protocol addresses; and return the selected internet protocol address; wherein the received internet protocol address indicates an edge node in a content distribution network.
Computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code can be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the internet using an internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram can represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks can occur in an order different from that marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure can be implemented by software or by hardware. The name of a unit does not constitute a limitation on the unit itself under certain circumstances; for example, a first obtaining unit can also be described as "a unit for obtaining at least two internet protocol addresses".
It should be understood that each part of the present disclosure can be implemented by hardware, software, firmware or a combination thereof.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution that can be easily conceived by those familiar with the technical field within the technical scope disclosed by the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. An image stylization generation method, characterized by comprising:
obtaining multiple images containing a target object displayed on an interactive interface, the target object forming a first graphics region in the multiple images;
determining, based on a variation trend of the first graphics region over a time series, whether the target object is in a static state;
in response to determining that the target object is in a static state, selecting one group of image processing parameters from multiple groups of image processing parameters stored in a preset lightweight model, to form first image processing parameters;
using the first image processing parameters and the lightweight model to convert, in real time during a first time period, an image to be presented in the current interactive interface into a first stylized image corresponding to the target object.
2. The method according to claim 1, characterized in that, after the image to be presented in the current interactive interface is converted in real time into the first stylized image corresponding to the target object during the first time period, the method further comprises:
during a second time period after the first time period, displaying in real time a transition image of the target object in the interactive interface.
3. The method according to claim 2, characterized in that, after the transition image of the target object is displayed in real time in the interactive interface, the method further comprises:
during a third time period after the second time period, displaying in real time an original image of the target object in the interactive interface, the original image being an image that has not undergone stylization processing.
4. The method according to claim 2, characterized in that displaying in real time the transition image of the target object in the interactive interface comprises:
obtaining n stylized images displayed during the second time period and the n original images corresponding to the n stylized images, the original images being images that have not undergone stylization processing;
setting a first transparency (n-i)/n for the i-th stylized image among the n stylized images, and setting a second transparency i/n for the i-th original image among the n original images;
displaying the stylized image with the first transparency superimposed on the original image with the second transparency.
5. The method according to claim 3, characterized in that, after the original image of the target object is displayed in real time in the interactive interface, the method further comprises:
during a fourth time period after the third time period, selecting one group of image processing parameters from the multiple groups of image processing parameters stored in the preset lightweight model, to form second image processing parameters;
based on the second image processing parameters, converting the image to be presented in the current interactive interface in real time into a second stylized image corresponding to the target object during the fourth time period.
6. The method according to claim 1, characterized in that obtaining the multiple images containing the target object displayed on the interactive interface comprises:
acquiring video content in the interactive interface, to obtain a video file containing multiple video frames;
selecting multiple video frames from the video file, to form the multiple images containing the target object.
7. The method according to claim 6, characterized in that selecting multiple video frames from the video file to form the multiple images containing the target object comprises:
performing target object detection on the video frames in the video file, to obtain an image sequence containing the target object;
in the image sequence, judging whether the first graphics region in a current video frame is identical to the first graphics region in a previous video frame;
in response to the first graphics region in the current video frame being identical to the first graphics region in the previous video frame, deleting the current video frame from the image sequence.
8. The method according to claim 1, characterized in that, after obtaining the multiple images containing the target object displayed on the interactive interface, the method further comprises:
selecting multiple structuring elements of different orientations;
performing detail description on the multiple images using each of the multiple structuring elements, to obtain a filtered image;
determining gray-scale edges of the filtered image, to obtain the number of pixels present at each of multiple gray levels in the filtered image;
weighting the number of pixels at each gray level, wherein the weighted average gray value is used as a threshold;
performing binarization on the filtered image based on the threshold;
using the binarized image as an edge image of the target object.
9. the method according to claim 1, wherein will be to be presented in current interactive interface in first time period Image be converted into the first stylized image corresponding with the target object in real time, comprising:
Multiple convolutional layers and pond layer are chosen in the light weighed model, wherein the pond layer is using average pondization processing Mode;
Image to be presented and stylized image are set in the character representation of the convolutional layer and pond layer;
Based on the character representation, building minimizes loss function;
Based on the minimum loss function, stylized image corresponding with the target object is generated.
10. a kind of image stylization generating means characterized by comprising
Module is obtained, for obtaining the multiple images comprising target object shown on interactive interface, the target object is in institute It states and forms the first graphics field in multiple images;
Determining module determines the target object for the variation tendency based on first graphics field in time series Whether remain static;
Selecting module, for remaining static in response to the determination target object, from pre-set light weighed model In the multiple series of images processing parameter of storage, one group of Image Processing parameter is selected, to form the first Image Processing parameter;
Conversion module will work as in first time period for utilizing the first image processing parameter and the light weighed model Image to be presented is converted into the corresponding with the target object first stylized image in real time in preceding interactive interface.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to perform the image stylization generation method according to any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions cause a computer to perform the image stylization generation method according to any one of claims 1 to 9.
CN201910403860.2A 2019-05-15 2019-05-15 Image stylization generation method and device and electronic equipment Active CN110189246B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910403860.2A CN110189246B (en) 2019-05-15 2019-05-15 Image stylization generation method and device and electronic equipment
PCT/CN2020/079205 WO2020228406A1 (en) 2019-05-15 2020-03-13 Image stylization generation method and apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910403860.2A CN110189246B (en) 2019-05-15 2019-05-15 Image stylization generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110189246A true CN110189246A (en) 2019-08-30
CN110189246B CN110189246B (en) 2023-02-28

Family

ID=67716352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910403860.2A Active CN110189246B (en) 2019-05-15 2019-05-15 Image stylization generation method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN110189246B (en)
WO (1) WO2020228406A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469923A (en) * 2021-05-28 2021-10-01 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113742630B (en) * 2021-09-16 2023-12-15 阿里巴巴新加坡控股有限公司 Image processing method, electronic device, and computer storage medium
CN113891141B (en) * 2021-10-25 2024-01-26 抖音视界有限公司 Video processing method, device and equipment
CN114040129B (en) * 2021-11-30 2023-12-05 北京字节跳动网络技术有限公司 Video generation method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10152768B2 (en) * 2017-04-14 2018-12-11 Facebook, Inc. Artifact reduction for image style transfer
CN108171652A (en) * 2017-12-28 2018-06-15 努比亚技术有限公司 A kind of method, mobile terminal and storage medium for improving image stylistic effects
CN108537776A (en) * 2018-03-12 2018-09-14 维沃移动通信有限公司 A kind of image Style Transfer model generating method and mobile terminal
CN110189246B (en) * 2019-05-15 2023-02-28 北京字节跳动网络技术有限公司 Image stylization generation method and device and electronic equipment

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103501445A (en) * 2013-10-12 2014-01-08 青岛旲天下智能科技有限公司 Gesture-based interaction two-way interactive digital TV box system and implementation method
CN104869346A (en) * 2014-02-26 2015-08-26 中国移动通信集团公司 Method and electronic equipment for processing image in video call
CN106341695A (en) * 2016-08-31 2017-01-18 腾讯数码(天津)有限公司 Interaction method, device and system of live streaming room
CN107171932A (en) * 2017-04-27 2017-09-15 腾讯科技(深圳)有限公司 A kind of picture style conversion method, apparatus and system
CN107277615A (en) * 2017-06-30 2017-10-20 北京奇虎科技有限公司 Live stylized processing method, device, computing device and storage medium
CN108205803A (en) * 2017-07-19 2018-06-26 北京市商汤科技开发有限公司 Image processing method, the training method of neural network model and device
CN107633228A (en) * 2017-09-20 2018-01-26 北京奇虎科技有限公司 Video data handling procedure and device, computing device
CN108875751A (en) * 2017-11-02 2018-11-23 北京旷视科技有限公司 Image processing method and device, the training method of neural network, storage medium
CN108961349A (en) * 2018-06-29 2018-12-07 广东工业大学 A kind of generation method, device, equipment and the storage medium of stylization image
CN108986110A (en) * 2018-07-02 2018-12-11 Oppo(重庆)智能科技有限公司 Image processing method, device, mobile terminal and storage medium
CN109308679A (en) * 2018-08-13 2019-02-05 深圳市商汤科技有限公司 A kind of image style conversion side and device, equipment, storage medium
CN109151489A (en) * 2018-08-14 2019-01-04 广州虎牙信息科技有限公司 live video image processing method, device, storage medium and computer equipment
CN109697690A (en) * 2018-11-01 2019-04-30 北京达佳互联信息技术有限公司 Image Style Transfer method and system
CN109636712A (en) * 2018-12-07 2019-04-16 北京达佳互联信息技术有限公司 Image Style Transfer and date storage method, device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈恩庆 et al.: "A new morphological edge detection algorithm using multi-structure element templates", 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020228406A1 (en) * 2019-05-15 2020-11-19 北京字节跳动网络技术有限公司 Image stylization generation method and apparatus, and electronic device
CN110598781A (en) * 2019-09-05 2019-12-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111986076A (en) * 2020-08-21 2020-11-24 深圳市慧鲤科技有限公司 Image processing method and device, interactive display device and electronic equipment
CN112053286A (en) * 2020-09-04 2020-12-08 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and readable medium
CN112053286B (en) * 2020-09-04 2023-09-05 抖音视界有限公司 Image processing method, device, electronic equipment and readable medium
WO2022068724A1 (en) * 2020-09-29 2022-04-07 维沃移动通信有限公司 Image display method and electronic device
WO2022171114A1 (en) * 2021-02-09 2022-08-18 北京字跳网络技术有限公司 Image processing method and apparatus, and device and medium
JP7467780B2 2021-02-09 2024-04-15 北京字跳网络技术有限公司 Image processing method, apparatus, device and medium

Also Published As

Publication number Publication date
WO2020228406A1 (en) 2020-11-19
CN110189246B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
CN110189246A (en) Image stylization generation method, device and electronic equipment
CN110222726A (en) Image processing method, device and electronic equipment
CN110287891A (en) Gestural control method, device and electronic equipment based on human body key point
CN110086988A (en) Shooting angle method of adjustment, device, equipment and its storage medium
CN110058685A (en) Display methods, device, electronic equipment and the computer readable storage medium of virtual objects
CN103997687B (en) For increasing the method and device of interaction feature to video
CN109771951A (en) Method, apparatus, storage medium and the electronic equipment that map generates
CN110189394A Mouth shape generation method, device and electronic equipment
KR20200128378A (en) Image generation network training and image processing methods, devices, electronic devices, and media
CN110384924A (en) The display control method of virtual objects, device, medium and equipment in scene of game
CN110378410A (en) Multi-tag scene classification method, device and electronic equipment
CN110298785A (en) Image beautification method, device and electronic equipment
CN109962939A (en) Position recommended method, device, server, terminal and storage medium
RU2667720C1 (en) Method of imitation modeling and controlling virtual sphere in mobile device
CN110267097A (en) Video pushing method, device and electronic equipment based on characteristic of division
CN110288520A (en) Image beautification method, device and electronic equipment
CN110070551A (en) Rendering method, device and the electronic equipment of video image
CN110288551A (en) Video beautification method, device and electronic equipment
CN110047121A (en) Animation producing method, device and electronic equipment end to end
CN112163717A (en) Population data prediction method and device, computer equipment and medium
CN110069191A (en) Image based on terminal pulls deformation implementation method and device
CN110278447A (en) Video pushing method, device and electronic equipment based on continuous feature
CN110035271A (en) Fidelity image generation method, device and electronic equipment
CN108776544A (en) Exchange method and device, storage medium, electronic equipment in augmented reality
CN110211017A (en) Image processing method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant