CN109766089A - Code generation method, apparatus, electronic device and storage medium based on animated images - Google Patents
Code generation method, apparatus, electronic device and storage medium based on animated images
- Publication number
- CN109766089A (application CN201811536798.6A)
- Authority
- CN
- China
- Prior art keywords
- animated image
- component
- frame
- sub-image
- frame picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a code generation method and device, an electronic device, and a storage medium based on animated images. The method includes: receiving an animated image and parsing a set of frame pictures from it; identifying the component in each frame picture by a deep learning algorithm; determining whether the number of components in the animated image is 1; when the number of components in the animated image is 1, obtaining the parameter information of the component in each frame picture and the time interval between every two frame pictures; constructing a motion equation and a time function of the component according to the parameter information of the component in the acquired frame pictures and the time intervals between every two frame pictures; and calling an interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image. The code generation method based on animated images in the present invention can automatically generate animation code from an animated image, so that reproducing the animation of a design mock-up no longer requires repeated cycles of parameter tweaking, compiling, running, and debugging, thus saving labor cost.
Description
Technical field
The present invention relates to the field of computer code development, and in particular to a code generation method and device based on animated images, an electronic device, and a storage medium.
Background art
At present, code is written by hand. Because the code is written manually, operations that define page controls, elements, and the like must be rewritten each time, even when similar code already exists, so code is produced inefficiently and at a high labor cost.
Summary of the invention
In view of the foregoing, it is necessary to provide a code generation method and device based on animated images, an electronic device, and a computer-readable storage medium, so as to improve the efficiency of code generation and reduce labor cost.
A first aspect of the application provides a code generation method based on animated images, the method comprising:
receiving an animated image and parsing a set of frame pictures from the animated image;
identifying the component in each frame picture by a deep learning algorithm;
determining whether the number of components in the animated image is 1;
when the number of components in the animated image is 1, obtaining the parameter information of the component in each frame picture and the time interval between every two frame pictures;
constructing a motion equation and a time function of the component according to the parameter information of the component in the acquired frame pictures and the time intervals between every two frame pictures; and
calling an interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image.
Preferably, the method further comprises:
when the number of components in the animated image is not 1, identifying the components in the animated image, determining the number of components, and decomposing the animated image into as many sub-images as there are components; and
parsing a set of frame pictures from each sub-image, identifying the component in the frame pictures of each sub-image, and obtaining the parameter information of the component in the frame pictures of each sub-image and the time interval between the frame pictures of each sub-image.
Preferably, calling the interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image comprises:
constructing a motion equation and a time function of the component of each sub-image according to the parameter information of the component in the frame pictures of the sub-image and the time interval between two frame pictures of the sub-image;
calling the interface of the system platform of the terminal according to the constructed motion equation and time function of the component of each sub-image to generate the code corresponding to the sub-image; and
merging the code corresponding to each sub-image to obtain the code corresponding to the animated image.
Preferably, identifying the component in each frame picture by a deep learning algorithm comprises:
obtaining frame picture data of positive samples and frame picture data of negative samples, and labeling the frame picture data of the positive samples and of the negative samples with frame picture categories, so that the frame picture data of the positive and negative samples carry frame picture category labels;
randomly dividing the labeled frame picture data of the positive samples and of the negative samples into a training set of a first preset proportion and a validation set of a second preset proportion, training a component classification model with the training set, and verifying the accuracy of the trained component classification model with the validation set;
when the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained component classification model as a classifier to identify the component category of a frame picture; and
when the accuracy is less than the preset accuracy, increasing the number of positive and negative samples and retraining the component classification model until the accuracy is greater than or equal to the preset accuracy.
Preferably, receiving the animated image and parsing a set of frame pictures from the animated image comprises:
parsing the animated image into static frame images of equal resolution using the PIL image module.
Preferably, the parameter information of the component includes position, size, color, and transparency.
Preferably, the format of the animated image is GIF.
A second aspect of the application provides a code generation device based on animated images, the device comprising:
a parsing module for receiving an animated image and parsing a set of frame pictures from the animated image;
a component recognition module for identifying the component in each frame picture by a deep learning algorithm;
a judgment module for determining whether the number of components in the animated image is 1;
an acquisition module for obtaining, when the number of components in the animated image is 1, the parameter information of the component in each frame picture and the time interval between every two frame pictures;
a construction module for constructing a motion equation and a time function of the component according to the parameter information of the component in the acquired frame pictures and the time intervals between every two frame pictures; and
a generation module for calling an interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image.
A third aspect of the application provides an electronic device comprising a processor, the processor being configured to implement the code generation method based on animated images when executing a computer program stored in a memory.
A fourth aspect of the application provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the code generation method based on animated images when executed by a processor.
The code generation method based on animated images in the present invention can automatically generate animation code from an animated image, so that reproducing the animation of a design mock-up no longer requires repeated cycles of parameter tweaking, compiling, running, and debugging, thus saving labor cost.
Brief description of the drawings
Fig. 1 is a schematic diagram of the application environment of the code generation method based on animated images according to the present invention.
Fig. 2 is a flowchart of an embodiment of the code generation method based on animated images according to the present invention.
Fig. 3 is a structural diagram of a preferred embodiment of the code generation device based on animated images according to the present invention.
Fig. 4 is a schematic diagram of a preferred embodiment of the electronic device of the present invention.
Specific embodiment
To better understand the objects, features, and advantages of the present invention, the invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided they do not conflict, the embodiments of the application and the features in the embodiments can be combined with each other.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by a person skilled in the technical field of the invention. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the invention.
Preferably, the code generation method based on animated images of the invention is applied in one or more electronic devices. An electronic device is a device that can automatically perform numerical computation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The electronic device can be a computing device such as a desktop computer, a laptop, a tablet computer, or a cloud server. The device can interact with the user by means of a keyboard, mouse, remote control, touchpad, voice control device, or the like.
Embodiment 1
Fig. 1 is a schematic diagram of the application environment of the code generation method 200 based on animated images according to the present invention.
As shown in Fig. 1, the code generation method 200 based on animated images is applied in a terminal 1. The terminal 1 is used to receive an animated image and generate the corresponding code from the received image. In this embodiment, the terminal 1 can be a device such as a laptop, desktop computer, tablet computer, or mobile phone, or it can be a server cluster or a cloud server.
Fig. 2 is a flowchart of an embodiment of the code generation method 200 based on animated images according to the present invention. Depending on requirements, the order of the steps in the flowchart can change and some steps can be omitted.
As shown in Fig. 2, the code generation method 200 based on animated images includes the following steps.
Step S201: receive an animated image and parse a set of frame pictures from it.
In this embodiment, an animated image is a picture that produces a dynamic effect when a specific group of still images is switched at a specified frequency. In this embodiment, the format of the animated image is GIF. In a specific embodiment, the terminal 1 parses the animated image into still images of equal resolution using the PIL image module.
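A minimal sketch of this parsing step is given below, assuming Python with the Pillow library (the maintained fork of PIL); the function name extract_frames and the 100 ms fallback delay are illustrative assumptions, not prescribed by the patent:

```python
from PIL import Image, ImageSequence  # Pillow, the maintained fork of PIL

def extract_frames(gif_path):
    """Parse a GIF into (frames, durations).

    Every frame is converted to RGBA at the size of the first frame, so
    all static frame images share the same resolution. The durations are
    the per-frame delays in seconds, i.e. the time interval between two
    consecutive frame pictures.
    """
    gif = Image.open(gif_path)
    base_size = gif.size
    frames, durations = [], []
    for frame in ImageSequence.Iterator(gif):
        frames.append(frame.convert("RGBA").resize(base_size))
        # GIF stores the delay in milliseconds; assume 100 ms if absent
        durations.append(frame.info.get("duration", 100) / 1000.0)
    return frames, durations
```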
Step S202: identify the component in each frame picture by a deep learning algorithm.
In this embodiment, a component is a target object in a picture; the target object can be a person or a thing. In a specific embodiment, a pre-trained component classification model can be used to determine the component in a frame picture. The component classification model includes, but is not limited to, a support vector machine (SVM) model. In this embodiment, the terminal 1 takes the frame pictures parsed from the animated image as the input of the component classification model, and the component corresponding to each frame picture is output after the component classification model performs its computation. In this embodiment, step S202, identifying the component in each frame picture by a deep learning algorithm, includes:
(S2021) Obtain frame picture data of positive samples and frame picture data of negative samples, and label the frame picture data of the positive samples and of the negative samples with frame picture categories, so that both carry frame picture category labels.
In this embodiment, 500 frame pictures corresponding to persons and 500 corresponding to things are selected, and each frame picture is labeled with a category; for example, "1" can be used as the label of a frame picture of a person and "2" as the label of a frame picture of a thing.
(S2022) Randomly divide the labeled frame picture data of the positive samples and of the negative samples into a training set of a first preset proportion and a validation set of a second preset proportion, train the component classification model with the training set, and verify the accuracy of the trained component classification model with the validation set.
In this embodiment, the training samples of the different component categories are first placed in different folders. For example, the training samples of persons are placed in a first folder and the training samples of things in a second folder. Then the first preset proportion (for example, 70%) of the samples is extracted from each folder as the overall training samples to train the component classification model, and the remaining second preset proportion (for example, 30%) of the samples from each folder is taken as the overall test samples to verify the accuracy of the trained component classification model.
(S2023) If the accuracy is greater than or equal to a preset accuracy, end the training and use the trained component classification model as a classifier to identify the component category of a frame picture; if the accuracy is less than the preset accuracy, increase the number of positive and negative samples and retrain the component classification model until the accuracy is greater than or equal to the preset accuracy.
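A minimal sketch of this training loop, assuming Python with scikit-learn (the patent names SVM as one possible component classification model; the helper name, the 0.9 accuracy threshold, and the collect_more_samples callback are illustrative assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_component_classifier(features, labels, target_accuracy=0.9,
                               train_ratio=0.7, collect_more_samples=None):
    """Train an SVM component classifier with a 70/30 split and retrain
    with more samples until the validation accuracy reaches the preset value.

    features: 2-D array of per-frame feature vectors (assumed precomputed)
    labels:   1 for "person", 2 for "thing", as in the embodiment
    collect_more_samples: optional callback returning extra (features, labels)
    """
    while True:
        x_train, x_val, y_train, y_val = train_test_split(
            features, labels, train_size=train_ratio, shuffle=True)
        model = SVC(kernel="rbf")
        model.fit(x_train, y_train)
        accuracy = accuracy_score(y_val, model.predict(x_val))
        if accuracy >= target_accuracy or collect_more_samples is None:
            return model, accuracy
        extra_x, extra_y = collect_more_samples()   # enlarge the sample set
        features = np.vstack([features, extra_x])
        labels = np.concatenate([labels, extra_y])
```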
Step S203: determine whether the number of components in the animated image is 1. If the number of components in the animated image is 1, step S204 is executed; otherwise step S206 is executed.
Step S204: obtain the parameter information of the component in each frame picture and the time interval between every two frame pictures. In this embodiment, step S205 is executed after step S204.
In this embodiment, the parameter information of each frame picture is also obtained after the component in the frame picture has been identified. The parameter information of a frame picture refers to the state information of the frame picture and includes, but is not limited to, information such as position, size, color, and transparency.
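A minimal sketch of extracting such per-frame parameter information, assuming Python with Pillow and that the component has already been located as a bounding box (the function name and the use of a mean colour over the box are illustrative assumptions, not prescribed by the patent):

```python
from PIL import Image

def component_parameters(frame, box):
    """Return the state information of a component in one frame.

    frame: an RGBA PIL image (one parsed frame of the animated image)
    box:   (left, top, right, bottom) bounding box of the detected component,
           assumed non-empty
    The returned dict carries the parameters named in the embodiment:
    position, size, color, and transparency.
    """
    left, top, right, bottom = box
    region = frame.crop(box)
    pixels = list(region.getdata())               # list of RGBA tuples
    mean = [sum(channel) / len(pixels) for channel in zip(*pixels)]
    return {
        "position": (left, top),                  # top-left corner of the box
        "size": (right - left, bottom - top),     # width and height
        "color": tuple(round(c) for c in mean[:3]),
        "transparency": round(mean[3]),           # mean alpha, 0-255
    }
```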
Step S205: construct the motion equation and the time function of the component according to the parameter information of the component in the acquired frame pictures and the time intervals between every two frame pictures. In this embodiment, step S208 is executed after step S205. In this embodiment, the motion equation and the time function of the component, constructed from the parameter information of the component in the frame pictures and the time intervals between the frame pictures, describe how the component changes over time and the rule governing its motion.
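One way to realise such a time function is sketched below; the piecewise-linear interpolation and the function name are assumptions, since the patent only requires that the constructed function describe how a parameter changes over time:

```python
def build_time_function(values, intervals):
    """Build a time function for one scalar parameter of the component.

    values:    the parameter value in frame 0, 1, 2, ... (e.g. x position)
    intervals: time interval between consecutive frame pictures, in seconds
    Returns f(t) that linearly interpolates between the observed key frames,
    i.e. a simple piecewise-linear motion equation for that parameter.
    """
    times = [0.0]
    for dt in intervals:
        times.append(times[-1] + dt)              # cumulative timestamps

    def f(t):
        if t <= times[0]:
            return values[0]
        if t >= times[-1]:
            return values[-1]
        for i in range(1, len(times)):
            if t <= times[i]:
                ratio = (t - times[i - 1]) / (times[i] - times[i - 1])
                return values[i - 1] + ratio * (values[i] - values[i - 1])

    return f

# usage sketch: x position sampled from three frames, 0.1 s apart
x_of_t = build_time_function([0, 40, 100], [0.1, 0.1])
print(x_of_t(0.05))  # 20.0
```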
Step S206: when the number of components in the animated image is not 1, identify the components in the animated image, determine the number of components, and decompose the animated image into as many sub-images as there are components. In this embodiment, step S207 is executed after step S206.
In this embodiment, when the animated image contains multiple components, the animated image can be decomposed into multiple sub-images according to the number of components, where each sub-image corresponds to exactly one component. Code corresponding to each sub-image is generated from the component of that sub-image, and the code corresponding to the animated image is obtained by merging the code corresponding to all of the sub-images.
Step S207: parse a set of frame pictures from each sub-image, identify the component in the frame pictures of each sub-image, and obtain the parameter information of the component in the frame pictures of each sub-image and the time interval between the frame pictures of each sub-image. In this embodiment, step S208 is executed after step S207.
In this embodiment, the parameter information of a frame picture refers to the state information of the frame picture and includes, but is not limited to, information such as position, size, color, and transparency.
Step S208: call an interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image.
In this embodiment, the interface function library of the system platform of the terminal 1 is called according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image. In this embodiment, the method calls the interface of the system platform of the terminal 1 according to the constructed motion equation and time function of the component and assembles text into code according to the syntax rules of the development language of the system platform. In this embodiment, calling the interface of the system platform includes calling an animation API and the like.
Further, in this embodiment, calling the interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image includes:
constructing the motion equation and the time function of the component of each sub-image according to the parameter information of the component in the frame pictures of the sub-image and the time interval between two frame pictures of the sub-image;
calling the interface according to the constructed motion equation and time function of the component of each sub-image to generate the code corresponding to the sub-image; and
merging the code corresponding to each sub-image to obtain the code corresponding to the animated image.
Specifically, a position-change motion equation and time function of the component of each sub-image are constructed from the position of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and the interface function library is called according to the constructed position-change motion equation and time function to generate code reflecting the position change of the component of each sub-image. A size-change motion equation and time function of the component of each sub-image are constructed from the size of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and the interface function library is called according to the constructed size-change motion equation and time function to generate code reflecting the size change of the component of each sub-image. A color-change motion equation and time function of the component of each sub-image are constructed from the color of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and the interface function library is called according to the constructed color-change motion equation and time function to generate code reflecting the color change of the component of each sub-image. A transparency-change motion equation and time function of the component of each sub-image are constructed from the transparency of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and the interface function library is called according to the constructed transparency-change motion equation and time function to generate code reflecting the transparency change of the component of each sub-image. Finally, the code reflecting the position change, the code reflecting the size change, the code reflecting the color change, and the code reflecting the transparency change of the component of each sub-image are merged to obtain the code of the animated image.
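As an illustration of the final assembly step (a sketch only: the patent targets the animation API of the terminal's system platform, whereas for concreteness this example assembles CSS @keyframes text; the sampling rate, helper names, and the simple time functions in the usage are assumptions):

```python
def emit_css_keyframes(name, duration, x_of_t, opacity_of_t, samples=5):
    """Assemble animation code text for one sub-image's component.

    name:         CSS animation name for this component (illustrative)
    duration:     total animation time in seconds
    x_of_t:       time function for the horizontal position, in pixels
    opacity_of_t: time function for transparency, in the range 0-1
    """
    lines = [f"@keyframes {name} {{"]
    for i in range(samples + 1):
        t = duration * i / samples
        percent = round(100 * i / samples)
        lines.append(
            f"  {percent}% {{ transform: translateX({x_of_t(t):.1f}px); "
            f"opacity: {opacity_of_t(t):.2f}; }}"
        )
    lines.append("}")
    return "\n".join(lines)

# usage sketch with simple illustrative time functions
x_of_t = lambda t: 500.0 * t              # moves 100 px over 0.2 s
opacity_of_t = lambda t: 1.0 - t / 0.2    # fades out over 0.2 s
print(emit_css_keyframes("component-0", 0.2, x_of_t, opacity_of_t))
```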
The code generation method 200 based on animated images in the present invention can automatically generate animation code from an animated image, so that reproducing the animation of a design mock-up no longer requires repeated cycles of parameter tweaking, compiling, running, and debugging, thus saving labor cost.
Embodiment 2
Fig. 3 is a structural diagram of a preferred embodiment of the code generation device 30 based on animated images according to the present invention.
In some embodiments, the code generation device 30 based on animated images runs in an electronic device. The code generation device 30 based on animated images may include a plurality of functional modules composed of program code segments. The program code of each program segment in the code generation device 30 can be stored in a memory and executed by at least one processor to perform the function of generating code based on an animated image.
In this embodiment, the code generation device 30 based on animated images can be divided into a plurality of functional modules according to the functions it performs. As shown in Fig. 3, the code generation device 30 based on animated images may include a parsing module 301, a component recognition module 302, a judgment module 303, an acquisition module 304, a construction module 305, and a generation module 306. A module in the present invention refers to a series of computer program segments that are executed by at least one processor, can perform a fixed function, and are stored in a memory. In some embodiments, the functions of the modules are described in detail in the following embodiments.
The parsing module 301 is used to receive an animated image and parse a set of frame pictures from it.
In this embodiment, an animated image is a picture that produces a dynamic effect when a specific group of still images is switched at a specified frequency. In this embodiment, the format of the animated image is GIF. In a specific embodiment, the parsing module 301 parses the animated image into still images of equal resolution using the PIL image module.
The component recognition module 302 is used to identify the component in each frame picture by a deep learning algorithm.
In this embodiment, a component is a target object in a picture; the target object can be a person or a thing. In a specific embodiment, the component recognition module 302 can use a pre-trained component classification model to determine the component in a frame picture. The component classification model includes, but is not limited to, a support vector machine model. In this embodiment, the component recognition module 302 takes the frame pictures parsed from the animated image as the input of the component classification model, and the component corresponding to each frame picture is output after the component classification model performs its computation.
In this embodiment, the component recognition module 302 obtains frame picture data of positive samples and frame picture data of negative samples, and labels the frame picture data of the positive samples and of the negative samples with frame picture categories, so that both carry frame picture category labels. In this embodiment, 500 frame pictures corresponding to persons and 500 corresponding to things are selected, and each frame picture is labeled with a category; for example, "1" can be used as the label of a frame picture of a person and "2" as the label of a frame picture of a thing.
The component recognition module 302 also randomly divides the labeled frame picture data of the positive samples and of the negative samples into a training set of a first preset proportion and a validation set of a second preset proportion, trains the component classification model with the training set, and verifies the accuracy of the trained component classification model with the validation set. In this embodiment, the training samples of the different component categories are first placed in different folders; for example, the training samples of persons are placed in a first folder and the training samples of things in a second folder. Then the first preset proportion (for example, 70%) of the samples is extracted from each folder as the overall training samples to train the component classification model, and the remaining second preset proportion (for example, 30%) of the samples from each folder is taken as the overall test samples to verify the accuracy of the trained component classification model.
The component recognition module 302 is also used to end the training when the accuracy is greater than or equal to a preset accuracy and to use the trained component classification model as a classifier to identify the component category of a frame picture. When the accuracy is less than the preset accuracy, the component recognition module 302 increases the number of positive and negative samples and retrains the component classification model until the accuracy is greater than or equal to the preset accuracy.
The judgment module 303 is used to determine whether the number of components in the animated image is 1.
When the number of components in the animated image is determined to be 1, the acquisition module 304 obtains the parameter information of the component in each frame picture and the time interval between every two frame pictures. In this embodiment, the parameter information of each frame picture is also obtained after the component in the frame picture has been identified. The parameter information of a frame picture refers to the state information of the frame picture and includes, but is not limited to, information such as position, size, color, and transparency.
The construction module 305 is used to construct the motion equation and the time function of the component according to the parameter information of the component in the acquired frame pictures and the time intervals between every two frame pictures. In this embodiment, the motion equation and the time function of the component, constructed from the parameter information of the frame pictures and the time intervals between the frame pictures, describe how the component changes over time and the rule governing its motion.
When the number of components in the animated image is determined not to be 1, the parsing module 301 identifies the components in the animated image, determines the number of components, and decomposes the animated image into as many sub-images as there are components. The component recognition module 302 is also used to parse a set of frame pictures from each sub-image and identify the component in the frame pictures of each sub-image. The acquisition module 304 obtains the parameter information of the component in the frame pictures of each sub-image and the time interval between the frame pictures of each sub-image. The generation module 306 calls an interface function library according to the constructed motion equation and time function of the component to generate code. In this embodiment, the interface function library of the system platform of the terminal 1 is called according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image.
In this embodiment, when the animated image contains multiple components, the animated image can be decomposed into multiple sub-images according to the number of components, where each sub-image corresponds to exactly one component. Code corresponding to each sub-image is generated from the component of that sub-image, and the code corresponding to the animated image is obtained by merging the code corresponding to all of the sub-images.
The generation module 306 is used to call the interface of the system platform of the terminal 1 according to the constructed motion equation and time function of the component and to assemble text into code according to the syntax rules of the development language of the system platform. In this embodiment, calling the interface of the system platform includes calling an animation API and the like.
In one embodiment, when the generation module 306 calls the interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image, it constructs the motion equation and the time function of the component of each sub-image according to the parameter information of the component in the frame pictures of the sub-image and the time interval between two frame pictures of the sub-image, calls the interface function library according to the constructed motion equation and time function of the component of each sub-image to generate the code corresponding to the sub-image, and merges the code corresponding to each sub-image to obtain the code corresponding to the animated image. Specifically, the generation module 306 constructs a position-change motion equation and time function of the component of each sub-image from the position of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and calls the interface function library according to the constructed position-change motion equation and time function to generate code reflecting the position change of the component of each sub-image. It constructs a size-change motion equation and time function of the component of each sub-image from the size of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and calls the interface function library according to the constructed size-change motion equation and time function to generate code reflecting the size change of the component of each sub-image. It constructs a color-change motion equation and time function of the component of each sub-image from the color of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and calls the interface function library according to the constructed color-change motion equation and time function to generate code reflecting the color change of the component of each sub-image. It constructs a transparency-change motion equation and time function of the component of each sub-image from the transparency of the component in the frame pictures of the sub-image and the time interval between every two frame pictures, and calls the interface function library according to the constructed transparency-change motion equation and time function to generate code reflecting the transparency change of the component of each sub-image. Finally, the code reflecting the position change, the code reflecting the size change, the code reflecting the color change, and the code reflecting the transparency change of the component of each sub-image are merged to obtain the code of the animated image.
The code generation device 30 based on animated images in the present invention can automatically generate animation code from an animated image, so that reproducing the animation of a design mock-up no longer requires repeated cycles of parameter tweaking, compiling, running, and debugging, thus saving labor cost.
Embodiment 3
Fig. 4 is a schematic diagram of a preferred embodiment of the electronic device 4 of the present invention.
The electronic device 4 includes a memory 41, a processor 42, and a computer program 43 stored in the memory 41 and executable on the processor 42. When executing the computer program 43, the processor 42 implements the steps of the above embodiment of the code generation method 200 based on animated images, such as steps S201 to S208 shown in Fig. 2. Alternatively, when executing the computer program 43, the processor 42 implements the functions of the modules/units in the above embodiment of the code generation device based on animated images, such as the modules 301 to 306 in Fig. 3.
Illustratively, the computer program 43 can be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 42 to carry out the present invention. The one or more modules/units can be a series of computer program instruction segments capable of performing a specific function, and the instruction segments describe the execution of the computer program 43 in the electronic device 4. For example, the computer program 43 can be divided into the parsing module 301, component recognition module 302, judgment module 303, acquisition module 304, construction module 305, and generation module 306 in Fig. 3; the specific functions of the modules are described in Embodiment 2.
The electronic device 4 can be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. Those skilled in the art will understand that the schematic diagram is only an example of the electronic device 4 and does not limit the electronic device 4, which may include more or fewer components than shown, combine certain components, or have different components; for example, the electronic device 4 may also include input/output devices, network access devices, buses, and the like.
The processor 42 can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor can be a microprocessor, or the processor 42 can be any conventional processor. The processor 42 is the control center of the electronic device 4 and connects the various parts of the entire electronic device 4 through various interfaces and lines.
The memory 41 can be used to store the computer program 43 and/or the modules/units. The processor 42 implements the various functions of the electronic device 4 by running or executing the computer program and/or modules/units stored in the memory 41 and by calling the data stored in the memory 41. The memory 41 can mainly include a program storage area and a data storage area, where the program storage area can store an operating system and application programs required for at least one function (for example, a sound playback function, an image playback function, and the like), and the data storage area can store data created according to the use of the electronic device 4 (such as audio data, a phone book, and the like). In addition, the memory 41 can include high-speed random access memory and can also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, smart media card (SMC), secure digital (SD) card, flash card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
If the integrated modules/units of the electronic device 4 are implemented in the form of software functional modules and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention can also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments can be implemented. The computer program includes computer program code, which can be in source code form, object code form, an executable file, certain intermediate forms, or the like. The computer-readable medium can include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed electronic device and method can be implemented in other ways. For example, the electronic device embodiments described above are only schematic; for instance, the division of the modules is only a division by logical function, and there may be other ways of division in actual implementation.
In addition, the functional modules in the embodiments of the present invention can be integrated in the same processing module, or each module can exist physically alone, or two or more modules can be integrated in the same module. The above integrated modules can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, from whichever point of view, the embodiments are to be regarded as illustrative and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; all changes that fall within the meaning and range of equivalency of the claims are therefore intended to be embraced in the present invention. Any reference signs in the claims shall not be construed as limiting the claims concerned. Furthermore, the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. A plurality of modules or devices stated in the device claims can also be implemented by the same module or device through software or hardware. Words such as "first" and "second" are used to indicate names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate, not limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solutions of the invention can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A code generation method based on animated images, wherein the method comprises:
receiving an animated image and parsing a set of frame pictures from the animated image;
identifying the component in each frame picture by a deep learning algorithm;
determining whether the number of components in the animated image is 1;
when the number of components in the animated image is 1, obtaining the parameter information of the component in each frame picture and the time interval between every two frame pictures;
constructing a motion equation and a time function of the component according to the parameter information of the component in the acquired frame pictures and the time intervals between every two frame pictures; and
calling an interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image.
2. The code generation method based on animated images of claim 1, wherein the method further comprises:
when the number of components in the animated image is not 1, identifying the components in the animated image, determining the number of components, and decomposing the animated image into as many sub-images as there are components; and
parsing a set of frame pictures from each sub-image, identifying the component in the frame pictures of each sub-image, and obtaining the parameter information of the component in the frame pictures of each sub-image and the time interval between the frame pictures of each sub-image.
3. The code generation method based on animated images of claim 1, wherein calling the interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image comprises:
constructing a motion equation and a time function of the component of each sub-image according to the parameter information of the component in the frame pictures of the sub-image and the time interval between two frame pictures of the sub-image;
calling the interface according to the constructed motion equation and time function of the component of each sub-image to generate the code corresponding to the sub-image; and
merging the code corresponding to each sub-image to obtain the code corresponding to the animated image.
4. The code generation method based on animated images of claim 1, wherein identifying the component in each frame picture by a deep learning algorithm comprises:
obtaining frame picture data of positive samples and frame picture data of negative samples, and labeling the frame picture data of the positive samples and of the negative samples with frame picture categories, so that the frame picture data of the positive and negative samples carry frame picture category labels;
randomly dividing the labeled frame picture data of the positive samples and of the negative samples into a training set of a first preset proportion and a validation set of a second preset proportion, training a component classification model with the training set, and verifying the accuracy of the trained component classification model with the validation set;
when the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained component classification model as a classifier to identify the component category of a frame picture; and
when the accuracy is less than the preset accuracy, increasing the number of positive and negative samples and retraining the component classification model until the accuracy is greater than or equal to the preset accuracy.
5. The code generation method based on animated images of claim 1, wherein receiving the animated image and parsing a set of frame pictures from the animated image comprises:
parsing the animated image into static frame images of equal resolution using the PIL image module.
6. The code generation method based on animated images of claim 1, wherein the parameter information of the component includes position, size, color, and transparency.
7. The code generation method based on animated images of claim 1, wherein the format of the animated image is GIF.
8. A code generation device based on animated images, wherein the device comprises:
a parsing module for receiving an animated image and parsing a set of frame pictures from the animated image;
a component recognition module for identifying the component in each frame picture by a deep learning algorithm;
a judgment module for determining whether the number of components in the animated image is 1;
an acquisition module for obtaining, when the number of components in the animated image is 1, the parameter information of the component in each frame picture and the time interval between every two frame pictures;
a construction module for constructing a motion equation and a time function of the component according to the parameter information of the component in the acquired frame pictures and the time intervals between every two frame pictures; and
a generation module for calling an interface function library according to the constructed motion equation and time function of the component to generate the code corresponding to the animated image.
9. An electronic device, wherein the electronic device comprises a processor, the processor being configured to implement the code generation method based on animated images of any one of claims 1-7 when executing a computer program stored in a memory.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program implements the code generation method based on animated images of any one of claims 1-7 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811536798.6A CN109766089B (en) | 2018-12-15 | 2018-12-15 | Code generation method and device based on dynamic diagram, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109766089A true CN109766089A (en) | 2019-05-17 |
CN109766089B CN109766089B (en) | 2023-05-30 |
Family
ID=66451931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811536798.6A Active CN109766089B (en) | 2018-12-15 | 2018-12-15 | Code generation method and device based on dynamic diagram, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109766089B (en) |
2018-12-15 — CN application CN201811536798.6A filed; granted as CN109766089B (en), legal status: Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000152233A (en) * | 1998-11-13 | 2000-05-30 | Sony Corp | Image information converter and conversion method |
JP2001285870A (en) * | 2000-04-03 | 2001-10-12 | Sony Corp | Device and method for processing digital signal and device and method for processing digital picture signal |
CN104469379A (en) * | 2013-09-18 | 2015-03-25 | 想象技术有限公司 | Generating an output frame for inclusion in a video sequence |
US20170060851A1 (en) * | 2015-08-31 | 2017-03-02 | Linkedin Corporation | Leveraging digital images of user information in a social network |
CN108519986A (en) * | 2018-02-24 | 2018-09-11 | 阿里巴巴集团控股有限公司 | A kind of webpage generating method, device and equipment |
CN108415705A (en) * | 2018-03-13 | 2018-08-17 | 腾讯科技(深圳)有限公司 | Webpage generating method, device, storage medium and equipment |
CN108762741A (en) * | 2018-05-18 | 2018-11-06 | 北京车和家信息技术有限公司 | Animation code generation method and system |
CN108812407A (en) * | 2018-05-23 | 2018-11-16 | 平安科技(深圳)有限公司 | Animal health status monitoring method, equipment and storage medium |
CN108804093A (en) * | 2018-06-15 | 2018-11-13 | 联想(北京)有限公司 | A kind of code generating method and electronic equipment |
Non-Patent Citations (4)
Title |
---|
ANIL K. JAIN 等: "AUTOMATIC TEXT LOCATION IN IMAGES AND VIDEO FRAMES", 《 PATTERN RECOGNITION》 * |
YONG PENG 等: "Discriminative graph regularized extreme learning machine and its application to face recognition", 《NEUROCOMPUTING》 * |
史宝会: "基于动态图和线程关系的混合软件水印算法", 《电子设计工程》 * |
李波: "视频序列中运动目标检测与跟踪算法的研究", 《中国博士学位论文全文数据库信息科技辑》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113434136A (en) * | 2021-06-30 | 2021-09-24 | 平安科技(深圳)有限公司 | Code generation method and device, electronic equipment and storage medium |
CN113434136B (en) * | 2021-06-30 | 2024-03-05 | 平安科技(深圳)有限公司 | Code generation method, device, electronic equipment and storage medium |
CN113362348A (en) * | 2021-07-19 | 2021-09-07 | 网易(杭州)网络有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109766089B (en) | 2023-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109360550B (en) | Testing method, device, equipment and storage medium of voice interaction system | |
CN108877782B (en) | Speech recognition method and device | |
CN110363810B (en) | Method, apparatus, device and computer storage medium for establishing image detection model | |
CN108293079A (en) | For the striding equipment buddy application of phone | |
CN109711877A (en) | Processing method, device, computer storage medium and the electronic equipment of advertising pictures | |
CN109697537A (en) | The method and apparatus of data audit | |
CN112397047A (en) | Speech synthesis method, device, electronic equipment and readable storage medium | |
CN110083526A (en) | Applied program testing method, device, computer installation and storage medium | |
CN109408468A (en) | Document handling method and device calculate equipment and storage medium | |
CN110349007A (en) | The method, apparatus and electronic equipment that tenant group mentions volume are carried out based on variable discrimination index | |
CN108121699A (en) | For the method and apparatus of output information | |
CN112951233A (en) | Voice question and answer method and device, electronic equipment and readable storage medium | |
CN111222837A (en) | Intelligent interviewing method, system, equipment and computer storage medium | |
CN109766089A (en) | Code generation method, apparatus, electronic device and storage medium based on animated images | |
US20220198153A1 (en) | Model training | |
CN117992598B (en) | Demand response method, device, medium and equipment based on large model | |
CN118350572A (en) | Demand delivery method and device | |
CN110502716A (en) | A kind of methods of exhibiting of information of vehicles, server, terminal device | |
CN112270350B (en) | Method, apparatus, device and storage medium for portraying organization | |
CN111832254B (en) | Drawing annotation display processing method and device | |
CN107071553A (en) | Method, device and computer readable storage medium for modifying video and voice | |
CN110717101A (en) | User classification method and device based on application behaviors and electronic equipment | |
CN110348438A (en) | A kind of picture character identifying method, device and electronic equipment based on artificial nerve network model | |
CN109657073A (en) | Method and apparatus for generating information | |
CN109522210A (en) | Interface testing parameters analysis method, device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||