CN108769549A - Image processing method and device, and computer-readable storage medium - Google Patents

Image processing method and device, and computer-readable storage medium Download PDF

Info

Publication number
CN108769549A
Authority
CN
China
Prior art keywords
video data
editing
image
data
groups
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810711932.5A
Other languages
Chinese (zh)
Other versions
CN108769549B (en)
Inventor
吴嘉旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIGU Animation Co Ltd
MIGU Comic Co Ltd
Original Assignee
MIGU Animation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIGU Animation Co Ltd filed Critical MIGU Animation Co Ltd
Priority to CN201810711932.5A priority Critical patent/CN108769549B/en
Publication of CN108769549A publication Critical patent/CN108769549A/en
Application granted granted Critical
Publication of CN108769549B publication Critical patent/CN108769549B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording

Abstract

The invention discloses an image processing method, including: acquiring video data; receiving a first operation; in response to the first operation, taking the acquired video data as to-be-processed video data; performing clipping processing on the to-be-processed video data; and generating Graphics Interchange Format data from the clipped video data, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share. The invention further discloses an image processing apparatus and a computer-readable storage medium.

Description

Image processing method and device, and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, and a computer-readable storage medium.
Background Art
At present, with the rapid development of image processing technology, the video processing functions integrated into video processing platforms are becoming increasingly powerful.
Currently, after a terminal user shoots a short video, the shot short video is uploaded to a short-video shooting platform, and the uploaded short video is clipped with the clipping tools provided on the short-video shooting platform.
However, in the related art, the terminal user cannot clip a short video immediately after shooting it, nor quickly generate a dynamic picture that can be used for sharing after clipping.
Summary of the Invention
In view of this, the embodiments of the present invention are intended to provide an image processing method, an image processing device, and a computer-readable storage medium, which enable clipping immediately after shooting and quick generation of a dynamic picture for sharing after clipping.
The technical solutions of the embodiments of the present invention are implemented as follows.
An embodiment of the present invention provides an image processing method. The method includes:
acquiring video data;
receiving a first operation;
in response to the first operation, taking the acquired video data as to-be-processed video data;
performing clipping processing on the to-be-processed video data; and
generating Graphics Interchange Format data from the clipped video data, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share.
In the above solution, the performing clipping processing on the to-be-processed video data includes:
determining at least one time anchor point, and segmenting the to-be-processed video data by using the at least one time anchor point in combination with the time axis corresponding to the to-be-processed video data, to obtain at least two groups of video data; and
performing clipping-related processing on at least one group of video data among the at least two groups of video data.
In the above solution, the performing clipping-related processing on at least one group of video data among the at least two groups of video data includes:
determining the boundaries of each group of video data;
for at least one group of video data among the at least two groups of video data, determining whether a slide operation on a boundary is detected; and
when it is determined that the slide operation is detected, performing clipping processing on the corresponding group of video data.
In the above solution, the method further includes:
performing image recognition processing on the clipped video data of the corresponding group;
determining whether first video data is recognized, the first video data representing video data carrying first information, where the first information includes at least image style information, clothing element information of a person in the image, and facial image information of a person in the image; and
when it is determined that the first video data is recognized, replacing the first video data with preset second video data,
where the first video data is different from the second video data.
In the above solution, when clipping-related processing is performed on at least two groups of video data, the method further includes:
performing clipping processing on the at least two groups of video data to obtain at least two groups of clipped video data; and
merging the at least two groups of clipped video data.
In the above solution, the merging the at least two groups of clipped video data includes:
for the at least two groups of clipped video data, determining whether the groups of video data are adjacent to each other; and
when it is determined that they are adjacent, merging the at least two groups of clipped video data.
In the above solution, the determining at least one time anchor point includes:
obtaining a preset number of time anchor points;
determining whether the preset number of time anchor points is less than or equal to a preset threshold; and
when it is determined that the preset number of time anchor points is less than or equal to the preset threshold, taking the at least one preset time anchor point as the determined at least one time anchor point.
An embodiment of the present invention provides an image processing apparatus. The apparatus includes:
an acquisition module, configured to acquire video data;
a receiving module, configured to receive a first operation;
a clipping module, configured to respond to the first operation, take the acquired video data as to-be-processed video data, and perform clipping processing on the to-be-processed video data; and
a generation module, configured to generate Graphics Interchange Format data from the clipped video data, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share.
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the image processing methods described above.
An embodiment of the present invention provides an image processing apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor,
where the processor is configured to execute the steps of any one of the image processing methods described above when running the computer program.
According to the image processing method, apparatus, and computer-readable storage medium provided by the embodiments of the present invention, video data is acquired; a first operation is received; in response to the first operation, the acquired video data is taken as to-be-processed video data; clipping processing is performed on the to-be-processed video data; and Graphics Interchange Format data is generated from the clipped video data, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share. In the embodiments of the present invention, after the video data is acquired, clipping processing is performed on the acquired video data; after the clipping is completed, the Graphics Interchange Format data used to form the dynamic picture for the user to share is generated. In this way, clipping immediately after acquisition and quick generation of a dynamic picture for sharing after clipping can be achieved.
Brief Description of the Drawings
Fig. 1 to Fig. 4 are schematic diagrams of short-video shooting platforms in the related art;
Fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a specific implementation of image processing according to an embodiment of the present invention;
Fig. 7a is a schematic diagram of a clipping page according to an embodiment of the present invention;
Fig. 7b is a schematic diagram of a user adding a breakpoint according to an embodiment of the present invention;
Fig. 7c is a schematic diagram of a user sliding a boundary according to an embodiment of the present invention;
Fig. 8a is a schematic diagram of a Graphics Interchange Format (GIF) entry according to an embodiment of the present invention;
Fig. 8b is a first schematic diagram of generating a sharing interface according to an embodiment of the present invention;
Fig. 9a is a schematic diagram of a user selecting preview frames according to an embodiment of the present invention;
Fig. 9b is a second schematic diagram of generating a sharing interface according to an embodiment of the present invention;
Fig. 10 is a first schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 11 is a second schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description of the Embodiments
In the related art, after a terminal user shoots a short video, the shot short video is uploaded to a short-video shooting platform, such as Weishi (Fig. 1), Meipai (Fig. 2), Miaopai (Fig. 3), or Kuaishou (Fig. 4). In Fig. 1 to Fig. 4, a series of preview frames is displayed at the bottom of the display page, and the user can cut the short video by dragging its left and right boundaries to adjust its length.
However, in the related art, the terminal user cannot clip the short video immediately after shooting it, nor quickly generate a Graphics Interchange Format (GIF) picture for the user to share after clipping, which limits the social experience.
In addition, short-video shooting platforms only support clipping a short video by its left and right boundaries; they support neither clipping a segment in the middle of the short video nor cutting the short video into multiple segments and then splicing them back together.
Based on this, in the embodiments of the present invention, video data is acquired; a first operation is received; in response to the first operation, the acquired video data is taken as to-be-processed video data; clipping processing is performed on the to-be-processed video data; and Graphics Interchange Format data is generated from the clipped video data, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share. In the embodiments of the present invention, after the video data is acquired, clipping processing is performed on the acquired video data and the Graphics Interchange Format data is generated, so that clipping immediately after acquisition and quick generation of a dynamic picture for sharing after clipping can be achieved.
To make the features and technical content of the embodiments of the present invention more comprehensible, the implementation of the embodiments of the present invention is described in detail below with reference to the accompanying drawings. The accompanying drawings are for reference and illustration only and are not intended to limit the present invention.
As shown in Fig. 5, the image processing method of the embodiment of the present invention includes the following steps:
Step 501: acquire video data.
The video may be a short video.
Here, a camera may be turned on by an application program in the terminal to acquire the video data. The application program integrates the functions of acquisition, clipping, GIF picture generation, and sharing.
Step 502: receive a first operation; respond to the first operation by taking the acquired video data as to-be-processed video data; and perform clipping processing on the to-be-processed video data.
In practical applications, within a preset duration after the video data is acquired, if a first operation triggered by the user is received, the acquisition interface is switched to a clipping interface, so that clipping can be performed immediately after the video data is acquired. The preset duration is relatively short, for example 30 seconds or 1 minute; the first operation may be a click operation on a clip button in the acquisition interface.
In an embodiment, the performing clipping processing on the to-be-processed video data includes: determining at least one time anchor point; segmenting the to-be-processed video data by using the at least one time anchor point in combination with the time axis corresponding to the to-be-processed video data, to obtain at least two groups of video data; and performing clipping-related processing on at least one group of video data among the at least two groups of video data.
The time anchor point may be an anchor point on the horizontal time axis corresponding to the to-be-processed video data, and one group of video data corresponds to one video segment.
Here, the determining at least one time anchor point includes: obtaining a preset number of time anchor points; determining whether the preset number of time anchor points is less than or equal to a preset threshold; when it is determined that the preset number of time anchor points is less than or equal to the preset threshold, taking the at least one preset time anchor point as the determined at least one time anchor point; and when it is determined that the preset number of anchor points is greater than the preset threshold, generating first prompt information, the first prompt information instructing the user to stop setting time anchor points.
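As a concrete illustration of this anchor-point step, the following Python sketch validates a set of user-placed time anchor points against a preset threshold and then cuts the time axis into groups. It is only a minimal sketch under assumed conventions: the function names, the threshold value of 5, and the representation of a group as a (start, end) pair in seconds are illustrative assumptions, not part of the patent.

```python
from typing import List, Tuple

MAX_ANCHORS = 5  # assumed preset threshold; the embodiment below mentions 5 breakpoints

def validate_anchors(anchors: List[float], duration: float,
                     max_anchors: int = MAX_ANCHORS) -> List[float]:
    """Keep anchors that fall inside the video's time axis; refuse to exceed the limit."""
    anchors = sorted(a for a in anchors if 0.0 < a < duration)
    if len(anchors) > max_anchors:
        # Corresponds to the "first prompt information" telling the user to stop adding anchors.
        raise ValueError("Anchor limit reached; merge segments before adding more breakpoints.")
    return anchors

def segment_by_anchors(anchors: List[float], duration: float) -> List[Tuple[float, float]]:
    """Split [0, duration] at each anchor point, yielding one (start, end) pair per group."""
    points = [0.0] + validate_anchors(anchors, duration) + [duration]
    return [(points[i], points[i + 1]) for i in range(len(points) - 1)]

# Example: a 12-second clip with anchors at 4 s and 9 s -> three groups of video data.
print(segment_by_anchors([4.0, 9.0], 12.0))  # [(0.0, 4.0), (4.0, 9.0), (9.0, 12.0)]
```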
In practical applications, the user may set the time anchor points on the horizontal time axis corresponding to the to-be-processed video data.
In an embodiment, the performing clipping-related processing on at least one group of video data among the at least two groups of video data includes: determining the boundaries of each group of video data; for at least one group of video data among the at least two groups of video data, determining whether a slide operation on a boundary is detected; and when it is determined that the slide operation is detected, performing clipping processing on the corresponding group of video data.
In practical applications, after the user sets a time anchor point on the horizontal time axis corresponding to the to-be-processed video data, a left boundary and a right boundary are generated at the time anchor point; before the user sets any time anchor point, the to-be-processed data has a start boundary and an end boundary. The boundaries of each group of video data are determined by using the left boundary, the right boundary, the start boundary, and the end boundary.
After the to-be-processed video data is segmented by the time anchor points, prompt information such as "please slide the lower boundary" is generated, and a slide operation on the boundary of each group of video data is detected; when the end position of the slide operation on the boundary on the horizontal time axis is detected, a group of time points is recorded, and a group of clipped video data is obtained by using the recorded group of time points. Alternatively, after the to-be-processed video data is segmented by the time anchor points, prompt information such as "please click the boundary" is generated; when a first click operation on a boundary by the user is detected, prompt information such as "please click once in this area" is displayed in the preview-frame area corresponding to the corresponding group of video data; when a second click operation by the user is detected, the boundary clicked by the user is displayed at the position corresponding to the second click operation, and the time point of the boundary on the horizontal time axis is recorded. For each group of video data, a group of time points is obtained, and a group of clipped video data is obtained by using the recorded group of time points. The boundary is displayed perpendicular to the horizontal time axis.
For each group of video data obtained by segmentation, clipping of that group can be achieved by prompting the user to perform a slide or click operation on its boundary. In this way, after the user has shot a video, the clipping can be completed immediately, without the user having to wait for clipping.
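The per-group clipping described above can be pictured with the following minimal sketch, which assumes the group's frames are available as an indexable sequence with a known frame rate and that the slide operation has already been translated into a recorded (start, end) pair of time points; the names and the frame-list representation are assumptions for illustration.

```python
from typing import List, Sequence, Tuple

def clip_group(frames: Sequence, fps: float,
               recorded_times: Tuple[float, float]) -> List:
    """Return the frames of one group delimited by the recorded time points.

    recorded_times is the (start, end) pair in seconds recorded when the
    user's slide operation on the group's boundary stopped.
    """
    start_s, end_s = sorted(recorded_times)
    start_idx = max(0, int(round(start_s * fps)))
    end_idx = min(len(frames), int(round(end_s * fps)))
    return list(frames[start_idx:end_idx])

# Example: keep 1.5 s - 3.0 s of a 30 fps group.
dummy_frames = list(range(120))          # stand-in for decoded frames
clipped = clip_group(dummy_frames, 30.0, (1.5, 3.0))
print(len(clipped))                      # 45 frames
```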
In an embodiment, the method further includes: performing image recognition processing on the clipped video data of the corresponding group; determining whether first video data is recognized, the first video data representing video data carrying first information, where the first information includes at least image style information, clothing element information of a person in the image, and facial image information of a person in the image; and when it is determined that the first video data is recognized, replacing the first video data with preset second video data, where the first video data is different from the second video data. The image style includes, for example, a classical style or a retro style; the clothing elements include trend elements.
For example, image recognition processing is performed on the clipped video data of the corresponding group. If video data carrying facial image information of a person is recognized, it is replaced with video data carrying virtual-character image information, where the virtual character is, for example, a cat or a Shiba Inu. If video data carrying classical-style information is recognized, it is replaced with video data carrying retro-style information. If video data carrying clothing element information of flower-bird-insect-fish patterns is recognized, it is replaced with video data carrying clothing element information of line-drawing patterns.
Here, editing each group of clipped video data, for example replacing avatars, replacing clothing, or replacing the image style, can make short-video clipping more enjoyable for the user and help improve the user experience.
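As one possible, purely illustrative realization of the face-replacement example (not the patent's own implementation), the sketch below uses OpenCV's stock Haar-cascade face detector to locate faces in each frame of a clipped group and paste a chosen avatar image over them; the detector choice, the simple paste-over compositing, and all names are assumptions.

```python
import cv2  # pip install opencv-python

def replace_faces(frames, avatar_bgr):
    """Overlay avatar_bgr (a BGR image array) on every detected face in each BGR frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    out = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            # Scale the avatar (e.g. a cat or Shiba Inu picture) to the face box
            # and paste it over the detected region.
            frame[y:y + h, x:x + w] = cv2.resize(avatar_bgr, (w, h))
        out.append(frame)
    return out
```

In practice the detection could equally run on a platform face-detection API, and the avatar would typically be alpha-blended or animated rather than pasted over the face region.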
In practical applications, each group of clipped video data can also be played in the preview window of the clipping interface.
In an embodiment, when clipping-related processing is performed on at least two groups of video data, the method further includes: performing clipping processing on the at least two groups of video data to obtain at least two groups of clipped video data; and merging the at least two groups of clipped video data.
In an embodiment, the merging the at least two groups of clipped video data includes: for the at least two groups of clipped video data, determining whether the groups of video data are adjacent to each other; when it is determined that they are adjacent, merging the at least two groups of clipped video data; and when it is determined that they are not adjacent, generating a prompt message instructing the user to slide the boundary of an adjacent video segment.
Here, for each group of clipped video data, a first time period on the horizontal time axis is determined; for each group of video data obtained by segmentation, a second time period on the horizontal time axis is determined. In group order, it is determined whether each first time period falls within the corresponding second time period: if not, the groups of clipped video data are determined to be non-adjacent; if so, the groups of clipped video data are determined to be adjacent. The group number of each group of video data is the same before and after clipping.
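A minimal sketch of this adjacency check follows, assuming each group is described by its clipped (first) period and its original segment (second) period on the horizontal time axis, in group order; the data structures and names are illustrative assumptions.

```python
from typing import List, Tuple

Period = Tuple[float, float]  # (start, end) in seconds on the horizontal time axis

def groups_adjacent(clipped: List[Period], original: List[Period]) -> bool:
    """Groups are treated as adjacent only if every clipped (first) period still
    falls inside its own original (second) segment, per group order."""
    for (c_start, c_end), (o_start, o_end) in zip(clipped, original):
        if not (o_start <= c_start and c_end <= o_end):
            return False
    return True

# Example: two original segments [0, 4] and [4, 9]; both clipped periods stay inside them.
print(groups_adjacent([(0.5, 3.5), (4.0, 8.0)], [(0.0, 4.0), (4.0, 9.0)]))  # True
```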
In practical applications, after the clipping processing is completed, the user may also select the groups of video data to be merged, and it is determined whether the groups selected by the user are adjacent; if they are adjacent, they are merged. The user may select the groups to be merged from the groups of video data obtained by segmentation.
Step 503: generate Graphics Interchange Format data from the clipped video data, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share.
Here, a dynamic picture in GIF format may be obtained from each group of clipped video data; a GIF picture may also be generated from the preview frames, selected by the user, that correspond to each group of video data; or a GIF dynamic picture may be obtained from the merged video. The GIF dynamic picture may have an animation effect.
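For the GIF-generation step, the following sketch assumes the clipped group (or the user-selected preview frames) is available as a list of Pillow images and writes it out as a looping animated GIF; the library choice and the frame duration are illustrative assumptions rather than the patent's implementation.

```python
from PIL import Image  # pip install Pillow

def frames_to_gif(frames, out_path="clip.gif", frame_ms=100):
    """Save a list of PIL.Image frames as a looping animated GIF."""
    if not frames:
        raise ValueError("no frames to encode")
    first, rest = frames[0], frames[1:]
    first.save(out_path, save_all=True, append_images=rest,
               duration=frame_ms, loop=0)  # loop=0 -> repeat forever
    return out_path

# Example: a tiny two-frame GIF built from solid-colour images.
demo = [Image.new("RGB", (64, 64), c) for c in ("red", "blue")]
print(frames_to_gif(demo, "demo.gif"))
```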
In practical applications, in order to make clipping more enjoyable, a sharing interface containing the GIF dynamic picture and a share control may be generated; after the user clicks the share control, the GIF dynamic picture is shared.
In order to interest other users in clipping as well, a sharing interface containing the GIF dynamic picture, sharing copy, and a share control may also be generated; after the user clicks the share control, the GIF dynamic picture is shared together with the sharing copy. The sharing copy helps other users understand the clipping, for example the number of clips and the content of the clipped video.
Here, the process of generating the sharing copy includes: counting the number of clips according to the number of slide or click operations performed by the user on the boundaries; identifying the content type of the acquired video data, where the content type includes landscape, action, and people; and generating the sharing copy according to the identified content type and the counted number of clips. The title of the sharing copy may be, for example, "User A has carefully clipped 1,000 dance videos", and the content of the sharing copy may include introduction information related to the dance videos.
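A minimal sketch of this copy-generation step is given below, assuming a simple log of boundary operations and a content-type label produced by the recognition step; the counting rule, the labels, and the template wording are illustrative assumptions.

```python
def build_sharing_copy(boundary_ops, content_type, user_name):
    """Compose a title and body for the sharing copy from the clip statistics.

    boundary_ops: list of user operations on segment boundaries ("slide"/"click");
    content_type: label from content recognition, e.g. "landscape", "action", "people".
    """
    clip_count = sum(1 for op in boundary_ops if op in ("slide", "click"))
    title = f"{user_name} has carefully clipped {clip_count} {content_type} video(s)"
    body = f"Tap to watch the {content_type} moments {user_name} picked out."
    return title, body

print(build_sharing_copy(["slide", "click", "slide"], "dance", "User A"))
```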
The image processing method provided by the embodiments of the present invention may be implemented by an application program (APP) in the terminal; the APP integrates the functions of acquisition, clipping, GIF picture generation, and sharing.
With the technical solutions of the embodiments of the present invention, after the video data is acquired, clipping processing is performed on the acquired video data; after the clipping is completed, the Graphics Interchange Format data is generated, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share. In this way, quick clipping after acquisition and quick generation of a dynamic picture for sharing after clipping can be achieved.
In addition, the clipping process is combined with image processing technology: faces in the video are detected intelligently and can be replaced with an avatar selected by the user, such as a cat or Shiba Inu avatar, which makes video clipping more enjoyable and thus improves the user experience.
A specific embodiment is taken as an example below to describe in detail the implementation process and principle of the present invention in practical applications.
Fig. 6 is a schematic flowchart of a specific implementation of image processing according to an embodiment of the present invention. The implementation process includes the following steps:
Step 601: the user records a short video by using the video shooting function in the acquisition interface of the MIGU Quanquan APP; after the user clicks the submit button displayed in the acquisition interface, the interface switches immediately to the clipping interface.
Here, the MIGU Quanquan APP integrates the functions of acquisition, clipping, GIF picture generation, and GIF picture sharing.
Step 602: after switching to the clipping page, the acquired short video is segmented to obtain at least one video segment; when the user drags the boundary of a video segment, clipping-related processing is performed on the corresponding video segment.
Fig. 7a is a schematic diagram of the clipping page. As shown in Fig. 7a, at the bottom of the clipping page, the acquired short video is displayed as a series of preview frames, and a horizontal time axis is established along the lower edge of the preview frames.
Fig. 7b is a schematic diagram of the user adding a breakpoint. As shown in Fig. 7b, when the user clicks the "add breakpoint" button in the video clipping interface, adds a breakpoint, for example at the middle position of the horizontal time axis among the displayed preview frames, and then drags the breakpoint, a scissors icon is displayed at the position where the breakpoint stops; after the user clicks the scissors icon, the MIGU Quanquan APP records the time anchor point set by the user.
The user may add multiple breakpoints. The MIGU Quanquan APP segments the acquired short video according to the time anchor points recorded within a period of time, and after the segmentation is completed, a left boundary and a right boundary are displayed at each position where the user added a breakpoint. The left boundary automatically matches the first boundary ahead of it to form one video-segment boundary, and the right boundary automatically matches the first boundary behind it to form another video-segment boundary. The boundaries of the same video segment have the same color, and different video segments have boundaries of different colors.
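Under the assumption that boundaries are simply time values kept in sorted order, the following sketch illustrates how a newly added breakpoint's left and right boundaries could be matched to the nearest existing boundaries ahead of and behind it; the names and representation are illustrative, not the app's actual data model.

```python
import bisect

def match_breakpoint(boundaries, breakpoint_t):
    """Given sorted existing boundaries (including the start and end of the video),
    return the (previous_boundary, breakpoint) and (breakpoint, next_boundary)
    spans delimited by the new breakpoint's left and right boundaries."""
    i = bisect.bisect_left(boundaries, breakpoint_t)
    prev_b, next_b = boundaries[i - 1], boundaries[i]
    boundaries.insert(i, breakpoint_t)          # the breakpoint becomes a shared boundary
    return (prev_b, breakpoint_t), (breakpoint_t, next_b)

bounds = [0.0, 12.0]                            # start and end boundary of the short video
print(match_breakpoint(bounds, 5.0))            # ((0.0, 5.0), (5.0, 12.0))
```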
It should be noted that, in practical applications, if the user adds an unlimited number of breakpoints on the mobile terminal screen, the breakpoints become difficult to drag accurately, and the touch areas of multiple breakpoints blur into one another and cause misoperations. Therefore, after the number of breakpoints added by the user exceeds a preset threshold, for example 5 breakpoints, if the user clicks the "add breakpoint" button again, the MIGU Quanquan APP no longer records time anchor points and uses a toast prompt box to remind the user that the breakpoint limit has been reached and that more breakpoints can be added only after merging.
Fig. 7c is a schematic diagram of the user sliding a boundary. As shown in Fig. 7c, when the user drags the left boundary at a breakpoint and stops at a position, the video segment formed by that left boundary is clipped. After the clipping is completed, whether a face exists in the images of the clipped video segment is detected; when a face is detected, prompt information asking whether to replace the facial image is generated, and a second avatar image selectable by the user, such as a cat avatar, is displayed. The face in the corresponding video segment is then replaced according to the avatar selected by the user, and the clipped video segment is played automatically in the preview interface.
If at least two video segments have been clipped, when the user clicks the merge button in the clipping interface, the MIGU Quanquan APP determines whether the clipped video segments are adjacent; when it is determined that they are adjacent, the clipped video segments are merged, the merged video is played in the preview window, and the corresponding progress of the merged video is displayed. Alternatively, after the user clicks the merge button in the clipping interface, the video segments selected by the user are received, and it is determined whether the selected video segments are adjacent; if they are adjacent, the video segments selected by the user are merged, the merged video is played in the preview window, and the corresponding progress of the merged video is displayed.
It should be noted that when the video segments selected by the user are not adjacent, the merge operation is not performed, and a toast prompt box is used to remind the user to reselect at least two adjacent video segments.
Step 603: generate a GIF picture and share it.
Fig. 8a is a schematic diagram of the GIF entry, and Fig. 8b is a schematic diagram of generating the sharing interface. As shown in Fig. 8a and Fig. 8b, after the clipping of the corresponding group of video data is completed, a GIF entry control is quickly generated. After the user clicks the GIF entry control, a GIF dynamic picture, i.e. a GIF picture, is generated from the corresponding group of video data, and a sharing interface containing the GIF picture and a share control is generated. After the user clicks the share control in the sharing interface, the GIF picture is shared; after the user clicks the return button in the sharing interface, the clipping page is displayed again.
Fig. 9a is a schematic diagram of selecting preview frames, and Fig. 9b is a schematic diagram of generating the sharing interface. As shown in Fig. 9a and Fig. 9b, after the clipping of the corresponding group of video data is completed, the preview frames, selected by the user, that correspond to each group of video data are received; a GIF picture is generated from the preview frames selected by the user, and a sharing interface containing the GIF picture and a share button is generated. After the user clicks the share control in the sharing interface, the GIF picture is shared; after the user clicks the return button in the sharing interface, the clipping page is displayed again. The user may select preview frames from the preview thumbnails corresponding to each group of video data; a selected preview frame is highlighted, and clicking a selected preview frame again cancels the selection. The preview thumbnails may be obtained by extracting frames from each group of video data at a preset duration ratio.
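The thumbnail-extraction idea at the end of the previous paragraph can be sketched as follows, assuming the group is a list of decoded frames and the "preset duration ratio" simply means sampling one frame per fixed fraction of the group's length; the ratio and names are illustrative assumptions.

```python
def extract_preview_thumbnails(frames, ratio=0.1):
    """Sample one frame per `ratio` fraction of the group, e.g. ratio=0.1 -> 10 thumbnails."""
    if not frames:
        return []
    step = max(1, int(len(frames) * ratio))
    return frames[::step]

print(len(extract_preview_thumbnails(list(range(120)), ratio=0.1)))  # 10
```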
In this embodiment, a GIF picture is quickly generated after clipping so that it can be shared quickly; the user creates once and can output many times, which stimulates the user's desire to share and greatly enhances the social attributes of the application. Quick in-interface editing and merging of multiple video segments are also supported, which meets the user's need for deep video clipping and assists the user's deeper creation. Multi-breakpoint addition can be achieved, and the user can customize the GIF picture content, so the solution has strong customization attributes and extensibility.
Based on the image processing method provided by the embodiments of the present application, the present application further provides an image processing apparatus. As shown in Fig. 10, the apparatus includes:
an acquisition module 101, configured to acquire video data;
a receiving module 102, configured to receive a first operation;
a clipping module 103, configured to respond to the first operation, take the acquired video data as to-be-processed video data, and perform clipping processing on the to-be-processed video data; and
a generation module 104, configured to generate Graphics Interchange Format data from the clipped video data, where the Graphics Interchange Format data is used to form a dynamic picture for the user to share.
The video may be a short video.
In an embodiment, the clipping module 103 is specifically configured to determine at least one time anchor point; segment the to-be-processed video data by using the at least one time anchor point in combination with the time axis corresponding to the to-be-processed video data, to obtain at least two groups of video data; and perform clipping-related processing on at least one group of video data among the at least two groups of video data.
In an embodiment, the clipping module 103 is specifically configured to obtain a preset number of time anchor points; determine whether the preset number of time anchor points is less than or equal to a preset threshold; and when it is determined that the preset number of time anchor points is less than or equal to the preset threshold, take the at least one preset time anchor point as the determined at least one time anchor point.
Here, when it is determined that the preset number of anchor points is greater than the preset threshold, first prompt information is generated by means of the generation module 104; the first prompt information instructs the user to stop setting time anchor points.
In an embodiment, the clipping module 103 is specifically configured to determine the boundaries of each group of video data; for at least one group of video data among the at least two groups of video data, determine whether a slide operation on a boundary is detected; and when it is determined that the slide operation is detected, perform clipping processing on the corresponding group of video data.
In an embodiment, the apparatus further includes an image recognition module, configured to perform image recognition processing on the clipped video data of the corresponding group; determine whether first video data used to form a first avatar image is recognized; and when it is determined that the first video data is recognized, replace the first video data with second video data, the second video data being used to form a second avatar image, where the first avatar image is different from the second avatar image. The second avatar image may be determined according to the user's selection; the second avatar image may be a virtual character, such as a cat or Shiba Inu image, and the first avatar image may be a human face.
In an embodiment, the apparatus further includes a playing module, configured to play each group of clipped video data.
In an embodiment, the apparatus further includes a merging module, configured to, for the at least two groups of clipped video data, determine whether the groups of video data are adjacent; when it is determined that they are adjacent, merge the at least two groups of clipped video data; and when it is determined that they are not adjacent, generate, in conjunction with the generation module 104, a prompt message instructing the user to slide the boundary of an adjacent video segment.
In practical applications, after the clipping processing is completed, the user may also select the groups of video data to be merged, and it is determined whether the groups selected by the user are adjacent; if they are adjacent, they are merged. The user may select the groups to be merged from the groups of video data obtained by segmentation.
Here, a dynamic picture in GIF format may be obtained from each group of clipped video data; a GIF picture may also be generated from the preview frames, selected by the user, that correspond to each group of video data; or a GIF dynamic picture may be obtained from the merged video. The GIF dynamic picture may have an animation effect.
It should be noted that when the image processing apparatus provided by the above embodiment performs image processing, the division into the above program modules is only used as an example. In practical applications, the above processing may be assigned to different program modules as needed; that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the image processing apparatus provided by the above embodiment and the method embodiments belong to the same concept; for the specific implementation process, reference is made to the method embodiments, and details are not repeated here.
In practical applications, the acquisition module 101 and the receiving module 102 may be implemented by a network interface on the image processing apparatus; the clipping module 103, the generation module 104, the image recognition module, the playing module, and the merging module may be implemented by a processor on the image processing apparatus, such as a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA).
Fig. 11 is a schematic structural diagram of an image processing apparatus of the present invention. The image processing apparatus 1100 shown in Fig. 11 is arranged on a service terminal and includes at least one processor 1101, a memory 1102, a user interface 1103, and at least one network interface 1104. The components of the image processing apparatus 1100 are coupled together through a bus system 1105. It can be understood that the bus system 1105 is used to realize the connection and communication between these components. In addition to a data bus, the bus system 1105 further includes a power bus, a control bus, and a status signal bus. However, for clarity of description, all the buses are labeled as the bus system 1105 in Fig. 11.
The user interface 1103 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch pad, a touch screen, or the like.
It can be understood that the memory 1102 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories. The non-volatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as a Static Random Access Memory (SRAM), a Synchronous Static Random Access Memory (SSRAM), a Dynamic Random Access Memory (DRAM), a Synchronous Dynamic Random Access Memory (SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), an Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), a SyncLink Dynamic Random Access Memory (SLDRAM), and a Direct Rambus Random Access Memory (DRRAM). The memory 1102 described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
The memory 1102 in the embodiments of the present invention is configured to store various types of data to support the operation of the image processing apparatus 1100. Examples of such data include any computer program to be operated on the image processing apparatus 1100, such as an operating system 11021 and application programs 11022. The operating system 11021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, which are used to implement various basic services and to process hardware-based tasks. The application programs 11022 may contain various application programs used to implement various application services. The program that implements the method of the embodiments of the present invention may be included in the application programs 11022.
The methods disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 1101. The processor 1101 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1101 or by an instruction in the form of software. The processor 1101 may be a general-purpose processor, a digital signal processor, another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 1101 may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium; the storage medium is located in the memory 1102, and the processor 1101 reads the information in the memory 1102 and completes the steps of the foregoing methods in combination with its hardware.
Based on the image processing method provided by the embodiments of the present application, the present application further provides a computer-readable storage medium. Referring to Fig. 11, the computer-readable storage medium may include the memory 1102 for storing a computer program, and the computer program may be executed by the processor 1101 of the image processing apparatus 1100 to complete the steps described in the foregoing methods. The computer-readable storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a Flash Memory, a magnetic surface memory, an optical disc, or a CD-ROM.
It should be noted that the technical solutions described in the embodiments of the present invention may be combined arbitrarily provided that there is no conflict.
The foregoing descriptions are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring video data;
receiving a first operation;
in response to the first operation, taking the acquired video data as to-be-processed video data;
performing clipping processing on the to-be-processed video data; and
generating Graphics Interchange Format data from the clipped video data, wherein the Graphics Interchange Format data is used to form a dynamic picture for the user to share.
2. The method according to claim 1, characterized in that the performing clipping processing on the to-be-processed video data comprises:
determining at least one time anchor point, and segmenting the to-be-processed video data by using the at least one time anchor point in combination with the time axis corresponding to the to-be-processed video data, to obtain at least two groups of video data; and
performing clipping-related processing on at least one group of video data among the at least two groups of video data.
3. The method according to claim 2, characterized in that the performing clipping-related processing on at least one group of video data among the at least two groups of video data comprises:
determining the boundaries of each group of video data;
for at least one group of video data among the at least two groups of video data, determining whether a slide operation on a boundary is detected; and
when it is determined that the slide operation is detected, performing clipping processing on the corresponding group of video data.
4. The method according to claim 3, characterized in that the method further comprises:
performing image recognition processing on the clipped video data of the corresponding group;
determining whether first video data is recognized, the first video data representing video data carrying first information, wherein the first information comprises at least image style information, clothing element information of a person in the image, and facial image information of a person in the image; and
when it is determined that the first video data is recognized, replacing the first video data with preset second video data,
wherein the first video data is different from the second video data.
5. The method according to claim 2, characterized in that when clipping-related processing is performed on at least two groups of video data, the method further comprises:
performing clipping processing on the at least two groups of video data to obtain at least two groups of clipped video data; and
merging the at least two groups of clipped video data.
6. The method according to claim 5, characterized in that the merging the at least two groups of clipped video data comprises:
for the at least two groups of clipped video data, determining whether the groups of video data are adjacent to each other; and
when it is determined that they are adjacent, merging the at least two groups of clipped video data.
7. The method according to claim 2, characterized in that the determining at least one time anchor point comprises:
obtaining a preset number of time anchor points;
determining whether the preset number of time anchor points is less than or equal to a preset threshold; and
when it is determined that the preset number of time anchor points is less than or equal to the preset threshold, taking the at least one preset time anchor point as the determined at least one time anchor point.
8. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire video data;
a receiving module, configured to receive a first operation;
a clipping module, configured to respond to the first operation, take the acquired video data as to-be-processed video data, and perform clipping processing on the to-be-processed video data; and
a generation module, configured to generate Graphics Interchange Format data from the clipped video data, wherein the Graphics Interchange Format data is used to form a dynamic picture for the user to share.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
10. An image processing apparatus, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor,
wherein the processor is configured to execute the steps of the method according to any one of claims 1 to 7 when running the computer program.
CN201810711932.5A 2018-06-29 2018-06-29 Image processing method and device and computer readable storage medium Active CN108769549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810711932.5A CN108769549B (en) 2018-06-29 2018-06-29 Image processing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810711932.5A CN108769549B (en) 2018-06-29 2018-06-29 Image processing method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108769549A true CN108769549A (en) 2018-11-06
CN108769549B CN108769549B (en) 2021-08-06

Family

ID=63975503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810711932.5A Active CN108769549B (en) 2018-06-29 2018-06-29 Image processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108769549B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100262657A1 (en) * 2009-04-08 2010-10-14 Research In Motion Limited Method of sharing image based files between a group of communication devices
CN101527690A (en) * 2009-04-13 2009-09-09 腾讯科技(北京)有限公司 Method for intercepting dynamic image, system and device thereof
CN103902808A (en) * 2012-12-27 2014-07-02 索尼电脑娱乐美国公司 Video clip sharing system and method for generating cloud supply games
CN103745736A (en) * 2013-12-27 2014-04-23 宇龙计算机通信科技(深圳)有限公司 Method of video editing and mobile terminal thereof
CN105872675A (en) * 2015-12-22 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and device for intercepting video animation
CN105939483A (en) * 2016-06-06 2016-09-14 乐视控股(北京)有限公司 Video processing method and device
CN106959816A (en) * 2017-03-31 2017-07-18 努比亚技术有限公司 Video intercepting method and mobile terminal
CN107256117A (en) * 2017-04-19 2017-10-17 上海卓易电子科技有限公司 The method and its mobile terminal of a kind of video editing
CN107992246A (en) * 2017-12-22 2018-05-04 珠海格力电器股份有限公司 A kind of video editing method and its device and intelligent terminal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801861A (en) * 2021-01-29 2021-05-14 恒安嘉新(北京)科技股份公司 Method, device and equipment for manufacturing film and television works and storage medium
CN113542870A (en) * 2021-06-25 2021-10-22 惠州Tcl云创科技有限公司 Video segmentation clipping processing method and device based on mobile terminal and terminal
CN113490051A (en) * 2021-07-16 2021-10-08 北京奇艺世纪科技有限公司 Video frame extraction method and device, electronic equipment and storage medium
CN113490051B (en) * 2021-07-16 2024-01-23 北京奇艺世纪科技有限公司 Video frame extraction method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108769549B (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN105791692B (en) Information processing method, terminal and storage medium
US20220188352A1 (en) Method and terminal for video processing and computer readable storage medium
CN205788149U (en) Electronic equipment and for showing the device of image
CN107257338B (en) media data processing method, device and storage medium
CN108769549A (en) A kind of image processing method, device and computer readable storage medium
CN107277274B (en) growth history recording terminal and growth history recording method
CN110612533A (en) Method for recognizing, sorting and presenting images according to expressions
CN109644217A (en) For capturing equipment, method and graphic user interface with recording medium under various modes
CN111475084A (en) Apparatus and method for capturing and interacting with enhanced digital images
WO2016123266A1 (en) Capture and sharing of video contents
CN107111620A (en) Video editing using context data and the content discovery using group
EP1376582A2 (en) Computer user interface for viewing video compositions generated from a video composition authoring system using video cliplets
JP5083559B2 (en) Image composition apparatus, image composition method, and program
CN104995639A (en) Terminal and method for managing video file
CN108271432B (en) Video recording method and device and shooting equipment
CN110035329A (en) Image processing method, device and storage medium
CN106371702A (en) Image information editing method and device and mobile terminal
CN115362474A (en) Scoods and hairstyles in modifiable video for custom multimedia messaging applications
JP6281648B2 (en) Server, control method, control program, and recording medium
CN108965101A (en) Conversation message processing method, device, storage medium and computer equipment
JP2013200867A (en) Animation creation device and camera
JP2012004747A (en) Electronic equipment and image display method
CN205942662U (en) Electronic equipment with be used for device that divides into groups to a plurality of images
CN108875670A (en) Information processing method, device and storage medium
JP2016143201A (en) Communication terminal, server, communication terminal control method, control program, recording medium, server control method, control program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant