CN105847728A - Information processing method and terminal - Google Patents

Information processing method and terminal

Info

Publication number
CN105847728A
CN105847728A CN201610232008.XA
Authority
CN
China
Prior art keywords
information
image
face
area
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610232008.XA
Other languages
Chinese (zh)
Inventor
吴运声
吴发强
戴阳刚
高雨
时峰
汪倩怡
熊涛
崔凌睿
应磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201610232008.XA priority Critical patent/CN105847728A/en
Publication of CN105847728A publication Critical patent/CN105847728A/en
Priority to PCT/CN2017/074455 priority patent/WO2017177768A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Abstract

The invention discloses an information processing method and a terminal. The method comprises the following steps: opening an application on the terminal, obtaining a first operation, and triggering acquisition of first media information; identifying, by the terminal during acquisition of the first media information, a first region according to a preset strategy, where the first region is a local region within each frame of image information of the first media information; separating the first region from each frame of image information, where the remaining region of each frame of image information after separation is denoted as a second region; processing the first region using a first image processing manner to obtain a first image processing result; processing the second region using a second image processing manner to obtain a second image processing result; and fusing the first image processing result and the second image processing result to obtain complete fused image information, which serves again as the image information of each frame.

Description

Information processing method and terminal
Technical field
The present invention relates to communication technologies, and in particular to an information processing method and a terminal.
Background technology
With advances in science and technology, the camera configurations of terminals are constantly being upgraded, and using a mobile phone, tablet or laptop to record video or shoot high-definition pictures has become a trend. Users can also share the recorded videos or captured high-definition pictures through social networking applications.
Taking the scenario of a recorded video as an example: before sharing, if the user is dissatisfied with the picture quality of the recorded video, the image can be processed with image processing techniques for quickly retouching and beautifying the picture (e.g. filter techniques). In the prior art, a filter is applied to the whole picture, and the filter function is single. Since the whole picture contains multiple different elements, and the brightness, color saturation and other picture-quality effects required by different elements differ, applying a filter to the whole picture can degrade the overall picture quality. However, in the related art, there is no effective solution to this problem.
Summary of the invention
In view of this, embodiments of the present invention are intended to provide an information processing method and a terminal, which solve at least the problems existing in the prior art and improve the video picture quality of real-time video recording.
The technical solutions of the embodiments of the present invention are implemented as follows:
An information processing method of an embodiment of the present invention comprises:
opening an application on a terminal, obtaining a first operation, and triggering acquisition of first media information;
identifying, by the terminal during acquisition of the first media information, a first region according to a preset strategy, where the first region is a local region within each frame of image information of the first media information;
separating the first region from each frame of image information, where the remaining region of each frame of image information after separation is denoted as a second region;
processing the first region using a first image processing manner to obtain a first image processing result;
processing the second region using a second image processing manner to obtain a second image processing result;
fusing the first image processing result and the second image processing result to obtain complete fused image information, and using the complete fused image information again as the image information of each frame.
In the above solution, the identifying, by the terminal during acquisition of the first media information, a first region according to a preset strategy comprises:
obtaining a face feature value, and judging, according to the face feature value, whether a face is contained in each frame of image information of the first media information, to obtain a judgment result;
when the judgment result is that a face is contained, locating the position of the face in the current frame of image information, where the first region is contained in the region corresponding to the position of the face.
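The face judgment in the solution above can be sketched as comparing a frame-derived feature vector against a stored face feature value. The cosine-similarity check, the feature vectors and the threshold below are purely illustrative assumptions, not the patent's actual detection algorithm:

```python
import numpy as np

def contains_face(frame_feature, face_feature, threshold=0.9):
    """Judge whether a frame contains a face by comparing its feature
    vector against a reference face feature value (illustrative only)."""
    a = np.asarray(frame_feature, dtype=float)
    b = np.asarray(face_feature, dtype=float)
    cos = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos >= threshold

# Hypothetical feature vectors: one close to the reference, one not.
reference = np.array([1.0, 0.5, 0.2])
similar = np.array([1.0, 0.5, 0.25])
different = np.array([0.0, 1.0, -1.0])
```

In a real terminal this comparison would be performed by the platform's face detection facilities on every frame of the acquired media stream; only frames judged to contain a face proceed to face localization.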
In the above solution, the separating the first region from each frame of image information, where the remaining region of each frame of image information after separation is denoted as a second region, comprises:
obtaining the position of the face in the current frame of image information, and extracting facial contour information at the position of the face according to face recognition parameters;
separating the current frame of image information according to the facial contour information to obtain a face region and a non-face region;
determining the face region as the first region;
determining the non-face region as the second region.
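The separation into a first (face) region and a second (non-face) region can be sketched as a mask-based split over a frame array. This is a minimal numpy sketch under the assumption that the face has already been reduced to a boolean mask; a real terminal would obtain frames and face positions through platform camera and face-detection APIs:

```python
import numpy as np

def split_regions(frame, face_mask):
    """Split a frame into the first region (face) and the second
    region (everything else) using a boolean mask."""
    face_region = np.where(face_mask[..., None], frame, 0)
    non_face_region = np.where(face_mask[..., None], 0, frame)
    return face_region, non_face_region

# Toy 8x8 RGB frame with a 4x4 "face" box at (2, 2).
frame = np.full((8, 8, 3), 100, dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
face, non_face = split_regions(frame, mask)
# Pixels inside the mask survive only in the face region,
# pixels outside it only in the non-face region.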
In the above solution, the method further comprises:
before the triggering of acquisition of the first media information, when it is detected that an acquisition module for acquiring the first media information has been turned on but the actual acquisition operation has not yet started, identifying current scene information related to the acquisition of the first media information, and collecting the current scene information.
In the above solution, the method further comprises:
analyzing, by the terminal during acquisition of the first media information, the collected current scene information to obtain an analysis result;
adaptively selecting, according to the analysis result, an image processing manner for performing image processing on each frame of image information of the first media information;
where the image processing manner comprises the first image processing manner and/or the second image processing manner.
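The adaptive selection could, for example, key the choice of image processing manner off simple scene statistics such as mean brightness. The thresholds and manner names here are invented for illustration; the patent does not specify the selection policy:

```python
import numpy as np

def select_processing_manner(frame):
    """Pick image processing manners from simple scene statistics.

    Hypothetical policy: dark scenes get a brightening second manner,
    bright scenes a saturation-adjusting one. Threshold and names are
    illustrative assumptions only.
    """
    mean_brightness = float(frame.mean())
    first_manner = "skin_smoothing"            # for the face region
    if mean_brightness < 80:
        second_manner = "brighten_background"  # for the non-face region
    else:
        second_manner = "adjust_saturation"
    return first_manner, second_manner

dark = np.full((4, 4, 3), 30, dtype=np.uint8)
bright = np.full((4, 4, 3), 200, dtype=np.uint8)
```

Collecting the scene statistics before the actual acquisition starts (while the camera module is already on) would let the terminal have the manners ready by the first recorded frame.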
A terminal of an embodiment of the present invention comprises:
a triggering unit, configured to open an application on the terminal, obtain a first operation, and trigger acquisition of first media information;
a recognition unit, configured to identify, during acquisition of the first media information, a first region according to a preset strategy, where the first region is a local region within each frame of image information of the first media information;
a separation unit, configured to separate the first region from each frame of image information, where the remaining region of each frame of image information after separation is denoted as a second region;
a first processing unit, configured to process the first region using a first image processing manner to obtain a first image processing result;
a second processing unit, configured to process the second region using a second image processing manner to obtain a second image processing result;
a fusion unit, configured to fuse the first image processing result and the second image processing result to obtain complete fused image information, and use the complete fused image information again as the image information of each frame.
In the above solution, the recognition unit is further configured to:
obtain a face feature value, and judge, according to the face feature value, whether a face is contained in each frame of image information of the first media information, to obtain a judgment result;
when the judgment result is that a face is contained, locate the position of the face in the current frame of image information, where the first region is contained in the region corresponding to the position of the face.
In the above solution, the separation unit is further configured to:
obtain the position of the face in the current frame of image information, and extract facial contour information at the position of the face according to face recognition parameters;
separate the current frame of image information according to the facial contour information to obtain a face region and a non-face region;
determine the face region as the first region;
determine the non-face region as the second region.
In the above solution, the terminal further comprises a detection unit, configured to:
before the triggering of acquisition of the first media information, when it is detected that an acquisition module for acquiring the first media information has been turned on but the actual acquisition operation has not yet started, identify current scene information related to the acquisition of the first media information, and collect the current scene information.
In the above solution, the terminal further comprises a selection unit, configured to:
analyze, during acquisition of the first media information, the collected current scene information to obtain an analysis result;
adaptively select, according to the analysis result, an image processing manner for performing image processing on each frame of image information of the first media information;
where the image processing manner comprises the first image processing manner and/or the second image processing manner.
According to the information processing method of the embodiment of the present invention, the method comprises: opening an application on a terminal, obtaining a first operation, and triggering acquisition of first media information; identifying, by the terminal during acquisition of the first media information, a first region according to a preset strategy, where the first region is a local region within each frame of image information of the first media information; separating the first region from each frame of image information, where the remaining region of each frame of image information after separation is denoted as a second region; processing the first region using a first image processing manner to obtain a first image processing result; processing the second region using a second image processing manner to obtain a second image processing result; and fusing the first image processing result and the second image processing result to obtain complete fused image information, which serves again as the image information of each frame. With the embodiments of the present invention, given that the whole picture contains multiple different elements, and that the brightness, color saturation and other picture-quality effects required by the different elements differ, filters are applied separately to partial pictures within the whole picture for different local processing, thereby improving the video picture quality of real-time video recording.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the hardware entities of the respective parties performing information interaction in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an implementation of Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of a terminal user interface to which an embodiment of the method of the present invention is applied;
Fig. 4 is a schematic flowchart of an implementation of Embodiment 2 of the present invention;
Fig. 5 is a schematic diagram of local region division performed by applying an embodiment of the method of the present invention;
Fig. 6 is a schematic flowchart of an implementation of Embodiment 3 of the present invention;
Fig. 7 is another schematic diagram of local region division performed by applying an embodiment of the method of the present invention;
Fig. 8 is a schematic structural diagram of the composition of Embodiment 4 of the present invention;
Fig. 9 is a schematic structural diagram of the hardware composition of Embodiment 5 of the present invention.
Detailed description of the invention
The implementation of the technical solutions is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of the hardware entities of the respective parties performing information interaction in an embodiment of the present invention. Fig. 1 includes a server 11 and terminal devices 21-24. The terminal devices 21-24 exchange information with the server through a wired network or a wireless network; the terminal devices include types such as mobile phones, desktop computers, PCs and all-in-one machines, on which all applications, or specified applications, are installed. With the embodiment of the present invention, based on the system shown in Fig. 1, the terminal opens an application (a photographing application, a video recording application, an image processing application, or the like) and obtains a first operation (e.g. turning on the camera after entering the video recording application and starting to record video, also referred to as the acquisition operation of video recording), thereby triggering acquisition of first media information (e.g. recording a segment of video). During acquisition of the first media information (e.g. the video), the terminal identifies a first region according to a preset strategy (a specified region distinguished from other regions, which may be a face region), where the first region is a local region within each frame of image information of the first media information. The first region is separated from each frame of image information, and the remaining region of each frame of image information after separation is denoted as a second region (if the first region is a face region, the second region is the non-face region, also referred to as the region outside the face). The first region is processed using a first image processing manner (e.g. filter techniques such as skin smoothing, blemish removal and blush) to obtain a first image processing result; the second region is processed using a second image processing manner (e.g. filter techniques such as adjusting brightness or color saturation) to obtain a second image processing result. The first image processing result and the second image processing result are fused to obtain complete fused image information, and the complete fused image information serves again as the image information of each frame.
The example of Fig. 1 above is merely one example of a system architecture implementing the embodiments of the present invention. The embodiments of the present invention are not limited to the system structure described in Fig. 1; the embodiments of the present invention below are proposed based on this system architecture.
Embodiment 1:
An information processing method of an embodiment of the present invention, as shown in Fig. 2, comprises:
Step 101: the terminal opens an application, obtains a first operation, and triggers acquisition of first media information.
Here, the user is using a terminal (e.g. the mobile phone 11). The user interface of the mobile phone 11 contains application icons of various types; Fig. 3 shows a terminal user interface containing application icons of various types, including, for example, a music icon, a function-setting icon and a mail icon. The user performs the first operation, for example tapping with a finger the video processing application icon identified by A1, to enter the processing procedure of video recording, thereby triggering acquisition of the first media information (e.g. a video). For example, the user may record an indoor scene, or record a selfie video.
Step 102: the terminal identifies, during acquisition of the first media information, a first region according to a preset strategy, where the first region is a local region within each frame of image information of the first media information.
Here, during the processing of video recording, the terminal can capture, through a face recognition and positioning mechanism, the local face region within the whole picture of each frame of image information of the first media information. Specifically, face recognition technology is based on a person's facial features: the face image or video stream being recorded is acquired, and it is first judged whether a face exists in the video stream; if a face exists, the position and size of the face are further given, and the position information of each major facial organ is located, so as to obtain the respective positions of the facial features within the face.
Step 103: the first region is separated from each frame of image information, and the remaining region of each frame of image information after separation is denoted as a second region.
Here, the local face region identified within the whole picture of each frame of image information of the first media information may be taken as the first region; the remaining region of the whole picture, other than the local face region, is then the second region. Subsequently, since the characteristics of the first region and the second region differ, the image processing strategies applied to the two also differ; that is, different filter techniques need to be applied respectively. As in step 104, the first region is processed using the first image processing manner (e.g. filter techniques such as skin smoothing and blemish removal); as in step 105, the second region is processed using the second image processing manner (e.g. filter techniques such as increasing brightness and color saturation). In this way, starting from the fact that the whole picture contains multiple different elements, and that the brightness, color saturation and other picture-quality effects required by the different elements differ, filters are applied separately to partial pictures within the whole picture for different local processing, instead of processing the whole picture with a single set of filter techniques, thereby improving the video picture quality of real-time video recording.
Step 104: the first region is processed using the first image processing manner to obtain a first image processing result.
Step 105: the second region is processed using the second image processing manner to obtain a second image processing result.
Step 106: the first image processing result and the second image processing result are fused to obtain complete fused image information, and the complete fused image information is used again as the image information of each frame.
Here, one specific implementation process may be as follows: for a segment of video file recorded in real time, an image stream and audio are first obtained separately from the video, the image stream being obtained through the camera interface of the Android system, and the audio being obtained through an audio acquisition interface by microphone sampling during video recording. For each frame of image in the image stream, the face region and the non-face region are distinguished, and two different sets of targeted filter techniques are applied respectively to the face region and the non-face region to perform local image processing, generating two filter-effect image streams. An encoder interface is then used to fuse the two filter-effect image streams back into images in real time and encode them into a video stream, and the audio is likewise re-encoded into an audio stream. An audio/video combiner mixes the video track and the audio track, so that, while the audio and video are being recorded, the images locally processed with different filter techniques are fused again in real time, finally obtaining the modified, real-time-recorded video file.
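The per-frame flow of steps 102 to 106 can be sketched end to end: split the frame at a face box, apply a different filter to each region, and fuse the two results back into one complete frame. This is a numpy sketch in which the box blur and the brightness gain are stand-ins for the unspecified filter techniques, and the fixed face box stands in for a real face detector:

```python
import numpy as np

def box_blur(region, k=3):
    """Crude box blur as a stand-in for a skin-smoothing filter."""
    h, w, _ = region.shape
    pad = k // 2
    padded = np.pad(region.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(region, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return (out / (k * k)).astype(np.uint8)

def process_frame(frame, face_box):
    """Split a frame at face_box, filter each region differently, fuse."""
    x, y, bw, bh = face_box
    mask = np.zeros(frame.shape[:2], dtype=bool)
    mask[y:y + bh, x:x + bw] = True
    smoothed = box_blur(frame)                                        # first manner
    brightened = np.clip(frame.astype(np.int32) + 40, 0, 255).astype(np.uint8)  # second manner
    # Fuse the two processing results into one complete frame.
    return np.where(mask[..., None], smoothed, brightened)

frame = np.full((8, 8, 3), 100, dtype=np.uint8)
fused = process_frame(frame, (2, 2, 4, 4))
```

In the implementation described above, each fused frame would then be handed to the encoder interface, while the audio track is handled separately and muxed back in by the audio/video combiner.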
Embodiment 2:
An information processing method of an embodiment of the present invention, as shown in Fig. 4, comprises:
Step 201: the terminal opens an application, obtains a first operation, and triggers acquisition of first media information.
Here, the user is using a terminal (e.g. the mobile phone 11). The user interface of the mobile phone 11 contains application icons of various types; Fig. 3 shows a terminal user interface containing application icons of various types, including, for example, a music icon, a function-setting icon and a mail icon. The user performs the first operation, for example tapping with a finger the video processing application icon identified by A1, to enter the processing procedure of video recording, thereby triggering acquisition of the first media information (e.g. a video). For example, the user may record an indoor scene, or record a selfie video.
Step 202: a face feature value is obtained, and whether a face is contained in each frame of image information of the first media information is judged according to the face feature value, to obtain a judgment result.
Here, during the processing of video recording, the terminal can capture, through a face recognition and positioning mechanism, the local face region within the whole picture of each frame of image information of the first media information. Specifically, face recognition technology is based on a person's facial features: the face image or video stream being recorded is acquired, and it is first judged whether a face exists in the video stream; if a face exists, the position and size of the face are further given, and the position information of each major facial organ is located, so as to obtain the respective positions of the facial features within the face.
Step 203: when the judgment result is that a face is contained, the position of the face in the current frame of image information is located, where the first region is contained in the region corresponding to the position of the face.
Here, Fig. 5 shows an example of region division. Fig. 5 includes the initial picture on the left: the whole picture region of the current frame of image information is A1, which includes a face region and a non-face region; for example, the non-face region includes a small water cup A3. In this step, the region corresponding to the position of the face is A2, and the face region is contained in A2. This region can be further refined and separated in Embodiment 3, so as to obtain an accurate face region.
Step 204: the first region is separated from each frame of image information, and the remaining region of each frame of image information after separation is denoted as a second region.
Here, the local face region identified within the whole picture of each frame of image information of the first media information may be taken as the first region; the remaining region of the whole picture, other than the local face region, is then the second region. Subsequently, since the characteristics of the first region and the second region differ, the image processing strategies applied to the two also differ; that is, different filter techniques need to be applied respectively. As in step 205, the first region is processed using the first image processing manner (e.g. filter techniques such as skin smoothing and blemish removal); as in step 206, the second region is processed using the second image processing manner (e.g. filter techniques such as increasing brightness and color saturation). In this way, starting from the fact that the whole picture contains multiple different elements, and that the brightness, color saturation and other picture-quality effects required by the different elements differ, filters are applied separately to partial pictures within the whole picture for different local processing, instead of processing the whole picture with a single set of filter techniques, thereby improving the video picture quality of real-time video recording.
Step 205: the first region is processed using the first image processing manner to obtain a first image processing result.
Step 206: the second region is processed using the second image processing manner to obtain a second image processing result.
Step 207: the first image processing result and the second image processing result are fused to obtain complete fused image information, and the complete fused image information is used again as the image information of each frame.
Here, one specific implementation process may be as follows: for a segment of video file recorded in real time, an image stream and audio are first obtained separately from the video, the image stream being obtained through the camera interface of the Android system, and the audio being obtained through an audio acquisition interface by microphone sampling during video recording. For each frame of image in the image stream, whether a face is contained in the frame of image information is judged according to the face feature value, so as to distinguish the face region from the non-face region, and two different sets of targeted filter techniques are applied respectively to the face region and the non-face region to perform local image processing, generating two filter-effect image streams. An encoder interface is then used to fuse the two filter-effect image streams back into images in real time and encode them into a video stream, and the audio is likewise re-encoded into an audio stream. An audio/video combiner mixes the video track and the audio track, so that, while the audio and video are being recorded, the images locally processed with different filter techniques are fused again in real time, finally obtaining the modified, real-time-recorded video file.
Embodiment 3:
An information processing method of an embodiment of the present invention, as shown in Fig. 6, comprises:
Step 301: the terminal opens an application, obtains a first operation, and triggers acquisition of first media information.
Here, the user is using a terminal (e.g. the mobile phone 11). The user interface of the mobile phone 11 contains application icons of various types; Fig. 3 shows a terminal user interface containing application icons of various types, including, for example, a music icon, a function-setting icon and a mail icon. The user performs the first operation, for example tapping with a finger the video processing application icon identified by A1, to enter the processing procedure of video recording, thereby triggering acquisition of the first media information (e.g. a video). For example, the user may record an indoor scene, or record a selfie video.
Step 302: a face feature value is obtained, and whether a face is contained in each frame of image information of the first media information is judged according to the face feature value, to obtain a judgment result.
Here, during the processing of video recording, the terminal can capture, through a face recognition and positioning mechanism, the local face region within the whole picture of each frame of image information of the first media information. Specifically, face recognition technology is based on a person's facial features: the face image or video stream being recorded is acquired, and it is first judged whether a face exists in the video stream; if a face exists, the position and size of the face are further given, and the position information of each major facial organ is located, so as to obtain the respective positions of the facial features within the face.
Step 303: when the judgment result is that a face is contained, the position of the face in the current frame of image information is located, where the first region is contained in the region corresponding to the position of the face.
Step 304: the position of the face in the current frame of image information is obtained, and facial contour information is extracted at the position of the face according to face recognition parameters.
Here, the face recognition parameters include the size of the face, the relative positions of the facial organs, and the like.
Here, Fig. 7 shows another example of region division. Fig. 7 includes the initial picture on the left: the whole picture region of the current frame of image information is A1, which includes a face region and a non-face region; for example, the non-face region includes a small water cup A3. In this step, the region corresponding to the position of the face is A2, and the face region A4 is contained in the region A2 corresponding to the position of the face. Specifically, the facial contour information is obtained according to the face recognition parameters (e.g. the size of the face and the relative positions of the facial organs), so that the region A2 corresponding to the position of the face is refined and separated: the actual face region A4 is located according to the facial contour information, thereby obtaining an accurate face region.
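The refinement from the coarse region A2 to the actual face region A4 can be sketched by tightening the box mask. The inscribed ellipse below is an illustrative stand-in for a facial contour extracted from face recognition parameters; the patent does not specify the contour algorithm:

```python
import numpy as np

def refine_face_mask(frame_shape, box):
    """Refine a coarse face box (A2) into a tighter contour mask (A4).

    An ellipse inscribed in the box stands in for the facial contour
    that would be derived from face size and organ positions.
    """
    h, w = frame_shape[:2]
    x, y, bw, bh = box
    cy, cx = y + bh / 2.0, x + bw / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    ellipse = ((yy - cy) / (bh / 2.0)) ** 2 + ((xx - cx) / (bw / 2.0)) ** 2 <= 1.0
    coarse = np.zeros((h, w), dtype=bool)          # region A2
    coarse[y:y + bh, x:x + bw] = True
    return coarse, coarse & ellipse                # A2, refined A4

coarse, refined = refine_face_mask((20, 20), (4, 4, 10, 10))
```

The refined mask A4 would then drive the separation of step 305, so that only the actual face pixels receive the first image processing manner.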
Step 305: the current frame of image information is separated according to the facial contour information to obtain a face region and a non-face region; the face region is determined as the first area, and the non-face region is determined as the second area.
Here, the local face region identified within the whole picture of each frame of image information of the first media information can serve as the first area; the remaining region of the whole picture, apart from the local face region, is then the second area. Subsequently, because the features of the first area and the second area differ, the image processing strategies applied to them also differ, i.e., different filter techniques need to be adopted respectively. As in step 306, the first area is processed with a first image processing mode (e.g., filter techniques such as skin smoothing and blemish removal); as in step 307, the second area is processed with a second image processing mode (e.g., filter techniques such as raising the brightness and color saturation). In this way, considering that the whole picture contains multiple different elements, and that different elements call for different image-quality effects such as shading values and color saturation, filters are added separately to the partial pictures within the whole picture for different local processing, rather than processing the whole picture with a single set of filter techniques, thereby improving the video image quality of real-time video recording.
Step 306: the face region is processed with the first image processing mode to obtain a first image processing result.
Step 307: the non-face region is processed with the second image processing mode to obtain a second image processing result.
Step 308: the first image processing result and the second image processing result are fused, complete image fusion information is obtained again, and the complete image fusion information serves again as the image information of each frame.
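Steps 305 to 308 can be sketched as follows. This is a hedged, minimal Python illustration, assuming a rectangular bounding box stands in for the facial contour and two per-pixel placeholder functions stand in for the real filter techniques (skin smoothing for the face region, brightness raising for the rest); none of these names come from the patent.

```python
def smooth(p):
    """Placeholder for the first image processing mode (e.g. skin smoothing)."""
    return p - p % 10

def brighten(p):
    """Placeholder for the second mode (e.g. raising brightness/saturation)."""
    return min(p + 30, 255)

def process_and_fuse(frame, face_box):
    """Process the face region and the non-face region separately, then
    fuse the two partial results into one complete frame again."""
    top, left, bottom, right = face_box
    fused = []
    for r, row in enumerate(frame):
        fused.append([
            smooth(p) if top <= r <= bottom and left <= c <= right else brighten(p)
            for c, p in enumerate(row)
        ])
    return fused

frame = [[10, 10, 10],
         [10, 223, 10],
         [10, 10, 10]]
result = process_and_fuse(frame, (1, 1, 1, 1))
# result[1][1] == 220 (smoothed face pixel); result[0][0] == 40 (brightened)
```

Because every pixel is routed to exactly one of the two modes and written into one output frame, the fusion of step 308 falls out of the same loop that performs steps 306 and 307.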
Here, one specific implementation process may be: for a segment of video file recorded in real time, an image stream and audio are first obtained from the video respectively; the image stream is obtained through the camera interface of the Android system, and the audio is obtained through the audio collection interface by microphone sampling during video recording. For each frame of image in the image stream, whether each frame of image information contains a face is judged according to a face characteristic value, so as to distinguish the face region from the non-face region, and two sets of different targeted filter techniques are used to perform local image processing on the face region and the non-face region respectively, generating two filter-effect image streams. The encoder interface is then used to fuse the two filter-effect image streams and re-encode them into a video stream in real time, and the audio is also re-encoded into an audio stream; an audio/video combiner mixes the video track and the audio track. Thus, while the audio and video are being recorded, local image processing through different filter techniques followed by re-fusion of the images is achieved in real time, finally yielding the modified, real-time-recorded video file.
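The implementation process above can be sketched as a small pipeline. The Android camera interface, audio collection interface, encoder interface, and audio/video combiner are replaced here by plain Python stand-ins (illustrative assumptions), so only the control flow is shown: filter both regions of each frame, fuse them, then mix the video track with the audio track.

```python
def process_frame(frame):
    """Split a frame into its face / non-face parts, filter each with its
    own technique, and fuse the two filter-effect results again."""
    return {"face": frame["face"] + "+smoothed",
            "rest": frame["rest"] + "+brightened"}

def record(image_stream, audio_stream):
    """Process every frame of the image stream, 're-encode' the audio, and
    mix the video track and audio track into one clip (the muxer stand-in)."""
    video_track = [process_frame(f) for f in image_stream]
    audio_track = list(audio_stream)
    return list(zip(video_track, audio_track))

clip = record([{"face": "f0", "rest": "b0"}], ["a0"])
# clip == [({'face': 'f0+smoothed', 'rest': 'b0+brightened'}, 'a0')]
```

In a real implementation the two list comprehensions would be replaced by streaming encoder calls so that filtering, fusion, and muxing keep pace with the live recording.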
Based on the above embodiment, in an embodiment of the present invention, the method further includes: before the collection of the first media information is triggered, when it is detected that the acquisition module for collecting the first media information has been turned on but the actual acquisition operation has not yet started, identifying the current scene information related to the collection of the first media information and collecting the current scene information.
Based on the above embodiment, in an embodiment of the present invention, the method further includes: during the collection of the first media information, the terminal analyzes the collected current scene information to obtain an analysis result; according to the analysis result, an image processing mode for performing image processing on each frame of image information of the first media information is adaptively selected; the image processing mode includes: the first image processing mode and/or the second image processing mode.
Here, one specific implementation is: a video recording application has been entered and the camera has been turned on, but the user is merely looking in the viewfinder corresponding to the camera for the person to be shot, the external environment, or other material. Because formal video recording has not yet started at this point, the CPU is idle. Thus, while the camera is on but formal recording has not yet started, the terminal can estimate a candidate filter based on the actual scene displayed in the viewfinder corresponding to the current camera. It may be a filter only for the face region, a filter only for the non-face region, or one filter each for the face region and the non-face region (i.e., two filters, so that the face region and the non-face region can each be pre-processed at an early stage).
Here, besides estimating a filter from the scene when the terminal is idle and the processing load is small, the terminal may also estimate a filter from the user's usage habits in the historical records or favorites. For example, if the user is a girl, then when taking a selfie she is likely to need makeup and face beautification, so the terminal can push a beauty-pupil filter, a blush filter, and so on in advance. If the user records continuously, the filter used in the user's last video recording can be recorded, and when the user records a video again, the terminal pushes the filter used in the last recording in advance.
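The pre-selection described in the two paragraphs above can be sketched as a small heuristic. This is a hypothetical illustration: the brightness threshold, the filter names, and the use of the last-used filter for the face region are all assumptions made for the example, not details from the patent.

```python
def preselect_filters(scene_brightness, last_used=None):
    """Suggest filters while the camera is open but recording has not started:
    a dim viewfinder scene suggests a brightening background filter, and the
    filter from the user's last recording, if known, is suggested again for
    the face region."""
    return {
        "face": last_used or "skin-smoothing",
        "background": "brighten" if scene_brightness < 80 else "saturate",
    }

preselect_filters(50)            # {'face': 'skin-smoothing', 'background': 'brighten'}
preselect_filters(120, "blush")  # {'face': 'blush', 'background': 'saturate'}
```

Because this runs while the CPU is otherwise idle, the chosen filters are ready the moment the user presses record.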
Embodiment four:
A terminal according to an embodiment of the present invention, as shown in Fig. 8, includes:
a trigger unit 11, configured to open an application in the terminal, obtain a first operation, and trigger the collection of first media information; a recognition unit 12, configured to identify a first area according to a preset strategy during the collection of the first media information, the first area being a local region in each frame of image information of the first media information; a separation unit 13, configured to separate the first area from each frame of image information, the remaining region of each frame of image information after separation being denoted as a second area; a first processing unit 14, configured to process the first area with a first image processing mode to obtain a first image processing result; a second processing unit 15, configured to process the second area with a second image processing mode to obtain a second image processing result; and a fusion unit 16, configured to fuse the first image processing result and the second image processing result to obtain complete image fusion information again, and to use the complete image fusion information again as the image information of each frame.
In one specific application of the embodiment of the present invention, a user is using a terminal (e.g., a mobile phone 11). The user interface of the mobile phone 11 contains various types of application icons; Fig. 3 shows a terminal user interface containing various types of application icons, including, for example, a music icon, a function-setting icon, and a mail icon. The user performs a first operation, such as clicking with a finger the video-processing application icon identified by A1, to enter the video recording process, thereby triggering the collection of first media information (e.g., video). For example, an indoor scene may be recorded, or the user may take a selfie. During the video recording process, the terminal can capture, through a face recognition and location mechanism, the local face region within the whole picture of each frame of image information of the first media information. Specifically, during face recognition, face recognition technology collects the face images or video stream based on human facial features, first judges whether a face exists in the video stream, and if so, further provides the position and size of the face, locates the position information of each major facial organ, and obtains the respective positions of the facial features within the face. The identified local face region within the whole picture of each frame of image information of the first media information then serves as the first area, and the remaining region of the whole picture, apart from the local face region, is the second area. Because the features of the first area and the second area differ, the image processing strategies applied to them also differ; that is, different filter techniques need to be adopted respectively: the first area is processed with a first image processing mode (e.g., filter techniques such as skin smoothing and blemish removal), and the second area is processed with a second image processing mode (e.g., filter techniques such as raising the brightness and color saturation). In this way, considering that the whole picture contains multiple different elements requiring different image-quality effects such as shading values and color saturation, filters are added separately to the partial pictures within the whole picture for different local processing rather than applying a single set of filter techniques to the whole picture, thereby improving the video image quality of real-time video recording.
In an embodiment of the present invention, the recognition unit is further configured to: obtain a face characteristic value, judge according to the face characteristic value whether each frame of image information of the first media information contains a face, and obtain a judged result;
when the judged result indicates that a face is contained, locate the position of the face in the current frame of image information, the first area being included in the region corresponding to the position of the face.
In an embodiment of the present invention, the separation unit is further configured to: obtain the position of the face in the current frame of image information, and extract facial contour information at the position of the face according to face recognition parameters; separate the current frame of image information according to the facial contour information to obtain a face region and a non-face region; determine the face region as the first area; and determine the non-face region as the second area.
In an embodiment of the present invention, the terminal further includes a detection unit, configured to: before the collection of the first media information is triggered, when it is detected that the acquisition module for collecting the first media information has been turned on but the actual acquisition operation has not yet started, identify the current scene information related to the collection of the first media information and collect the current scene information.
In an embodiment of the present invention, the terminal further includes a selection unit, configured to: during the collection of the first media information, analyze the collected current scene information to obtain an analysis result; adaptively select, according to the analysis result, an image processing mode for performing image processing on each frame of image information of the first media information; the image processing mode includes: the first image processing mode and/or the second image processing mode.
Embodiment five:
It should be pointed out here that the above terminal may be an electronic device such as a PC, a portable electronic device such as a PAD, a tablet computer, or a laptop computer, or an intelligent mobile terminal such as a mobile phone, and is not limited to the description here. The server may be constituted by a cluster system, and may be an electronic device in which the functions of the units are merged into one or in which the functions of the units are split and arranged separately. The terminal and the server each at least include a database for storing data and a processor for data processing, or include a storage medium arranged in the server or an independently arranged storage medium.
As for the processor for data processing, it can be implemented, when performing processing, by a microprocessor, a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA). The storage medium contains operation instructions, which may be computer-executable code; the steps of the information processing method flow of the above embodiments of the present invention are implemented through the operation instructions.
An example of the terminal and the server as a hardware entity S11 is shown in Fig. 9. The device includes a processor 31, a storage medium 32, and at least one external communication interface 33; the processor 31, the storage medium 32, and the external communication interface 33 are all connected through a bus 34.
It should be noted that the above descriptions relating to the terminal and the server are similar to the method description above, and their beneficial effects are the same as those of the method, so they are not repeated. For technical details not disclosed in the terminal and server embodiments of the present invention, refer to the description of the method embodiments of the present invention.
The embodiment of the present invention is described below by taking a real application scenario as an example:
During the use of various video applications, one application scenario is: adding multiple filters during video recording to optimize the image quality of the shot; for example, one filter for real-time video recording can achieve a skin-smoothing, skin-care image quality. Because different constituent elements exist in each frame of the whole video recording, using a single existing set of filter techniques can instead cause the overall image quality to decline; that is, when a single filter is added to the whole picture, the whole image quality declines after the filter is added. For example, a certain constituent element (such as a local landscape part of the whole image) may suffer poor recorded video quality because of insufficient light, while another constituent element (such as a local person part of the whole image) may suffer poor video quality because the person's skin is not good. In addition, if the whole image quality is unsatisfactory after one filter pass, the filter is usually applied repeatedly; the processing efficiency is low, and adding too many filters causes excessive hardware overhead.
This application scenario adopts the embodiment of the present invention as follows: 1) in the period after the user's camera is turned on but before recording starts, current scene information begins to be collected, and the current scene is identified by an algorithm; 2) when video recording starts, a filter style is selected according to the scene information identified above, optimizing dim, bright, and normal scenes respectively — for example, a dim style raises the picture brightness and removes noise; 3) during video recording, the face position is tracked and targeted skin smoothing is performed on the face region; 4) during video recording, the facial features are recognized in real time and targeted makeup optimization is performed on them, such as adding blush; special makeup/filters can also be applied to increase interest and make amusing videos. It can be seen that, by adopting the embodiment of the present invention, this application scenario provides a specific implementation of filters and skin-smoothing algorithms that beautify precisely according to the scene and the face position: local processing improves the user's facial skin quality while retaining the image details of other positions, and the most suitable filters can be selected according to different scenes, with diversified filters, greatly improving the final presented image quality of the video recording.
The corresponding processing flow comprises the following steps:
Step 501: after each image frame in the real-time recorded video stream is obtained, the position of the face is monitored in the image frame through the face detection function, and the facial contour is extracted.
Step 502: the image frame is separated according to the facial contour and divided into two parts: face and non-face.
Step 503: filters such as brightness adjustment and noise removal are rendered on the non-face part.
Step 504: the face part is beautified, and filters such as skin smoothing are rendered.
Step 505: the processed face part is re-mixed with the non-face part into a complete image frame.
In summary, the above processing flow consists of: the image-frame acquisition process (obtaining the whole picture); the face detection process (identifying a sub-region in the whole picture, such as the face region); the image separation process (separating the face region from the whole picture, i.e., obtaining the face region and the non-face region); the filter rendering process (e.g., turning on the filter rendering function); the process of applying filters to the face region (e.g., skin smoothing and blemish removal); and the image fusion process (after the face region has been processed, it is fused again with the region outside the face to obtain the processed image). Throughout the whole flow, only one filter is applied to each part of the image; therefore, the above flow improves the processing speed and reduces the computation overhead.
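The flow of steps 501 to 505 can be sketched for a single frame as follows, with a counter making visible the property the summary credits for the reduced overhead: each pixel passes through exactly one filter. The detector box and the two per-pixel filters are illustrative placeholders, not the patent's actual algorithms.

```python
FILTER_CALLS = {"face": 0, "rest": 0}

def face_filter(p):
    """Stand-in for the face-part filters (e.g. skin smoothing)."""
    FILTER_CALLS["face"] += 1
    return p - p % 10

def rest_filter(p):
    """Stand-in for the non-face filters (e.g. brightness adjustment)."""
    FILTER_CALLS["rest"] += 1
    return min(p + 30, 255)

def render_frame(frame, face_box):
    """Steps 502-505 for one frame: split by the detected face box, render
    each part with its own filter, and re-mix into one complete frame."""
    top, left, bottom, right = face_box
    return [
        [face_filter(p) if top <= r <= bottom and left <= c <= right else rest_filter(p)
         for c, p in enumerate(row)]
        for r, row in enumerate(frame)
    ]

out = render_frame([[5, 5], [5, 215]], (1, 1, 1, 1))
# 4 pixels in total: 1 face-filter call + 3 rest-filter calls, one filter each
```

After the call, FILTER_CALLS sums to the pixel count, illustrating that no pixel is filtered twice.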
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is merely a division of logical functions, and other division manners may exist in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate; components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The foregoing program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Alternatively, when the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The above is only a specific implementation of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or replacements within the technical scope disclosed by the present invention, and all such changes or replacements shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.

Claims (10)

1. An information processing method, characterized in that the method comprises:
opening an application in a terminal, obtaining a first operation, and triggering the collection of first media information;
the terminal, during the collection of the first media information, identifying a first area according to a preset strategy, the first area being a local region in each frame of image information of the first media information;
separating the first area from each frame of image information, the remaining region of each frame of image information after separation being denoted as a second area;
processing the first area with a first image processing mode to obtain a first image processing result;
processing the second area with a second image processing mode to obtain a second image processing result;
fusing the first image processing result and the second image processing result to obtain complete image fusion information again, and using the complete image fusion information again as the image information of each frame.
2. The method according to claim 1, characterized in that the terminal, during the collection of the first media information, identifying the first area according to the preset strategy comprises:
obtaining a face characteristic value, judging according to the face characteristic value whether each frame of image information of the first media information contains a face, and obtaining a judged result;
when the judged result indicates that a face is contained, locating the position of the face in the current frame of image information, the first area being included in the region corresponding to the position of the face.
3. The method according to claim 2, characterized in that separating the first area from each frame of image information, the remaining region of each frame of image information after separation being denoted as the second area, comprises:
obtaining the position of the face in the current frame of image information, and extracting facial contour information at the position of the face according to face recognition parameters;
separating the current frame of image information according to the facial contour information to obtain a face region and a non-face region;
determining the face region as the first area;
determining the non-face region as the second area.
4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
before the triggering of the collection of the first media information, when it is detected that the acquisition module for collecting the first media information has been turned on but the actual acquisition operation has not yet started, identifying the current scene information related to the collection of the first media information and collecting the current scene information.
5. The method according to claim 4, characterized in that the method further comprises:
the terminal, during the collection of the first media information, analyzing the collected current scene information to obtain an analysis result;
adaptively selecting, according to the analysis result, an image processing mode for performing image processing on each frame of image information of the first media information;
the image processing mode comprising: the first image processing mode and/or the second image processing mode.
6. A terminal, characterized in that the terminal comprises:
a trigger unit, configured to open an application in the terminal, obtain a first operation, and trigger the collection of first media information;
a recognition unit, configured to identify a first area according to a preset strategy during the collection of the first media information, the first area being a local region in each frame of image information of the first media information;
a separation unit, configured to separate the first area from each frame of image information, the remaining region of each frame of image information after separation being denoted as a second area;
a first processing unit, configured to process the first area with a first image processing mode to obtain a first image processing result;
a second processing unit, configured to process the second area with a second image processing mode to obtain a second image processing result;
a fusion unit, configured to fuse the first image processing result and the second image processing result to obtain complete image fusion information again, and to use the complete image fusion information again as the image information of each frame.
7. The terminal according to claim 6, characterized in that the recognition unit is further configured to:
obtain a face characteristic value, judge according to the face characteristic value whether each frame of image information of the first media information contains a face, and obtain a judged result;
when the judged result indicates that a face is contained, locate the position of the face in the current frame of image information, the first area being included in the region corresponding to the position of the face.
8. The terminal according to claim 7, characterized in that the separation unit is further configured to:
obtain the position of the face in the current frame of image information, and extract facial contour information at the position of the face according to face recognition parameters;
separate the current frame of image information according to the facial contour information to obtain a face region and a non-face region;
determine the face region as the first area;
determine the non-face region as the second area.
9. The terminal according to any one of claims 6 to 8, characterized in that the terminal further comprises: a detection unit, configured to:
before the triggering of the collection of the first media information, when it is detected that the acquisition module for collecting the first media information has been turned on but the actual acquisition operation has not yet started, identify the current scene information related to the collection of the first media information and collect the current scene information.
10. The terminal according to claim 9, characterized in that the terminal further comprises: a selection unit, configured to:
during the collection of the first media information, analyze the collected current scene information to obtain an analysis result;
adaptively select, according to the analysis result, an image processing mode for performing image processing on each frame of image information of the first media information;
the image processing mode comprising: the first image processing mode and/or the second image processing mode.
CN201610232008.XA 2016-04-13 2016-04-13 Information processing method and terminal Pending CN105847728A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610232008.XA CN105847728A (en) 2016-04-13 2016-04-13 Information processing method and terminal
PCT/CN2017/074455 WO2017177768A1 (en) 2016-04-13 2017-02-22 Information processing method, terminal, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610232008.XA CN105847728A (en) 2016-04-13 2016-04-13 Information processing method and terminal

Publications (1)

Publication Number Publication Date
CN105847728A true CN105847728A (en) 2016-08-10

Family

ID=56597535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610232008.XA Pending CN105847728A (en) 2016-04-13 2016-04-13 Information processing method and terminal

Country Status (2)

Country Link
CN (1) CN105847728A (en)
WO (1) WO2017177768A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110896465A (en) * 2018-09-12 2020-03-20 Beijing Canaan Jiesi Information Technology Co Ltd Image processing method and device, and computer-readable storage medium
CN111079520A (en) * 2019-11-01 2020-04-28 JD Digital Technology Holdings Co Ltd Image recognition method, device and storage medium
CN110933354B (en) * 2019-11-18 2023-09-01 Shenzhen Transsion Holdings Co Ltd Customizable multi-style multimedia processing method and terminal thereof
CN114297436A (en) * 2021-01-14 2022-04-08 Hisense Visual Technology Co Ltd Display device and user interface theme updating method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050265624A1 (en) * 2004-05-27 2005-12-01 Konica Minolta Business Technologies, Inc. Image processing apparatus and image processing method
CN1808497A (en) * 2005-01-21 2006-07-26 Lite-On Technology Corp Image processing unit and image processing method thereof
CN101510957A (en) * 2008-02-15 2009-08-19 Sony Corp Image processing device, camera device, communication system, image processing method, and program
US20110134470A1 (en) * 2009-12-07 2011-06-09 Canon Kabushiki Kaisha Information processing apparatus, display control method, and storage medium
CN103179341A (en) * 2011-12-21 2013-06-26 Sony Corp Image processing device, image processing method, and program
CN104322050A (en) * 2012-05-22 2015-01-28 Nikon Corp Electronic camera, image display device, and image display program
CN104902189A (en) * 2015-06-24 2015-09-09 Xiaomi Inc Picture processing method and picture processing device
CN104952036A (en) * 2015-06-18 2015-09-30 Fuzhou Rockchip Electronics Co Ltd Facial beautification method in real-time video and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6011092B2 (en) * 2012-07-13 2016-10-19 Casio Computer Co Ltd Image processing apparatus, image tone conversion method, and program
CN105847728A (en) * 2016-04-13 2016-08-10 Tencent Technology (Shenzhen) Co Ltd Information processing method and terminal

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017177768A1 (en) * 2016-04-13 2017-10-19 Tencent Technology (Shenzhen) Co Ltd Information processing method, terminal, and computer storage medium
CN106331482A (en) * 2016-08-23 2017-01-11 Nubia Technology Co Ltd Photo processing device and method
CN106604147A (en) * 2016-12-08 2017-04-26 TVMining (Beijing) Media Technology Co Ltd Video processing method and apparatus
WO2018177364A1 (en) * 2017-03-29 2018-10-04 Wuhan Douyu Network Technology Co Ltd Filter implementation method and device
CN107071333A (en) * 2017-05-19 2017-08-18 Shenzhen Tinno Wireless Technology Co Ltd Video image processing method and video image processing device
CN107316281B (en) * 2017-06-16 2021-03-02 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method and device, and terminal device
CN107316281A (en) * 2017-06-16 2017-11-03 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method, device and terminal device
CN107563962A (en) * 2017-09-08 2018-01-09 Beijing Qihoo Technology Co Ltd Video data real-time processing method and device, and computing device
CN107820027A (en) * 2017-11-02 2018-03-20 Beijing Qihoo Technology Co Ltd Video character dressing-up method, apparatus, computing device and computer storage medium
CN107945188A (en) * 2017-11-20 2018-04-20 Beijing Qihoo Technology Co Ltd Scene-segmentation-based character dressing-up method and device, and computing device
CN108010037A (en) * 2017-11-29 2018-05-08 Tencent Technology (Shenzhen) Co Ltd Image processing method, device and storage medium
CN108124101A (en) * 2017-12-18 2018-06-05 Beijing Qihoo Technology Co Ltd Video capture method, device, electronic equipment and computer-readable storage medium
CN108171716A (en) * 2017-12-25 2018-06-15 Beijing Qihoo Technology Co Ltd Video character decorating method and device based on self-adaptive tracking frame segmentation
CN108171719A (en) * 2017-12-25 2018-06-15 Beijing Qihoo Technology Co Ltd Video crossing processing method and device based on self-adaptive tracking frame segmentation
CN108171719B (en) * 2017-12-25 2021-07-23 Beijing Qihoo Technology Co Ltd Video crossing processing method and device based on self-adaptive tracking frame segmentation
CN108171716B (en) * 2017-12-25 2021-11-26 Beijing Qihoo Technology Co Ltd Video character decorating method and device based on self-adaptive tracking frame segmentation
CN108683826A (en) * 2018-05-15 2018-10-19 Tencent Technology (Shenzhen) Co Ltd Video data processing method, apparatus, computer device and storage medium
CN109242802A (en) * 2018-09-28 2019-01-18 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method, device, electronic equipment and computer-readable medium
CN111200705A (en) * 2018-11-16 2020-05-26 Beijing Microlive Vision Technology Co Ltd Image processing method and device
CN111200705B (en) * 2018-11-16 2021-05-25 Beijing Microlive Vision Technology Co Ltd Image processing method and device
CN109640151A (en) * 2018-11-27 2019-04-16 Guangdong Oppo Mobile Telecommunications Corp Ltd Video processing method, device, electronic equipment and storage medium
CN112132085A (en) * 2020-09-29 2020-12-25 Lenovo (Beijing) Co Ltd Image processing method and electronic equipment
CN112991208A (en) * 2021-03-11 2021-06-18 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method and device, computer-readable medium and electronic device
CN113132800A (en) * 2021-04-14 2021-07-16 Guangdong Oppo Mobile Telecommunications Corp Ltd Video processing method and device, video player, electronic equipment and readable medium
CN113132800B (en) * 2021-04-14 2022-09-02 Guangdong Oppo Mobile Telecommunications Corp Ltd Video processing method and device, video player, electronic equipment and readable medium

Also Published As

Publication number Publication date
WO2017177768A1 (en) 2017-10-19

Similar Documents

Publication Publication Date Title
CN105847728A (en) Information processing method and terminal
US10706892B2 (en) Method and apparatus for finding and using video portions that are relevant to adjacent still images
JP6569687B2 (en) Information processing method, video processing apparatus, and program
WO2017160370A1 (en) Visualization of image themes based on image content
CN105791692A (en) Information processing method and terminal
WO2008155094A3 (en) Automated method for the time segmentation of a video into scenes, allowing for different types of transitions between image sequences
US8897603B2 (en) Image processing apparatus that selects a plurality of video frames and creates an image based on a plurality of images extracted and selected from the frames
JP2011517791A (en) Decoration as event marker
CN113298845A (en) Image processing method, device and equipment
JP4911191B2 (en) Image processing apparatus and image processing program
CN106416220A (en) Automatic insertion of video into a photo story
CN108271069A (en) Segment filtering method and device for a video program
CN105022802A (en) Photo classification method and terminal
CN109981976A (en) Image pickup apparatus, control method therefor, and storage medium
CN108062158A (en) Information processing system and information processing method
CN106791389A (en) Image processing method, image processing apparatus and terminal
JP2008146191A (en) Image output device and image output method
JP5550305B2 (en) Imaging device
CN106303235A (en) Photographing processing method and device
CN104869283B (en) Image capture method and electronic device
US9767587B2 (en) Image extracting apparatus, image extracting method and computer readable recording medium for recording program for extracting images based on reference image and time-related information
JP6373446B2 (en) Program, system, apparatus and method for selecting video frame
JP6583285B2 (en) Information processing method, video processing apparatus, and program
CN109889773A (en) Method, apparatus, device and medium for monitoring personnel in a bid-evaluation room
KR101738580B1 (en) System and service for providing audio source based on facial expression recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160810