CN108960130A - Video file intelligent processing method and device - Google Patents

Video file intelligent processing method and device

Info

Publication number
CN108960130A
CN108960130A
Authority
CN
China
Prior art keywords
scene
video file
image
group
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810705480.XA
Other languages
Chinese (zh)
Other versions
CN108960130B (en)
Inventor
杨双新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201810705480.XA
Publication of CN108960130A
Application granted
Publication of CN108960130B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present disclosure provides a video file intelligent processing method, comprising: obtaining a video file, the video file comprising at least two frames of images; obtaining, based on intelligent analysis of the video file, scene data of at least one scene presented by the video file through the at least two frames of images; and processing, based on the scene data, a first group of image frames corresponding to a first scene of the at least one scene, so that the first scene presented when the video file is played highlights a region of interest, the region of interest being presented by the processed first group of image frames. The present disclosure further provides a video file intelligent processing device and an electronic device.

Description

Video file intelligent processing method and device
Technical field
The present disclosure relates to a video file intelligent processing method and device.
Background art
With the rapid development of Internet technology, video files, with the richness of the content they present, are applied more and more in numerous areas of people's work, life, and entertainment. In some cases, the scene shown by a video file is complicated and the presented content may contain multiple targets. When watching the video, the user has no way to quickly and directly find, among the multiple targets in the complex scene, the object that the video file intends to emphasize, so the expressive effect of the video file is poor and the user's viewing experience is unsatisfactory.
Summary of the invention
One aspect of the present disclosure provides a video file intelligent processing method, comprising: obtaining a video file, the video file comprising at least two frames of images; obtaining, based on intelligent analysis of the video file, scene data of at least one scene presented by the video file through the at least two frames of images; and processing, based on the scene data, a first group of image frames corresponding to a first scene of the at least one scene, so that the first scene presented when the video file is played highlights a region of interest, the region of interest being presented by the processed first group of image frames.
Optionally, the above processing, based on the scene data, of the first group of image frames corresponding to the first scene of the at least one scene comprises: processing, based on the scene data, the group of image frames corresponding to each of the at least one scene, the first scene being one of the at least one scene.
Optionally, the above method further comprises: obtaining a first trigger operation, the first trigger operation indicating that the above video file is to be output in a normal mode, and/or obtaining a second trigger operation, the second trigger operation indicating that the above video file is to be output in an enhanced mode, wherein outputting the above video file in the enhanced mode includes at least the processed image frames.
Optionally, the above obtaining, based on intelligent analysis of the video file, of the scene data of at least one scene presented by the video file through the at least two frames of images comprises: analyzing the at least two frames of images and assigning images that satisfy a predetermined condition to one group of image frames, wherein, for any frame of image, when the degree of correlation between that frame and the previous frame is higher than a preset threshold, that frame is assigned to the group to which the previous frame belongs. One group of image frames corresponds to one scene, and the image information data of at least one frame of image in a group of image frames is the scene data of the corresponding scene.
Optionally, the above method further comprises: for any scene, determining, in at least one frame of the image frames corresponding to that scene, an object that satisfies a predetermined rule, other than the object of interest of the previous scene, as the object of interest of that scene, wherein the predetermined rule comprises at least one of the following: a predetermined depth-of-field range, and/or a predetermined target-object recognition parameter range.
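The predetermined rule above can be sketched as a simple filter: a hypothetical Python illustration in which the function name, the candidate-dict shape, and all depth values are assumptions for illustration only, not taken from the patent.

```python
# Hypothetical sketch: among candidate objects detected in a scene's frames,
# pick as the scene's object of interest the first candidate that
# (a) is not the previous scene's object of interest and
# (b) falls inside a predetermined depth-of-field range.

def select_object_of_interest(candidates, previous_object, depth_range):
    """candidates: list of dicts like {"label": str, "depth": float} (assumed shape)."""
    lo, hi = depth_range
    for obj in candidates:
        if obj["label"] == previous_object:
            continue  # rule: exclude the previous scene's object of interest
        if lo <= obj["depth"] <= hi:
            return obj["label"]  # rule: predetermined depth-of-field range
    return None

# Usage: the person at depth 2.0 m is chosen over the car kept from the last scene.
candidates = [{"label": "car", "depth": 2.1}, {"label": "person", "depth": 2.0}]
print(select_object_of_interest(candidates, "car", (1.5, 3.0)))  # person
```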
Optionally, the above processing, based on the scene data, of the first group of image frames corresponding to the first scene of the at least one scene comprises: for any frame of image in the first group of image frames corresponding to the first scene, taking the region corresponding to the object of interest of the first scene as the region of interest, and processing that frame of image based on the region of interest.
Optionally, the above processing of any frame of image based on the region of interest comprises: obtaining the depth of field of the region of interest, and performing blurring processing on the objects in the image other than those within the depth of field of the region of interest.
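As one possible realization of this blur step, the numpy sketch below keeps pixels whose depth lies within the region of interest's depth-of-field range and replaces the rest with a box-blurred copy. The patent does not specify the blur algorithm; the box blur, the function name, and the depth values are assumptions.

```python
import numpy as np

def blur_outside_depth(image, depth_map, d_lo, d_hi, k=3):
    """Blur pixels whose depth lies outside [d_lo, d_hi]; keep the rest sharp."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k  # simple k x k box blur
    keep = (depth_map >= d_lo) & (depth_map <= d_hi)  # region-of-interest mask
    return np.where(keep, image, blurred)

img = np.arange(25, dtype=float).reshape(5, 5)
depth = np.full((5, 5), 5.0)
depth[2, 2] = 2.0  # only the centre pixel sits in the focal range [1, 3]
out = blur_outside_depth(img, depth, 1.0, 3.0)
print(out[2, 2] == img[2, 2])  # True: the ROI pixel is untouched
```

In a real pipeline a per-pixel depth map would come from a depth camera or stereo pair, and the blur would typically be a Gaussian; the structure of the step is the same.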
Another aspect of the present disclosure provides a video file intelligent processing device, comprising: an obtaining module for obtaining a video file, the video file comprising at least two frames of images; an analysis module for obtaining, based on intelligent analysis of the video file, scene data of at least one scene presented by the video file through the at least two frames of images; and a processing module for processing, based on the scene data, a first group of image frames corresponding to a first scene of the at least one scene, so that the first scene presented when the video file is played highlights a region of interest, the region of interest being presented by the processed first group of image frames.
Optionally, the above analysis module processing, based on the scene data, the first group of image frames corresponding to the first scene of the at least one scene comprises: the analysis module being configured to process, based on the scene data, the group of image frames corresponding to each of the at least one scene, the first scene being one of the at least one scene.
Optionally, the above device further comprises: a trigger module for obtaining a first trigger operation, the first trigger operation indicating that the video file is to be output in a normal mode, and/or for obtaining a second trigger operation, the second trigger operation indicating that the video file is to be output in an enhanced mode, wherein outputting the video file in the enhanced mode includes at least the processed image frames.
Optionally, the above analysis module obtaining, based on intelligent analysis of the video file, the scene data of at least one scene presented by the video file through the at least two frames of images comprises: the analysis module being configured to analyze the at least two frames of images in the video file and assign images that satisfy a predetermined condition to one group of image frames, wherein, for any frame of image, when the degree of correlation between that frame and the previous frame is higher than a preset threshold, that frame is assigned to the group to which the previous frame belongs. One group of image frames corresponds to one scene, and the image information data of at least one frame of image in a group of image frames is the scene data of the corresponding scene.
Optionally, the above device further comprises: a preprocessing module for determining, for any scene and in at least one frame of the image frames corresponding to that scene, an object that satisfies a predetermined rule, other than the object of interest of the previous scene, as the object of interest of that scene, wherein the predetermined rule comprises at least one of the following: a predetermined depth-of-field range, and/or a predetermined target-object recognition parameter range.
Optionally, the above processing module processing, based on the scene data, the first group of image frames corresponding to the first scene of the at least one scene comprises: the processing module being configured to, for any frame of image in the first group of image frames corresponding to the first scene, take the region corresponding to the object of interest of the first scene as the region of interest and process that frame of image based on the region of interest.
Optionally, the above processing module processing any frame of image based on the region of interest comprises: the processing module being configured to obtain the depth of field of the region of interest and to blur the objects in the image other than those within the depth of field of the region of interest.
Another aspect of the present disclosure provides an electronic device, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the program, implements the method described above.
Another aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions which, when executed, implement the method described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions which, when executed, implement the method described above.
Brief description of the drawings
For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically illustrates an application scenario of the video file intelligent processing method and device according to an embodiment of the present disclosure;
Fig. 2 schematically illustrates a flowchart of the video file intelligent processing method according to an embodiment of the present disclosure;
Fig. 3 schematically illustrates the process of obtaining the scene data of the scenes presented by a video file according to an embodiment of the present disclosure;
Fig. 4A to Fig. 4C schematically illustrate three frames of images in a first scene of a video file according to an embodiment of the present disclosure;
Fig. 4D to Fig. 4F schematically illustrate three frames of images in a second scene of a video file according to an embodiment of the present disclosure;
Fig. 5 schematically illustrates a block diagram of the video file intelligent processing device according to an embodiment of the present disclosure;
Fig. 6 schematically illustrates a block diagram of the video file intelligent processing device according to another embodiment of the present disclosure; and
Fig. 7 schematically illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description of embodiments
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. In the following detailed description, many specific details are set forth for ease of explanation, to provide a comprehensive understanding of the embodiments of the present disclosure. It will be evident, however, that one or more embodiments can also be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted to avoid unnecessarily obscuring the concepts of the present disclosure.
The terms used herein are for the purpose of describing specific embodiments only and are not intended to limit the present disclosure. The terms "include", "comprise", and the like used herein indicate the presence of the stated features, steps, operations, and/or components, but do not exclude the presence or addition of one or more other features, steps, operations, or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted to have meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Where an expression like "at least one of A, B, and C, etc." is used, it should generally be interpreted according to the meaning of the expression as commonly understood by those skilled in the art (for example, "a system having at least one of A, B, and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.). Where an expression like "at least one of A, B, or C, etc." is used, it should likewise be interpreted according to the meaning commonly understood by those skilled in the art (for example, "a system having at least one of A, B, or C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.). It should also be understood by those skilled in the art that essentially any disjunctive word and/or phrase presenting two or more alternative items, whether in the specification, claims, or drawings, shall be construed as contemplating the possibility of including one of the items, either of the items, or both items. For example, the phrase "A or B" shall be understood to include the possibilities of "A", "B", or "A and B".
Some block diagrams and/or flowcharts are shown in the drawings. It should be understood that some blocks in the block diagrams and/or flowcharts, or combinations thereof, can be implemented by computer program instructions. These computer program instructions can be supplied to the processor of a general-purpose computer, a special-purpose computer, or other programmable data processing device, so that, when executed by the processor, these instructions create means for implementing the functions/operations illustrated in the block diagrams and/or flowcharts.
Accordingly, the techniques of the present disclosure can be implemented in the form of hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of the present disclosure can take the form of a computer program product on a computer-readable medium storing instructions, the computer program product being for use by, or in connection with, an instruction execution system. In the context of the present disclosure, a computer-readable medium can be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer-readable medium can include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer-readable medium include: a magnetic storage device, such as a magnetic tape or hard disk (HDD); an optical storage device, such as a compact disc (CD-ROM); a memory, such as a random access memory (RAM) or flash memory; and/or a wired/wireless communication link.
An embodiment of the present disclosure provides a video file intelligent processing method and device. The method includes an intelligent analysis process and an intelligent processing process. In the intelligent analysis process, scene data of each scene presented by the video file is obtained. In the intelligent processing process, the image frames corresponding to each scene are processed based on the scene data, so that each scene presented when the video file is played highlights its corresponding region of interest.
Fig. 1 schematically illustrates an application scenario of the video file intelligent processing method and device according to an embodiment of the present disclosure. It should be noted that Fig. 1 shows only an example of a scenario to which the embodiments of the present disclosure can be applied, to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments of the present disclosure cannot be applied to other devices, systems, environments, or scenarios.
As shown in Fig. 1, the application scenario illustrates a user watching a video through an electronic device 110. The electronic device 110 can be any of various electronic devices that have a display screen and support video playback, including but not limited to smartphones, tablet computers, laptop portable computers, desktop computers, and the like. The electronic device 110 may or may not support video/image shooting; when the electronic device 110 has a shooting function, it can have one camera or two or more cameras.
The video file intelligent processing method and device provided by the embodiments of the present disclosure can be applied to the electronic device shown in Fig. 1, so as to realize intelligent processing of a video file and obtain a video playback effect in which the object of interest of each stage is highlighted during playback.
Fig. 2 schematically illustrates a flowchart of the video file intelligent processing method according to an embodiment of the present disclosure.
As shown in Fig. 2, the method includes operations S201 to S203.
In operation S201, a video file is obtained, the video file comprising at least two frames of images.
The video file obtained in this operation can be the completed part of a video that is still being recorded, or a video whose recording has been completed.
In operation S202, based on intelligent analysis of the video file, scene data of at least one scene presented by the video file through the at least two frames of images is obtained.
Since a video file can present one or more scenes through at least two frames of images, this operation obtains, based on the intelligent analysis of the video file, the scene data corresponding to any one of the scenes presented by the video file; the scene data corresponding to different scenes is different. A scene here refers to a video presentation constructed from at least one frame of image. Within the same scene, the object of interest of the content shown by the video file should be consistent, while across different scenes the objects of interest of the content shown by the video file should be inconsistent; a scene switch reflects a change of the object of interest in the video.
In operation S203, a first group of image frames corresponding to a first scene of the at least one scene is processed based on the scene data, so that the first scene presented when the video file is played highlights a region of interest, the region of interest being presented by the processed first group of image frames.
Here, the first scene denotes any one of the one or more scenes presented by the video file. Because the first scene is expressed by the video constructed from at least one frame of image, a scene corresponds to one or more frames of images, and that frame or those frames constitute the first group of image frames corresponding to the first scene. In each frame of image in the processed first group of image frames, the region corresponding to the object of interest of the first scene is highlighted, so that the first scene presented when the video file is played highlights the region of interest. It should be noted that the object of interest is consistent for every frame of image in the first group of image frames, but the region corresponding to the object of interest may differ from frame to frame.
It can be seen that the method shown in Fig. 2 distinguishes, through intelligent analysis of the video file, the different scenes presented by the video file and obtains the corresponding scene data, and then processes the group of image frames corresponding to any one scene based on the scene data, so that each scene presented when the video file is played highlights its corresponding region of interest. In this way, when watching the video, the user can more naturally follow the highlighted part of the video to attend to the object of interest shown by each scene and, following scene switches, shift attention naturally and smoothly to the object of interest corresponding to each scene. That is, intelligent playback of the video is realized through the intelligent processing of the video file, meeting the user's video-watching needs.
In one embodiment of the present disclosure, the processing in operation S203 of the method shown in Fig. 2, based on the scene data, of the first group of image frames corresponding to the first scene of the at least one scene includes: processing, based on the scene data, the group of image frames corresponding to each of the at least one scene, the first scene being one of the at least one scene. As disclosed above, the first scene is any one of the one or more scenes presented by the video file, and operation S203 is applicable to the processing of the group of image frames corresponding to each scene presented by the video file, so that when the video file is played, each scene highlights its corresponding region of interest, intelligently showing the objects of interest in the video and their changes.
As an optional embodiment, whether the video file shows the processed image frames corresponding to each scene during playback can be selected as needed. The method shown in Fig. 2 further includes: obtaining a first trigger operation, the first trigger operation indicating that the video file is to be output in a normal mode, and/or obtaining a second trigger operation, the second trigger operation indicating that the video file is to be output in an enhanced mode, wherein outputting the video file in the enhanced mode includes at least the processed image frames.
Based on this embodiment, after operation S203 of the method shown in Fig. 2 has processed the image frames corresponding to each scene in the video file, the obtained processing result of the video file can be stored in a preset storage region. When the user wishes to watch the unprocessed original video, the user can perform the first trigger operation; in response to the first trigger operation, this scheme outputs and plays each image frame of the unprocessed original video file in chronological order. When the user wishes to watch the intelligently processed video, the user can perform the second trigger operation; in response to the second trigger operation, this scheme outputs and plays each image frame of the video file processed by operation S203 in chronological order. The user thus selects the viewing mode on demand and obtains the corresponding playback result.
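The two playback paths just described can be sketched in a few lines, under the assumption that both the original and the processed frame sequences are cached; the function and trigger names are illustrative, not from the patent.

```python
# Illustrative sketch: a trigger selects which cached frame sequence is output.

def frames_to_play(original_frames, processed_frames, trigger):
    if trigger == "enhanced":   # second trigger operation: enhanced mode
        return processed_frames
    return original_frames      # first trigger operation: normal mode

raw = ["f1", "f2"]
enhanced = ["f1*", "f2*"]
print(frames_to_play(raw, enhanced, "enhanced"))  # ['f1*', 'f2*']
```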
The intelligent analysis and processing of the video file in the method shown in Fig. 2 are elaborated below.
In one embodiment of the present disclosure, operation S202 of the method shown in Fig. 2, obtaining, based on intelligent analysis of the video file, the scene data of at least one scene presented by the video file through the at least two frames of images, includes: analyzing the at least two frames of images and assigning images that satisfy a predetermined condition to one group of image frames, wherein, for any frame of image, when the degree of correlation between that frame and the previous frame is higher than a preset threshold, that frame is assigned to the group to which the previous frame belongs. In the video file, one group of image frames corresponds to one scene, and the image information data of at least one frame of image in a group of image frames is the scene data of the corresponding scene. The image information data of at least one frame of image in a group of image frames can be one or more kinds of information carried by the image, for example one or more of the depth-of-field information of the image, the intensity information of the image, and the like.
Fig. 3 schematically illustrates the process of obtaining the scene data of the scenes presented by a video file according to an embodiment of the present disclosure.
As shown in FIG. 3, the video file contains six image frames, in chronological order from front to back: image 1, image 2, image 3, image 4, image 5 and image 6. The intelligent analysis of the video file groups the image frames that satisfy a predetermined condition into one group of image frames. Specifically, the frames are analyzed one by one in chronological order. Image 1 is analyzed first; since image 1 is the first frame of the video file, it is temporarily placed alone in a first group of image frames. Image 2 is then analyzed by computing the degree of correlation between image 2 and image 1. If that degree of correlation is above a preset threshold, image 2 is assigned to the first group of image frames; if it is not above the preset threshold, image 2 is temporarily placed alone in a second group of image frames, and it can also be determined at this point that the first group contains only one frame, image 1. Image 3 is analyzed next by computing its degree of correlation with image 2. If the correlation is above the preset threshold, image 3 is assigned to the group to which image 2 belongs; otherwise image 3 and image 2 do not belong to the same group, and image 3 is temporarily placed alone in a new group. By analogy, images 4 to 6 are analyzed in turn, and each image is assigned to the corresponding group according to the analysis result. In general, for each frame of the video file, when the degree of correlation between that frame and the previous frame is above the preset threshold, the frame is assigned to the group to which the previous frame belongs; when it is not above the preset threshold, the frame is assigned to a new group. As can be seen from FIG. 3, image 1 and image 2 are assigned to the first group of image frames, images 3, 4 and 5 to the second group, and image 6 to a third group. The first group corresponds to scene 1, the second group to scene 2 and the third group to scene 3; that is, through intelligent analysis, the three scenes presented by the video file are separated. The scene data of scene 1 includes the depth-of-field information and intensity information of image 1 and of image 2; the scene data of scene 2 includes the depth-of-field information and intensity information of images 3, 4 and 5; and the scene data of scene 3 includes the depth-of-field information and intensity information of image 6.
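The grouping procedure described above can be sketched as a short Python function. This is an illustrative sketch, not code from the patent; the `correlation` callable and the toy integer frames are assumptions standing in for a real inter-frame correlation measure over intensity and depth-of-field data.

```python
def group_frames(frames, correlation, threshold):
    """Split a frame sequence into scene groups.

    Each frame is compared with its predecessor; when the correlation
    is above `threshold` the frame joins the predecessor's group,
    otherwise it starts a new group (i.e. a new scene begins).
    """
    groups = []
    for frame in frames:
        if groups and correlation(groups[-1][-1], frame) > threshold:
            groups[-1].append(frame)
        else:
            groups.append([frame])
    return groups


# Toy frames: integers stand in for image data; correlation is the
# negated absolute difference, so nearby values correlate strongly.
frames = [10, 11, 50, 51, 52, 90]
corr = lambda a, b: -abs(a - b)
scenes = group_frames(frames, corr, threshold=-5)
# scenes == [[10, 11], [50, 51, 52], [90]] -> three scenes, as in FIG. 3
```

As in the FIG. 3 example, the first frame always opens a new group, and every later frame either extends the current group or starts the next one.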
There are many ways to compute the degree of correlation between two adjacent frames. Taking image 1 and image 2 as an example, the similarity and/or degree of change between image 1 and image 2 can be computed from their intensity information, and that similarity and/or degree of change characterizes their degree of correlation. The similarity and/or degree of change can likewise be computed from the depth-of-field information of image 1 and image 2. More precisely, the depth-of-field information and intensity information of image 1 can form a three-dimensional picture corresponding to image 1, and the depth-of-field information and intensity information of image 2 can form a three-dimensional picture corresponding to image 2, so the similarity and/or degree of change between the two three-dimensional pictures can be computed to characterize the degree of correlation between image 1 and image 2. If the intelligent analysis is performed in real time while the video is being recorded, the degree of correlation between image 1 and image 2 can also be characterized by the degree of change of the physical characteristic data of the recording device. The physical characteristic data of the device may be its attitude data and/or its lens focal length. For example, the attitude data of the device when image 1 and image 2 were captured can be collected by an attitude sensor in the device; if the change in attitude data is less than a preset first value, the degree of correlation between image 1 and image 2 can be considered above the preset threshold, while if the change exceeds the preset first value, the device has undergone a comparatively large rotation or movement during recording, the scene presented by the recorded video file has necessarily switched, and the degree of correlation between image 1 and image 2 is not above the preset threshold. Similarly, the focal length of the device when image 1 and image 2 were recorded can be obtained from the drive motor of the device's lens; if the change in focal length is less than a preset second value, the degree of correlation between image 1 and image 2 can be considered above the preset threshold, while if the change exceeds the preset second value, the device has zoomed in or out to a comparatively large degree during recording, the scene presented by the recorded video file has necessarily switched, and the degree of correlation between image 1 and image 2 is not above the preset threshold.
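The device-based correlation check described above (attitude-sensor change and focal-length change) can be sketched as follows. The function name, the tuple representation of attitude data and the threshold parameters are hypothetical; a real implementation would read an attitude sensor and the lens drive motor.

```python
def correlated_by_device(attitude_a, attitude_b, focal_a, focal_b,
                         attitude_limit, focal_limit):
    """Decide whether two frames belong to the same scene using the
    recording device's physical data instead of pixel analysis.

    A large change in attitude (rotation/movement) or in lens focal
    length implies a scene switch, i.e. correlation below threshold.
    """
    attitude_change = sum(abs(a - b) for a, b in zip(attitude_a, attitude_b))
    focal_change = abs(focal_a - focal_b)
    return attitude_change < attitude_limit and focal_change < focal_limit


# Small pan, no zoom -> the two frames stay in the same scene.
same = correlated_by_device((0.0, 0.0, 0.0), (0.5, 0.1, 0.0), 35.0, 35.0,
                            attitude_limit=2.0, focal_limit=10.0)
# Sudden zoom-in -> treated as a scene switch.
switched = correlated_by_device((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), 35.0, 80.0,
                                attitude_limit=2.0, focal_limit=10.0)
# same is True, switched is False
```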
Further, after the scenes in the video file have been separated, the object of interest of each scene needs to be determined. The method shown in FIG. 2 then further includes: for any scene, determining, in at least one frame of the image frames corresponding to the scene, an object that satisfies a predetermined rule, other than the object of interest of the previous scene, as the object of interest of the scene. The predetermined rule includes at least one of the following: a predetermined depth-of-field range, and/or a predetermined target-object recognition parameter range.
Continuing the example of FIG. 3, the video file is divided into scene 1, scene 2 and scene 3, and each transition from one scene to the adjacent next scene corresponds to a scene switch in the video, so adjacent scenes have different objects of interest. For scene 1, as needed, the target with the smallest depth of field in image 1 and image 2 may be determined as the object of interest of scene 1; alternatively, through recognition of a preset target object, the preset target recognized in image 1 and image 2 may be determined as the object of interest of scene 1. The recognition of the preset target may be face recognition in the broad sense, face recognition of a particular person (for example, a certain celebrity), or recognition of a certain kind of object (for example, a certain animal or plant); no restriction is imposed here. Similarly, for scene 2, an object in images 3 to 5 that satisfies the predetermined rule, other than the object of interest of scene 1, may be determined as the object of interest of scene 2; for example, the target with the smallest depth of field in images 3 to 5 other than the object of interest of scene 1 may be determined as the object of interest of scene 2, or, through recognition of a preset target object, the preset target object recognized in images 3 to 5 other than the object of interest of scene 1 may be determined as the object of interest of scene 2. The object of interest of scene 3 is determined in the same way: an object in image 6 that satisfies the predetermined rule, other than the object of interest of scene 2, is determined as the object of interest of scene 3; details are not repeated.
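The selection rule above — smallest depth of field among detected objects, excluding the previous scene's object of interest — might be sketched like this. The dictionary representation of detected objects and all names here are illustrative assumptions, not part of the patent.

```python
def pick_object_of_interest(objects, excluded):
    """Choose a scene's object of interest: among the detected objects,
    excluding those already chosen for the previous scene, take the
    one with the smallest depth of field (closest to the lens).

    `objects` maps object name -> depth of field; `excluded` is a set
    of names carried over from the previous scene.
    """
    candidates = {name: depth for name, depth in objects.items()
                  if name not in excluded}
    return min(candidates, key=candidates.get) if candidates else None


# Scene 1: the foreground person (smallest depth) is chosen.
scene1 = pick_object_of_interest({"front_person": 1.2, "back_person": 4.0}, set())
# Scene 2: the foreground person is excluded, so the person behind is chosen.
scene2 = pick_object_of_interest({"front_person": 1.2, "back_person": 4.0},
                                 {"front_person"})
# scene1 == "front_person", scene2 == "back_person"
```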
On the basis of the above embodiment, which determines the object of interest corresponding to each scene in the video file, as an optional embodiment, operation S203 of the method shown in FIG. 2 — processing, based on the scene data, the first group of image frames corresponding to the first scene among the at least one scene — includes: for any frame in the first group of image frames corresponding to the first scene, taking the region corresponding to the object of interest of the first scene as a region of interest, and processing the image based on the region of interest. Here the first scene denotes any one of the one or more scenes presented by the video file; the present embodiment is illustrated with the image processing of the first scene, and the process applies to any scene of the video file.
Specifically, the above process of handling any frame based on the region of interest may include: obtaining the depth of field of the region of interest, and blurring the objects in the image other than those at that depth of field.
Still continuing the example of FIG. 3, the video file is divided into scene 1, scene 2 and scene 3; images 1 and 2 correspond to scene 1, images 3 to 5 to scene 2 and image 6 to scene 3. Suppose the object of interest of scene 1 determined above is a first target, the object of interest of scene 2 is a second target and the object of interest of scene 3 is a third target. For scene 1, in image 1, the region corresponding to the first target in image 1 is the region of interest of image 1, and the region outside the region of interest in image 1 is blurred. Specifically, the coordinate range of the region of interest can be accurately known from its depth of field, so the region outside that coordinate range can be accurately blurred. Likewise, in image 2, the region corresponding to the first target in image 2 is the region of interest of image 2, and the region outside the region of interest in image 2 is blurred. For scene 2, in image 3, the region corresponding to the second target in image 3 is the region of interest of image 3, and the region outside it is blurred; similarly, in image 4 the region corresponding to the second target is the region of interest of image 4 and the region outside it is blurred, and in image 5 the region corresponding to the second target is the region of interest of image 5 and the region outside it is blurred. For scene 3, in image 6, the region corresponding to the third target in image 6 is the region of interest of image 6, and the region outside the region of interest in image 6 is blurred.
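The blur step can be illustrated with a per-pixel mask built from depth information: pixels near the region-of-interest depth stay sharp, all others are marked for blurring. This toy sketch uses nested lists and marker characters instead of real pixels and a real blur filter; `tolerance` is an assumed parameter, not named in the patent.

```python
def blur_outside_roi(depth_map, roi_depth, tolerance, blur_mark="~"):
    """Mark every pixel whose depth differs from the ROI depth by more
    than `tolerance` for blurring; ROI pixels are kept sharp ("#").

    `depth_map` is a 2-D list of per-pixel depth values, a stand-in
    for the depth-of-field information recorded with each frame.
    """
    return [["#" if abs(d - roi_depth) <= tolerance else blur_mark
             for d in row]
            for row in depth_map]


# A 2x3 toy frame: foreground person at depth ~1.0, background at ~4.0.
depth = [[1.0, 1.1, 4.0],
         [1.0, 3.9, 4.1]]
mask = blur_outside_roi(depth, roi_depth=1.0, tolerance=0.2)
# mask == [["#", "#", "~"], ["#", "~", "~"]]
```

In a real pipeline the mask would select where to apply a blur filter (e.g. a Gaussian blur) while compositing the sharp region of interest on top.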
The method shown in FIG. 2 is further described below with reference to FIG. 4A to FIG. 4F, in conjunction with a specific embodiment.
A video file shows the dynamic process of a person in the foreground (closer to the lens) turning his head backwards to look at a person standing behind (farther from the lens). The video file includes one or more image frames. Scene partitioning is first performed on the video file, i.e. the frames are assigned to the corresponding groups of image frames, and the image frames corresponding to each scene are then processed according to the scene data of that scene, so that each scene highlights its corresponding region of interest when the video file is played.
FIG. 4A to FIG. 4C schematically illustrate three frames in the first scene of a video file according to an embodiment of the present disclosure.
FIG. 4D to FIG. 4F schematically illustrate three frames in the second scene of a video file according to an embodiment of the present disclosure.
As can be seen from FIG. 4A to FIG. 4C, the object of interest of the first scene is the person in the foreground turning his head backwards. As can be seen from FIG. 4D to FIG. 4F, the object of interest of the second scene is the person standing behind. When dividing the scenes, whether each frame belongs to the image frames corresponding to the first scene can be determined according to the degree of change of the foreground person's head, where the degree of head change can be characterized by the area-change data of the face contour, and the face contour can be obtained by means of face recognition. For example, for a frame whose previous frame has been determined to belong to the first scene, when the degree of correlation between the head-change degree of the foreground person in that frame and the head-change degree in the previous frame is above the preset threshold, the frame is determined to belong, together with the previous frame, to the image frames corresponding to the first scene; otherwise the frame is determined to belong to the image frames corresponding to the second scene. After the scenes have been divided, for the first scene, the person with the smallest depth of field in the at least one frame corresponding to the first scene is determined as the object of interest of the first scene, i.e. the person in the foreground; for the second scene, in the at least one frame corresponding to the second scene, the person with the smallest depth of field other than the object of interest of the first scene is determined as the object of interest of the second scene, i.e. the person standing behind.
The image frames corresponding to each scene in the video file are then processed based on the scene data. For the first scene, in each corresponding frame, the depth-of-field information corresponding to the foreground person is obtained, the coordinate contour of the foreground person is determined from that depth-of-field information, and the region outside the contour is blurred; the processing result is shown in FIG. 4A to FIG. 4C. For the second scene, in each corresponding frame, the depth-of-field information corresponding to the person standing behind is obtained, the coordinate contour of that person is determined from the depth-of-field information, and the region outside the contour is blurred; the processing result is shown in FIG. 4D to FIG. 4F.
FIG. 5 schematically illustrates a block diagram of a video file intelligent processing device according to an embodiment of the present disclosure.
As shown in FIG. 5, the video file intelligent processing device 500 includes an obtaining module 510, an analysis module 520 and a processing module 530. The video file intelligent processing device 500 can execute the method described above with reference to FIG. 2 to FIG. 4F, so as to realize intelligent processing of a video file.
Specifically, the obtaining module 510 is configured to obtain a video file, the video file including at least two frames of images.
The analysis module 520 is configured to obtain, based on intelligent analysis of the video file, the scene data of at least one scene presented by the video file through the at least two frames of images.
The processing module 530 is configured to process, based on the scene data, a first group of image frames corresponding to a first scene among the at least one scene, so that the first scene presented when the video file is played highlights a region of interest, the region of interest being presented through the processed first group of image frames.
It can be seen that the device shown in FIG. 5 distinguishes, through intelligent analysis of a video file, the different scenes presented by the video file and obtains the corresponding scene data, and then processes the group of image frames corresponding to each scene based on the scene data, so that each scene presented when the video file is played highlights its corresponding region of interest. When watching the video, the user can thus more naturally follow, through the highlighted parts, the object of interest shown in each scene, and, following the scene switches, naturally and smoothly shift attention to the object of interest of each scene in the video. In other words, intelligent processing of the video file realizes intelligent video playback and meets the user's video-viewing needs.
In an embodiment of the present disclosure, the analysis module 520 processing, based on the scene data, the first group of image frames corresponding to the first scene among the at least one scene includes: the analysis module 520 being configured to process, based on the scene data, the group of image frames corresponding to each of the at least one scene, the first scene being one of the at least one scene.
FIG. 6 schematically illustrates a block diagram of a video file intelligent processing device according to another embodiment of the present disclosure.
As shown in FIG. 6, the video file intelligent processing device 600 includes the obtaining module 510, the analysis module 520, the processing module 530, a trigger module 540 and a preprocessing module 550.
The obtaining module 510, the analysis module 520 and the processing module 530 have been described above; repeated parts are not described again.
The trigger module 540 is configured to obtain a first trigger operation, the first trigger operation being used to indicate outputting the video file in a normal mode; and/or to obtain a second trigger operation, the second trigger operation being used to indicate outputting the video file in an enhanced mode, wherein outputting the video file in the enhanced mode includes at least the processed image frames.
In an embodiment of the present disclosure, the analysis module 520 obtaining, based on the intelligent analysis of the video file, the scene data of the at least one scene presented by the video file through the at least two frames of images includes: the analysis module 520 being configured to analyze the at least two frames of images in the video file and to group the images that satisfy a predetermined condition into one group of image frames, wherein, for any frame, when the degree of correlation between that frame and the previous frame is above a preset threshold, the frame is assigned to the group to which the previous frame belongs. One group of image frames corresponds to one scene, and the image information data of at least one frame in the group is the scene data of the corresponding scene.
On this basis, as an optional embodiment, the preprocessing module 550 is configured to determine, for any scene, in at least one frame of the image frames corresponding to the scene, an object that satisfies a predetermined rule, other than the object of interest of the previous scene, as the object of interest of the scene. The predetermined rule includes at least one of the following: a predetermined depth-of-field range, and/or a predetermined target-object recognition parameter range.
In an embodiment of the present disclosure, the processing module 530 processing, based on the scene data, the first group of image frames corresponding to the first scene among the at least one scene includes: the processing module 530 being configured to, for any frame in the first group of image frames corresponding to the first scene, take the region corresponding to the object of interest of the first scene as a region of interest and process that frame based on the region of interest.
Specifically, the processing module 530 processing any frame based on the region of interest includes: the processing module 530 being configured to obtain the depth of field of the region of interest and to blur the objects in the image other than those at the depth of field of the region of interest.
It should be noted that, for the embodiments of the modules/units/sub-units in the device part, the implementation, the technical problem solved, the function realized and the technical effect achieved are respectively the same as or similar to those of the corresponding steps in the method part, and details are not described here again.
Any number of the modules, sub-modules, units and sub-units according to the embodiments of the present disclosure, or at least part of the functions of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units and sub-units according to the embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units and sub-units according to the embodiments of the present disclosure may be at least partly implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system in a package or an application-specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of the three implementation manners of software, hardware and firmware or by an appropriate combination of any of them. Alternatively, one or more of the modules, sub-modules, units and sub-units according to the embodiments of the present disclosure may be at least partly implemented as a computer program module which, when run, can execute the corresponding function.
For example, any number of the obtaining module 510, the analysis module 520, the processing module 530, the trigger module 540 and the preprocessing module 550 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functions of one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining module 510, the analysis module 520, the processing module 530, the trigger module 540 and the preprocessing module 550 may be at least partly implemented as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system in a package or an application-specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of the three implementation manners of software, hardware and firmware or by an appropriate combination of any of them. Alternatively, at least one of the obtaining module 510, the analysis module 520, the processing module 530, the trigger module 540 and the preprocessing module 550 may be at least partly implemented as a computer program module which, when run, can execute the corresponding function.
FIG. 7 schematically illustrates a block diagram of an electronic device adapted to implement the method described above according to an embodiment of the present disclosure. The electronic device shown in FIG. 7 is only an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 700 includes a processor 710 and a computer-readable storage medium 720. The electronic device 700 can execute the method according to the embodiments of the present disclosure.
Specifically, the processor 710 may include, for example, a general-purpose microprocessor, an instruction-set processor and/or a related chipset and/or a special-purpose microprocessor (for example, an application-specific integrated circuit (ASIC)), etc. The processor 710 may also include an on-board memory for caching purposes. The processor 710 may be a single processing unit or multiple processing units for executing the different actions of the method flow according to the embodiments of the present disclosure.
The computer-readable storage medium 720 may be, for example, any medium that can contain, store, communicate, propagate or transmit instructions. For example, the readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus, device or propagation medium. Specific examples of the readable storage medium include: a magnetic storage device such as a magnetic tape or a hard disk (HDD); an optical storage device such as a compact disc (CD-ROM); a memory such as a random access memory (RAM) or a flash memory; and/or a wired/wireless communication link.
The computer-readable storage medium 720 may include a computer program 721, which may include code/computer-executable instructions that, when executed by the processor 710, cause the processor 710 to execute the method according to the embodiments of the present disclosure or any variation thereof.
The computer program 721 may be configured to have computer program code including, for example, computer program modules. For example, in an example embodiment, the code in the computer program 721 may include one or more program modules, for example including module 721A, module 721B, .... It should be noted that the division manner and number of the modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, and when these combinations of program modules are executed by the processor 710, the processor 710 executes the method according to the embodiments of the present disclosure or any variation thereof.
According to an embodiment of the present invention, at least one of the obtaining module 510, the analysis module 520, the processing module 530, the trigger module 540 and the preprocessing module 550 may be implemented as a computer program module described with reference to FIG. 7, which, when executed by the processor 710, can realize the corresponding operations described above.
In the embodiments disclosed in the present application, any video file has at least a normal mode and an enhanced mode when played. In the normal mode the video file is simply decoded and output for display, without further analysis. In the enhanced mode, further image processing on the decoded video frames is required before output, and the processed frames are then output. For example, the processing in the enhanced mode may be enhancement of the object of interest in the video file. That is, in the process of outputting the video file in the enhanced mode, at least when an image frame containing the object of interest is output, the object of interest has higher visibility relative to the other objects in that frame; regardless of whether the object of interest is in the foreground of the frame, even an object of interest located in the background is clearer relative to the other objects. The object of interest may be an object in the video file (for example, an actor, a cartoon character or an article) selected by the user on an interactive interface provided in the enhanced mode. Alternatively, the object of interest in the enhanced mode may be determined by analyzing the parameter information of the video file at recording time; for example, under different scenes, the object located in the foreground as determined from the depth of field is the object of interest, or the object determined from the center position of each scene is the object of interest. In this way, during playback of the video file, the scene is dynamically analyzed based on AI to determine the object of interest in the current scene, so that the background other than the object of interest is blurred.
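The two playback paths described above can be sketched minimally, under the assumption that the enhancement (e.g. per-scene background blur around the object of interest) is a callable applied per decoded frame; the names and string stand-ins for frames are illustrative only.

```python
def render_frame(frame, mode, enhance):
    """Output path for playback: normal mode outputs the decoded frame
    as-is; enhanced mode applies the per-scene processing first."""
    if mode == "enhanced":
        return enhance(frame)
    return frame


# The same decoded frame takes a different path in each mode.
enhanced = render_frame("frame", "enhanced", lambda f: f + "+blur")
normal = render_frame("frame", "normal", lambda f: f + "+blur")
# enhanced == "frame+blur", normal == "frame"
```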
The present disclosure also provides a computer-readable medium, which may be included in the device/apparatus/system described in the above embodiments, or may exist separately without being assembled into the device/apparatus/system. The computer-readable medium carries one or more programs which, when executed, implement the method according to the embodiments of the present disclosure.
According to an embodiment of the present disclosure, the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program which may be used by or in connection with an instruction-execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction-execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, radio-frequency signal, etc., or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment or part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in a block diagram or flowchart, and combinations of blocks in a block diagram or flowchart, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
Those skilled in the art will understand that the features described in the embodiments and/or claims of the present disclosure may be combined in various ways, even if such combinations are not explicitly described in the present disclosure. In particular, without departing from the spirit or teaching of the present disclosure, the features described in the embodiments and/or claims of the present disclosure may be combined in various ways. All such combinations fall within the scope of the present disclosure.
Although the present disclosure has been shown and described with reference to certain example embodiments thereof, those skilled in the art should understand that various changes in form and detail may be made to the present disclosure without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Therefore, the scope of the present disclosure should not be limited to the above embodiments, but should be determined not only by the appended claims but also by their equivalents.

Claims (10)

1. A video file intelligent processing method, comprising:
obtaining a video file, the video file including at least two frames of images;
obtaining, based on intelligent analysis of the video file, scene data of at least one scene presented by the video file through the at least two frames of images; and
Based on contextual data processing first group of picture frame corresponding with the first scene at least one described scene, so that The prominent region-of-interest of first scene that the video file is presented when playing is obtained, the region-of-interest is to pass through processing What first group of picture frame afterwards was presented.
2. The method according to claim 1, wherein processing, based on the scene data, the first group of image frames corresponding to the first scene among the at least one scene comprises:
processing, based on the scene data, a group of image frames corresponding to each of the at least one scene, the first scene being one of the at least one scene.
3. The method according to claim 2, further comprising:
acquiring a first trigger operation, the first trigger operation instructing the video file to be output in a normal mode; and/or acquiring a second trigger operation, the second trigger operation instructing the video file to be output in an enhanced mode, wherein outputting the video file in the enhanced mode includes at least the processed image frames.
4. The method according to claim 1, wherein performing intelligent analysis of the video file to obtain the scene data for the at least one scene presented by the at least two frames of images comprises:
analyzing the at least two frames of images and grouping images that satisfy a predetermined condition into one group of image frames, wherein, for any frame of image, when the degree of correlation between the frame and the previous frame is higher than a preset threshold, the frame is assigned to the group to which the previous frame belongs;
wherein each group of image frames corresponds to one scene, and the image information data of at least one frame in a group of image frames serves as the scene data of the corresponding scene.
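The grouping rule recited in claim 4 can be sketched in a few lines of code. This is a simplified illustration only, not the patented implementation: frames are flattened to lists of pixel intensities, and "correlation" is a toy normalized-difference similarity, whereas a real system might compare histograms or matched features.

```python
def correlation(frame_a, frame_b):
    """Toy similarity in [0, 1]: 1.0 for identical frames."""
    diff = sum(abs(a - b) for a, b in zip(frame_a, frame_b))
    max_diff = 255 * len(frame_a)
    return 1.0 - diff / max_diff

def group_frames(frames, threshold=0.9):
    """Split a frame sequence into groups (scenes).

    A frame joins the previous frame's group when their correlation
    exceeds the preset threshold; otherwise a new group (scene) starts.
    """
    groups = [[frames[0]]]
    for prev, cur in zip(frames, frames[1:]):
        if correlation(cur, prev) > threshold:
            groups[-1].append(cur)   # same scene as the previous frame
        else:
            groups.append([cur])     # correlation dropped: scene cut
    return groups
```

For example, two near-identical dark frames followed by two near-identical bright frames would be split into two groups at the brightness jump, each group standing for one scene whose frames then share scene data.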
5. The method according to claim 4, further comprising:
for any scene, determining, in at least one frame of the image frames corresponding to the scene, an object that satisfies a predefined rule, other than the object of interest of the previous scene, as the object of interest of the scene;
wherein the predefined rule includes at least one of: a predetermined depth-of-field range, and/or a predetermined target-object recognition parameter range.
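The selection step in claim 5 could be sketched as follows, under stated assumptions: objects are hypothetical dictionaries with an `id` and a `depth`, and the predefined rule is modeled only as a predetermined depth-of-field range; the exclusion of the previous scene's object of interest is what shifts attention to something new when the scene changes.

```python
def pick_object_of_interest(objects, prev_object_id, depth_range):
    """Return the first object satisfying the rule, excluding the
    previous scene's object of interest; None if no candidate fits."""
    lo, hi = depth_range
    for obj in objects:
        if obj["id"] == prev_object_id:
            continue  # exclude the previous scene's object of interest
        if lo <= obj["depth"] <= hi:  # predetermined depth-of-field rule
            return obj
    return None
```

Usage: with two detected people at depths 2.0 m and 2.5 m and a depth range of (1.0, 3.0), if the first person was the previous scene's object of interest, the second is selected for the new scene.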
6. The method according to claim 5, wherein processing, based on the scene data, the first group of image frames corresponding to the first scene among the at least one scene comprises:
for any frame of image in the first group of image frames corresponding to the first scene, taking the region corresponding to the object of interest of the first scene as the region of interest, and processing the image based on the region of interest.
7. The method according to claim 6, wherein processing any frame of image based on the region of interest comprises:
obtaining the depth of field of the region of interest; and
applying blurring to objects in the image outside that depth of field.
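A minimal sketch of the blurring step in claim 7, assuming a per-pixel depth map and a tolerance around the region of interest's depth of field: pixels whose depth falls outside the focus range are blurred with a naive 1-D box filter, while in-focus pixels are left untouched. A production implementation would instead apply a 2-D Gaussian blur (e.g. via OpenCV) over the full image.

```python
def blur_outside_depth(pixels, depths, focus_depth, tolerance=0.5):
    """Blur pixels whose depth differs from focus_depth by > tolerance.

    pixels: 1-D list of intensities; depths: matching per-pixel depths.
    """
    out = list(pixels)
    n = len(pixels)
    for i in range(n):
        if abs(depths[i] - focus_depth) > tolerance:
            # naive box blur over a 3-pixel neighborhood
            lo, hi = max(0, i - 1), min(n, i + 2)
            out[i] = sum(pixels[lo:hi]) // (hi - lo)
    return out
```

This is the mechanism by which the region of interest stays sharp while everything at other depths is softened, so that the played-back video visually highlights the object of interest.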
8. An intelligent video file processing apparatus, comprising:
an acquisition module configured to acquire a video file, the video file comprising at least two frames of images;
an analysis module configured to perform intelligent analysis of the video file to obtain scene data for at least one scene presented by the at least two frames of images; and
a processing module configured to process, based on the scene data, a first group of image frames corresponding to a first scene among the at least one scene, so that a region of interest of the first scene is highlighted when the video file is played, the region of interest being presented by the processed first group of image frames.
9. The apparatus according to claim 8, wherein the analysis module processing, based on the scene data, the first group of image frames corresponding to the first scene among the at least one scene comprises:
the analysis module being configured to process, based on the scene data, a group of image frames corresponding to each of the at least one scene, the first scene being one of the at least one scene.
10. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the intelligent video file processing method according to any one of claims 1 to 7.
CN201810705480.XA 2018-06-29 2018-06-29 Intelligent video file processing method and device Active CN108960130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810705480.XA CN108960130B (en) 2018-06-29 2018-06-29 Intelligent video file processing method and device


Publications (2)

Publication Number Publication Date
CN108960130A true CN108960130A (en) 2018-12-07
CN108960130B CN108960130B (en) 2021-11-16

Family

ID=64484748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810705480.XA Active CN108960130B (en) 2018-06-29 2018-06-29 Intelligent video file processing method and device

Country Status (1)

Country Link
CN (1) CN108960130B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188218A (en) * 2019-06-28 2019-08-30 Lenovo (Beijing) Ltd Image processing method and device and electronic equipment
CN111918025A (en) * 2020-06-29 2020-11-10 Peking University Scene video processing method and device, storage medium and terminal

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995012197A1 (en) * 1993-10-29 1995-05-04 Kabushiki Kaisha Toshiba Multi-scene recording medium and reproduction apparatus
CN102821291A (en) * 2011-06-08 2012-12-12 Sony Corporation Image processing apparatus, image processing method, and program
CN104462099A (en) * 2013-09-16 2015-03-25 Lenovo (Beijing) Ltd Information processing method and electronic equipment
CN104581380A (en) * 2014-12-30 2015-04-29 Lenovo (Beijing) Ltd Information processing method and mobile terminal
CN107046651A (en) * 2016-02-05 2017-08-15 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for presenting a display object in a video
CN107509043A (en) * 2017-09-11 2017-12-22 Guangdong OPPO Mobile Telecommunications Corp., Ltd Image processing method and device
CN107635093A (en) * 2017-09-18 2018-01-26 Vivo Mobile Communication Co., Ltd Image processing method, mobile terminal and computer-readable storage medium
CN108076286A (en) * 2017-11-30 2018-05-25 Guangdong OPPO Mobile Telecommunications Corp., Ltd Image blurring method, apparatus, mobile terminal and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOHANNES STALLKAM et al.: "Video-based Face Recognition on Real-World Data", 2007 IEEE 11th International Conference on Computer Vision *
XIANG Jinhai: "Research on Moving Object Detection and Tracking in Video", China Doctoral Dissertations Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN108960130B (en) 2021-11-16

Similar Documents

Publication Publication Date Title
US10762351B2 (en) Methods and systems of spatiotemporal pattern recognition for video content development
US11482192B2 (en) Automated object selection and placement for augmented reality
CN109145784B (en) Method and apparatus for processing video
CN110119757A (en) Model training method, video category detection method, device, electronic equipment and computer-readable medium
US9460351B2 (en) Image processing apparatus and method using smart glass
CN107633441A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
CN109803175A (en) Method for processing video frequency and device, equipment, storage medium
US11748870B2 (en) Video quality measurement for virtual cameras in volumetric immersive media
US20130272609A1 (en) Scene segmentation using pre-capture image motion
CN102067615A (en) Image generating method and apparatus and image processing method and apparatus
CN105635712A (en) Augmented-reality-based real-time video recording method and recording equipment
CN103136746A (en) Image processing device and image processing method
US20210165481A1 (en) Method and system of interactive storytelling with probability-based personalized views
CN103918010B (en) Method, device and computer program product for generating the animated image being associated with content of multimedia
CN108305308A (en) It performs under the line of virtual image system and method
CN110035236A (en) Image processing method, device and electronic equipment
CN109348277A (en) Move pixel special video effect adding method, device, terminal device and storage medium
CN104091608B (en) A kind of video editing method and device based on ios device
CN108364029A (en) Method and apparatus for generating model
WO2018057449A1 (en) Auto-directing media construction
CN108960130A (en) Video file intelligent processing method and device
CN104113682B (en) A kind of image acquiring method and electronic equipment
CN109862019A (en) Data processing method, device and system
CN104185008B (en) A kind of method and apparatus of generation 3D media datas
CN108985275A (en) The display method for tracing and device of augmented reality equipment and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant