CN108986117A - Video image segmentation method and device

Info

Publication number
CN108986117A
Authority
CN
China
Prior art keywords
image
segmentation
information
segmentation region
target object
Legal status
Granted
Application number
CN201810802302.9A
Other languages
Chinese (zh)
Other versions
CN108986117B (en)
Inventor
曾伟
刘浩
王爽
王昊
刘琦
聂冉
李泉材
李佳明
Current Assignee
Alibaba China Co Ltd
Original Assignee
Beijing Youku Technology Co Ltd
Application filed by Beijing Youku Technology Co Ltd
Priority to CN201810802302.9A
Publication of CN108986117A
Application granted
Publication of CN108986117B
Legal status: Active

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • H: Electricity
    • H04: Electric Communication Technique
    • H04N: Pictorial Communication, e.g. Television
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence

Abstract

The present disclosure relates to a video image segmentation method and device. The method includes: identifying a target object in an image to be segmented, the image to be segmented being a video frame image in a video; determining, in the image to be segmented, a segmentation region corresponding to the identified target object; generating a segmented video image of the image to be segmented according to the segmentation region; and controlling a terminal to play the segmented video image. Embodiments of the present disclosure can generate, from a single image to be segmented, segmented video images for different target objects or of different shot types, so as to meet different viewing demands and reduce the resource requirements of the shooting and editing stages.

Description

Video image segmentation method and device
Technical field
The present disclosure relates to the technical field of image processing, and in particular to a video image segmentation method and device.
Background
In traditional video playback methods, video images of different shot types usually need to be shot with different capture devices, or shot repeatedly with a single capture device; during playback, post-production editing is also needed to select one of the shot-type versions for broadcast as required. Resources are therefore wasted in both the shooting stage and the editing stage of video production.
Summary of the invention
In view of this, the present disclosure proposes a video image segmentation method and device, to solve the problem of resource waste when shooting or editing video.
According to an aspect of the present disclosure, a video image segmentation method is provided. The method includes:
identifying a target object in an image to be segmented, the image to be segmented being a video frame image in a video;
determining, in the image to be segmented, a segmentation region corresponding to the identified target object;
generating a segmented video image of the image to be segmented according to the segmentation region; and
controlling a terminal to play the segmented video image.
In one possible implementation, identifying the target object in the image to be segmented includes:
identifying the target object in the image to be segmented using a deep learning algorithm.
In one possible implementation, determining, in the image to be segmented, the segmentation region corresponding to the identified target object includes:
determining, in the image to be segmented and according to a composition rule, the segmentation region corresponding to the identified target object using a deep learning algorithm.
In one possible implementation, determining, in the image to be segmented, the segmentation region corresponding to the identified target object includes:
determining, in the image to be segmented, a segmentation region that corresponds to the identified target object and matches a specified shot type.
In one possible implementation, determining, in the image to be segmented, the segmentation region corresponding to the identified target object includes:
determining, in the image to be segmented, a segmentation region that includes the target object.
In one possible implementation, determining, in the image to be segmented, the segmentation region that includes the target object includes:
determining, in the image to be segmented, at least two segmentation regions that include the target object, wherein, among the at least two segmentation regions, the image corresponding to a segmentation region of larger area contains the image corresponding to a segmentation region of smaller area.
In one possible implementation, determining, in the image to be segmented, the segmentation region corresponding to the identified target object includes:
identifying a target part on the target object; and
determining, in the image to be segmented, a segmentation region corresponding to the target part.
In one possible implementation, the method further includes:
determining resolution information of the image to be segmented; and
determining segmentation size information of the image to be segmented according to the resolution information;
and determining, in the image to be segmented, the segmentation region corresponding to the identified target object includes:
determining, in the image to be segmented, a segmentation region that corresponds to the identified target object and satisfies the segmentation size information.
In one possible implementation, the method further includes:
determining sharpness information of the image to be segmented;
and determining the segmentation size information of the image to be segmented according to the resolution information includes:
determining the segmentation size information of the image to be segmented according to the resolution information and the sharpness information.
In one possible implementation, the method further includes:
determining first playback display information, the first playback display information including screen physical size information and/or playback resolution information;
wherein the segmented video image played by the terminal satisfies the screen physical size information and/or the playback resolution information.
In one possible implementation, the method further includes:
determining second playback display information, the second playback display information including landscape display information or portrait display information;
wherein the segmented video image played by the terminal satisfies the landscape display information or the portrait display information.
In one possible implementation, generating the segmented video image of the image to be segmented according to the segmentation region includes:
determining coordinate information of the segmentation region in the image to be segmented; and
generating the segmented video image of the image to be segmented according to the coordinate information.
In one possible implementation, controlling the terminal to play the segmented video image includes:
determining a weight of each segmented video image in the image to be segmented;
determining a recommendation result among the segmented video images according to the weights; and
controlling the terminal to play the recommendation result.
In one possible implementation, controlling the terminal to play the segmented video image includes:
obtaining target object identification information and/or shot-type selection information;
determining a selection result among the segmented video images according to the target object identification information and/or the shot-type selection information; and
controlling the terminal to play the selection result.
According to an aspect of the present disclosure, a video image segmentation device is provided. The device includes:
a target object recognition module, configured to identify a target object in an image to be segmented, the image to be segmented being a video frame image in a video;
a segmentation region determining module, configured to determine, in the image to be segmented, a segmentation region corresponding to the identified target object;
a segmented video image generation module, configured to generate a segmented video image of the image to be segmented according to the segmentation region; and
a playing module, configured to control a terminal to play the segmented video image.
In one possible implementation, the target object recognition module includes:
a first target object recognition sub-module, configured to identify the target object in the image to be segmented using a deep learning algorithm.
In one possible implementation, the segmentation region determining module includes:
a first segmentation region determining sub-module, configured to determine, in the image to be segmented and according to a composition rule, the segmentation region corresponding to the identified target object using a deep learning algorithm.
In one possible implementation, the segmentation region determining module includes:
a second segmentation region determining sub-module, configured to determine, in the image to be segmented, a segmentation region that corresponds to the identified target object and matches a specified shot type.
In one possible implementation, the segmentation region determining module includes:
a third segmentation region determining sub-module, configured to determine, in the image to be segmented, a segmentation region that includes the target object.
In one possible implementation, the third segmentation region determining sub-module is configured to:
determine, in the image to be segmented, at least two segmentation regions that include the target object, wherein, among the at least two segmentation regions, the image corresponding to a segmentation region of larger area contains the image corresponding to a segmentation region of smaller area.
In one possible implementation, the segmentation region determining module includes:
a target part identification sub-module, configured to identify a target part on the target object; and
a fourth segmentation region determining sub-module, configured to determine, in the image to be segmented, a segmentation region corresponding to the target part.
In one possible implementation, the device further includes:
a resolution information determining module, configured to determine resolution information of the image to be segmented; and
a segmentation size information determining module, configured to determine segmentation size information of the image to be segmented according to the resolution information;
and the segmentation region determining module includes:
a fifth segmentation region determining sub-module, configured to determine, in the image to be segmented, a segmentation region that corresponds to the identified target object and satisfies the segmentation size information.
In one possible implementation, the device further includes:
a sharpness information determining module, configured to determine sharpness information of the image to be segmented;
and the segmentation size information determining module includes:
a first segmentation information determining sub-module, configured to determine the segmentation size information of the image to be segmented according to the resolution information and the sharpness information.
In one possible implementation, the device further includes:
a first playback display information determining module, configured to determine first playback display information, the first playback display information including screen physical size information and/or playback resolution information;
wherein the segmented video image played by the terminal satisfies the screen physical size information and/or the playback resolution information.
In one possible implementation, the device further includes:
a second playback display information determining module, configured to determine second playback display information, the second playback display information including landscape display information or portrait display information;
wherein the segmented video image played by the terminal satisfies the landscape display information or the portrait display information.
In one possible implementation, the segmented video image generation module includes:
a coordinate information determining sub-module, configured to determine coordinate information of the segmentation region in the image to be segmented; and
a first segmented video image generation sub-module, configured to generate the segmented video image of the image to be segmented according to the coordinate information.
In one possible implementation, the playing module includes:
a weight determining sub-module, configured to determine a weight of each segmented video image in the image to be segmented;
a recommendation result determining sub-module, configured to determine a recommendation result among the segmented video images according to the weights; and
a recommendation result playing sub-module, configured to control the terminal to play the recommendation result.
In one possible implementation, the playing module includes:
a selection sub-module, configured to obtain target object identification information and/or shot-type selection information;
a selection result determining sub-module, configured to determine a selection result among the segmented video images according to the target object identification information and/or the shot-type selection information; and
a selection result playing sub-module, configured to control the terminal to play the selection result.
According to an aspect of the present disclosure, a video image segmentation device is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the above video image segmentation method.
According to an aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the above video image segmentation method.
In the embodiments of the present disclosure, the target object in the image to be segmented is identified, a segmentation region corresponding to the target object is determined in the image to be segmented, a segmented video image is generated according to the segmentation region, and a terminal is controlled to play the segmented video image. By determining different target objects and determining different segmentation regions corresponding to the target objects, the embodiments of the present disclosure can generate, from a single image to be segmented, segmented video images for different target objects or of different shot types, thereby meeting different viewing demands and reducing the resource requirements of the shooting and editing stages.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are included in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present disclosure together with the specification, and serve to explain the principles of the present disclosure.
Fig. 1 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 2 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 3 shows a schematic diagram of determining a segmentation region according to a composition rule in a video image segmentation method according to an embodiment of the present disclosure;
Fig. 4 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 5 shows a schematic diagram of different shot types in a video image segmentation method according to an embodiment of the present disclosure;
Fig. 6 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 7 shows a schematic diagram of segmentation regions in a video image segmentation method according to an embodiment of the present disclosure;
Fig. 8 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 9 shows a schematic diagram of segmentation regions corresponding to target parts in a video image segmentation method according to an embodiment of the present disclosure;
Fig. 10 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 11 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 12 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 13 shows a schematic diagram of landscape and portrait display in a video image segmentation method according to an embodiment of the present disclosure;
Fig. 14 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 15 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 16 shows a schematic diagram of weights in a video image segmentation method according to an embodiment of the present disclosure;
Fig. 17 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure;
Fig. 18 shows a schematic diagram of segmented video images in a video image segmentation method according to an embodiment of the present disclosure;
Fig. 19 shows a schematic diagram of a video image segmentation device according to an embodiment of the present disclosure;
Fig. 20 shows a schematic diagram of a video image segmentation device according to an embodiment of the present disclosure;
Fig. 21 is a block diagram of a device for video image segmentation according to an exemplary embodiment;
Fig. 22 is a block diagram of a device for video image segmentation according to an exemplary embodiment.
Detailed description of the embodiments
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. Identical reference numerals in the drawings denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" should not be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art will understand that the present disclosure can also be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Fig. 1 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 1, the video image segmentation method includes:
Step S10: identifying a target object in an image to be segmented, the image to be segmented being a video frame image in a video.
In one possible implementation, the image to be segmented may include a video frame image in a video shot by any video capture device. The video may be a live video stream or a recorded video. The image to be segmented may be the original video frame image shot by the video capture device, or a video frame image obtained by preprocessing the original video frame image; the preprocessing may include noise reduction, resolution adjustment, and the like.
Any object in the image to be segmented may be determined as the target object as required. For example, the target object may be a person, an animal, a plant, a vehicle, or the like. The target object may also be a sub-part of such an object, for example a person's face, a person's leg, or an animal's face. The target object may include one object or multiple objects. For example, image A to be segmented contains two people in conversation, Zhang San and Li Si. The target object may be Zhang San or Li Si, and Zhang San or Li Si can be identified in image A to be segmented. The target object may also be Zhang San's face or Li Si's face, and the corresponding face can be identified in image A to be segmented.
Technologies such as image recognition may be used to identify the target object in the image to be segmented.
Step S20: determining, in the image to be segmented, a segmentation region corresponding to the identified target object.
In one possible implementation, the segmentation region may have any shape, such as a rectangle, and may be a region of a set area. The areas of the segmentation regions corresponding to different target objects in the image to be segmented may be the same or different. For example, for the target objects Zhang San and Li Si identified in image A to be segmented, segmentation region 1 corresponding to Zhang San and segmentation region 2 corresponding to Li Si can be determined according to a set area. The areas of segmentation region 1 and segmentation region 2 may be the same or different.
The area of a segmentation region may also be determined according to the area occupied by the target object in the image to be segmented. For example, in image A to be segmented, according to the areas occupied by Zhang San and Li Si in the image, the area of segmentation region 1 corresponding to Zhang San is determined to be 40% of the total area of image A to be segmented, and the area of segmentation region 2 corresponding to Li Si is determined to be 20% of the total area of image A to be segmented.
Multiple corresponding segmentation regions may be determined for the target object as a whole, or according to one or more sub-parts of the target object.
The target object, or a sub-part of the target object, may be located at a set position of the segmentation region. For example, it may be located at the center of the segmentation region, or at the lower middle of the segmentation region.
Multiple corresponding segmentation regions may be determined for one target object to meet different viewing demands. For example, suppose the set areas of the segmentation regions are area a, area b, and area c, which differ in size, and the target objects identified in image A to be segmented are Zhang San and Li Si. The segmentation regions determined for Zhang San may include segmentation region 1a of area a, segmentation region 1b of area b, and segmentation region 1c of area c; the segmentation regions determined for Li Si may include segmentation region 2a of area a, segmentation region 2b of area b, and segmentation region 2c of area c.
Step S30: generating a segmented video image of the image to be segmented according to the segmentation region.
In one possible implementation, the segmentation region may be used to crop an image out of the image to be segmented, and the cropped image is determined as the segmented video image.
Multiple segmented video images may be generated for one target object in the image to be segmented. The multiple segmented video images of one target object may be used to show one or more sub-parts of the target object, or to show the target object in different shot types.
Step S40: controlling a terminal to play the segmented video image.
In one possible implementation, the terminal may be controlled as required to play the segmented video image, and one or more segmented video images may be played on the screen of the terminal. For example, the segmented video images of one or more target objects may be played. For image A to be segmented, the segmented video image of Zhang San or Li Si may be played, or multiple segmented video images of Zhang San and/or Li Si may be played.
In one possible implementation, the video image segmentation method in the embodiments of the present disclosure may be performed on the server side, on the terminal side that plays the video, or jointly by the server and the terminal. When it is performed jointly by the server and the terminal, the server side may send the determined segmentation regions to the terminal in the form of a file together with the image to be segmented, and the terminal generates and plays the segmented video image according to the coordinate set of the segmentation regions and the image to be segmented. The server side may also generate an independent segmented video based on the segmented video images and provide the segmented video to the terminal for playback. The present disclosure does not limit the executing entity of the video image segmentation method.
In this embodiment, the target object in the image to be segmented is identified, a segmentation region corresponding to the target object is determined in the image to be segmented, a segmented video image is generated according to the segmentation region, and the terminal is controlled to play the segmented video image. By determining different target objects and determining different segmentation regions corresponding to the target objects, segmented video images for different target objects or of different shot types can be generated from a single image to be segmented, thereby meeting different viewing demands and reducing the resource requirements of the shooting and editing stages.
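As a rough illustration only, the following Python sketch shows how steps S10 to S40 can be chained together for a single frame; the callables detect_objects, choose_region, and send_to_terminal are hypothetical interfaces, not components defined by the present disclosure.

```python
# Sketch of steps S10-S40 with hypothetical helpers; not the patented implementation.
def segment_video_frame(frame, detect_objects, choose_region, send_to_terminal):
    """frame: H x W x 3 array; the three callables are assumed external interfaces."""
    # Step S10: identify target objects in the video frame image to be segmented.
    targets = detect_objects(frame)                       # e.g. a list of bounding boxes
    # Step S20: determine a segmentation region for each identified target.
    regions = [choose_region(frame, target) for target in targets]
    # Step S30: crop the frame to produce one segmented video image per region.
    segments = [frame[y0:y1, x0:x1] for (x0, y0, x1, y1) in regions]
    # Step S40: control the terminal to play the chosen segmented video image(s).
    send_to_terminal(segments)
    return segments
```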
Fig. 2 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 2, step S10 of the video image segmentation method includes:
Step S11: identifying the target object in the image to be segmented using a deep learning algorithm.
In one possible implementation, a deep learning algorithm forms more abstract high-level representations, attribute categories, or features by combining low-level features, so as to discover distributed feature representations of the data. Deep learning is a machine learning method based on representation learning of data. An image can be represented in various ways, for example as a vector of pixel intensity values, or more abstractly as a series of edges, regions of specific shapes, and so on. A deep learning algorithm can learn a task (such as face recognition) from sample images using certain specific representations, and then perform the learned task on an image.
A deep neural network is a neural network based on a deep learning algorithm. The target object can be identified in the image to be segmented based on a deep neural network.
Thanks to the powerful processing capability of deep learning algorithms, this embodiment can perform video image segmentation on real-time, online live video. After video image segmentation is performed on the live picture shot by the capture device, segmented video images are generated and, after selection as required, broadcast in the live stream. In this way, segmented video images of multiple shot types can be obtained and broadcast from the images shot by the capture device, which avoids resource waste during live video shooting and editing, enhances the expressiveness of the live picture, and meets the real-time requirements of live streaming.
In this embodiment, a deep learning algorithm can be used to identify the target object in the image to be segmented. Deep learning algorithms have strong processing capability and produce accurate results, so the recognition accuracy and recognition efficiency of the target object in the video image segmentation method can be improved.
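For illustration, the sketch below identifies candidate target objects in one frame with a pretrained torchvision detector; the choice of Faster R-CNN and the score threshold are assumptions, since the disclosure does not prescribe any particular network.

```python
# Illustrative only: detecting candidate target objects in a single video frame.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_targets(frame_rgb, score_threshold=0.7):
    """frame_rgb: H x W x 3 uint8 array. Returns boxes as (x0, y0, x1, y1) lists."""
    with torch.no_grad():
        outputs = model([to_tensor(frame_rgb)])[0]
    keep = outputs["scores"] > score_threshold          # drop low-confidence detections
    return outputs["boxes"][keep].tolist()
```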
In one possible implementation, determining, in the image to be segmented, the segmentation region corresponding to the identified target object includes:
determining, in the image to be segmented and according to a composition rule, the segmentation region corresponding to the identified target object using a deep learning algorithm.
In one possible implementation, a deep neural network is a neural network based on a deep learning algorithm. The segmentation region corresponding to the identified target object can be determined in the image to be segmented based on a deep neural network.
The composition rule may include rules such as the position at which the target object is placed in the segmentation region and the area it occupies. When the segmentation region of the target object is determined according to a composition rule, the picture of the segmented video image obtained from that region is coordinated, complete, and artistically expressive, and meets the viewer's aesthetic expectations.
Fig. 3 shows a schematic diagram of determining a segmentation region according to a composition rule in a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 3, the upper half of Fig. 3 contains two composition rules. Composition rule 1 on the left divides the region into nine sub-regions using four lines arranged in a grid, and the lines intersect at four points. These intersections are the preferred positions for placing the target object during composition. Different weights can be set for the four intersections, and the intersection with the higher weight is the preferred placement position. Composition rule 2 on the right is a composition curve; the target object can be placed according to the regions divided by the curve.
The lower half of Fig. 3 shows three images. The leftmost image shows the target object identified in the image to be segmented; the target object is the face of the person on the right side of the image. The middle image shows the identified target object composed according to composition rule 1: the target object is placed at one of the intersections and the segmentation region is determined accordingly. The third image shows the segmented video image finally obtained according to the segmentation region.
A deep neural network can be trained to compose the image to be segmented according to the target object using the set composition rule and to determine the segmentation region. With a trained deep neural network, the segmentation region can be determined quickly and accurately.
In this embodiment, a deep learning algorithm can be used to quickly and accurately determine, in the image to be segmented and according to a composition rule, the segmentation region corresponding to the identified target object. The determined segmentation region conforms to the composition rule and can satisfy the viewer's aesthetic requirements.
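A minimal sketch of one way to apply the nine-grid ("rule of thirds") composition rule of Fig. 3 with plain geometry rather than a trained network follows; the crop size is assumed to be given and the grid intersection is chosen by the caller.

```python
# Sketch: place a target's centre on a rule-of-thirds intersection of the crop
# (one reading of the nine-grid composition rule in Fig. 3); crop size is assumed.
def thirds_crop(frame_w, frame_h, target_cx, target_cy, crop_w, crop_h, corner=(1, 1)):
    """corner selects one of the four grid intersections: (1,1), (2,1), (1,2), (2,2).
    Assumes the crop fits inside the frame (crop_w <= frame_w, crop_h <= frame_h)."""
    gx, gy = corner
    # The chosen intersection sits at (gx/3, gy/3) of the crop; solve for the crop origin.
    x0 = int(target_cx - crop_w * gx / 3)
    y0 = int(target_cy - crop_h * gy / 3)
    # Clamp the crop so it stays inside the frame.
    x0 = max(0, min(x0, frame_w - crop_w))
    y0 = max(0, min(y0, frame_h - crop_h))
    return x0, y0, x0 + crop_w, y0 + crop_h
```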
Fig. 4 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 4, step S20 of the video image segmentation method includes:
Step S21: determining, in the image to be segmented, a segmentation region that corresponds to the identified target object and matches a specified shot type.
In one possible implementation, shot type refers to the difference in the extent of the subject shown in the capture device's viewfinder, caused by differences in the distance between the capture device and the subject. Shot types are generally divided into five categories, from near to far: close-up (the shoulders of a person and above), close shot (the chest and above), medium shot (the knees and above), full shot (the whole body and the surrounding background), and long shot (the environment in which the subject is located). In a video, alternating between different shot types makes the narration of the plot, the expression of the characters' thoughts and feelings, and the treatment of character relationships more expressive, thereby enhancing the appeal of the video.
Fig. 5 shows a schematic diagram of different shot types in a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 5, when the target object is a person, shots of different types, such as extreme close-up, close-up, full close-up, wide close-up, close shot, medium close shot, medium shot, medium full shot, and full shot, can be marked off with respect to the person's head. Different target objects may have different shot-type divisions.
The specified shot type of each target object in the image to be segmented can be determined as required, for example according to the video content, the viewer's preferences, or a setting.
Different specified shot types can be set for different target objects in the image to be segmented. For example, for the target object Zhang San in image A to be segmented, where Zhang San is the lead role, the specified shot types may be close-up, close shot, and medium shot; for the target object Li Si in image A to be segmented, where Li Si is a supporting role, the specified shot types may be close shot and long shot. The same specified shot types may also be set for the target objects in the image to be segmented; for example, the same specified shot types, close-up and close shot, may be determined for both Zhang San and Li Si.
In this embodiment, by setting specified shot types, segmentation regions corresponding to the specified shot types can be determined in the image to be segmented, so that the generated segmented video images are more targeted and can meet different viewing demands.
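The mapping below from shot type to crop size is only a sketch under assumed scale factors; the disclosure describes shot types qualitatively (close-up through full shot), so the multipliers and aspect ratio are illustrative.

```python
# Sketch: derive crop regions for a few shot types from a detected head box.
# The multipliers below are illustrative assumptions, not values from the patent.
SHOT_SCALE = {            # crop height as a multiple of the head height
    "close_up": 2.0,      # roughly shoulders and above
    "close_shot": 3.5,    # roughly chest and above
    "medium_shot": 6.0,   # roughly knees and above
    "full_shot": 9.0,     # whole body plus some background
}

def shot_region(head_box, shot_type, frame_w, frame_h, aspect=16 / 9):
    hx0, hy0, hx1, hy1 = head_box
    head_h = hy1 - hy0
    crop_h = min(frame_h, int(head_h * SHOT_SCALE[shot_type]))
    crop_w = min(frame_w, int(crop_h * aspect))
    cx = (hx0 + hx1) / 2
    x0 = int(max(0, min(cx - crop_w / 2, frame_w - crop_w)))
    y0 = int(max(0, min(hy0 - 0.15 * crop_h, frame_h - crop_h)))  # keep head near the top
    return x0, y0, x0 + crop_w, y0 + crop_h
```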
Fig. 6 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 6, step S20 of the video image segmentation method includes:
Step S22: determining, in the image to be segmented, a segmentation region that includes the target object.
In one possible implementation, after the target object is identified in the image to be segmented, when the segmentation region corresponding to the target object is determined, the target object is not further divided into sub-parts, and the segmentation region determined for the target object includes the target object as a whole. When the target object has multiple corresponding segmentation regions, each segmentation region includes the whole target object.
The multiple segmentation regions corresponding to the target object may have the same area, and the target object may be located at different positions within these regions of equal area.
The multiple segmentation regions corresponding to the target object may also have different areas, and the target object may be located at different positions, or at the same position, within the regions of different areas.
Different sub-parts of the target object may also be determined as target objects. Corresponding segmentation regions are then determined for the target object and its different sub-parts, so as to generate segmented video images of different shot types for the target object.
In this embodiment, the determined segmentation region includes the target object, so the complete information of the target object can be retained in the generated segmented video image, and segmentation regions can be generated quickly for the target object.
In one possible implementation, step S22 includes:
determining, in the image to be segmented, at least two segmentation regions that include the target object, wherein, among the at least two segmentation regions, the image corresponding to a segmentation region of larger area contains the image corresponding to a segmentation region of smaller area.
Fig. 7 shows a schematic diagram of segmentation regions in a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 7, the head of the person in the image to be segmented is the target object, and three segmentation regions are determined in the image to be segmented: the innermost region with the smallest area is the first segmentation region, the region with the largest area is the third segmentation region, and the region between the first and third segmentation regions is the second segmentation region. As can be seen, the image corresponding to a segmentation region of larger area contains the image corresponding to a segmentation region of smaller area.
In this embodiment, among the multiple segmentation regions determined in the image to be segmented, the image corresponding to a segmentation region of larger area contains the image corresponding to a segmentation region of smaller area. Through this nesting between segmentation regions, the regions can be determined quickly in the image to be segmented, the correlation between the segmentation regions is stronger, and selection during subsequent playback is more convenient.
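A sketch of producing nested segmentation regions as in Fig. 7, where each larger region fully contains the smaller one, is given below; the padding steps are assumed values.

```python
# Sketch: build nested segmentation regions around one target box, each larger
# region fully containing the smaller one (as in Fig. 7). Padding steps are assumed.
def nested_regions(target_box, frame_w, frame_h, paddings=(0.1, 0.4, 0.9)):
    x0, y0, x1, y1 = target_box
    w, h = x1 - x0, y1 - y0
    regions = []
    for p in paddings:                      # increasing padding gives increasing area
        rx0 = max(0, int(x0 - p * w))
        ry0 = max(0, int(y0 - p * h))
        rx1 = min(frame_w, int(x1 + p * w))
        ry1 = min(frame_h, int(y1 + p * h))
        regions.append((rx0, ry0, rx1, ry1))
    return regions                          # regions[0] is contained in regions[1], etc.
```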
Fig. 8 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 8, step S20 of the video image segmentation method includes:
Step S23: identifying a target part on the target object.
In one possible implementation, after the target object is identified in the image to be segmented, one or more target parts on the target object can be further identified as required. For example, if the identified target object is a person, sub-parts such as the person's head, hands, and legs can be determined as target parts, and the target parts are then further identified in the image to be segmented.
A deep learning algorithm may be used to identify the target parts in the image to be segmented.
Step S24: determining, in the image to be segmented, a segmentation region corresponding to the target part.
In one possible implementation, one or more segmentation regions may be determined for one target part. Fig. 9 shows a schematic diagram of segmentation regions corresponding to target parts in a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 9, the upper half of Fig. 9 is the image to be segmented. The target object identified in the image to be segmented is a person; the person's head and hand can further be taken as target parts. After the head and hand are further identified in the image to be segmented, the segmentation regions corresponding to the head and the hand are determined, and the two segmented video images in the lower half of Fig. 9 are generated according to these segmentation regions.
In this embodiment, segmentation regions are determined according to target parts on the target object, and segmented video images for different target parts are generated. Different details of the target object can thus be presented, providing richer shot types for video viewing.
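A sketch of deriving one segmentation region per target part from keypoint positions follows; the keypoint dictionary is an assumed input (for example from a pose estimator), and the fixed square crop size is illustrative.

```python
# Sketch: crop one region per target part (head, hands) from assumed keypoints.
def part_regions(keypoints, frame_w, frame_h, box_size=256):
    """keypoints: dict like {"head": (x, y), "left_hand": (x, y), ...} in pixels."""
    half = box_size // 2
    regions = {}
    for part, (x, y) in keypoints.items():
        x0 = int(max(0, min(x - half, frame_w - box_size)))
        y0 = int(max(0, min(y - half, frame_h - box_size)))
        regions[part] = (x0, y0, x0 + box_size, y0 + box_size)
    return regions
```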
Fig. 10 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 10, the video image segmentation method further includes:
Step S50: determining resolution information of the image to be segmented.
In one possible implementation, depending on the capture device and the parameters set during shooting, images to be segmented may have different resolutions. With the continuous upgrading of capture devices, the resolution of the image to be segmented can reach 2K (1920×1080), 4K (3840×2160), 5K (5120×2160), 8K (7680×4320), 10K (10240×4320), and so on. The higher the resolution, the more pixels the image to be segmented contains, and the better its sharpness when magnified by the same factor.
Step S60: determining segmentation size information of the image to be segmented according to the resolution information.
In one possible implementation, the segmentation size information may include a minimum segmentation size, which can be determined according to the resolution information. The higher the resolution of the image to be segmented, the smaller the minimum segmentation size. The sharpness of a segmented video image generated from a segmentation region determined according to the minimum segmentation size can meet the viewing demand; if a segmentation region smaller than the minimum segmentation size is used, the sharpness of the generated segmented video image is poor and cannot meet the viewing demand.
The segmentation size information may also include a segmentation size interval. The size of the image to be segmented can be set as the maximum segmentation size, and the segmentation size interval is determined from the maximum segmentation size and the minimum segmentation size determined according to the resolution.
Step S20 includes:
Step S25: determining, in the image to be segmented, a segmentation region that corresponds to the identified target object and satisfies the segmentation size information.
In one possible implementation, multiple segmentation regions of set sizes may be determined according to the segmentation size information, where the size of the smallest segmentation region is greater than or equal to the minimum segmentation size. Segmentation regions of arbitrary sizes may also be determined according to the segmentation size information, where the size of the smallest segmentation region in the image to be segmented is greater than or equal to the minimum segmentation size.
In this embodiment, after the segmentation size information is determined according to the resolution information of the image to be segmented, a segmentation region that corresponds to the identified target object and satisfies the segmentation size information can be determined in the image to be segmented. Segmentation regions determined based on the resolution information make the generated segmented video images sharp, and the resolution of the segmented video images can meet the viewing demand.
In one possible implementation, the method further includes:
determining sharpness information of the image to be segmented.
In one possible implementation, sharpness refers to the clarity of the fine detail and edges in an image. If the sharpness of the image to be segmented is poor, for example because of the shooting environment, shooting angle, or lighting, the sharpness of the segmented video images generated from it is also poor, and the segmented video images cannot meet the viewing demand.
Step S60 includes:
determining the segmentation size information of the image to be segmented according to the resolution information and the sharpness information.
In one possible implementation, the segmentation size information of the image to be segmented may be determined jointly from its resolution information and sharpness information. At the same resolution, the higher the sharpness of the image to be segmented, the smaller the minimum segmentation size. At the same sharpness, the higher the resolution of the image to be segmented, the smaller the minimum segmentation size.
Different weights may be set for the resolution information and the sharpness information, and the segmentation size information of the image to be segmented is then determined jointly from the resolution information and its weight together with the sharpness information and its weight.
In this embodiment, the segmentation size information of the image to be segmented is determined according to its sharpness and resolution information, so that both the sharpness and the resolution of the generated segmented video images meet the viewing demand.
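The sketch below combines a resolution term and a sharpness term into a minimum segmentation height with assumed weights; the Laplacian-variance sharpness proxy and the specific formula are illustrative choices, since the text only states that both factors are weighted and combined.

```python
# Sketch: a minimum segmentation height from resolution and sharpness (assumed formula).
import cv2
import numpy as np

def min_segment_height(frame_bgr, target_play_h=720, w_res=0.7, w_sharp=0.3):
    frame_h = frame_bgr.shape[0]
    # Resolution term: with more pixels available, smaller crops still play back sharply.
    res_ratio = target_play_h / frame_h                 # e.g. 720 / 2160 = 1/3
    # Sharpness term: variance of the Laplacian as a crude focus measure.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    sharp_ratio = np.clip(300.0 / (sharpness + 1e-6), 0.1, 1.0)  # blurrier -> larger crops
    # Weighted combination of the two terms, scaled back to frame pixels.
    min_h = int(frame_h * (w_res * res_ratio + w_sharp * sharp_ratio))
    return min_h                                        # crops shorter than this look soft
```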
Fig. 11 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 11, the video image segmentation method further includes:
Step S70: determining first playback display information, the first playback display information including screen physical size information and/or playback resolution information; wherein the segmented video image played by the terminal satisfies the screen physical size information and/or the playback resolution information.
In one possible implementation, the physical screen sizes of different terminals vary widely: a mobile phone screen may be 3.5 inches, while a terminal used to play advertisements on the exterior wall of a building may reach tens of meters. Affected by device processing capability and the configured screen, the playback resolution information of different terminals also differs. When the terminal plays the segmented video image, the playback needs to match its screen physical size information and/or playback resolution information.
In one possible implementation, a segmented video image that matches the screen physical size information and/or playback resolution information of the playback terminal can be generated from the image to be segmented, and the terminal can play the generated segmented video image directly. For example, a segmented video image matching the screen physical size information and/or playback resolution information of a mobile phone of a certain brand can be generated from the image to be segmented, and that phone can play it directly.
In one possible implementation, multiple sets of segmented video images matching the screen physical size information and/or playback resolution information of different terminals may also be generated from the image to be segmented. During playback, the terminal selects and plays the segmented video image that matches its own screen physical size information and/or playback resolution information. For example, multiple sets of segmented video images matching the screen physical size information and/or playback resolution information of the mobile phones of several brands and the laptops of several brands can be generated from the image to be segmented, and a phone of one of those brands can select and play the matching segmented video image.
In this embodiment, a segmented video image matching the terminal's screen physical size information and/or playback resolution information can be determined, so that the segmented video image better meets the playback requirements of the terminal.
Fig. 12 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 12, the video image segmentation method further includes:
Step S80: determining second playback display information, the second playback display information including landscape display information or portrait display information; wherein the segmented video image played by the terminal satisfies the landscape display information or the portrait display information.
In one possible implementation, the terminal may display in landscape and/or portrait orientation when playing the segmented video image. The screen of a terminal is generally rectangular: when the longer side of the rectangular screen is horizontal, the terminal is in the landscape display state, and when the shorter side of the rectangular screen is horizontal, the terminal is in the portrait display state. The landscape display information includes identification information indicating that the terminal is in the landscape display state, and the portrait display information includes identification information indicating that the terminal is in the portrait display state.
According to viewing habits, for the same target object in the image to be segmented, the content of the corresponding played video image differs between landscape display and portrait display.
Fig. 13 shows a schematic diagram of landscape and portrait display in a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 13, for the same image to be segmented, the upper half of Fig. 13 shows the segmented video image determined for landscape display, and the lower half of Fig. 13 shows the segmented video image determined for portrait display.
In one possible implementation, a segmented video image matching the landscape display information or portrait display information of the playback terminal can be generated from the image to be segmented, and the terminal can play the generated segmented video image directly. For example, a segmented video image matching the landscape display information or portrait display information of a mobile phone of a certain brand can be generated from the image to be segmented, and that phone can play it directly.
In one possible implementation, multiple sets of segmented video images matching the landscape display information or portrait display information of different terminals may also be generated from the image to be segmented. During playback, the terminal selects and plays the segmented video image that matches its own landscape display information or portrait display information. For example, multiple sets of segmented video images matching the landscape display information or portrait display information of the mobile phones of several brands and the laptops of several brands can be generated from the image to be segmented, and a phone of one of those brands can select and play the matching segmented video image.
In this embodiment, a segmented video image matching the terminal's landscape display information or portrait display information can be determined, so that the segmented video image better meets the playback requirements of the terminal.
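As a small sketch, the crop aspect ratio can follow the orientation reported by the terminal; the 16:9 and 9:16 ratios are common defaults assumed here, not values taken from the disclosure.

```python
# Sketch: pick a crop aspect ratio from the terminal's reported display orientation.
def crop_size_for_orientation(orientation, base_h):
    if orientation == "landscape":
        aspect = 16 / 9        # wider crop for landscape playback
    elif orientation == "portrait":
        aspect = 9 / 16        # taller crop for portrait playback
    else:
        raise ValueError("orientation must be 'landscape' or 'portrait'")
    return int(base_h * aspect), base_h    # (crop_w, crop_h)
```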
Fig. 14 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 14, step S30 of the video image segmentation method includes:
Step S31: determining coordinate information of the segmentation region in the image to be segmented.
In one possible implementation, a segmentation region can be expressed by its corresponding coordinate position in the image coordinate system of the image to be segmented. For example, the segmentation region may be a rectangle. According to the target object Zhang San identified in image A to be segmented, segmentation region 1a corresponding to Zhang San can be determined. The specific position of segmentation region 1a in the image to be segmented may include the positions of its four vertices in the image coordinate system: coordinate point 1 (x1, y1), coordinate point 2 (x1, y2), coordinate point 3 (x2, y1), and coordinate point 4 (x2, y2).
Coordinate information corresponding to multiple segmentation regions in the image to be segmented can be determined. The coordinate information may take any form of expression, such as an array or a matrix. A text file (for example, an XML text file) can be generated from the coordinate information.
Step S32: generating the segmented video image of the image to be segmented according to the coordinate information.
In one possible implementation, according to the coordinate information, the image corresponding to the coordinate information can be obtained by segmentation from the image to be segmented. For example, according to coordinate point 1 (x1, y1), coordinate point 2 (x1, y2), coordinate point 3 (x2, y1), and coordinate point 4 (x2, y2), a rectangular segmented video image can be cropped out of the image to be segmented.
In this embodiment, the segmented video image is obtained by determining the coordinate information of the segmentation region in the image to be segmented. According to the coordinate information of the segmentation region, the segmented video image can be determined accurately and quickly.
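A sketch of representing a segmentation region by its four corner coordinates, writing them to a simple XML file, and cropping the frame follows; the XML layout is an assumed format, since the text only mentions that a text file such as XML can be generated from the coordinate information.

```python
# Sketch: serialise a region's corner coordinates and crop the frame with slicing.
import numpy as np
import xml.etree.ElementTree as ET

def region_to_xml(region, path):
    x0, y0, x1, y1 = region
    root = ET.Element("region")
    for name, value in zip(("x1", "y1", "x2", "y2"), (x0, y0, x1, y1)):
        ET.SubElement(root, name).text = str(value)
    ET.ElementTree(root).write(path)

def crop_segment(frame, region):
    x0, y0, x1, y1 = region
    return frame[y0:y1, x0:x1]              # segmented video image for this frame

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in video frame
segment = crop_segment(frame, (480, 120, 1440, 960))
region_to_xml((480, 120, 1440, 960), "region.xml")
```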
Fig. 15 shows a flowchart of a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 15, step S40 of the video image segmentation method includes:
Step S41: determining a weight of each segmented video image in the image to be segmented.
In one possible implementation, the weight of a segmented video image in the image to be segmented may be determined according to the target object in the segmented video image. The image to be segmented may include multiple target objects, and the weight of the corresponding segmented video image may be determined according to the proportion of the picture of the image to be segmented occupied by each target object, and/or according to the importance of the target object in the video.
The weight of a segmented video image in the image to be segmented may also be determined according to the sharpness of the segmented video image.
For example, image A to be segmented contains Zhang San and Li Si, where Zhang San is the lead role of the video and Li Si is a supporting role. A higher weight can then be determined for the segmented video image corresponding to Zhang San, and a lower weight for the segmented video image corresponding to Li Si.
Fig. 16 shows a schematic diagram of weights in a video image segmentation method according to an embodiment of the present disclosure. As shown in Fig. 16, the image to be segmented is a video frame image in a variety show video. The weight of the host at the far left of the picture is grade A, the weights of the four guests are grade B, and the weights of the audience members are grade C. Grade A is greater than grade B, and grade B is greater than grade C.
Step S42: determining a recommendation result among the segmented video images according to the weights.
In one possible implementation, the image to be segmented may yield multiple segmented video images, and usually only one of them needs to be broadcast. So that the broadcast segmented video image best reflects the performance content of the image to be segmented, a recommendation result for playback can be determined among the multiple segmented video images according to the weights of the segmented video images. The recommendation result may include one or more segmented video images. As shown in Fig. 16, the segmented video image corresponding to the host can be determined as the recommendation result according to the weights.
For example, the segmented video images can be sorted in descending order of weight, and the segmented video image with the largest weight, or the top three segmented video images, can be determined as the recommendation result.
Step S43: controlling the terminal to play the recommendation result.
In one possible implementation, one or more segmented video images with higher weights can be played on the terminal as required.
In this embodiment, the weights of the segmented video images are determined, the recommendation result is determined according to the weights, and the recommendation result is played on the terminal. The recommendation result determined according to the weights can better express the performance content of the video, giving viewers a good viewing experience.
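A minimal sketch of the weight-based recommendation step: segmented images arrive with weights already assigned upstream (by role, occupied area, or sharpness), and the highest-weighted ones are returned for playback.

```python
# Sketch: rank segmented images by an externally supplied weight and keep the top ones.
def recommend_segments(segments_with_weights, top_k=1):
    """segments_with_weights: list of (segment_image, weight) pairs."""
    ranked = sorted(segments_with_weights, key=lambda item: item[1], reverse=True)
    return [segment for segment, _ in ranked[:top_k]]
```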
Figure 17 shows the flow chart of the Video Image Segmentation method according to one embodiment of the disclosure, as shown in figure 17, shown Step S40 includes: in Video Image Segmentation method
Step S44, obtains target object recognition information and/or scape does not select information.
Step S45 does not select information according to the target object recognition information and/or scape, in the segmented video image Middle determining selection result.
Step S46, controlling terminal play the selection result.
In one possible implementation, different viewers can use the target object recognition information to select different target objects in the image to be split. For example, viewer A may be interested in star A and wish to see more images of star A in the video. Viewer A can then use the target object recognition information to select star A.
Some viewers are accustomed to watching close shots and close-ups, while others are accustomed to watching panoramic shots. Different viewers can use the shot type selection information to select segmented video images of different shot types.
An input box or option box for the target object recognition information and/or the shot type selection information can be provided for the viewer to make a selection.
A correspondence can be established between each segmented video image and a target object and/or a shot type; a selection result can then be determined among the segmented video images according to the target object or shot type selected by the viewer, and the selection result can be played on the terminal.
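Purely as a hedged sketch of such a correspondence, each segmented video image could be indexed under a (target object, shot type) key and filtered on whichever of the two the viewer supplies; the keys, file names, and matching rule below are hypothetical and not part of the disclosed method.

segment_index = {
    ("star_A", "close_up"): "star_A_close_up.mp4",
    ("star_A", "medium"):   "star_A_medium.mp4",
    ("star_B", "panorama"): "star_B_panorama.mp4",
}

def select_segments(target_object=None, shot_type=None):
    """Return every segmented video image matching the viewer's choices."""
    return [
        path
        for (obj, shot), path in segment_index.items()
        if (target_object is None or obj == target_object)
        and (shot_type is None or shot == shot_type)
    ]

print(select_segments(target_object="star_A"))  # both segments of star A
print(select_segments(shot_type="close_up"))    # every close-up segment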
In this embodiment, a selection result can be determined among the segmented video images by means of the target object recognition information and/or the shot type selection information, and the selection result can be played on the terminal. Using the target object recognition information and/or the shot type selection information in this way can satisfy the viewing demands of different viewers.
Application example:
Figure 18 shows a schematic diagram of segmented video images in the video image segmentation method according to an embodiment of the present disclosure. As shown in Figure 18, the topmost image in Figure 18 is a video frame image from a video shot by a capture device and is the image to be split. The image to be split contains two target objects: person C on the left and person D on the right.
The specified shot type may be a medium shot. According to the two target objects, the two segmented video images in the middle of Figure 18 are obtained, which are, from left to right, a medium-shot segmented video image of person C and a medium-shot segmented video image of person D.
The head and hands of a target object can be determined as target sites, and according to the target sites of person C and person D the four segmented video images at the bottom of Figure 18 are obtained. They are, from left to right, a segmented video image of the hands of person C, a segmented video image of the head of person C, a segmented video image of the hands of person D, and a segmented video image of the head of person D. These four segmented video images of target sites are close-up segmented video images of person C and person D.
When the video is played, a selection can be made among the six segmented video images in Figure 18, either according to shot type or according to target object. The topmost image to be split in Figure 18 can also be played.
This embodiment performs video image segmentation on a single image to be split to obtain segmented video images of different shot types and for different target objects, and a segmented video image can be selected on the terminal for playback as required. This avoids the waste of resources during video shooting and video playback.
Figure 19 shows a schematic diagram of the video image segmentation device according to an embodiment of the present disclosure. As shown in Figure 19, the video image segmentation device includes:
a target object recognition module 10, configured to identify a target object in an image to be split, the image to be split being a video frame image in a video;
a segmentation region determining module 20, configured to determine, in the image to be split, a segmentation region corresponding to the identified target object;
a segmented video image generation module 30, configured to generate a segmented video image of the image to be split according to the segmentation region; and
a playing module 40, configured to control a terminal to play the segmented video image.
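Purely for illustration, the four modules listed above can be viewed as one processing pipeline; the class below is a structural sketch under that assumption, with the injected callables standing in for whatever concrete implementations an embodiment chooses, and it is not a required implementation of the device.

class VideoImageSegmentationDevice:
    def __init__(self, recognize, find_region, generate, play):
        self.recognize = recognize      # target object recognition module 10
        self.find_region = find_region  # segmentation region determining module 20
        self.generate = generate        # segmented video image generation module 30
        self.play = play                # playing module 40

    def process_frame(self, frame):
        objects = self.recognize(frame)
        regions = [self.find_region(frame, obj) for obj in objects]
        segments = [self.generate(frame, region) for region in regions]
        self.play(segments)             # control the terminal to play the results
        return segments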
Figure 20 shows a schematic diagram of the video image segmentation device according to an embodiment of the present disclosure. As shown in Figure 20, in one possible implementation, the target object recognition module 10 includes:
a first target object recognition submodule 11, configured to identify the target object in the image to be split using a deep learning algorithm.
In one possible implementation, the segmentation region determining module 20 includes:
a first segmentation region determining submodule 21, configured to determine, in the image to be split, a segmentation region corresponding to the identified target object according to a composition rule using a deep learning algorithm.
In one possible implementation, the segmentation region determining module 20 includes:
a second segmentation region determining submodule 22, configured to determine, in the image to be split, a segmentation region that corresponds to the identified target object and conforms to a specified shot type.
In one possible implementation, the segmentation region determining module 20 includes:
a third segmentation region determining submodule 23, configured to determine, in the image to be split, a segmentation region that includes the target object.
In one possible implementation, the third segmentation region determining submodule 23 is configured to:
determine, in the image to be split, at least two segmentation regions that include the target object, wherein among the at least two segmentation regions, the image corresponding to the segmentation region with the larger area contains the image corresponding to the segmentation region with the smaller area.
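A minimal sketch of producing such nested segmentation regions from one detected bounding box is given below; the padding factors are arbitrary assumptions, and any scheme in which the larger region's image contains the smaller one's would serve equally well.

def nested_regions(box, frame_w, frame_h, pad_small=0.1, pad_large=0.5):
    """box = (x, y, w, h); return a tight region and a larger region that contains it."""
    def expand(x, y, w, h, pad):
        dx, dy = w * pad, h * pad
        nx, ny = max(0.0, x - dx), max(0.0, y - dy)
        nw = min(frame_w - nx, w + 2 * dx)
        nh = min(frame_h - ny, h + 2 * dy)
        return (nx, ny, nw, nh)

    small = expand(*box, pad_small)   # e.g. a close-up style region
    large = expand(*box, pad_large)   # larger region whose image contains the smaller one
    return small, large

print(nested_regions((800, 200, 300, 600), 1920, 1080))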
In one possible implementation, the segmentation region determining module 20 includes:
a target site recognition submodule 24, configured to identify a target site on the target object; and
a fourth segmentation region determining submodule 25, configured to determine, in the image to be split, a segmentation region corresponding to the target site.
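As a hedged sketch of this pair of submodules, assume some pose estimator not specified by the present disclosure has returned keypoint coordinates for a detected person; a square segmentation region can then be placed around the requested target site. The keypoint names, the padding, and the commented-out detector call are all placeholders.

def site_region(keypoints, site="head", pad=40):
    """keypoints: dict mapping a site name to its (x, y) pixel coordinate."""
    x, y = keypoints[site]
    return (x - pad, y - pad, 2 * pad, 2 * pad)   # (x, y, w, h) square region

# keypoints = detect_keypoints(frame, person_box)  # placeholder for any pose estimator
keypoints = {"head": (320, 90), "left_hand": (250, 300)}
print(site_region(keypoints, "head"))        # region around the head
print(site_region(keypoints, "left_hand"))   # region around the hand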
In one possible implementation, the device further includes:
a resolution information determining module 50, configured to determine resolution information of the image to be split; and
a division size information determining module 60, configured to determine division size information of the image to be split according to the resolution information;
where the segmentation region determining module 20 includes:
a fifth segmentation region determining submodule 26, configured to determine, in the image to be split, a segmentation region that corresponds to the identified target object and conforms to the division size information.
In one possible implementation, the device further includes:
a sharpness information determining module 70, configured to determine sharpness information of the image to be split;
where the division size information determining module 60 includes:
a first division size information determining submodule 61, configured to determine the division size information of the image to be split according to the resolution information and the sharpness information.
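The following heuristic is an assumption-only sketch of how division size information might be derived from resolution information and sharpness information: a sharper source tolerates a smaller crop before enlargement artifacts become visible, so the minimum segmentation size shrinks as sharpness grows. The thresholds are illustrative, not part of the disclosed method.

def division_size(frame_w, frame_h, sharpness):
    """sharpness is assumed to be normalized to [0, 1]."""
    min_fraction = 0.25 if sharpness > 0.8 else 0.5   # arbitrary example thresholds
    return int(frame_w * min_fraction), int(frame_h * min_fraction)

print(division_size(1920, 1080, sharpness=0.9))  # (480, 270)
print(division_size(1920, 1080, sharpness=0.5))  # (960, 540)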
In one possible implementation, the device further includes:
a first playback display information determining module 80, configured to determine first playback display information, the first playback display information including screen physical size information and/or playback resolution information;
where the segmented video image played by the terminal conforms to the screen physical size information and/or the playback resolution information.
In one possible implementation, the device further includes:
a second playback display information determining module 90, configured to determine second playback display information, the second playback display information including landscape display information or portrait display information;
where the segmented video image played by the terminal conforms to the landscape display information or the portrait display information.
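The matching implied by the two playback display information modules above can be sketched, under assumed rules only, as a filter that checks a candidate segmented video image against the terminal's playback resolution and its landscape or portrait orientation; both rules are illustrative assumptions.

def matches_display(seg_w, seg_h, play_w, play_h, portrait=False):
    """Assumed rules: orientation must agree and the segment is never upscaled."""
    orientation_ok = (seg_h > seg_w) == portrait
    resolution_ok = seg_w >= play_w and seg_h >= play_h
    return orientation_ok and resolution_ok

print(matches_display(608, 1080, 540, 960, portrait=True))   # True
print(matches_display(1920, 1080, 540, 960, portrait=True))  # False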
In one possible implementation, the segmented video image generation module 30 includes:
a coordinate information determining submodule 31, configured to determine coordinate information corresponding to the segmentation region in the image to be split; and
a first segmented video image generation submodule 32, configured to generate the segmented video image of the image to be split according to the coordinate information.
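A minimal sketch of these two submodules, using NumPy array slicing on a decoded frame, is given below; the present disclosure does not mandate any particular library, and the coordinates are example values only.

import numpy as np

def crop_segment(frame, region):
    """frame: H x W x 3 array; region: (x, y, w, h) coordinate information in pixels."""
    x, y, w, h = region
    return frame[y:y + h, x:x + w].copy()

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for a decoded video frame
segment = crop_segment(frame, (400, 100, 608, 960))
print(segment.shape)                                # (960, 608, 3)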
In one possible implementation, the playing module 40 includes:
a weight determining submodule 41, configured to determine a weight of each segmented video image in the image to be split;
a recommendation result determining submodule 42, configured to determine a recommendation result among the segmented video images according to the weights; and
a recommendation result playing submodule 43, configured to control the terminal to play the recommendation result.
In one possible implementation, the playing module 40 includes:
a selection submodule 44, configured to obtain target object recognition information and/or shot type selection information;
a selection result determining submodule 45, configured to determine a selection result among the segmented video images according to the target object recognition information and/or the shot type selection information; and
a selection result playing submodule 46, configured to control the terminal to play the selection result.
Figure 21 is a block diagram of a device 800 for video image segmentation according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Figure 21, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or some of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phone book data, messages, pictures, video, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power supply component 806 provides power to the various components of the device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor component 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor component 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 804 including computer program instructions, where the computer program instructions can be executed by the processor 820 of the device 800 to complete the above methods.
Figure 22 is a block diagram of a device 1900 for video image segmentation according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Figure 22, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as an application program. The application program stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions so as to perform the above methods.
The device 1900 may also include a power supply component 1926 configured to perform power management for the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 1932 including computer program instructions, where the computer program instructions can be executed by the processing component 1922 of the device 1900 to complete the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse passing through a fiber-optic cable), or an electrical signal transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or downloaded to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, for example programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions in order to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present disclosure. It will be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two successive blocks may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies available in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (30)

1. A video image segmentation method, characterized in that the method comprises:
identifying a target object in an image to be split, the image to be split being a video frame image in a video;
determining, in the image to be split, a segmentation region corresponding to the identified target object;
generating a segmented video image of the image to be split according to the segmentation region; and
controlling a terminal to play the segmented video image.
2. The method according to claim 1, characterized in that identifying the target object in the image to be split comprises:
identifying the target object in the image to be split using a deep learning algorithm.
3. The method according to claim 1 or 2, characterized in that determining, in the image to be split, the segmentation region corresponding to the identified target object comprises:
determining, in the image to be split, the segmentation region corresponding to the identified target object according to a composition rule using a deep learning algorithm.
4. The method according to claim 1 or 2, characterized in that determining, in the image to be split, the segmentation region corresponding to the identified target object comprises:
determining, in the image to be split, a segmentation region that corresponds to the identified target object and conforms to a specified shot type.
5. The method according to any one of claims 1 to 4, characterized in that determining, in the image to be split, the segmentation region corresponding to the identified target object comprises:
determining, in the image to be split, a segmentation region that includes the target object.
6. The method according to claim 5, characterized in that determining, in the image to be split, the segmentation region that includes the target object comprises:
determining, in the image to be split, at least two segmentation regions that include the target object, wherein among the at least two segmentation regions, the image corresponding to the segmentation region with the larger area contains the image corresponding to the segmentation region with the smaller area.
7. The method according to any one of claims 1 to 4, characterized in that determining, in the image to be split, the segmentation region corresponding to the identified target object comprises:
identifying a target site on the target object; and
determining, in the image to be split, a segmentation region corresponding to the target site.
8. The method according to claim 1, characterized in that the method further comprises:
determining resolution information of the image to be split; and
determining division size information of the image to be split according to the resolution information;
wherein determining, in the image to be split, the segmentation region corresponding to the identified target object comprises:
determining, in the image to be split, a segmentation region that corresponds to the identified target object and conforms to the division size information.
9. The method according to claim 8, characterized in that the method further comprises:
determining sharpness information of the image to be split;
wherein determining the division size information of the image to be split according to the resolution information comprises:
determining the division size information of the image to be split according to the resolution information and the sharpness information.
10. The method according to claim 1, characterized in that the method further comprises:
determining first playback display information, the first playback display information comprising screen physical size information and/or playback resolution information;
wherein the segmented video image played by the terminal conforms to the screen physical size information and/or the playback resolution information.
11. The method according to claim 1, characterized in that the method further comprises:
determining second playback display information, the second playback display information comprising landscape display information or portrait display information;
wherein the segmented video image played by the terminal conforms to the landscape display information or the portrait display information.
12. The method according to claim 1, characterized in that generating the segmented video image of the image to be split according to the segmentation region comprises:
determining coordinate information corresponding to the segmentation region in the image to be split; and
generating the segmented video image of the image to be split according to the coordinate information.
13. The method according to claim 1, characterized in that controlling the terminal to play the segmented video image comprises:
determining a weight of each segmented video image in the image to be split;
determining a recommendation result among the segmented video images according to the weights; and
controlling the terminal to play the recommendation result.
14. The method according to claim 1, characterized in that controlling the terminal to play the segmented video image comprises:
obtaining target object recognition information and/or shot type selection information;
determining a selection result among the segmented video images according to the target object recognition information and/or the shot type selection information; and
controlling the terminal to play the selection result.
15. A video image segmentation device, characterized in that the device comprises:
a target object recognition module, configured to identify a target object in an image to be split, the image to be split being a video frame image in a video;
a segmentation region determining module, configured to determine, in the image to be split, a segmentation region corresponding to the identified target object;
a segmented video image generation module, configured to generate a segmented video image of the image to be split according to the segmentation region; and
a playing module, configured to control a terminal to play the segmented video image.
16. The device according to claim 15, characterized in that the target object recognition module comprises:
a first target object recognition submodule, configured to identify the target object in the image to be split using a deep learning algorithm.
17. The device according to claim 15 or 16, characterized in that the segmentation region determining module comprises:
a first segmentation region determining submodule, configured to determine, in the image to be split, a segmentation region corresponding to the identified target object according to a composition rule using a deep learning algorithm.
18. The device according to claim 15 or 16, characterized in that the segmentation region determining module comprises:
a second segmentation region determining submodule, configured to determine, in the image to be split, a segmentation region that corresponds to the identified target object and conforms to a specified shot type.
19. The device according to any one of claims 15 to 18, characterized in that the segmentation region determining module comprises:
a third segmentation region determining submodule, configured to determine, in the image to be split, a segmentation region that includes the target object.
20. The device according to claim 19, characterized in that the third segmentation region determining submodule is configured to:
determine, in the image to be split, at least two segmentation regions that include the target object, wherein among the at least two segmentation regions, the image corresponding to the segmentation region with the larger area contains the image corresponding to the segmentation region with the smaller area.
21. The device according to any one of claims 15 to 18, characterized in that the segmentation region determining module comprises:
a target site recognition submodule, configured to identify a target site on the target object; and
a fourth segmentation region determining submodule, configured to determine, in the image to be split, a segmentation region corresponding to the target site.
22. The device according to claim 15, characterized in that the device further comprises:
a resolution information determining module, configured to determine resolution information of the image to be split; and
a division size information determining module, configured to determine division size information of the image to be split according to the resolution information;
wherein the segmentation region determining module comprises:
a fifth segmentation region determining submodule, configured to determine, in the image to be split, a segmentation region that corresponds to the identified target object and conforms to the division size information.
23. The device according to claim 22, characterized in that the device further comprises:
a sharpness information determining module, configured to determine sharpness information of the image to be split;
wherein the division size information determining module comprises:
a first division size information determining submodule, configured to determine the division size information of the image to be split according to the resolution information and the sharpness information.
24. The device according to claim 15, characterized in that the device further comprises:
a first playback display information determining module, configured to determine first playback display information, the first playback display information comprising screen physical size information and/or playback resolution information;
wherein the segmented video image played by the terminal conforms to the screen physical size information and/or the playback resolution information.
25. The device according to claim 15, characterized in that the device further comprises:
a second playback display information determining module, configured to determine second playback display information, the second playback display information comprising landscape display information or portrait display information;
wherein the segmented video image played by the terminal conforms to the landscape display information or the portrait display information.
26. The device according to claim 15, characterized in that the segmented video image generation module comprises:
a coordinate information determining submodule, configured to determine coordinate information corresponding to the segmentation region in the image to be split; and
a first segmented video image generation submodule, configured to generate the segmented video image of the image to be split according to the coordinate information.
27. The device according to claim 15, characterized in that the playing module comprises:
a weight determining submodule, configured to determine a weight of each segmented video image in the image to be split;
a recommendation result determining submodule, configured to determine a recommendation result among the segmented video images according to the weights; and
a recommendation result playing submodule, configured to control the terminal to play the recommendation result.
28. The device according to claim 15, characterized in that the playing module comprises:
a selection submodule, configured to obtain target object recognition information and/or shot type selection information;
a selection result determining submodule, configured to determine a selection result among the segmented video images according to the target object recognition information and/or the shot type selection information; and
a selection result playing submodule, configured to control the terminal to play the selection result.
29. A video image segmentation device, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 14.
30. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 14.
CN201810802302.9A 2018-07-18 2018-07-18 Video image segmentation method and device Active CN108986117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810802302.9A CN108986117B (en) 2018-07-18 2018-07-18 Video image segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810802302.9A CN108986117B (en) 2018-07-18 2018-07-18 Video image segmentation method and device

Publications (2)

Publication Number Publication Date
CN108986117A true CN108986117A (en) 2018-12-11
CN108986117B CN108986117B (en) 2021-06-04

Family

ID=64549449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810802302.9A Active CN108986117B (en) 2018-07-18 2018-07-18 Video image segmentation method and device

Country Status (1)

Country Link
CN (1) CN108986117B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102541494A (en) * 2010-12-30 2012-07-04 中国科学院声学研究所 Video size switching system and video size switching method facing display terminal
CN103747210A (en) * 2013-12-31 2014-04-23 深圳市佳信捷技术股份有限公司 Method and device for data processing of video monitoring system
CN105163188A (en) * 2015-08-31 2015-12-16 小米科技有限责任公司 Video content processing method, device and apparatus
CN106792092A (en) * 2016-12-19 2017-05-31 广州虎牙信息科技有限公司 Live video flow point mirror display control method and its corresponding device
CN107545576A (en) * 2017-07-31 2018-01-05 华南农业大学 Image edit method based on composition rule
CN107547803A (en) * 2017-09-25 2018-01-05 北京奇虎科技有限公司 Video segmentation result edge optimization processing method, device and computing device
CN108124194A (en) * 2017-12-28 2018-06-05 北京奇艺世纪科技有限公司 A kind of net cast method, apparatus and electronic equipment
CN108156459A (en) * 2016-12-02 2018-06-12 北京中科晶上科技股份有限公司 Telescopic video transmission method and system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276318A (en) * 2019-06-26 2019-09-24 北京航空航天大学 Nighttime road rains recognition methods, device, computer equipment and storage medium
CN112839227A (en) * 2019-11-22 2021-05-25 浙江宇视科技有限公司 Image coding method, device, equipment and medium
CN112839227B (en) * 2019-11-22 2023-03-14 浙江宇视科技有限公司 Image coding method, device, equipment and medium
CN111246237A (en) * 2020-01-22 2020-06-05 视联动力信息技术股份有限公司 Panoramic video live broadcast method and device
WO2022077995A1 (en) * 2020-10-12 2022-04-21 北京达佳互联信息技术有限公司 Video conversion method and video conversion device

Also Published As

Publication number Publication date
CN108986117B (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN109089170A (en) Barrage display methods and device
CN109257645A (en) Video cover generation method and device
CN108986117A (en) Video image segmentation method and device
CN109618184A (en) Method for processing video frequency and device, electronic equipment and storage medium
CN109495684A (en) A kind of image pickup method of video, device, electronic equipment and readable medium
CN109963166A (en) Online Video edit methods and device
CN108260020A (en) The method and apparatus that interactive information is shown in panoramic video
CN109963200A (en) Video broadcasting method and device
CN109151356A (en) video recording method and device
CN108833991A (en) Video caption display methods and device
CN110278450A (en) Multimedia content playback method and device
CN108737891A (en) Video material processing method and processing device
CN108924644A (en) Video clip extracting method and device
CN109963168A (en) Video previewing method and device
CN110121106A (en) Video broadcasting method and device
CN106875446B (en) Camera method for relocating and device
CN110519655A (en) Video clipping method and device
CN108521593A (en) The display methods and device of the information of multimedia content
CN108540850A (en) Barrage display methods and device
CN109358780A (en) Method for showing interface and device
CN109063101A (en) The generation method and device of video cover
CN109446346A (en) Multimedia resource edit methods and device
CN108174269A (en) Visualize audio frequency playing method and device
CN107122456A (en) The method and apparatus for showing video search result
CN109151553A (en) Display control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200424

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 100000 room 26, 9 Building 9, Wangjing east garden four, Chaoyang District, Beijing.

Applicant before: BEIJING YOUKU TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant