CN109600544A - Local dynamic image generation method and device - Google Patents

Local dynamic image generation method and device

Info

Publication number
CN109600544A
CN109600544A (application CN201710939457.2A)
Authority
CN
China
Prior art keywords
dynamic
frame
video data
target
pixels
Prior art date
Legal status
Granted
Application number
CN201710939457.2A
Other languages
Chinese (zh)
Other versions
CN109600544B (en)
Inventor
耿军
朱斌
胡康康
马春阳
李郭
刍牧
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201710939457.2A priority Critical patent/CN109600544B/en
Priority to TW107120687A priority patent/TW201915946A/en
Priority to PCT/CN2018/106633 priority patent/WO2019062631A1/en
Publication of CN109600544A publication Critical patent/CN109600544A/en
Application granted granted Critical
Publication of CN109600544B publication Critical patent/CN109600544B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide a local dynamic image generation method and device, relating to image processing technology. The method includes: obtaining target video data uploaded by a user; analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data; receiving the user's selection of a target dynamic region from the at least one dynamic region; and, based on the target dynamic region selected by the user, generating a local dynamic image for that region. The embodiments reduce user operations, improve the precision with which the dynamic region of a subject object is selected, and lower labor and time costs. Moreover, by having the system identify dynamic regions automatically, they avoid the difficulty of identifying an indistinct subject object by eye; the requirement that the subject be well separated from the background is relaxed, which in turn reduces the demands placed on video material.

Description

Local dynamic image generation method and device
Technical field
This application relates to the technical field of image processing, and in particular to a local dynamic image generation method and device.
Background art
With the continuous development of imaging technology, a type of local dynamic image has emerged, in which only some subject objects in an otherwise still image are moving.
In the prior art, to generate a local dynamic image the user must import video data into CINEMAGRAPH software (which produces subtle motion within a still photograph) and then create the local dynamic image as follows: first, the video is split into two layers, a static frame layer (the first layer) and a dynamic frame layer (the second layer); second, the user manually draws a contour region on the first layer; third, the image inside the contour region of the first-layer static frame is erased, so that the dynamic frames of the contour region in the second layer show through; finally, a local dynamic image composed of these two layers is exported.
In applying the above technique, the inventors found that the user must manually draw the contour of the subject object to mark the dynamic region in order to generate the local dynamic image. However, drawing the subject's contour by hand is difficult and error-prone: the contour often includes unwanted picture elements, which then also appear to move, and achieving an accurate result requires a large number of complex drawing operations, wasting labor and time. Furthermore, because the subject object is distinguished by eye, the effect is good only when the subject is cleanly separated from the background; when the subject is not clear enough, the hand-drawn contour is easily inaccurate, causing the picture to be distorted or misaligned.
Summary of the invention
In view of the above problems, embodiments of the present application provide a local dynamic image generation method that automatically determines the dynamic region of a subject object from the degree of overlap between the subject object's pixels in successive frames of the video, and then automatically generates a local dynamic image for that region, solving the prior-art problems that a hand-drawn contour is difficult to produce and often inaccurate, leading to distorted or misaligned pictures.
Correspondingly, embodiments of the present application also provide a local dynamic image generation device to ensure the implementation and application of the above method.
To solve the above problems, an embodiment of the present application discloses a local dynamic image generation method, comprising:
obtaining target video data uploaded by a user;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data;
receiving the user's selection of a target dynamic region from the at least one dynamic region; and
based on the target dynamic region selected by the user, generating a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses a local dynamic image generation method, comprising:
obtaining target video data;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data;
determining, from the at least one dynamic region, the target dynamic region to which a target subject object belongs; and
generating a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses an image processing method, comprising:
obtaining target video data; and
analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data.
An embodiment of the present application also discloses a local dynamic image generation device, comprising:
a first video acquisition module for obtaining target video data uploaded by a user;
a dynamic region analysis module for analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data;
a first target determination module for receiving the user's selection of a target dynamic region from the at least one dynamic region; and
a local image generation module for generating, based on the target dynamic region selected by the user, a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses a local dynamic image generation device, comprising:
a second video acquisition module for obtaining target video data;
a dynamic region analysis module for analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data;
a second target determination module for determining, from the at least one dynamic region, the target dynamic region to which a target subject object belongs; and
a local image generation module for generating a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses an image processing apparatus, comprising:
a second video acquisition module for obtaining target video data; and
a dynamic region analysis module for analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data.
An embodiment of the present application also discloses a device comprising a processor, a memory, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the following steps: obtaining target video data uploaded by a user; analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data; receiving the user's selection of a target dynamic region from the at least one dynamic region; and, based on the target dynamic region selected by the user, generating a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the following steps: obtaining target video data uploaded by a user; analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data; receiving the user's selection of a target dynamic region from the at least one dynamic region; and, based on the target dynamic region selected by the user, generating a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses a device comprising a processor, a memory, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the following steps: obtaining target video data; analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data; determining, from the at least one dynamic region, the target dynamic region to which a target subject object belongs; and generating a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the following steps: obtaining target video data; analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data; determining, from the at least one dynamic region, the target dynamic region to which a target subject object belongs; and generating a local dynamic image for the target dynamic region.
An embodiment of the present application also discloses a device comprising a processor, a memory, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the following steps: obtaining target video data; and analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data.
An embodiment of the present application also discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the following steps: obtaining target video data; and analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data.
Embodiments of the present application have the following advantages:
The embodiments analyze the pixel values of each frame of the target video data and intelligently determine at least one dynamic region in it; for the target dynamic region among those regions, a local dynamic image can then be generated automatically. Automatic identification of the motion interval to generate the local dynamic image reduces user operations; since a dynamic region is precisely a region where a subject object moves in the video, it also improves the precision with which the subject's dynamic region is selected, reducing labor and time costs. Moreover, because the system identifies the dynamic regions automatically, the difficulty of identifying an indistinct subject object by eye is avoided; the requirement that the subject be well separated from the background is lower, which in turn reduces the demands placed on video material.
Detailed description of the invention
Fig. 1 is a flow chart of the steps of an embodiment of a local dynamic image generation method of the present application;
Fig. 1A is an example architecture of a local dynamic image generation system of the present application;
Fig. 1B is an example of marked dynamic region contours of the present application;
Fig. 2 is a flow chart of the steps of another embodiment of a local dynamic image generation method of the present application;
Fig. 3 is a flow chart of the steps of another embodiment of an image processing method of the present application;
Fig. 4 is a structural block diagram of an embodiment of a local dynamic image generation device of the present application;
Fig. 5 is a structural block diagram of another embodiment of a local dynamic image generation device of the present application;
Fig. 6 is a structural block diagram of an embodiment of an image processing apparatus of the present application;
Fig. 7 is a schematic diagram of the hardware structure of a device provided by another embodiment of the present application.
Specific embodiment
To make the above objects, features, and advantages of the present application clearer and easier to understand, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
A local dynamic image is a combination of dynamic photography and a still image. In general, it is shot with the camera held fixed, and the resulting video is then processed so that the subject object that should move keeps its motion while everything outside that subject remains still. For example, suppose a fixed camera captures a video of three people, A, B, and C, all waving. If only A's waving motion is to be shown, one frame of the video can be kept as an unchanging background while A's waving motion is retained; the resulting local dynamic image then shows everyone else motionless while A waves.
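The freeze-the-background idea above can be sketched in a few lines. This is a toy illustration under assumptions, not the patent's implementation: the function name `composite_cinemagraph_frame` and the boolean-mask representation of A's region are ours.

```python
import numpy as np

def composite_cinemagraph_frame(background, animated, mask):
    """Keep the `animated` pixels inside the mask (the moving subject, e.g.
    person A's region) and freeze everything else to the fixed background frame."""
    out = background.copy()
    out[mask] = animated[mask]
    return out

# Toy 4x4 grayscale frames: the fixed background is all 10s, the current
# animated frame is all 99s, and the mask marks the "waving" region.
background = np.full((4, 4), 10, dtype=np.uint8)
animated = np.full((4, 4), 99, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

frame = composite_cinemagraph_frame(background, animated, mask)
```

Applying this to each animated frame with the same background and mask yields the output frame sequence of the local dynamic image.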
Embodiments of the present application can automatically analyze the video material, determine the dynamic regions where its subject objects are located, and then intelligently generate a local dynamic image for the target dynamic region.
Referring to Fig. 1, which shows a flow chart of the steps of an embodiment of a local dynamic image generation method of the present application, the method may specifically include:
Step 101: obtain target video data uploaded by a user.
In the embodiments of the present application, the user may obtain the target video data in various ways, for example by shooting video with a mobile phone or a video camera, copying video from another user's terminal, or downloading video from the network; the embodiments of the present application are not limited in this respect.
The user can then upload the obtained target video data into the system.
With reference to Fig. 1A, which shows an example architecture for local dynamic image generation according to an embodiment of the present application, the system includes a server 20 and a client 10; the embodiments may adopt a client-plus-server architecture. The user uploads the target video data to the server through the client; the server performs the subsequent processing on the video data and returns the result to the client.
It should be noted that in the embodiments of the present application, the target video data may instead be received locally at the client, with the subsequent processing performed on it locally at the client.
Step 102: analyze the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data.
In the embodiments of the present application, after the target video data uploaded by the user is received, the pixel values of each frame of the target video data can be analyzed intelligently, thereby determining at least one dynamic region in the target video data to offer to the user for selection. In practice, all identifiable dynamic regions can be determined and then offered to the user to choose from.
With reference to Fig. 1A, after the server analyzes the pixel values of each frame of the target video data and determines at least one dynamic region, it can mark the contour of each dynamic region; the image with the marked contours is then returned to the client for display. At the client, the user can select one or more dynamic regions according to the marked contours.
In the embodiments of the present application, the target video data is preferably video material with a fixed background, such as material shot with a fixed lens; it may also be material obtained in other ways, which the embodiments do not limit.
Preferably, in another embodiment of the present application, step 102 includes:
Sub-step A11: convert the target video data into sequence frames.
In the embodiments of the present application, for the target video data uploaded by the user, a video conversion function such as VideoToImage() can first be called to convert the target video into sequence frames. The specific conversion function may differ from system to system, and the embodiments do not limit it. In the sequence frames, the images are ordered by play time.
Sub-step A12: determine at least one dynamic region in the target video data according to the degree of overlap between pixel blocks at different pixel positions in the frame images.
In the embodiments of the present application, the preceding step has converted the video into sequence frames, and every frame of the sequence has the same resolution; if the target video data is 800*600, for example, every frame image in the sequence is also 800*600. A subject object moving in the video is, in each frame image, effectively a block of pixels, and the difference between that block's pixel values before and after the movement is small. The movement of the subject object through the video can therefore be understood as the pixel values at each position of the block before the movement being replaced, position for position, by the pixel values of the block after the movement. The embodiments can accordingly determine dynamic regions based on the degree of overlap between pixel blocks at different pixel positions; once a dynamic region is determined, the subject object it encloses is determined as well.
For example, suppose a shoe moving in the target video data occupies a pixel block of 1000 pixels, lying in region A1 in the first frame. After the shoe moves, it is displayed in the pixel block of region A2 in the next frame. The values of the 1000 pixels in the region A1 block and of the 1000 pixels in the region A2 block are then essentially the same.
Thus, by matching each pixel block of every frame image in turn against the later frame images, it can be determined in which frames the shoe appears and which pixel positions the moving pixel block occupies. Suppose, for instance, the shoe appears in 100 frames, in regions A1, A2, ..., A100 respectively; since each region is tied to its frame, combining the pixel positions of these 100 regions yields the pixel area covered by the shoe's movement. This determines the time dimension of the shoe's movement, from frame 1 to frame 100, and its spatial dimension, the pixel area it covers, and thereby the dynamic region of the shoe.
Of course, in practice the same object may change slightly at certain pixels as it moves, so the pixel values of the block containing the object may not be exactly identical across frames: with changes in lighting, the values of some pixels in the different blocks may vary. When the degree of overlap between the pixels of two blocks reaches a certain value, the two regions can be considered the same object at different positions, and the dynamic region can then be determined from those pixel blocks.
It should be noted that in the embodiments of the present application, a dynamic region can be determined from the overlap of pixel blocks; once determined, the region encloses the moving picture of the object.
It will be appreciated that if one or more overlapping pixel blocks occupy the same position region in every frame — for example, in an 800*600 video, a block that overlaps in every frame and always lies in the region {(0,0), (0,100), (100,0), (100,100)} — then that pixel block can be considered a static element.
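The block-matching idea of sub-step A12 — two pixel blocks are "the same object" when enough of their pixel values agree — can be sketched as below. The function name, the per-pixel tolerance, and the toy frames are assumptions for illustration; the patent does not fix the overlap measure.

```python
import numpy as np

def block_overlap(block_a, block_b, tol=10):
    """Fraction of pixel positions whose values agree within `tol`,
    tolerating small lighting-induced changes between frames."""
    return float(np.mean(np.abs(block_a.astype(int) - block_b.astype(int)) <= tol))

# A 10x10 "shoe" block in frame 1 at region A1, and the same block shifted
# to region A2 in frame 2 with a slight brightness change of +3.
frame1 = np.zeros((20, 30), dtype=np.uint8)
frame2 = np.zeros((20, 30), dtype=np.uint8)
shoe = np.arange(100, dtype=np.uint8).reshape(10, 10)
frame1[5:15, 0:10] = shoe        # region A1
frame2[5:15, 12:22] = shoe + 3   # region A2

score = block_overlap(frame1[5:15, 0:10], frame2[5:15, 12:22])
unrelated = block_overlap(frame1[5:15, 0:10],
                          np.full((10, 10), 200, dtype=np.uint8))
```

A high score (near 1.0) marks the two regions as the same moving object; an unrelated block scores near 0.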
Preferably, in another embodiment of the present application, step 102 includes:
Sub-step A21: convert the target video data into sequence frames.
Sub-step A22: for each pixel position in the target video data, determine the loopable pixel positions according to the degree of change of that position's pixel value across the frames.
In the embodiments of the present application, if the resolution of the target video data is 800*600, for example, then the resolution of every frame is 800*600, and the pixel value at a given pixel position changes from frame to frame as the video plays. The embodiments can therefore classify the pixel positions: where the image does not change, the pixel value of a position stays the same, and such positions belong to the still part of the picture; an object moving through the video, on the other hand, causes the pixel positions it moves across to change, and such positions can be classified as loopable pixel positions.
In practice, based on experimental results, pixel positions can be divided into three classes — static pixel positions, non-loopable pixel positions, and loopable pixel positions:
1. A static pixel position is one whose pixel value does not change from the first frame of the frame sequence to the end of the last frame.
A static pixel position can be understood as a position within the still part of the video image, which is why its pixel value does not change.
2. A non-loopable pixel position is one whose pixel value, from the first frame of the frame sequence to the end of the last frame, only ever increases or only ever decreases.
Experimental data show that moving objects essentially never pass through pixel positions of this kind.
3. A loopable pixel position is one whose pixel value, from the first frame to the end of the last frame, both increases and decreases.
Statistics on the pixel positions that moving objects in a video pass through show that such positions exhibit both increases and decreases in value; pixel positions showing both can therefore be taken as loopable positions, from which the dynamic regions can be determined. In the embodiments of the present application, an upper limit on the increase and a lower limit on the decrease of the pixel value can be set, for example an increase within 10 pixel values and a decrease within 10. If the variation is too large, it may come from an object introduced during shooting, and recognition in such cases may be inaccurate. The values of these limits can be set according to actual needs, and the embodiments place no restriction on them.
Following the above procedure, the embodiments determine from the degree of change of each pixel position's value across frames whether it is a loopable pixel position.
Of course, in the embodiments of the present application, classes 1 and 2 may be merged into one, the identification focusing mainly on the pixel positions of class 3.
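The three-way classification above follows directly from frame-to-frame differences. A minimal sketch, assuming the value series of one pixel position across the frame sequence is available as an array (the function name and class labels are ours):

```python
import numpy as np

def classify_pixel_position(values):
    """Classify one pixel position's value series over the frame sequence:
    'static' if the value never changes, 'non-loopable' if it only ever rises
    or only ever falls, 'loopable' if it both rises and falls."""
    diffs = np.diff(values.astype(int))
    rises, falls = bool(np.any(diffs > 0)), bool(np.any(diffs < 0))
    if not rises and not falls:
        return "static"
    if rises and falls:
        return "loopable"
    return "non-loopable"

still = classify_pixel_position(np.array([50, 50, 50]))    # background pixel
ramp = classify_pixel_position(np.array([50, 60, 70]))     # monotonic drift
moving = classify_pixel_position(np.array([50, 60, 40]))   # object passes through
```

The per-step change limits discussed above (e.g. within 10 pixel values) could be added as extra conditions on `diffs`.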
Sub-step A23: determine at least one dynamic region in the target video data based on the regions into which the loopable pixel positions are connected.
By the definition of loopable pixel positions above, an object moving through the video causes the values of the pixel positions it passes through to change, so the regions obtained by connecting the loopable pixel positions contain the dynamic regions; at least one dynamic region in the target video data can therefore be determined from the regions into which the loopable pixel positions are connected.
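Connecting the loopable positions into regions is a standard connected-component pass. A sketch using 4-connectivity (the patent does not specify the connectivity, so that choice is an assumption):

```python
import numpy as np
from collections import deque

def connected_regions(loopable):
    """4-connected components of the boolean loopable-position mask;
    each region is returned as a set of (row, col) pixel positions."""
    h, w = loopable.shape
    seen = np.zeros_like(loopable, dtype=bool)
    regions = []
    for r in range(h):
        for c in range(w):
            if loopable[r, c] and not seen[r, c]:
                region, queue = set(), deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    region.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and loopable[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions

mask = np.zeros((5, 8), dtype=bool)
mask[1:3, 1:3] = True   # one moving subject
mask[3, 6] = True       # an isolated noisy position
regions = connected_regions(mask)
```

Each returned region is a candidate dynamic region; isolated positions show up as tiny components.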
It should be noted that in the embodiments of the present application, a dynamic region has a time-dimension attribute and a space-dimension attribute; in other words, the dynamic region comprises a pixel area together with that area's start frame and loop duration, where the pixel area is the region through which its subject object moves over the whole video. When frames are marked with play times, the loop duration can be the time from the start frame in which the subject object appears to the end frame; when frames are numbered by play order, it can be the number of frames from the start frame's index to the end frame's.
Preferably, in another embodiment of the present application, sub-step A23 comprises:
Sub-step A231: obtain the temporal consistency parameter and the spatial consistency parameter of the loopable pixel positions.
In the embodiments of the present application, when computing the dynamic region of the subject object from the loopable pixel positions, the temporal consistency parameter and the spatial consistency parameter of the loopable pixel positions must first be obtained.
It should be noted that for each loopable pixel position A, the temporal consistency parameter can be computed starting from the first frame, taking the frame difference between each frame and the next — for example the difference of pixel values, though other quantities based on the pixel values are also possible — to obtain the temporal consistency of that position. In the embodiments, temporal consistency is computed across the different frames of one and the same loopable pixel position.
For each loopable pixel position A, the spatial consistency parameter can be computed from that position and its adjacent loopable pixel positions. In practice, a spatial parameter can first be computed per frame for position A and its adjacent loopable positions, and the spatial consistency of the position then computed from the per-frame spatial parameters.
Sub-step A232: determine the start frame and loop duration of each loopable pixel position according to its temporal consistency parameter and spatial consistency parameter.
The temporal and spatial consistency parameters of each loopable pixel position are then used as energy values and fed into a graph-cut algorithm, which determines the start frame and loop duration of each loopable pixel position. The embodiments place no restriction on the graph-cut algorithm; the above temporal and spatial consistency parameters can serve as part of its input.
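The patent defines the two parameters only loosely (frame differences for the temporal one, neighbour comparisons for the spatial one), so the energies below are illustrative assumptions rather than the claimed formulas; in both, smaller means more consistent:

```python
import numpy as np

def temporal_consistency(series):
    """Frame-difference energy at one loopable position: summed absolute
    pixel-value change between each frame and the next."""
    return float(np.sum(np.abs(np.diff(series.astype(int)))))

def spatial_consistency(frames, r, c):
    """Disagreement between a position and its 4-neighbours, averaged over
    the frame sequence. `frames` has shape (num_frames, height, width)."""
    vals = frames[:, r, c].astype(int)
    neigh = [frames[:, r + dr, c + dc].astype(int)
             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return float(np.mean([np.mean(np.abs(vals - n)) for n in neigh]))

# Tiny 3-frame, 3x3 sequence that is constant everywhere: both energies vanish.
frames = np.full((3, 3, 3), 7, dtype=np.uint8)
t_energy = temporal_consistency(frames[:, 1, 1])
s_energy = spatial_consistency(frames, 1, 1)
t_varying = temporal_consistency(np.array([7, 12, 7], dtype=np.uint8))
```

Energies of this kind would then be packed into the data/smoothness terms that a graph-cut solver minimizes.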
Sub-step A233: from the regions connected by the loopable pixel positions, selecting a region that satisfies a connected-domain condition as a pixel region of the dynamic region;
In the embodiment of the present application, the loopable pixel positions have been determined by the foregoing steps, but noise may be present among them. To avoid such noise, the embodiment of the present application selects, from the regions enclosed by the loopable pixel positions, those regions that satisfy a connected-domain condition as the pixel regions; each pixel region then serves as the spatial-dimension attribute of a dynamic region.
A connected domain can be understood as a region enclosed by a single boundary. In practical applications, the connected-domain condition may be set according to requirements such as Gaussian smoothing, hole filling, and morphological analysis. For example, the connected-domain condition may be: remove connected domains whose area is smaller than an area threshold. It should be noted that the area threshold may be set according to actual needs, and the embodiment of the present application does not limit it.
In practical applications, the video image may be binarized into a grey-scale image according to the loopable pixel positions and the remaining pixel positions. For the aforementioned video of 800*600 resolution, for example, the grey value of each loopable pixel position may be set to 255 and that of every other pixel position to 0, thereby producing a grey-scale image.
The dynamic regions can then be obtained by cutting out the regions whose grey value is 255 in the grey-scale image.
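The binarization and connected-domain selection described above can be illustrated as follows. This sketch assumes a boolean mask of loopable pixel positions and uses a plain BFS flood fill with 4-connectivity in place of a library routine; the area threshold stands in for the connected-domain condition, and all names are illustrative.

```python
import numpy as np
from collections import deque

def dynamic_regions(loopable_mask, min_area):
    """Turn a boolean mask of loopable pixel positions into a 255/0
    grey image, then keep only connected regions (4-connectivity) whose
    area reaches `min_area`. Returns (grey, regions), each region being
    a set of (y, x) pixel positions."""
    grey = np.where(loopable_mask, 255, 0).astype(np.uint8)
    h, w = grey.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for y in range(h):
        for x in range(w):
            if grey[y, x] == 255 and not seen[y, x]:
                # BFS flood fill over the 255-valued pixels
                queue, region = deque([(y, x)]), set()
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    region.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and grey[ny, nx] == 255 and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(region) >= min_area:  # connected-domain area condition
                    regions.append(region)
    return grey, regions
```

In practice a production system would likely use an optimized connected-components routine, but the effect is the same: small noisy components fall below the area threshold and are discarded.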
Sub-step A234: determining the start frame and loop duration of the dynamic region based on the start frame and loop duration of each pixel position in each pixel region.
Since a start frame and a loop duration have been computed for each loopable pixel position in the foregoing steps, and since one pixel region contains multiple loopable pixel positions whose start frames may differ and whose loop durations may also differ, the earliest start frame among the loopable pixel positions may be selected as the start frame of the dynamic region, and a loop duration covering the latest end frame of all the loopable pixel positions may be selected as the loop duration of the dynamic region. The start frame and loop duration then serve as the time-dimension attributes of the dynamic region.
For example, suppose loop region A contains loopable pixel positions 1, 2, ..., 10.
The start frame of loopable pixel position 1 is 2, with a loop duration of 40 frames. The start frame of loopable pixel positions 2-9 is 3, with a loop duration of 41 frames. The start frame of loopable pixel position 10 is 4, with a loop duration of 50 frames.
For the dynamic region corresponding to loop region A, frame 2 may then be chosen as the start frame, because it is the earliest. The end frame of loopable pixel position 1 is frame 2+40=42, the end frame of loopable pixel positions 2-9 is frame 3+41=44, and the end frame of loopable pixel position 10 is frame 4+50=54; the loop duration can therefore be computed from frame 54, i.e. 54-2=52 frames.
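The worked example above reduces to taking the earliest pixel start frame and the latest pixel end frame. A hypothetical helper capturing that rule (the function name is illustrative):

```python
def region_time_attributes(pixel_params):
    """Derive a dynamic region's time-dimension attributes.

    pixel_params: list of (start_frame, loop_duration) pairs, one per
    loopable pixel position in the region. The region's start frame is
    the earliest pixel start; its loop duration runs to the latest
    pixel end frame, so every pixel's cycle is covered."""
    start = min(s for s, _ in pixel_params)
    end = max(s + d for s, d in pixel_params)
    return start, end - start
```

Applied to the example (position 1: (2, 40); positions 2-9: (3, 41); position 10: (4, 50)), this yields start frame 2 and loop duration 52.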
Of course, if the loop duration is counted in another manner, it can be computed with the corresponding method; the embodiment of the present application does not limit it.
Step 103: receiving a target dynamic region determined by the user from the at least one dynamic region;
In practical applications, since one or more dynamic regions may be identified, the one or more dynamic regions may be sent to the user for selection; the user indicates which dynamic regions are needed, and the local dynamic image is then generated based on the dynamic regions the user needs.
With reference to Figure 1A, after identifying the dynamic regions, the server 20 may draw an outline around each dynamic region, select one frame image from the video, mark the outlines on it, and return it to the client 10 for display and selection by the user. In practical applications, the image carrying the dynamic-region outlines may of course be any frame image that contains the loop regions, onto which the dynamic-region outlines are added; the embodiment of the present application does not limit it.
It can be understood that, in the embodiment of the present application, since the system automatically identifies the dynamic regions of the subject objects, multiple dynamic regions may be identified, which may involve multiple subject objects. In practical applications, the embodiment of the present application may identify all identifiable dynamic regions. For the user, however, not all subject objects may need to be displayed dynamically. The embodiment of the present application therefore marks the dynamic-region outlines after identifying all dynamic regions and returns the image with the marked outlines to the user; in Figure 1B, for example, two dynamic-region outlines are available for the user to select.
It should be noted that the dynamic-region outlines may be added using the aforementioned binarization: the image is converted into a grey-scale image in which the loopable pixel positions are set to 255 and all other pixel positions to 0; a connected domain of 255-valued pixels is then selected, and the pixel positions at the edge of that connected domain are determined. After the aforementioned frame image is selected, red lines are added at the recorded edge pixel positions, yielding the dynamic-region outlines.
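The outline-marking procedure just described (binarize, find the edge of the 255-valued connected domain, draw red lines) might be sketched as follows, assuming the connected domain is already given as a boolean mask. Treating an edge pixel as a region pixel with at least one 4-neighbour outside the region is one reasonable reading of "the pixel positions at the edge"; the function name and colour handling are illustrative.

```python
import numpy as np

def outline_region(frame_rgb, region_mask, colour=(255, 0, 0)):
    """Draw the outline of a dynamic region on a copy of one frame.

    An edge pixel is a region pixel with at least one 4-neighbour
    outside the region. `colour` defaults to red, as in the example."""
    mask = region_mask.astype(bool)
    # Shift the mask in four directions; padding with False means
    # pixels on the image border count as outside neighbours.
    up    = np.pad(mask, ((1, 0), (0, 0)))[:-1, :]
    down  = np.pad(mask, ((0, 1), (0, 0)))[1:, :]
    left  = np.pad(mask, ((0, 0), (1, 0)))[:, :-1]
    right = np.pad(mask, ((0, 0), (0, 1)))[:, 1:]
    edge = mask & ~(up & down & left & right)
    out = frame_rgb.copy()
    out[edge] = colour  # paint the outline onto the copy
    return out
```

The original frame is left untouched, so the same frame can be annotated once per dynamic region.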
The user may then select one or more dynamic regions as target dynamic regions at the client, and the client 10 uploads the target dynamic regions selected by the user to the server.
It should be noted that, under an architecture in which video processing is performed locally at the client, the client may directly display the at least one dynamic region once it is identified, and the user may directly select the target dynamic region locally at the client.
Step 104: generating, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region.
In the embodiment of the present application, since the user has determined one or more target dynamic regions, the local dynamic image for the target dynamic regions can be generated based on them.
Of course, in practical applications, when the user has selected multiple target dynamic regions, a single local dynamic image may be generated from all of the selected target dynamic regions, so that the dynamics of all target dynamic regions are shown in one local dynamic image. Alternatively, one local dynamic image may be generated for each target dynamic region selected by the user, each showing the dynamics of a single target dynamic region. Other combinations are of course also possible; the embodiment of the present application does not limit them.
Preferably, step 104 includes:
Sub-step 1041: determining the subsequence frames corresponding to the target dynamic region;
In practical applications, as in the previous example, a start frame and loop duration are computed for each loop region, so the subsequence frames used for generating the local dynamic image can be determined from the frame sequence of the video data according to the start frame and loop duration. When there are multiple loop regions, since the start frames and loop durations of the individual loop regions differ from one another, a start frame and loop duration covering all the loop regions may be chosen.
Preferably, sub-step 1041 includes:
Step A31: determining the subsequence frames of the target dynamic region according to the start frame and loop duration of the target dynamic region.
Take, for example, the two dynamic regions in Figure 1B, each being the motion region of one shoe, and suppose the user selects both. Suppose the start frame of dynamic region A, corresponding to one shoe, is 10, with a loop duration of 50 frames, and the start frame of dynamic region B, corresponding to the other shoe, is 12, with a loop duration of 52 frames.
If the two dynamic regions are placed into a single local dynamic image, the start frame of the local dynamic image can be set to 10 with a loop duration of 54 frames, so the subsequence frames 10-64 are obtained.
If the two dynamic regions are instead each placed into their own local dynamic image, the subsequence frames can of course be obtained separately according to the corresponding start frame and loop duration.
Sub-step 1042: among the subsequence frames, replacing the background image of each subsequent frame after the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image;
For example, for frames 10-64 of the target video data of Figure 1B, with frame 10 as the start frame, the background outside the dynamic-region outlines of frame 10 is a still image, and frame 10 serves as frame 1 of the new local dynamic image. The background outside the dynamic-region outlines of frame 11 of the target video data is replaced with the background outside the dynamic-region outlines of frame 10, so that the replaced image has the same background as frame 10; the replaced image then serves as frame 2 of the new local dynamic image. The other frames are processed in the same manner, until the background of frame 64 of the target video data has been replaced, yielding frame 55 of the new local dynamic image.
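The background-replacement step in sub-step 1042 can be illustrated with a short NumPy sketch. It assumes RGB frames of shape (T, H, W, 3) and a boolean mask of the target dynamic region; in every frame from the start frame onward, pixels outside the mask are overwritten with the start frame's background, which is the per-frame substitution described above. The function name is an assumption for illustration.

```python
import numpy as np

def freeze_background(frames, region_mask, start):
    """From `start` on, keep motion only inside the dynamic region:
    every later frame's pixels outside `region_mask` are replaced by
    the start frame's background, yielding the cinemagraph frames."""
    base = frames[start]  # its background stays fixed in every output frame
    out = []
    for f in frames[start:]:
        # inside the region keep this frame; outside, use the base frame
        out.append(np.where(region_mask[..., None], f, base))
    return out
```

For the example of frames 10-64, `freeze_background(frames, mask, 10)` would return the 55 frames of the new local dynamic image.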
It can be understood that other cases are handled in the same manner, and the embodiment of the present application does not limit them.
Sub-step 1043: generating the local dynamic image for the target dynamic region based on the start frame and the subsequent frames whose backgrounds have been replaced.
Following the previous example, frames 1 through 55 of the new local dynamic image are combined in sequence to produce the local dynamic image of the two shoes.
In practical applications, the local dynamic image may remain in a video format, or the 55 frame images may be used to generate a local dynamic image in GIF (Graphics Interchange Format) format. The specific format of the local dynamic image is not limited by the embodiment of the present application.
It can be understood that, when two local dynamic images need to be generated for the two shoes respectively, one local dynamic image may be generated from each set of corresponding subsequence frames in the manner described above. The embodiment of the present application does not limit it.
Of course, after the local dynamic image is generated, the user may choose to export it, click a share button to share it to some application, or upload it to his or her own page. The embodiment of the present application does not limit it.
In the embodiment of the present application, the pixel values of each frame of the target video data are analyzed so that at least one dynamic region in the target video data is determined intelligently, and a local dynamic image for the target dynamic region among the at least one dynamic region can then be generated automatically. Motion regions are thus identified automatically for generating local dynamic images, which reduces user operations. Since a dynamic region is a region in which a subject object moves in the video, the precision with which the dynamic regions of subject objects are selected is also improved, reducing labor and time costs. Moreover, because the system identifies the dynamic regions automatically, cases in which a less distinct subject object is difficult to identify by eye are avoided, the requirement on separating the subject objects of the video is lowered, and the demands on the video material are thereby reduced. In addition, the embodiment of the present application can automatically identify multiple dynamic regions in the target video data for the user to select from, and can then automatically generate the local dynamic images the user needs according to the user's demand.
Referring to Fig. 2, a flow chart of the steps of another embodiment of a local dynamic image generation method of the present application is shown, including:
Step 201: obtaining target video data;
In the embodiment of the present application, the video data may be obtained in various manners. When the entity performing the analysis of the video data is a server, the user may upload the target video data through a client.
When the entity performing the analysis of the video data is the user's device, the user may import the target video data into the device.
Of course, the specific manner of obtaining the target video data is not limited by the embodiment of the present application.
Step 202: analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data;
This step is similar to step 102 of the previous embodiment and is not detailed here.
Preferably, step 202 includes:
Sub-step B11: converting the target video data into sequence frames;
Sub-step B12: determining the at least one dynamic region in the target video data according to the degree of overlap between pixel blocks at different pixel positions in each frame image.
For sub-steps B11-B12, refer to sub-steps A11-A12 of the foregoing embodiment; they are not detailed here.
Preferably, step 202 includes:
Sub-step B21: converting the target video data into sequence frames;
Sub-step B22: for each pixel position in the target video data, determining loopable pixel positions according to the degree of variation of the pixel value at that pixel position across the frames;
Sub-step B23: determining the at least one dynamic region in the target video data based on the regions connected by the loopable pixel positions.
For sub-steps B21-B23, refer to sub-steps A21-A23 of the foregoing embodiment; they are not detailed here.
Step 203, from least one described dynamic area, target dynamic region belonging to target subject object is determined;
In practical applications, it then can be selected for user, then by least one above-mentioned dynamic area is marked profile Target dynamic region is determined according to the user's choice.
The main object in identification dynamic area can also be removed by image recognition mode, if be the mesh of user demand Main object is marked, if it is, determining that the dynamic area is target dynamic region.For example, user can pre-select " shoes " two Whether word, then system can obtain the feature of " shoes " from database, then go in the image for identifying each dynamic area to go out Now a little feature, if there is then selecting the dynamic area for target dynamic region.
Certainly, specifically determine target subject object belonging to target dynamic region can there are many, the embodiment of the present application is not It is limited.
Step 204, the local dynamic station image for being directed to the target dynamic region is generated.
This step is similar with the step 204 of previous embodiment, and this will not be detailed here.
Preferably, step 204 includes:
Sub-step 2041: determining the subsequence frames corresponding to the target dynamic region;
Sub-step 2042: among the subsequence frames, replacing the background image of each subsequent frame after the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image;
Sub-step 2043: generating the local dynamic image for the target dynamic region based on the start frame and the subsequent frames whose backgrounds have been replaced.
For sub-steps 2041-2043, refer to sub-steps 1041-1043 of the foregoing embodiment; they are not detailed here.
In the embodiment of the present application, the pixel values of each frame of the target video data are analyzed so that at least one dynamic region in the target video data is determined intelligently, and a local dynamic image for the target dynamic region among the at least one dynamic region can then be generated automatically. Motion regions are thus identified automatically for generating local dynamic images, which reduces user operations. Since a dynamic region is a region in which a subject object moves in the video, the precision with which the dynamic regions of subject objects are selected is also improved, reducing labor and time costs. Moreover, because the system identifies the dynamic regions automatically, cases in which a less distinct subject object is difficult to identify by eye are avoided, the requirement on separating the subject objects of the video is lowered, and the demands on the video material are thereby reduced. In addition, the embodiment of the present application can automatically identify the dynamic regions of the subject objects demanded by the user, which further reduces the user's labor and time costs.
Referring to Fig. 3, a flow chart of the steps of an embodiment of an image processing method of the present application is shown, including:
Step 301: obtaining target video data;
For this step, refer to the description of the aforementioned step 101 or 201; it is not detailed here.
Step 302: analyzing the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data.
For this step, refer to the description of the aforementioned step 102; it is not detailed here.
Preferably, step 302 includes:
Sub-step C11: converting the target video data into sequence frames;
Sub-step C12: determining the at least one dynamic region in the target video data according to the degree of overlap between pixel blocks at different pixel positions in each frame image.
For sub-steps C11-C12, refer to sub-steps A11-A12 of the foregoing embodiment; they are not detailed here.
Preferably, step 302 includes:
Sub-step C21: converting the target video data into sequence frames;
Sub-step C22: for each pixel position in the target video data, determining loopable pixel positions according to the degree of variation of the pixel value at that pixel position across the frames;
Sub-step C23: determining the at least one dynamic region in the target video data based on the regions connected by the loopable pixel positions.
For sub-steps C21-C23, refer to sub-steps A21-A23 of the foregoing embodiment; they are not detailed here.
Preferably, sub-step C23 includes:
Sub-step C231: obtaining the temporal consistency parameter and spatial consistency parameter of each loopable pixel position;
Sub-step C232: determining the start frame and loop duration of each loopable pixel position according to the temporal consistency parameter and spatial consistency parameter of each loopable pixel position;
Sub-step C233: from the regions connected by the loopable pixel positions, selecting a region that satisfies the connected-domain condition as a pixel region of the dynamic region;
Sub-step C234: determining the start frame and loop duration of the dynamic region based on the start frame and loop duration of each pixel position in each pixel region.
For sub-steps C231-C234, refer to sub-steps A231-A234 of the foregoing embodiment; they are not detailed here.
In the embodiment of the present application, the pixel values of each frame of the target video data are analyzed so that at least one dynamic region in the target video data is determined intelligently, and the target dynamic region among the at least one dynamic region is then processed further. This reduces the process of the user drawing outlines by hand, improves the accuracy of the dynamic regions, and reduces labor and time costs. Moreover, because the system identifies the dynamic regions automatically, cases in which a less distinct subject object is difficult to identify by eye are avoided, the requirement on separating the subject objects of the video is lowered, and the demands on the video material are thereby reduced.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art should understand that the embodiments of the present application are not limited by the described action sequence, because according to the embodiments of the present application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to Fig. 4, a structural block diagram of an embodiment of a local dynamic image generation apparatus of the present application is shown, which may specifically include the following modules:
a first video obtaining module 401, configured to obtain target video data uploaded by a user;
a dynamic region analysis module 402, configured to analyze the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data;
a first target determination module 403, configured to receive a target dynamic region determined by the user from the at least one dynamic region; and
a local image generation module 404, configured to generate, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region.
Preferably, the dynamic region analysis module includes:
a video conversion sub-module, configured to convert the target video data into sequence frames; and
a first dynamic region analysis sub-module, configured to determine the at least one dynamic region in the target video data according to the degree of overlap between pixel blocks at different pixel positions in each frame image.
Preferably, the dynamic region analysis module includes:
a video conversion sub-module, configured to convert the target video data into sequence frames;
a loopable pixel position determination sub-module, configured to determine, for each pixel position in the target video data, loopable pixel positions according to the degree of variation of the pixel value at that pixel position across the frames; and
a second dynamic region analysis sub-module, configured to determine the at least one dynamic region in the target video data based on the regions connected by the loopable pixel positions.
Preferably, the second dynamic region analysis sub-module includes:
a consistency parameter obtaining unit, configured to obtain the temporal consistency parameter and spatial consistency parameter of each loopable pixel position;
a pixel parameter determination unit, configured to determine the start frame and loop duration of each loopable pixel position according to the temporal consistency parameter and spatial consistency parameter of each loopable pixel position;
a pixel region determination unit, configured to select, from the regions connected by the loopable pixel positions, a region satisfying the connected-domain condition as a pixel region of the dynamic region; and
a frame parameter unit, configured to determine the start frame and loop duration of the dynamic region based on the start frame and loop duration of each pixel position in each pixel region.
Preferably, the local image generation module includes:
a subsequence frame determination sub-module, configured to determine the subsequence frames corresponding to the target dynamic region;
a replacement sub-module, configured to replace, among the subsequence frames, the background image of each subsequent frame after the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image; and
a first generation sub-module, configured to generate the local dynamic image for the target dynamic region based on the start frame and the subsequent frames whose backgrounds have been replaced.
Preferably, the subsequence frame determination sub-module includes:
a subsequence frame determination unit, configured to determine the subsequence frames of the target dynamic region according to the start frame and loop duration of the target dynamic region.
In the embodiment of the present application, the pixel values of each frame of the target video data are analyzed so that at least one dynamic region in the target video data is determined intelligently, and a local dynamic image for the target dynamic region among the at least one dynamic region can then be generated automatically. Motion regions are thus identified automatically for generating local dynamic images, which reduces user operations. Since a dynamic region is a region in which a subject object moves in the video, the precision with which the dynamic regions of subject objects are selected is also improved, reducing labor and time costs. Moreover, because the system identifies the dynamic regions automatically, cases in which a less distinct subject object is difficult to identify by eye are avoided, the requirement on separating the subject objects of the video is lowered, and the demands on the video material are thereby reduced. In addition, the embodiment of the present application can automatically identify multiple dynamic regions in the target video data for the user to select from, and can then automatically generate the local dynamic images the user needs according to the user's demand.
Referring to Fig. 5, a structural block diagram of another embodiment of a local dynamic image generation apparatus of the present application is shown, which may specifically include the following modules:
a second video obtaining module 501, configured to obtain target video data;
a dynamic region analysis module 502, configured to analyze the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data;
a second target determination module 503, configured to determine, from the at least one dynamic region, the target dynamic region to which a target subject object belongs; and
a local image generation module 504, configured to generate a local dynamic image for the target dynamic region.
Preferably, the dynamic region analysis module includes:
a video conversion sub-module, configured to convert the target video data into sequence frames; and
a first dynamic region analysis sub-module, configured to determine the at least one dynamic region in the target video data according to the degree of overlap between pixel blocks at different pixel positions in each frame image.
Preferably, the dynamic region analysis module includes:
a video conversion sub-module, configured to convert the target video data into sequence frames;
a loopable pixel position determination sub-module, configured to determine, for each pixel position in the target video data, loopable pixel positions according to the degree of variation of the pixel value at that pixel position across the frames; and
a second dynamic region analysis sub-module, configured to determine the at least one dynamic region in the target video data based on the regions connected by the loopable pixel positions.
Preferably, the local image generation module includes:
a subsequence frame determination sub-module, configured to determine the subsequence frames corresponding to the target dynamic region;
a replacement sub-module, configured to replace, among the subsequence frames, the background image of each subsequent frame after the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image; and
a first generation sub-module, configured to generate the local dynamic image for the target dynamic region based on the start frame and the subsequent frames whose backgrounds have been replaced.
In the embodiment of the present application, the pixel values of each frame of the target video data are analyzed so that at least one dynamic region in the target video data is determined intelligently, and a local dynamic image for the target dynamic region among the at least one dynamic region can then be generated automatically. Motion regions are thus identified automatically for generating local dynamic images, which reduces user operations. Since a dynamic region is a region in which a subject object moves in the video, the precision with which the dynamic regions of subject objects are selected is also improved, reducing labor and time costs. Moreover, because the system identifies the dynamic regions automatically, cases in which a less distinct subject object is difficult to identify by eye are avoided, the requirement on separating the subject objects of the video is lowered, and the demands on the video material are thereby reduced. In addition, the embodiment of the present application can automatically identify the dynamic regions of the subject objects demanded by the user, which further reduces the user's labor and time costs.
Referring to Fig. 6, a structural block diagram of an embodiment of an image processing apparatus of the present application is shown, which may specifically include the following modules:
a second video obtaining module 601, configured to obtain target video data; and
a dynamic region analysis module 602, configured to analyze the pixel values of each frame of the target video data to determine at least one dynamic region in the target video data.
Preferably, the dynamic region analysis module includes:
a video conversion sub-module, configured to convert the target video data into sequence frames; and
a first dynamic region analysis sub-module, configured to determine the at least one dynamic region in the target video data according to the degree of overlap between pixel blocks at different pixel positions in each frame image.
Preferably, the dynamic region analysis module includes:
a video conversion sub-module, configured to convert the target video data into sequence frames;
a loopable pixel position determination sub-module, configured to determine, for each pixel position in the target video data, loopable pixel positions according to the degree of variation of the pixel value at that pixel position across the frames; and
a second dynamic region analysis sub-module, configured to determine the at least one dynamic region in the target video data based on the regions connected by the loopable pixel positions.
Preferably, the second dynamic region analysis sub-module includes:
a consistency parameter obtaining unit, configured to obtain the temporal consistency parameter and spatial consistency parameter of each loopable pixel position;
a pixel parameter determination unit, configured to determine the start frame and loop duration of each loopable pixel position according to the temporal consistency parameter and spatial consistency parameter of each loopable pixel position;
a pixel region determination unit, configured to select, from the regions connected by the loopable pixel positions, a region satisfying the connected-domain condition as a pixel region of the dynamic region; and
a frame parameter unit, configured to determine the start frame and loop duration of the dynamic region based on the start frame and loop duration of each pixel position in each pixel region.
In the embodiments of the present application, the pixel values of each frame of the target video data are analyzed so that at least one dynamic area in the target video data is determined automatically, and a target dynamic region is then selected from the at least one dynamic area. This removes the need for the user to outline the animated region by hand, improves the accuracy of the dynamic area, and reduces labor and time costs. Moreover, because the dynamic areas are identified automatically by the system, main objects too indistinct to be identified by eye are no longer missed; the requirement on separating the main object from the video is lower, so the demands on the video material are reduced.
An embodiment of the present application further provides a non-volatile readable storage medium in which one or more modules (programs) are stored. When the one or more modules are applied to a device, the device can be caused to execute the instructions of the method steps in the embodiments of the present application.
Fig. 7 is a schematic diagram of the hardware structure of a device provided by another embodiment of the present application. As shown in Fig. 7, the device of this embodiment includes a processor 81 and a memory 82.
The processor 81 executes the computer program code stored in the memory 82, thereby implementing the local dynamic image generation methods of Fig. 1 to Fig. 4 in the above embodiments.
The memory 82 is configured to store various types of data to support operation on the device, including instructions of any application or method operated on the device, such as messages, pictures, and videos. The memory 82 may include a random access memory (RAM) and may further include a non-volatile memory, for example at least one magnetic disk storage.
Optionally, the processor 81 is arranged in a processing component 80. The device may further include a communication component 83, a power supply component 84, a multimedia component 85, an audio component 86, an input/output interface 87, and/or a sensor component 88. The components actually included in the device may be set according to actual demand, which is not limited in this embodiment.
The processing component 80 generally controls the overall operation of the device. The processing component 80 may include one or more processors 81 to execute instructions, so as to complete all or part of the steps of the methods of Fig. 1 to Fig. 4 above. In addition, the processing component 80 may include one or more modules to facilitate interaction between the processing component 80 and the other components. For example, the processing component 80 may include a multimedia module to facilitate interaction between the multimedia component 85 and the processing component 80.
The power supply component 84 provides electric power for the various components of the device. The power supply component 84 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing electric power for the device.
The multimedia component 85 includes a display screen that provides an output interface between the device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure associated with the touch or slide operation.
The audio component 86 is configured to output and/or input audio signals. For example, the audio component 86 includes a microphone (MIC). When the device is in an operation mode, such as a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 82 or sent via the communication component 83. In some embodiments, the audio component 86 further includes a loudspeaker for outputting audio signals.
The input/output interface 87 provides an interface between the processing component 80 and peripheral interface modules. The peripheral interface modules may be click wheels, buttons, and the like. The buttons may include, but are not limited to, a volume button, a start button, and a locking button.
The sensor component 88 includes one or more sensors for providing status assessments of various aspects of the device. For example, the sensor component 88 may detect the open/closed state of the device and the relative positioning of components, and may detect the presence or absence of contact between the user and the device. The sensor component 88 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the device. In some embodiments, the sensor component 88 may further include a camera or the like.
The communication component 83 is configured to facilitate wired or wireless communication between the device and other devices. The device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the device may include a SIM card slot for inserting a SIM card, so that the device can log in to a GPRS network and establish communication with a server through the Internet.
It can be seen from the above that the communication component 83, the audio component 86, the input/output interface 87, and the sensor component 88 involved in the embodiment of Fig. 7 can serve as implementations of an input device.
In a device of this embodiment, the processor is configured to: obtain target video data uploaded by a user; analyze the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data; receive a target dynamic region determined by the user from the at least one dynamic area; and generate, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region. Alternatively, the processor is configured to: obtain target video data; analyze the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data; determine, from the at least one dynamic area, a target dynamic region to which a target subject object belongs; and generate a local dynamic image for the target dynamic region. Alternatively, the processor is configured to: obtain target video data; and analyze the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data.
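The final generation step — keeping only the target dynamic region moving while everything else stays fixed — can be sketched roughly as follows. The function and parameter names are illustrative, not from the patent; the mask is assumed to come from a prior dynamic area analysis step:

```python
import numpy as np

def local_dynamic_frames(frames, region_mask, start=0, loop_len=None):
    """Freeze everything outside the target dynamic region to the start
    frame, so that only the region keeps moving across the loop.

    frames: array of shape (T, H, W) (grayscale for simplicity).
    region_mask: boolean (H, W) array marking the target dynamic region.
    start / loop_len: play the role of the start frame and loop duration.
    """
    if loop_len is None:
        loop_len = len(frames) - start
    base = frames[start]
    out = []
    for t in range(start, start + loop_len):
        frame = frames[t].copy()
        frame[~region_mask] = base[~region_mask]  # background <- start frame
        out.append(frame)
    return np.stack(out)
```

The returned frame stack would then be handed to an animated-image encoder (e.g. GIF or WebP) to produce the final local dynamic image.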
As the device embodiments are basically similar to the method embodiments, they are described relatively simply; for relevant parts, refer to the description of the method embodiments.
The embodiments in this specification are described in a progressive manner. Each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction apparatus, and the instruction apparatus realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic creative concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present application.
Finally, it should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements intrinsic to the process, method, article, or device. In the absence of further restrictions, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
A local dynamic image generation method, a local dynamic image generation device, an image processing method, and an image processing device provided by the present application have been described in detail above. Specific examples are used herein to expound the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core concept. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of the present application. In conclusion, the contents of this specification should not be construed as limiting the present application.

Claims (34)

1. A local dynamic image generation method, characterized by comprising:
obtaining target video data uploaded by a user;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data;
receiving a target dynamic region determined by the user from the at least one dynamic area; and
generating, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region.
2. The method according to claim 1, characterized in that the step of analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data comprises:
converting the target video data into a sequence of frames; and
determining at least one dynamic area in the target video data according to the degree of overlap between pixel blocks belonging to different pixel positions in each frame image.
3. The method according to claim 1, characterized in that the step of analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data comprises:
converting the target video data into a sequence of frames;
determining, for each pixel position in the target video data, whether the pixel position is loopable according to the degree of variation of its pixel value across the frames; and
determining at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions.
4. The method according to claim 3, characterized in that the step of determining at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions comprises:
obtaining a temporal consistency parameter and a spatial consistency parameter of each loopable pixel position;
determining a start frame and a loop duration for each loopable pixel position according to its temporal consistency parameter and spatial consistency parameter;
selecting, from the regions formed by connected loopable pixel positions, a region satisfying a connected-component condition as the pixel region of the dynamic area; and
determining the start frame and loop duration of the dynamic area based on the start frames and loop durations of the pixel positions in the pixel region.
5. The method according to claim 1 or 4, characterized in that the step of generating, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region comprises:
determining the sub-sequence of frames corresponding to the target dynamic region;
among the sub-sequence of frames, replacing the background image of each frame subsequent to the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image; and
generating, based on the start frame and the subsequent frames whose background images have been replaced, a local dynamic image for the target dynamic region.
6. The method according to claim 5, characterized in that the step of determining the sub-sequence of frames corresponding to the target dynamic region comprises:
determining the sub-sequence of frames of the target dynamic region according to the start frame and loop duration of the target dynamic region.
7. A local dynamic image generation method, characterized by comprising:
obtaining target video data;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data;
determining, from the at least one dynamic area, a target dynamic region to which a target subject object belongs; and
generating a local dynamic image for the target dynamic region.
8. The method according to claim 7, characterized in that the step of analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data comprises:
converting the target video data into a sequence of frames; and
determining at least one dynamic area in the target video data according to the degree of overlap between pixel blocks belonging to different pixel positions in each frame image.
9. The method according to claim 8, characterized in that the step of analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data comprises:
converting the target video data into a sequence of frames;
determining, for each pixel position in the target video data, whether the pixel position is loopable according to the degree of variation of its pixel value across the frames; and
determining at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions.
10. The method according to claim 7, characterized in that the step of generating a local dynamic image for the target dynamic region comprises:
determining the sub-sequence of frames corresponding to the target dynamic region;
among the sub-sequence of frames, replacing the background image of each frame subsequent to the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image; and
generating, based on the start frame and the subsequent frames whose background images have been replaced, a local dynamic image for the target dynamic region.
11. An image processing method, characterized by comprising:
obtaining target video data; and
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data.
12. The method according to claim 11, characterized in that the step of analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data comprises:
converting the target video data into a sequence of frames; and
determining at least one dynamic area in the target video data according to the degree of overlap between pixel blocks belonging to different pixel positions in each frame image.
13. The method according to claim 12, characterized in that the step of analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data comprises:
converting the target video data into a sequence of frames;
determining, for each pixel position in the target video data, whether the pixel position is loopable according to the degree of variation of its pixel value across the frames; and
determining at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions.
14. The method according to claim 13, characterized in that the step of determining at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions comprises:
obtaining a temporal consistency parameter and a spatial consistency parameter of each loopable pixel position;
determining a start frame and a loop duration for each loopable pixel position according to its temporal consistency parameter and spatial consistency parameter;
selecting, from the regions formed by connected loopable pixel positions, a region satisfying a connected-component condition as the pixel region of the dynamic area; and
determining the start frame and loop duration of the dynamic area based on the start frames and loop durations of the pixel positions in the pixel region.
15. A local dynamic image generation device, characterized by comprising:
a first video acquisition module, configured to obtain target video data uploaded by a user;
a dynamic area analysis module, configured to analyze the pixel values of each frame of the target video data and determine at least one dynamic area in the target video data;
a first target determination module, configured to receive a target dynamic region determined by the user from the at least one dynamic area; and
a local image generation module, configured to generate, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region.
16. The device according to claim 15, characterized in that the dynamic area analysis module comprises:
a video conversion submodule, configured to convert the target video data into a sequence of frames; and
a first dynamic area analysis submodule, configured to determine at least one dynamic area in the target video data according to the degree of overlap between pixel blocks belonging to different pixel positions in each frame image.
17. The device according to claim 15, characterized in that the dynamic area analysis module comprises:
a video conversion submodule, configured to convert the target video data into a sequence of frames;
a loopable pixel position determination submodule, configured to determine, for each pixel position in the target video data, whether the pixel position is loopable according to the degree of variation of its pixel value across the frames; and
a second dynamic area analysis submodule, configured to determine at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions.
18. The device according to claim 17, characterized in that the second dynamic area analysis submodule comprises:
a consistency parameter acquisition unit, configured to obtain a temporal consistency parameter and a spatial consistency parameter of each loopable pixel position;
a pixel parameter determination unit, configured to determine a start frame and a loop duration for each loopable pixel position according to its temporal consistency parameter and spatial consistency parameter;
a pixel region determination unit, configured to select, from the regions formed by connected loopable pixel positions, a region satisfying a connected-component condition as the pixel region of the dynamic area; and
a frame parameter unit, configured to determine the start frame and loop duration of the dynamic area based on the start frames and loop durations of the pixel positions in the pixel region.
19. The device according to claim 15 or 18, characterized in that the local image generation module comprises:
a sub-sequence frame determination submodule, configured to determine the sub-sequence of frames corresponding to the target dynamic region;
a replacement submodule, configured to replace, among the sub-sequence of frames, the background image of each frame subsequent to the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image; and
a first generation submodule, configured to generate, based on the start frame and the subsequent frames whose background images have been replaced, a local dynamic image for the target dynamic region.
20. The device according to claim 19, characterized in that the sub-sequence frame determination submodule comprises:
a sub-sequence frame determination unit, configured to determine the sub-sequence of frames of the target dynamic region according to the start frame and loop duration of the target dynamic region.
21. A local dynamic image generation device, characterized by comprising:
a second video acquisition module, configured to obtain target video data;
a dynamic area analysis module, configured to analyze the pixel values of each frame of the target video data and determine at least one dynamic area in the target video data;
a second target determination module, configured to determine, from the at least one dynamic area, a target dynamic region to which a target subject object belongs; and
a local image generation module, configured to generate a local dynamic image for the target dynamic region.
22. The device according to claim 21, characterized in that the dynamic area analysis module comprises:
a second video conversion submodule, configured to convert the target video data into a sequence of frames; and
a first dynamic area analysis submodule, configured to determine at least one dynamic area in the target video data according to the degree of overlap between pixel blocks belonging to different pixel positions in each frame image.
23. The device according to claim 21, characterized in that the dynamic area analysis module comprises:
a video conversion submodule, configured to convert the target video data into a sequence of frames;
a loopable pixel position determination submodule, configured to determine, for each pixel position in the target video data, whether the pixel position is loopable according to the degree of variation of its pixel value across the frames; and
a second dynamic area analysis submodule, configured to determine at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions.
24. The device according to claim 21, characterized in that the local image generation module comprises:
a sub-sequence frame determination submodule, configured to determine the sub-sequence of frames corresponding to the target dynamic region;
a replacement submodule, configured to replace, among the sub-sequence of frames, the background image of each frame subsequent to the start frame with the background image of the start frame, the background image being the image outside the target dynamic region in each frame image; and
a first generation submodule, configured to generate, based on the start frame and the subsequent frames whose background images have been replaced, a local dynamic image for the target dynamic region.
25. An image processing apparatus, characterized by comprising:
a video acquisition module, configured to obtain target video data; and
a dynamic area analysis module, configured to analyze the pixel values of each frame of the target video data and determine at least one dynamic area in the target video data.
26. The apparatus according to claim 25, characterized in that the dynamic area analysis module comprises:
a video conversion submodule, configured to convert the target video data into a sequence of frames; and
a first dynamic area analysis submodule, configured to determine at least one dynamic area in the target video data according to the degree of overlap between pixel blocks belonging to different pixel positions in each frame image.
27. The apparatus according to claim 25, characterized in that the dynamic area analysis module comprises:
a video conversion submodule, configured to convert the target video data into a sequence of frames;
a loopable pixel position determination submodule, configured to determine, for each pixel position in the target video data, whether the pixel position is loopable according to the degree of variation of its pixel value across the frames; and
a second dynamic area analysis submodule, configured to determine at least one dynamic area in the target video data based on the regions formed by connected loopable pixel positions.
28. The apparatus according to claim 27, characterized in that the second dynamic area analysis submodule comprises:
a consistency parameter acquisition unit, configured to obtain a temporal consistency parameter and a spatial consistency parameter of each loopable pixel position;
a pixel parameter determination unit, configured to determine a start frame and a loop duration for each loopable pixel position according to its temporal consistency parameter and spatial consistency parameter;
a pixel region determination unit, configured to select, from the regions formed by connected loopable pixel positions, a region satisfying a connected-component condition as the pixel region of the dynamic area; and
a frame parameter unit, configured to determine the start frame and loop duration of the dynamic area based on the start frames and loop durations of the pixel positions in the pixel region.
29. A device, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the following steps:
obtaining target video data uploaded by a user;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data;
receiving a target dynamic region determined by the user from the at least one dynamic area; and
generating, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region.
30. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the following steps:
obtaining target video data uploaded by a user;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data;
receiving a target dynamic region determined by the user from the at least one dynamic area; and
generating, based on the target dynamic region determined by the user, a local dynamic image for the target dynamic region.
31. A device, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the following steps:
obtaining target video data;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data;
determining, from the at least one dynamic area, a target dynamic region to which a target subject object belongs; and
generating a local dynamic image for the target dynamic region.
32. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the following steps:
obtaining target video data;
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data;
determining, from the at least one dynamic area, a target dynamic region to which a target subject object belongs; and
generating a local dynamic image for the target dynamic region.
33. A device, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the following steps:
obtaining target video data; and
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data.
34. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the following steps:
obtaining target video data; and
analyzing the pixel values of each frame of the target video data to determine at least one dynamic area in the target video data.
CN201710939457.2A 2017-09-30 2017-09-30 Local dynamic image generation method and device Active CN109600544B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201710939457.2A CN109600544B (en) 2017-09-30 2017-09-30 Local dynamic image generation method and device
TW107120687A TW201915946A (en) 2017-09-30 2018-06-15 Local dynamic image generation method and device
PCT/CN2018/106633 WO2019062631A1 (en) 2017-09-30 2018-09-20 Local dynamic image generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710939457.2A CN109600544B (en) 2017-09-30 2017-09-30 Local dynamic image generation method and device

Publications (2)

Publication Number Publication Date
CN109600544A true CN109600544A (en) 2019-04-09
CN109600544B CN109600544B (en) 2021-11-23

Family

ID=65900674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710939457.2A Active CN109600544B (en) 2017-09-30 2017-09-30 Local dynamic image generation method and device

Country Status (3)

Country Link
CN (1) CN109600544B (en)
TW (1) TW201915946A (en)
WO (1) WO2019062631A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110324663A * 2019-07-01 2019-10-11 北京奇艺世纪科技有限公司 Dynamic image generation method and device, electronic equipment and storage medium
CN112995533A * 2021-02-04 2021-06-18 上海哔哩哔哩科技有限公司 Video production method and device
CN114363697A * 2022-01-06 2022-04-15 上海哔哩哔哩科技有限公司 Video file generation and playing method and device
CN114363697B * 2022-01-06 2024-04-26 上海哔哩哔哩科技有限公司 Video file generation and playing method and device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179159B (en) * 2019-12-31 2024-02-20 北京金山云网络技术有限公司 Method and device for eliminating target image in video, electronic equipment and storage medium
CN111598947B (en) * 2020-04-03 2024-02-20 上海嘉奥信息科技发展有限公司 Method and system for automatically identifying patient position by identification features
CN111753679B (en) * 2020-06-10 2023-11-24 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Micro-motion monitoring method, device, equipment and computer readable storage medium
CN112866669B (en) * 2021-01-15 2023-09-15 聚好看科技股份有限公司 Method and device for determining data switching time

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050276446A1 (en) * 2004-06-10 2005-12-15 Samsung Electronics Co. Ltd. Apparatus and method for extracting moving objects from video
CN101510304A (en) * 2009-03-30 2009-08-19 北京中星微电子有限公司 Method, device and pick-up head for dividing and obtaining foreground image
US20090219300A1 (en) * 2005-11-15 2009-09-03 Yissum Research Deveopment Company Of The Hebrew University Of Jerusalem Method and system for producing a video synopsis
US20130101209A1 (en) * 2010-10-29 2013-04-25 Peking University Method and system for extraction and association of object of interest in video
CN103092929A (en) * 2012-12-30 2013-05-08 信帧电子技术(北京)有限公司 Method and device for generation of video abstract
CN104023172A (en) * 2014-06-27 2014-09-03 深圳市中兴移动通信有限公司 Shooting method and shooting device of dynamic image
CN105025360A (en) * 2015-07-17 2015-11-04 江西洪都航空工业集团有限责任公司 Improved fast video summarization method and system
CN105516610A (en) * 2016-02-19 2016-04-20 深圳新博科技有限公司 Method and device for shooting local dynamic image
CN105654471A (en) * 2015-12-24 2016-06-08 武汉鸿瑞达信息技术有限公司 Augmented reality AR system applied to internet video live broadcast and method thereof
CN106453864A (en) * 2016-09-26 2017-02-22 广东欧珀移动通信有限公司 Image processing method and device and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090219379A1 (en) * 2005-12-30 2009-09-03 Telecom Italia S.P.A. Average Calculation in Color Space, Particularly for Segmentation of Video Sequences



Also Published As

Publication number Publication date
CN109600544B (en) 2021-11-23
WO2019062631A1 (en) 2019-04-04
TW201915946A (en) 2019-04-16

Similar Documents

Publication Publication Date Title
CN109600544A (en) Local dynamic image generation method and device
EP3713212B1 (en) Image capture method, terminal, and storage medium
CN108885639B (en) Content collection navigation and automatic forwarding
CN109308469B (en) Method and apparatus for generating information
CN107809591B (en) Image shooting method, apparatus, terminal and storage medium
JP2013527947A5 (en)
CN109889724A (en) Image blurring method, device, electronic equipment and readable storage medium
CN112115894B (en) Training method and device of hand key point detection model and electronic equipment
KR20220154261A (en) Media collection navigation with opt-out interstitial
CN108551552A (en) Image processing method, device, storage medium and mobile terminal
CN111984803B (en) Multimedia resource processing method and device, computer equipment and storage medium
CN108494996A (en) Image processing method, device, storage medium and mobile terminal
CN112749613A (en) Video data processing method and device, computer equipment and storage medium
CN109671051A (en) Picture quality detection model training method and device, electronic equipment and storage medium
CN104113682A (en) Image acquisition method and electronic equipment
CN112906553B (en) Image processing method, apparatus, device and medium
CN113642359B (en) Face image generation method and device, electronic equipment and storage medium
CN115499577B (en) Image processing method and terminal equipment
CN114780181B (en) Resource display method, device, computer equipment and medium
CN111428551A (en) Density detection method, density detection model training method and device
CN114170359A (en) Volume fog rendering method, device and equipment and storage medium
CN114898282A (en) Image processing method and device
CN113705309A (en) Scene type judgment method and device, electronic equipment and storage medium
CN112862977A (en) Management method, device and equipment of digital space
CN116434016B (en) Image information enhancement method, model training method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant