CN107527041A - Real-time data processing method and apparatus for an image capture device, and computing device - Google Patents


Info

Publication number
CN107527041A
CN107527041A
Authority
CN
China
Prior art keywords
image
special object
effect textures
key point information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710807160.0A
Other languages
Chinese (zh)
Inventor
眭帆
眭一帆
张望
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201710807160.0A
Publication of CN107527041A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a real-time data processing method and apparatus for an image capture device, and a computing device. The method includes: acquiring in real time a first image containing a specific object captured by the image capture device, and performing scene segmentation on the first image to obtain a foreground image of the specific object; extracting key information of the specific object from the first image, and drawing an effect texture at the edge of the specific object according to the key information; fusing the effect texture with the foreground image to obtain a second image; and displaying the second image and saving it according to a shooting instruction triggered by the user. The invention employs a deep learning method, so that scene segmentation is completed efficiently and with high accuracy. No particular technical skill is required of the user, who need not post-process the image manually; this saves the user's time, and the processed image is fed back in real time for the user to view.

Description

Real-time data processing method and apparatus for an image capture device, and computing device
Technical field
The present invention relates to the field of image processing, and in particular to a real-time data processing method and apparatus for an image capture device, and a computing device.
Background art
With the development of science and technology, image capture devices have improved continuously: the images they collect are clearer, and their resolution and display quality have risen greatly. However, the images collected by existing capture devices cannot satisfy users' ever-growing demand for personalization. In the prior art, after an image is collected, the user can further process it manually to meet such personalized demands, but doing so requires a fairly high level of image processing skill, costs the user considerable time, and makes the workflow cumbersome and technically complex.
A real-time data processing method for image capture devices is therefore needed, so that users' personalized requirements can be met in real time.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a real-time data processing method and apparatus for an image capture device, and a computing device, which overcome the above problems or at least partially solve them.
According to one aspect of the invention, a real-time data processing method for an image capture device is provided, comprising:
acquiring in real time a first image containing a specific object captured by the image capture device, and performing scene segmentation on the first image to obtain a foreground image of the specific object;
extracting key information of the specific object from the first image, and drawing an effect texture at the edge of the specific object according to the key information;
fusing the effect texture with the foreground image to obtain a second image;
displaying the second image.
Optionally, extracting the key information of the specific object from the first image and drawing the effect texture at the edge of the specific object according to the key information further comprises:
the key information is key point information;
extracting, from the first image, the key point information located at the edge of the specific object.
Optionally, extracting the key information of the specific object from the first image and drawing the effect texture at the edge of the specific object according to the key information further comprises:
the key information is key point information;
calculating, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship;
scaling the effect texture according to the distance between the at least two key points having a symmetric relationship.
Optionally, extracting the key information of the specific object from the first image and drawing the effect texture at the edge of the specific object according to the key information further comprises:
the key information is key point information;
calculating, according to the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship;
rotating the effect texture according to the rotation angle between the at least two key points having a symmetric relationship.
Optionally, extracting the key information of the specific object from the first image and drawing the effect texture at the edge of the specific object according to the key information further comprises:
the key information is key point information;
judging whether the key point information of a specific region of the specific object satisfies a preset condition;
if so, drawing the effect texture in the specific region of the specific object.
Optionally, fusing the effect texture with the foreground image to obtain the second image further comprises:
fusing the effect texture and the foreground image with the background image obtained by performing scene segmentation on the first image, to obtain the second image.
Optionally, fusing the effect texture with the foreground image to obtain the second image further comprises:
fusing the effect texture and the foreground image with a preset dynamic or static background image, to obtain the second image.
Optionally, fusing the effect texture with the foreground image to obtain the second image further comprises:
filtering out the part of the effect texture in the second image that falls within the region of the foreground image.
Optionally, after the second image is obtained, the method further comprises:
adding a static or dynamic effect texture to a partial region of the second image.
Optionally, before the second image is displayed in real time, the method further comprises:
performing tone processing, lighting processing and/or brightness processing on the second image.
Optionally, the key information is key point information; the specific object is a human body; and the key point information includes key point information located at the edge of the face and/or key point information located at the edge of the body.
Optionally, the method further comprises:
saving the second image according to a shooting instruction triggered by the user.
Optionally, the method further comprises:
saving, according to a recording instruction triggered by the user, a video composed of the second images as frames.
According to another aspect of the invention, a real-time data processing apparatus for an image capture device is provided, comprising:
a segmentation module adapted to acquire in real time a first image containing a specific object captured by the image capture device, and to perform scene segmentation on the first image to obtain a foreground image of the specific object;
a drawing module adapted to extract key information of the specific object from the first image, and to draw an effect texture at the edge of the specific object according to the key information;
a fusion module adapted to fuse the effect texture with the foreground image to obtain a second image;
a display module adapted to display the second image.
Optionally, the key information is key point information, and the drawing module is further adapted to:
extract, from the first image, the key point information located at the edge of the specific object.
Optionally, the key information is key point information, and the drawing module further comprises:
a first calculation module adapted to calculate, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship;
a scaling module adapted to scale the effect texture according to the distance between the at least two key points having a symmetric relationship.
Optionally, the key information is key point information, and the drawing module further comprises:
a second calculation module adapted to calculate, according to the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship;
a rotation module adapted to rotate the effect texture according to the rotation angle between the at least two key points having a symmetric relationship.
Optionally, the key information is key point information, and the drawing module further comprises:
a judgment module adapted to judge whether the key point information of a specific region of the specific object satisfies a preset condition, and if so, to draw the effect texture in the specific region of the specific object.
Optionally, the fusion module is further adapted to:
fuse the effect texture and the foreground image with the background image obtained by performing scene segmentation on the first image, to obtain the second image.
Optionally, the fusion module is further adapted to:
fuse the effect texture and the foreground image with a preset dynamic or static background image, to obtain the second image.
Optionally, the fusion module further comprises:
a filtering module adapted to filter out the part of the effect texture in the second image that falls within the region of the foreground image.
Optionally, the apparatus further comprises:
a region texture module adapted to add a static or dynamic effect texture to a partial region of the second image.
Optionally, the apparatus further comprises:
an image processing module adapted to perform tone processing, lighting processing and/or brightness processing on the second image.
Optionally, the key information is key point information; the specific object is a human body; and the key point information includes key point information located at the edge of the face and/or key point information located at the edge of the body.
Optionally, the apparatus further comprises:
a first saving module adapted to save the second image according to a shooting instruction triggered by the user.
Optionally, the apparatus further comprises:
a second saving module adapted to save, according to a recording instruction triggered by the user, a video composed of the second images as frames.
According to yet another aspect of the invention, a computing device is provided, comprising a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is adapted to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above real-time data processing method for an image capture device.
According to a further aspect of the invention, a computer storage medium is provided, in which at least one executable instruction is stored; the executable instruction causes a processor to perform the operations corresponding to the above real-time data processing method for an image capture device.
According to the real-time data processing method and apparatus for an image capture device and the computing device provided by the invention, a first image containing a specific object captured by the image capture device is acquired in real time, and scene segmentation is performed on the first image to obtain a foreground image of the specific object; key information of the specific object is extracted from the first image, and an effect texture is drawn at the edge of the specific object according to the key information; the effect texture is fused with the foreground image to obtain a second image; and the second image is displayed. After acquiring in real time the image captured by the capture device, the invention segments the foreground image of the specific object out of the image, draws an effect texture at its edge, fuses the two, and displays the fused image to the user in real time, so that the user can immediately see the image combined with the effect texture. The invention employs a deep learning method to complete scene segmentation efficiently and with high accuracy. It places no requirement on the user's technical skill and needs no extra processing of the image by the user, saving the user's time; the processed image is fed back in real time for the user to view.
The above is only an overview of the technical solution of the invention. In order that the technical means of the invention may be understood more clearly and implemented according to the specification, and that the above and other objects, features and advantages of the invention may become more apparent, specific embodiments of the invention are set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings serve only to illustrate the preferred embodiments and are not to be construed as limiting the invention. Throughout the drawings, identical parts are denoted by the same reference numerals. In the drawings:
Fig. 1 shows a flowchart of a real-time data processing method for an image capture device according to an embodiment of the invention;
Fig. 2 shows a flowchart of a real-time data processing method for an image capture device according to another embodiment of the invention;
Fig. 3 shows a functional block diagram of a real-time data processing apparatus for an image capture device according to an embodiment of the invention;
Fig. 4 shows a functional block diagram of a real-time data processing apparatus for an image capture device according to another embodiment of the invention;
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed description of the embodiments
Exemplary embodiments of the disclosure are described more fully below with reference to the drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
In the present invention the specific object may be any object in the image, such as a human body, a plant or an animal; the embodiments take a human body as the example, but are not limited to it.
Fig. 1 shows a flowchart of a real-time data processing method for an image capture device according to an embodiment of the invention. As shown in Fig. 1, the method comprises the following steps:
Step S101: acquire in real time a first image containing a specific object captured by the image capture device, and perform scene segmentation on the first image to obtain a foreground image of the specific object.
In this embodiment a mobile terminal is taken as the example of the image capture device. The first image captured by the camera of the mobile terminal is acquired in real time; the first image contains a specific object, such as a human body. Scene segmentation is performed on the first image, mainly to segment the specific object out of it and obtain a foreground image of the specific object; the foreground image may contain only the specific object.
A deep learning method may be used for the scene segmentation of the first image. Deep learning is a branch of machine learning based on representation learning of data. An observation (e.g. an image) can be represented in many ways, such as a vector of per-pixel intensity values, or more abstractly as a set of edges or regions of particular shapes; some of these representations make it easier to learn a task (e.g. face recognition or facial expression recognition) from examples. For instance, a human body segmentation method based on deep learning can perform scene segmentation on the first image and obtain a foreground image containing the human body.
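The patent does not describe a concrete network architecture, so the sketch below only illustrates the *output* of the segmentation step: a hypothetical `apply_mask` helper that applies a per-pixel foreground mask (such as a segmentation network would produce) to a tiny image, keeping only the pixels of the specific object.

```python
def apply_mask(image, mask):
    """Keep pixels where mask == 1 (foreground); blank out the rest.

    image: H x W list of (r, g, b) tuples; mask: H x W list of 0/1.
    In a real pipeline `mask` would come from a segmentation network.
    """
    return [
        [px if m else (0, 0, 0) for px, m in zip(row, mrow)]
        for row, mrow in zip(image, mask)
    ]

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (9, 9, 9)]]
mask = [[1, 0],
        [0, 1]]
fg = apply_mask(img, mask)
print(fg)  # [[(255, 0, 0), (0, 0, 0)], [(0, 0, 0), (9, 9, 9)]]
```

This only shows the shape of the result (a foreground image with the background removed); the accuracy claimed by the patent comes from the deep learning model that produces the mask, not from this masking step.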
Step S102: extract key information of the specific object from the first image, and draw an effect texture at the edge of the specific object according to the key information.
To facilitate drawing the effect texture, key information of the specific object needs to be extracted from the first image. The key information may specifically be key point information, key region information and/or key line information. The embodiments of the invention take key point information as the example, but the key information of the invention is not limited to it. Using key point information improves the speed and efficiency of drawing the effect texture: the texture can be drawn directly according to the key points, with no need for further computation or analysis of the key information. Key point information is also easy to extract, and its accurate extraction makes the drawn effect more precise. Since the effect texture is generally drawn at the edge of the specific object, the key point information extracted from the first image is that located at the object's edge. When the specific object is a human body, the extracted key point information includes key points at the edge of the face, key points at the edge of the body, and so on.
The effect texture may be static or dynamic. It may be, for example, flame, bouncing musical notes, or a spray effect, configured according to the desired result; no limitation is imposed here. From the extracted key point information the edge positions of the specific object can be determined, and the effect texture can then be drawn at the edge of the object, e.g. flames drawn along the edge of the human body so that the body appears surrounded by a ring of fire.
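As one possible way to turn edge key points into a drawing area (the patent fixes no particular scheme), the hypothetical helper below derives a bounding box around the edge key points, expanded by a margin in which the flame ring could be rendered; the `margin` constant is an assumption for illustration.

```python
def texture_box(edge_points, margin=10):
    """Axis-aligned box enclosing the edge key points, expanded by
    `margin` pixels on every side; the effect texture would be drawn
    in this band around the specific object."""
    xs = [x for x, _ in edge_points]
    ys = [y for _, y in edge_points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Four hypothetical body-edge key points (x, y) in pixel coordinates.
body_edge = [(40, 30), (80, 28), (95, 120), (35, 118)]
print(texture_box(body_edge))  # (25, 18, 105, 130)
```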
Step S103: fuse the effect texture with the foreground image to obtain a second image.
The effect texture and the foreground image of the specific object obtained by segmentation are fused so that the texture blends with the foreground more realistically, yielding the second image. To let the effect texture and the foreground image blend better, when the first image is segmented the edge of the resulting foreground image is made semi-transparent, blurring the edge of the specific object for smoother fusion.
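A minimal sketch of the fusion step, assuming the effect texture carries a per-pixel alpha value; the semi-transparent edge mentioned above corresponds to intermediate alpha, which mixes texture and underlying pixel instead of replacing it outright.

```python
def blend_pixel(texture_px, base_px, alpha):
    """Alpha-blend one effect-texture pixel over a base pixel.

    alpha in [0, 1]; semi-transparent edge pixels (alpha ~ 0.5) give
    the blurred, softer transition the fusion step relies on.
    """
    return tuple(round(t * alpha + b * (1 - alpha))
                 for t, b in zip(texture_px, base_px))

# A fully opaque texture pixel replaces the base pixel...
print(blend_pixel((200, 40, 0), (10, 10, 10), 1.0))  # (200, 40, 0)
# ...while a half-transparent edge pixel mixes the two.
print(blend_pixel((200, 40, 0), (10, 10, 10), 0.5))  # (105, 25, 5)
```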
Step S104: display the second image.
The second image obtained is displayed in real time, so the user directly sees the result of processing the first image. Once the second image is obtained, it immediately replaces the captured first image on screen, generally within 1/24 of a second. Because the replacement interval is so short, the human eye does not perceive it, which amounts to displaying the second image in real time.
According to the real-time data processing method for an image capture device provided by the invention, a first image containing a specific object captured by the image capture device is acquired in real time, and scene segmentation is performed on it to obtain a foreground image of the specific object; key point information of the specific object is extracted from the first image, and an effect texture is drawn at the edge of the specific object according to the key point information; the effect texture is fused with the foreground image to obtain a second image; and the second image is displayed in real time. After acquiring in real time the image captured by the capture device, the invention segments the foreground image of the specific object out of the image, draws an effect texture at its edge, fuses the two, and displays the fused image to the user in real time, so that the user can immediately see the image combined with the effect texture. The invention places no requirement on the user's technical skill and needs no extra processing of the image by the user, saving the user's time; the processed image is fed back in real time for the user to view.
Fig. 2 shows a flowchart of a real-time data processing method for an image capture device according to another embodiment of the invention. As shown in Fig. 2, the method comprises the following steps:
Step S201: acquire in real time a first image containing a specific object captured by the image capture device, and perform scene segmentation on the first image to obtain a foreground image of the specific object.
This step is described with reference to step S101 in the embodiment of Fig. 1 and is not repeated here.
Step S202: extract key point information of the specific object from the first image, and draw an effect texture at the edge of the specific object according to the key point information.
The key point information of the specific object extracted from the first image includes key points at the edge of the specific object, and may also include key points of a specific region of the specific object.
From the extracted key point information the edge positions of the specific object can be determined, and the effect texture can then be drawn at the edge of the object.
Step S203: calculate, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship.
Step S204: scale the effect texture according to the distance between the at least two key points having a symmetric relationship.
Because the distance between the specific object and the image capture device varies, the size of the object in the first image is not constant: when the human body is far from the device it appears small in the first image, and when it is close it appears large. From the key point information of the specific object, the distance between at least two key points having a symmetric relationship can be calculated, for example the distance between the key points at the two eye corners on the face edge. Combining that distance with the real dimensions of the specific object yields the distance between the object and the capture device, according to which the effect texture is scaled so that it better fits the object's size. For example, if the distance between the two eye-corner key points shows that the body is far from the device, the body appears small in the first image and the segmented foreground image is correspondingly small, so the effect texture is reduced to fit the foreground image; if the distance shows that the body is close to the device, the body appears large and the foreground image is large, so the effect texture is correspondingly enlarged.
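The distance computation and the derived scale factor can be sketched as follows; `reference_px` is a hypothetical eye-corner distance at which the texture would be used unscaled, a constant the patent does not specify.

```python
import math

def keypoint_distance(p, q):
    """Euclidean distance (pixels) between two symmetric key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def texture_scale(p, q, reference_px=100.0):
    """Scale factor for the effect texture: smaller measured distance
    (object far from the camera) -> shrink; larger -> enlarge."""
    return keypoint_distance(p, q) / reference_px

# Eye corners 50 px apart -> face is far -> halve the texture size.
print(texture_scale((100, 200), (150, 200)))  # 0.5
# Eye corners 200 px apart -> face is near -> double the texture size.
print(texture_scale((100, 200), (300, 200)))  # 2.0
```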
Step S205: calculate, according to the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship.
Step S206: rotate the effect texture according to the rotation angle between the at least two key points having a symmetric relationship.
Considering that the specific object may not squarely face the image capture device in the first image, for example when the human body appears with the head turned, the effect texture must also be rotated so that it fits the foreground image better.
From the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship is calculated, for example the rotation angle of the two eye corners at the face edge, and the effect texture is rotated accordingly. For instance, if the line through the two eye corners is found to be rotated 15 degrees to the left, the effect texture is correspondingly rotated 15 degrees to the left so as to fit the foreground image.
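The rotation angle of a symmetric key-point pair can be computed with `atan2`, as in this sketch; plain (x, y) pixel coordinates are assumed, with no claim about the patent's exact sign convention.

```python
import math

def rotation_angle_deg(p, q):
    """Angle (degrees) of the line through two symmetric key points,
    measured from the horizontal; the effect texture is rotated by
    the same angle so it stays aligned with, e.g., a tilted face."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

# Level eye corners -> no rotation needed.
print(rotation_angle_deg((100, 200), (200, 200)))  # 0.0
# Tilted eye-corner line -> rotate the texture to match.
print(round(rotation_angle_deg((0, 0), (100, 100)), 1))  # 45.0
```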
Step S207: judge whether the key point information of a specific region of the specific object satisfies a preset condition.
Step S208: draw the effect texture in the specific region of the specific object.
Besides being drawn at the edge of the specific object, the effect texture can also be drawn in a specific region of the object according to that region's key point information. It must first be judged whether the key point information of the specific region satisfies a preset condition: if it does, step S208 is executed and the effect texture is drawn in the region; if not, nothing is drawn. Take the human mouth as an example: a spraying flame may be drawn over the mouth when it opens. The distance between the key points on the upper and lower sides of the mouth is calculated, and when that distance satisfies the mouth-open condition, the spraying flame is drawn over the mouth region. Besides the mouth, the key point information of regions such as the eyes, nose and ears may also be judged. The specific regions and preset conditions are configured according to the desired effect and are not limited here.
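The mouth-open preset condition can be sketched as a simple distance threshold between upper- and lower-lip key points; the threshold value here is hypothetical.

```python
import math

MOUTH_OPEN_PX = 18.0  # hypothetical preset condition (pixels)

def mouth_is_open(upper_lip, lower_lip, threshold=MOUTH_OPEN_PX):
    """Preset-condition check: the gap between the upper- and
    lower-lip key points must reach the threshold before the
    spray-flame texture is drawn over the mouth region."""
    gap = math.hypot(upper_lip[0] - lower_lip[0],
                     upper_lip[1] - lower_lip[1])
    return gap >= threshold

print(mouth_is_open((120, 300), (120, 330)))  # True  (gap 30 px)
print(mouth_is_open((120, 300), (120, 310)))  # False (gap 10 px)
```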
The order of the above steps S203-S208 is not limited and may be adjusted according to actual conditions.
Step S209: fuse the effect texture with the foreground image to obtain a second image.
When the effect texture is fused with the foreground image, the background image obtained by the scene segmentation of the first image (i.e. the original background of the first image) may be used: the effect texture, the foreground image and that background image are fused to obtain the second image. Alternatively, the effect texture and the foreground image may be fused with a preset dynamic or static background image to obtain the second image. The preset dynamic or static background may echo the effect texture: when the texture is flame, for example, the preset background may be a large furnace, a burning mountain and so on, so that the second image achieves an overall harmonious effect.
Further, when the effect texture has a region overlapping the foreground image, for example when the flame texture covers part of the human body and blocks its display, the part of the texture in the second image that falls within the foreground region may be filtered out, so that the texture is displayed only at the edge of the specific object and does not affect the display of the object itself.
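A minimal sketch of this filtering, assuming an RGBA effect texture and the binary foreground mask produced by segmentation: texture pixels inside the foreground are made fully transparent so the object is never occluded.

```python
def filter_texture(texture, fg_mask, transparent=(0, 0, 0, 0)):
    """Drop effect-texture pixels that fall inside the foreground
    region, so flames show only around the body, never on top of it.

    texture: H x W list of RGBA tuples; fg_mask: H x W list of 0/1
    (1 = foreground pixel, as produced by scene segmentation).
    """
    return [
        [transparent if m else px for px, m in zip(row, mrow)]
        for row, mrow in zip(texture, fg_mask)
    ]

tex = [[(255, 80, 0, 255), (255, 80, 0, 255)]]
mask = [[1, 0]]  # left pixel lies on the body, right pixel outside it
print(filter_texture(tex, mask))
# [[(0, 0, 0, 0), (255, 80, 0, 255)]]
```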
In the second image, the layer order from front to back is: specific object, effect texture, background image. When the second image also contains other layers, their order does not affect this front-to-back order of specific object, effect texture and background image.
Step S210: add a static or dynamic effect texture in a partial region of the second image.
A static or dynamic effect texture may also be added in a partial region of the layer in front of the specific object in the second image. This effect texture may be identical to the effect texture previously drawn at the edge of the specific object, or may complement it. For example, when the effect texture drawn at the edge of the specific object is a flame, the effect texture added in the partial region of the second image may be random spark particles.
Step S211: perform tone processing, lighting processing, and/or brightness processing on the second image.
Since the second image contains the effect texture, image processing may be applied to the second image to make its effect more natural and realistic. The image processing may include tone processing, lighting processing, brightness processing, and the like. For example, when the effect texture is a flame, the tone of the second image may be shifted toward yellow, with lighting or brightening applied, making the flame effect more natural and realistic.
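One possible per-pixel treatment matching the flame example above is sketched below; the shift and gain values are assumed, tunable numbers, not values specified by the patent:

```python
def warm_tone(pixel, yellow_shift=0.1, brightness=1.05):
    """Shift an (r, g, b) pixel (components in [0, 1]) toward yellow and
    brighten it slightly, as one possible tone/brightness treatment for a
    flame effect. Yellow means more red and green, slightly less blue.
    """
    r, g, b = pixel
    r = min(1.0, (r + yellow_shift) * brightness)
    g = min(1.0, (g + yellow_shift) * brightness)
    b = min(1.0, max(0.0, (b - yellow_shift) * brightness))
    return (r, g, b)

# A mid-gray pixel becomes warmer and slightly brighter.
toned = warm_tone((0.5, 0.5, 0.5))
```

In practice this would be applied to every pixel of the second image (or via a lookup table); lighting processing could additionally modulate brightness by distance from the flame.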
Step S212: display the second image in real time.

After the second image is obtained, it is displayed in real time, so that the user can directly see the second image obtained by processing the first image.
Step S213: save the second image according to a shooting instruction triggered by the user.

After the second image is displayed, it may be saved according to a shooting instruction triggered by the user. For example, when the user clicks the shooting button of the camera, a shooting instruction is triggered and the displayed second image is saved.
Step S214: save a video composed of second images as frame images, according to a recording instruction triggered by the user.

While the second image is displayed, a video composed of second images as frame images may also be saved according to a recording instruction triggered by the user. For example, when the user clicks the recording button of the camera, a recording instruction is triggered and the displayed second image is saved as a frame image of the video, so that a video composed of multiple second images as frame images is saved.
Steps S213 and S214 are optional steps of this embodiment and have no fixed execution order; the corresponding step is executed according to the instruction triggered by the user.
According to the real-time data processing method for an image capture device provided by the invention, problems such as the distance and rotation angle of the specific object in the captured first image are taken into account, and the effect texture is correspondingly scaled and rotated so that it better fits the display of the specific object, improving the display effect. The effect texture may also be drawn in a specific region of the specific object, meeting different user needs. A background that complements the effect texture may be used, and static or dynamic effect textures may be added in partial regions of the second image, making the display of the second image more cohesive. Image processing applied to the second image makes its display more natural and realistic. Further, according to the different instructions triggered by the user, the second image may be saved, or a video composed of second images as frame images may be saved. The invention is not limited by the user's technical skill, requires no additional processing of the image by the user, saves the user's time, and feeds back the processed image in real time for the user to view.
Fig. 3 shows a functional block diagram of a real-time data processing apparatus for an image capture device according to an embodiment of the invention. As shown in Fig. 3, the apparatus includes the following modules:
Segmentation module 310, adapted to acquire in real time a first image containing a specific object captured by the image capture device, and to perform scene segmentation on the first image to obtain a foreground image for the specific object.
In this embodiment, the image capture device is illustrated by taking a mobile terminal as an example. The segmentation module 310 acquires in real time the first image captured by the camera of the mobile terminal, where the first image contains a specific object, such as a human body. The segmentation module 310 performs scene segmentation on the first image, mainly to separate the specific object from the first image and obtain a foreground image for the specific object; the foreground image may contain only the specific object.
When performing scene segmentation on the first image, the segmentation module 310 may use a deep learning method. Deep learning is a class of machine learning methods based on representation learning of data. An observation (such as an image) can be represented in many ways, e.g. as a vector of per-pixel intensity values, or more abstractly as edges, regions of particular shapes, and so on. Certain representations make it easier to learn tasks from examples (e.g. face recognition or facial expression recognition). For example, the segmentation module 310 may perform scene segmentation on the first image using a deep-learning-based human body segmentation method to obtain a foreground image containing the human body.
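The patent does not specify a particular network, so the sketch below treats the deep model as a stand-in that yields a per-pixel person-probability map; applying that map to keep only foreground pixels is the part shown. All names and the 0.5 threshold are assumptions:

```python
def extract_foreground(image, prob_map, threshold=0.5):
    """Apply a person-probability map (as produced by a hypothetical deep
    segmentation network) to an image, keeping only foreground pixels.

    `image` is a list of rows of (r, g, b) pixels; `prob_map` gives, per
    pixel, the probability of belonging to the specific object. Pixels
    below the threshold are zeroed out, yielding a foreground image that
    contains only the specific object.
    """
    return [
        [px if p >= threshold else (0, 0, 0)
         for px, p in zip(img_row, prob_row)]
        for img_row, prob_row in zip(image, prob_map)
    ]

# 2x2 example: the right column is "person", the left column background.
image = [[(10, 10, 10), (200, 150, 120)],
         [(12, 12, 12), (190, 140, 110)]]
prob  = [[0.1, 0.9],
         [0.2, 0.8]]
foreground = extract_foreground(image, prob)
```

In a real pipeline the probability map would come from a trained segmentation network, and the hard threshold could be softened near the boundary to produce the semi-transparent edge mentioned later for better fusion.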
Drawing module 320, adapted to extract key information of the specific object from the first image and draw an effect texture at the edge of the specific object according to the key information.
To facilitate drawing the effect texture, the drawing module 320 needs to extract the key information of the specific object from the first image. The key information may specifically be key point information, key region information, and/or key line information, etc. The embodiments of the invention are illustrated with key point information as an example, but the key information of the invention is not limited to key point information. Using key point information improves the speed and efficiency of drawing the effect texture: the effect texture can be drawn directly from the key point information, without subsequent complex operations such as further computation and analysis on the key information. Meanwhile, key point information is easy to extract and accurate, so the effect texture is drawn more precisely. Since the effect texture is generally drawn at the edge of the specific object, the drawing module 320 extracts the key point information located at the edge of the specific object from the first image. When the specific object is a human body, the key point information extracted by the drawing module 320 includes key point information located at the edge of the face, key point information located at the edge of the body, and the like.
The effect texture may be a static effect texture or a dynamic effect texture, for example a flame, bouncing musical notes, a water spray, or other effects, configured according to implementation needs and not limited herein. According to the extracted key point information, the drawing module 320 can determine each edge position of the specific object and then draw the effect texture at the edge of the specific object. For example, the drawing module 320 draws flames at the edge of the human body, so that the body is surrounded by flames.
Fusion module 330, adapted to fuse the effect texture with the foreground image to obtain the second image.
The fusion module 330 fuses the effect texture with the foreground image for the specific object obtained by the segmentation, so that the effect texture blends more realistically with the foreground image, obtaining the second image. To allow a better fusion of the effect texture and the foreground image, the segmentation module 310, when segmenting the first image, may apply semi-transparent processing to the edge of the segmented foreground, blurring the edge of the specific object, so that the fusion module 330 can blend the effect texture and the foreground image better.
When fusing the effect texture with the foreground image, the fusion module 330 may use the background image obtained by performing scene segmentation on the first image (i.e. the original background of the first image), fusing the effect texture, the foreground image, and this background image to obtain the second image. The fusion module 330 may also fuse the effect texture and the foreground image with a preset dynamic or static background image to obtain the second image. The preset dynamic or static background image used by the fusion module 330 may complement the effect texture; for example, when the effect texture is a flame, the preset background may be a large furnace, a flaming mountain, or the like, so that the second image achieves a harmonious overall effect.
In the layer order of the second image, from front to back, the layers are the specific object, the effect texture, and the background image. When the second image contains other layers, their order does not affect this front-to-back order of the specific object, effect texture, and background image.
Display module 340, adapted to display the second image.
The display module 340 displays the obtained second image in real time, so that the user can directly see the second image obtained by processing the first image. After the fusion module 330 obtains the second image, the display module 340 immediately replaces the captured first image with the second image for display, generally within 1/24 of a second. For the user, since the replacement time is so short, the human eye does not perceive it, which is equivalent to the display module 340 displaying the second image in real time.
According to the real-time data processing apparatus for an image capture device provided by the invention, a first image containing a specific object captured by the image capture device is acquired in real time, and scene segmentation is performed on the first image to obtain a foreground image for the specific object; key information of the specific object is extracted from the first image, and an effect texture is drawn at the edge of the specific object according to the key information; the effect texture is fused with the foreground image to obtain a second image; and the second image is displayed. After acquiring the captured image in real time, the invention segments the foreground image of the specific object from it, draws the effect texture at its edge, fuses the two, and displays the fused image to the user in real time, so that the user conveniently sees the image fused with the effect texture as it is produced. The invention is not limited by the user's technical skill, requires no additional processing of the image by the user, saves the user's time, and feeds back the processed image in real time for the user to view.
Fig. 4 shows a functional block diagram of a real-time data processing apparatus for an image capture device according to another embodiment of the invention. As shown in Fig. 4, the difference from Fig. 3 is that the apparatus further includes:
Region texture module 350, adapted to add a static or dynamic effect texture in a partial region of the second image.
The region texture module 350 may also add a static or dynamic effect texture in a partial region of the layer in front of the specific object in the second image. This effect texture may be identical to the effect texture previously drawn at the edge of the specific object, or may complement it. For example, when the effect texture drawn by the drawing module 320 at the edge of the specific object is a flame, the effect texture added by the region texture module 350 in the partial region of the second image may be random spark particles.
Image processing module 360, adapted to perform tone processing, lighting processing, and/or brightness processing on the second image.
Since the second image contains the effect texture, the image processing module 360 may apply image processing to the second image to make its effect more natural and realistic. The image processing may include tone processing, lighting processing, brightness processing, and the like. For example, when the effect texture is a flame, the image processing module 360 may shift the tone of the second image toward yellow and apply lighting or brightening, making the flame effect more natural and realistic.
First saving module 370, adapted to save the second image according to a shooting instruction triggered by the user.

After the second image is displayed, the first saving module 370 may save the second image according to a shooting instruction triggered by the user. For example, when the user clicks the shooting button of the camera, a shooting instruction is triggered and the first saving module 370 saves the displayed second image.
Second saving module 380, adapted to save, according to a recording instruction triggered by the user, a video composed of second images as frame images.

While the second image is displayed, the second saving module 380 may save, according to a recording instruction triggered by the user, a video composed of second images as frame images. For example, when the user clicks the recording button of the camera, a recording instruction is triggered and the second saving module 380 saves the displayed second image as a frame image of the video, so that a video composed of multiple second images as frame images is saved.
The first saving module 370 or the second saving module 380 is executed according to the different instructions triggered by the user.
The drawing module 320 further comprises a first calculation module 321, a scaling module 322, a second calculation module 323, a rotation module 324, and a judgment module 325.
First calculation module 321, adapted to calculate, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship.

Scaling module 322, adapted to scale the effect texture according to the distance between the at least two key points having a symmetric relationship.
Because the distance between the specific object and the image capture device varies, the size of the specific object in the first image is inconsistent. For example, when the human body is far from the image capture device, it appears small in the first image; when it is close, it appears large. According to the key point information of the specific object, the first calculation module 321 can calculate the distance between at least two key points having a symmetric relationship, for example the distance between the key points at the two eye corners at the edge of the face. From this distance, combined with the actual dimensions of the specific object, the distance between the specific object and the image capture device can be derived; based on this, the scaling module 322 scales the effect texture so that it better fits the size of the specific object. For example, when the first calculation module 321 calculates the distance between the key points at the two eye corners and concludes that the body is far from the image capture device, the body appears small in the first image and the foreground image obtained by the segmentation module 310 is correspondingly small, so the scaling module 322 shrinks the effect texture accordingly to better fit the foreground image. Conversely, when the calculated distance indicates that the body is close to the image capture device, the body appears large in the first image, the foreground image obtained by the segmentation module 310 is correspondingly large, and the scaling module 322 enlarges the effect texture accordingly to better fit the foreground image.
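A minimal sketch of the scaling computation described above, using the two eye corners as the symmetric key-point pair; the reference distance of 60 px at which the texture is drawn at native size is an assumed calibration value, not from the patent:

```python
import math

def scale_factor(left_corner, right_corner, reference_distance=60.0):
    """Scale factor for the effect texture from the distance between two
    symmetric key points (here, the two eye corners at the edge of the
    face). A nearer face yields a larger corner distance and hence an
    enlarged texture; a farther face yields a shrunken one.
    """
    d = math.hypot(right_corner[0] - left_corner[0],
                   right_corner[1] - left_corner[1])
    return d / reference_distance

near = scale_factor((100, 200), (220, 200))  # corners 120 px apart -> enlarge
far  = scale_factor((100, 200), (130, 200))  # corners 30 px apart  -> shrink
```

The resulting factor would then be applied to the effect-texture dimensions before it is drawn at the object edge.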
Second calculation module 323, adapted to calculate, according to the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship.

Rotation module 324, adapted to rotate the effect texture according to the rotation angle between the at least two key points having a symmetric relationship.
Considering that the specific object may not directly face the image capture device in the first image — for example, when the human body appears with the head turned — the effect texture also needs rotation processing to better fit the foreground image.
The second calculation module 323 calculates, according to the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship, for example the rotation angle of the two eye corners at the edge of the face. The rotation module 324 rotates the effect texture according to this rotation angle. For example, when the second calculation module 323 calculates that the line connecting the two eye corners has rotated 15 degrees to the left, the rotation module 324 correspondingly rotates the effect texture 15 degrees to the left, to better fit the foreground image.
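The angle of the line joining the two symmetric key points can be computed as below; the sign convention (positive angle for the right corner lower than the left, with image y growing downward) is an assumption for illustration:

```python
import math

def roll_angle_degrees(left_corner, right_corner):
    """Rotation angle (in degrees) of the line joining two symmetric key
    points, e.g. the two eye corners. 0 means the face is level; the
    effect texture is then rotated by the same angle so it stays aligned
    with the foreground.
    """
    dx = right_corner[0] - left_corner[0]
    dy = right_corner[1] - left_corner[1]
    return math.degrees(math.atan2(dy, dx))

level  = roll_angle_degrees((100, 200), (160, 200))  # corners level
tilted = roll_angle_degrees((100, 200), (160, 260))  # head tilted
```

The returned angle would be fed to whatever texture-drawing routine is used (e.g. a rotation transform applied before compositing).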
Judgment module 325, adapted to judge whether the key point information of a specific region of the specific object meets a preset condition and, if so, to draw the effect texture in the specific region of the specific object.
In addition to drawing the effect texture at the edge of the specific object, the drawing module 320 may also draw the effect texture in a specific region of the specific object according to the key point information of that region. The judgment module 325 needs to judge whether the key point information of the specific region of the specific object meets a preset condition; when it does, the effect texture is drawn in the specific region of the specific object; when it does not, no drawing is performed. Taking the human mouth as an example, when the judgment module 325 judges that the mouth is open, a spurting flame is drawn at the mouth. The judgment module 325 can calculate the distance between the key points on the upper and lower sides of the mouth and judge whether this distance corresponds to an open mouth; when the condition is met, the spurting flame is drawn at the mouth. Besides the mouth, the judgment module 325 may also judge the key point information of regions such as the eyes, nose, and ears. The specific region and the preset condition are configured according to implementation needs and are not limited herein.
The fusion module 330 further comprises a filtering module 331.

Filtering module 331, adapted to filter out the portion of the effect texture located within the foreground region of the second image.
When the effect texture overlaps the foreground image — for example, when a flame effect texture covers part of the human body and blocks its display — the filtering module 331 may filter out the portion of the effect texture located within the foreground region of the second image, so that the effect texture is displayed only at the edge of the specific object in the foreground image and does not affect the display of the specific object.
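A minimal sketch of this filtering step, assuming a per-pixel boolean foreground mask is available from the segmentation (flat pixel lists stand in for image rows for brevity):

```python
def filter_effect(effect_pixels, foreground_mask):
    """Remove effect-texture pixels that fall inside the foreground
    region, so the effect (e.g. flame) shows only at the edge of the
    specific object and never covers it. Pixels are (r, g, b, a);
    fully transparent pixels (a = 0) are effectively dropped.
    """
    return [
        (0, 0, 0, 0) if inside else px
        for px, inside in zip(effect_pixels, foreground_mask)
    ]

effect = [(255, 120, 0, 255), (255, 120, 0, 255), (255, 120, 0, 255)]
mask   = [False, True, False]  # the middle pixel lies on the human body
filtered = filter_effect(effect, mask)
```

After this step the filtered effect layer can be composited behind the specific-object layer without any part of it obscuring the body.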
For the description of the other modules, refer to the description of the embodiment of Fig. 3, which is not repeated here.
According to the real-time data processing apparatus for an image capture device provided by the invention, problems such as the distance and rotation angle of the specific object in the captured first image are taken into account, and the effect texture is correspondingly scaled and rotated so that it better fits the display of the specific object, improving the display effect. The effect texture may also be drawn in a specific region of the specific object, meeting different user needs. A background that complements the effect texture may be used, and static or dynamic effect textures may be added in partial regions of the second image, making the display of the second image more cohesive. Image processing applied to the second image makes its display more natural and realistic. Further, according to the different instructions triggered by the user, the second image may be saved, or a video composed of second images as frame images may be saved. The invention is not limited by the user's technical skill, requires no additional processing of the image by the user, saves the user's time, and feeds back the processed image in real time for the user to view.
The invention also provides a non-volatile computer storage medium storing at least one executable instruction, the computer-executable instruction being capable of performing the real-time data processing method for an image capture device in any of the above method embodiments.
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the invention; the specific embodiments of the invention do not limit the specific implementation of the computing device.

As shown in Fig. 5, the computing device may include: a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
Wherein:
The processor 502, the communications interface 504, and the memory 506 communicate with one another through the communication bus 508.

The communications interface 504 is used for communicating with network elements of other devices, such as clients or other servers.

The processor 502 is used for executing the program 510, and may specifically perform the relevant steps in the above embodiments of the real-time data processing method for an image capture device.

Specifically, the program 510 may include program code, and the program code includes computer operation instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.

The memory 506 is used for storing the program 510. The memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.

The program 510 may specifically be used to cause the processor 502 to perform the following operations:
In an optional embodiment, the program 510 is used to cause the processor 502 to: acquire in real time a first image containing a specific object captured by the image capture device; perform scene segmentation on the first image to obtain a foreground image for the specific object; extract key information of the specific object from the first image and draw an effect texture at the edge of the specific object according to the key information; fuse the effect texture with the foreground image to obtain a second image; and display the second image.

In an optional embodiment, the key information is key point information, and the program 510 is used to cause the processor 502 to extract from the first image the key point information located at the edge of the specific object.

In an optional embodiment, the key information is key point information, and the program 510 is used to cause the processor 502 to calculate, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship, and to scale the effect texture according to that distance.

In an optional embodiment, the key information is key point information, and the program 510 is used to cause the processor 502 to calculate, according to the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship, and to rotate the effect texture according to that rotation angle.

In an optional embodiment, the key information is key point information, and the program 510 is used to cause the processor 502 to judge whether the key point information of a specific region of the specific object meets a preset condition and, if so, to draw the effect texture in that specific region of the specific object.

In an optional embodiment, the program 510 is used to cause the processor 502 to fuse the effect texture and the foreground image with the background image obtained by performing scene segmentation on the first image, to obtain the second image.

In an optional embodiment, the program 510 is used to cause the processor 502 to fuse the effect texture and the foreground image with a preset dynamic or static background image, to obtain the second image.

In an optional embodiment, the program 510 is used to cause the processor 502 to filter out the portion of the effect texture located within the foreground region of the second image.

In an optional embodiment, the program 510 is used to cause the processor 502 to add a static or dynamic effect texture in a partial region of the second image.

In an optional embodiment, the program 510 is used to cause the processor 502 to perform tone processing, lighting processing, and/or brightness processing on the second image.

In an optional embodiment, the key information is key point information; the specific object is a human body; and the key point information includes key point information located at the edge of the face and/or key point information located at the edge of the body.

In an optional embodiment, the program 510 is used to cause the processor 502 to save the second image according to a shooting instruction triggered by the user.

In an optional embodiment, the program 510 is used to cause the processor 502 to save, according to a recording instruction triggered by the user, a video composed of second images as frame images.
For the specific implementation of each step in the program 510, refer to the corresponding descriptions of the corresponding steps and units in the above embodiments of real-time data processing for an image capture device, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments, which are not repeated here.
According to the solution provided by this embodiment, a first image containing a specific object captured by the image capture device is acquired in real time, and scene segmentation is performed on the first image to obtain a foreground image for the specific object; key information of the specific object is extracted from the first image, and an effect texture is drawn at the edge of the specific object according to the key information; the effect texture is fused with the foreground image to obtain a second image; and the second image is displayed. After acquiring the captured image in real time, the invention segments the foreground image of the specific object from it, draws the effect texture at its edge, fuses the two, and displays the fused image to the user in real time, so that the user conveniently sees the image fused with the effect texture. The invention is not limited by the user's technical skill, requires no additional processing of the image by the user, saves the user's time, and feeds back the processed image in real time for the user to view.
Algorithm and display be not inherently related to any certain computer, virtual system or miscellaneous equipment provided herein. Various general-purpose systems can also be used together with teaching based on this.As described above, required by constructing this kind of system Structure be obvious.In addition, the present invention is not also directed to any certain programmed language.It should be understood that it can utilize various Programming language realizes the content of invention described herein, and the description done above to language-specific is to disclose this hair Bright preferred forms.
In the specification that this place provides, numerous specific details are set forth.It is to be appreciated, however, that the implementation of the present invention Example can be put into practice in the case of these no details.In some instances, known method, structure is not been shown in detail And technology, so as not to obscure the understanding of this description.
Similarly, it will be appreciated that in order to simplify the disclosure and help to understand one or more of each inventive aspect, Above in the description to the exemplary embodiment of the present invention, each feature of the invention is grouped together into single implementation sometimes In example, figure or descriptions thereof.However, the method for the disclosure should be construed to reflect following intention:I.e. required guarantor The application claims of shield features more more than the feature being expressly recited in each claim.It is more precisely, such as following Claims reflect as, inventive aspect is all features less than single embodiment disclosed above.Therefore, Thus the claims for following embodiment are expressly incorporated in the embodiment, wherein each claim is in itself Separate embodiments all as the present invention.
Those skilled in the art, which are appreciated that, to be carried out adaptively to the module in the equipment in embodiment Change and they are arranged in one or more equipment different from the embodiment.Can be the module or list in embodiment Member or component be combined into a module or unit or component, and can be divided into addition multiple submodule or subelement or Sub-component.In addition at least some in such feature and/or process or unit exclude each other, it can use any Combination is disclosed to all features disclosed in this specification (including adjoint claim, summary and accompanying drawing) and so to appoint Where all processes or unit of method or equipment are combined.Unless expressly stated otherwise, this specification (including adjoint power Profit requires, summary and accompanying drawing) disclosed in each feature can be by providing the alternative features of identical, equivalent or similar purpose come generation Replace.
In addition, it will be appreciated by those of skill in the art that although some embodiments described herein include other embodiments In included some features rather than further feature, but the combination of the feature of different embodiments means in of the invention Within the scope of and form different embodiments.For example, in the following claims, embodiment claimed is appointed One of meaning mode can use in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the apparatus for real-time data processing of an image capture device according to embodiments of the present invention. The present invention may also be implemented as device or apparatus programs (for example, computer programs and computer program products) for performing part or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media, or may take the form of one or more signals. Such signals may be downloaded from Internet websites, provided on carrier signals, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several units, several of these units may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A method for real-time data processing of an image capture device, comprising:
acquiring, in real time, a first image containing a specific object captured by an image capture device, and performing scene segmentation on the first image to obtain a foreground image of the specific object;
extracting key information of the specific object from the first image, and drawing an effect texture at an edge of the specific object according to the key information;
fusing the effect texture with the foreground image to obtain a second image; and
displaying the second image.
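The claim recites a four-step pipeline: segment, extract key information and draw, fuse, display. The patent does not specify how segmentation or texture drawing is performed; the sketch below substitutes a toy brightness-threshold segmentation and a one-pixel white outline as the "effect texture", purely to make the data flow concrete. All names and concrete choices here (grayscale input, threshold, white outline) are illustrative assumptions, not the patent's method:

```python
import numpy as np

def process_frame(frame, background, threshold=128):
    """Toy version of the claimed pipeline on a grayscale frame."""
    # step 1 - scene segmentation: foreground mask of the 'specific object'
    fg_mask = frame > threshold
    # step 2 - key information at the edge: boundary pixels of the mask
    padded = np.pad(fg_mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = fg_mask & ~interior
    # step 3 - fusion: background, overlaid with foreground, then the texture
    second_image = background.copy()
    second_image[fg_mask] = frame[fg_mask]
    second_image[edge] = 255  # 'effect texture': a white outline
    # step 4 - the caller displays second_image
    return second_image
```

A real implementation would replace the threshold with a trained segmentation model and the outline with a textured sprite, but the segment / draw / fuse / display ordering is the same.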
2. The method according to claim 1, wherein extracting the key information of the specific object from the first image and drawing an effect texture at the edge of the specific object according to the key information further comprises:
the key information being key point information; and
extracting, from the first image, the key point information located at the edge of the specific object.
3. The method according to claim 1 or 2, wherein extracting the key information of the specific object from the first image and drawing an effect texture at the edge of the specific object according to the key information further comprises:
the key information being key point information;
calculating, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship; and
scaling the effect texture according to the distance between the at least two key points having a symmetric relationship.
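Claim 3 scales the effect texture by the distance between two symmetric key points (for a face, e.g. the two outer eye corners). A minimal sketch, under the assumption that key points are 2-D pixel coordinates and that the texture was authored for a known reference distance; the function name and the ratio-based rule are illustrative:

```python
import math

def scale_factor(p_left, p_right, reference_distance):
    """Scale for the effect texture: ratio of the observed distance
    between two symmetric key points to the texture's reference distance."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    observed = math.hypot(dx, dy)
    return observed / reference_distance

# e.g. eye corners 80 px apart, texture authored for a 100 px spacing
s = scale_factor((120, 200), (200, 200), reference_distance=100.0)
print(s)  # 0.8
```

As the object moves toward or away from the camera the key-point spacing changes, so the texture shrinks and grows with it.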
4. The method according to any one of claims 1-3, wherein extracting the key information of the specific object from the first image and drawing an effect texture at the edge of the specific object according to the key information further comprises:
the key information being key point information;
calculating, according to the key point information of the specific object, the rotation angle between at least two key points having a symmetric relationship; and
rotating the effect texture according to the rotation angle between the at least two key points having a symmetric relationship.
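Claim 4 derives an in-plane rotation for the texture from the same kind of symmetric key-point pair. One common reading, assumed here rather than stated by the patent, is the angle of the line through the two points relative to the horizontal axis:

```python
import math

def rotation_angle_deg(p_left, p_right):
    """In-plane rotation of the object, taken as the angle of the line
    through two symmetric key points relative to the horizontal axis."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

print(rotation_angle_deg((0, 0), (10, 0)))  # 0.0 for a level pair
```

A tilted pair such as `(0, 0)` and `(10, 10)` yields roughly 45 degrees, and the texture would be rotated by the same amount before compositing.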
5. The method according to any one of claims 1-4, wherein extracting the key information of the specific object from the first image and drawing an effect texture at the edge of the specific object according to the key information further comprises:
the key information being key point information;
determining whether the key point information of a specific region of the specific object satisfies a preset condition; and
if so, drawing an effect texture in the specific region of the specific object.
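Claim 5 draws in a region only when its key points satisfy a preset condition, which the claim leaves open. One illustrative instance (the names, the mouth-open reading, and the threshold are all assumptions) is a gap test on two lip key points:

```python
def region_condition_met(upper_lip, lower_lip, min_gap=12.0):
    """Illustrative preset condition: the mouth region qualifies when the
    vertical gap between the lip key points exceeds min_gap pixels."""
    return abs(lower_lip[1] - upper_lip[1]) > min_gap

# the effect texture is drawn only for the open mouth
print(region_condition_met((100, 180), (100, 200)))  # True
print(region_condition_met((100, 180), (100, 185)))  # False
```

Any predicate over regional key points fits this claim; distance thresholds are just the simplest to state.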
6. The method according to any one of claims 1-5, wherein fusing the effect texture with the foreground image to obtain a second image further comprises:
fusing the effect texture, the foreground image, and a background image obtained by the scene segmentation of the first image, to obtain the second image.
7. The method according to any one of claims 1-5, wherein fusing the effect texture with the foreground image to obtain a second image further comprises:
fusing the effect texture, the foreground image, and a preset dynamic or static background image, to obtain the second image.
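Claims 6 and 7 differ only in where the background layer comes from (the segmented original frame versus a preset image); the fusion step itself can be read as ordinary back-to-front alpha compositing. A sketch with float RGB(A) arrays in [0, 1]; the compositing order and the over operator are assumptions, since the patent does not name a blending rule:

```python
import numpy as np

def fuse(effect_rgba, foreground_rgba, background_rgb):
    """Composite effect texture over foreground over background.
    RGBA layers carry their own per-pixel alpha in channel 3."""
    def over(top_rgba, bottom_rgb):
        alpha = top_rgba[..., 3:4]
        return top_rgba[..., :3] * alpha + bottom_rgb * (1.0 - alpha)
    return over(effect_rgba, over(foreground_rgba, background_rgb))
```

For claim 6, `background_rgb` is the background produced by the scene segmentation; for claim 7, it is a preset static image or the current frame of a preset dynamic one.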
8. An apparatus for real-time data processing of an image capture device, comprising:
a segmentation module, adapted to acquire, in real time, a first image containing a specific object captured by an image capture device, and to perform scene segmentation on the first image to obtain a foreground image of the specific object;
a drawing module, adapted to extract key information of the specific object from the first image, and to draw an effect texture at an edge of the specific object according to the key information;
a fusion module, adapted to fuse the effect texture with the foreground image to obtain a second image; and
a display module, adapted to display the second image.
9. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus; and
the memory is adapted to store at least one executable instruction, the executable instruction causing the processor to perform operations corresponding to the method for real-time data processing of an image capture device according to any one of claims 1-7.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the method for real-time data processing of an image capture device according to any one of claims 1-7.
CN201710807160.0A 2017-09-08 2017-09-08 Real-time data processing method and apparatus for an image capture device, and computing device Pending CN107527041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710807160.0A CN107527041A (en) 2017-09-08 2017-09-08 Real-time data processing method and apparatus for an image capture device, and computing device

Publications (1)

Publication Number Publication Date
CN107527041A true CN107527041A (en) 2017-12-29

Family

ID=60736483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710807160.0A Pending CN107527041A (en) 2017-09-08 2017-09-08 Image capture device Real-time Data Processing Method and device, computing device

Country Status (1)

Country Link
CN (1) CN107527041A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103379282A (en) * 2012-04-26 2013-10-30 三星电子株式会社 Apparatus and method for recognizing image
CN104424466A (en) * 2013-08-21 2015-03-18 佳能株式会社 Object detection method, object detection device and image pickup device
CN104469155A (en) * 2014-12-04 2015-03-25 中国航空工业集团公司第六三一研究所 On-board figure and image virtual-real superposition method
CN104966284A (en) * 2015-05-29 2015-10-07 北京旷视科技有限公司 Method and equipment for acquiring object dimension information based on depth data
CN105554348A (en) * 2015-12-25 2016-05-04 北京奇虎科技有限公司 Image display method and device based on video information
CN106803057A (en) * 2015-11-25 2017-06-06 腾讯科技(深圳)有限公司 Image information processing method and device
CN108171719A * 2017-12-25 2018-06-15 北京奇虎科技有限公司 Video crossing processing method and apparatus based on adaptive tracking-box segmentation

Similar Documents

Publication Publication Date Title
CN107820027A Video character dress-up method and apparatus, computing device and computer storage medium
CN107507155A Real-time processing method and apparatus for edge optimization of video segmentation results, and computing device
CN107689075B Navigation map generation method and apparatus, and robot
CN107977927A Image-data-based figure adjustment method and apparatus, and computing device
CN107665482A Real-time video data processing method and apparatus for achieving double exposure, and computing device
CN108109161A Real-time video data processing method and apparatus based on adaptive threshold segmentation
CN107945188A Scene-segmentation-based character dress-up method and apparatus, and computing device
CN108734052A Character detection method, apparatus and system
CN107483892A Real-time video data processing method and apparatus, and computing device
CN107613161A Virtual-world-based video data processing method and apparatus, and computing device
CN107610149A Image segmentation result edge optimization processing method and apparatus, and computing device
CN108111911A Real-time video data processing method and apparatus based on adaptive tracking-box segmentation
CN107613360A Real-time video data processing method and apparatus, and computing device
CN107563357A Scene-segmentation-based live-streaming clothing dress-up recommendation method and apparatus, and computing device
CN107705279A Real-time image data processing method and apparatus for achieving double exposure, and computing device
CN107808372B Image crossing processing method and apparatus, computing device and computer storage medium
CN107743263B Real-time video data processing method and apparatus, and computing device
CN107563962A Real-time video data processing method and apparatus, and computing device
CN107578369A Video data processing method and apparatus, and computing device
CN107566853A Real-time video data processing method and apparatus for achieving scene rendering, and computing device
CN107766803A Scene-segmentation-based video character dress-up method and apparatus, and computing device
CN107633547A Real-time image data processing method and apparatus for achieving scene rendering, and computing device
CN107680105B Virtual-world-based real-time video data processing method and apparatus, and computing device
CN108171716A Video character dress-up method and apparatus based on adaptive tracking-box segmentation
CN107767391A Landscape image processing method and apparatus, computing device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171229