CN106875433A - Control method and control device for cropping composition, and electronic device - Google Patents
Control method and control device for cropping composition, and electronic device
- Publication number
- CN106875433A CN106875433A CN201710138717.6A CN201710138717A CN106875433A CN 106875433 A CN106875433 A CN 106875433A CN 201710138717 A CN201710138717 A CN 201710138717A CN 106875433 A CN106875433 A CN 106875433A
- Authority
- CN
- China
- Prior art keywords
- master image
- depth
- image
- scene
- scene master
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Studio Devices (AREA)
Abstract
The invention discloses a control method for cropping composition. The control method includes: processing scene data to obtain a current foreground type; finding, in a preset database, a composition suggestion corresponding to the current foreground type; and cropping the scene master image to obtain a cropped image that satisfies the composition suggestion. The invention also discloses a corresponding control device and electronic device. The control method, control device, and electronic device of the invention determine the current foreground type from depth information, obtain the composition suggestion corresponding to that foreground type, and crop the scene master image accordingly. In this way, even when the lens orientation is not changed, cropping can still place the foreground within the scene so as to satisfy the composition suggestion, yielding a well-composed cropped image.
Description
Technical field
The present invention relates to imaging technology, and more particularly to a control method, a control device, and an electronic device for cropping composition.
Background technology
Composition is one of the more specialized skills in photography, and many ordinary consumers lack it: they either do not know how to adjust the lens orientation, or adjust it inaccurately, resulting in images with poor visual effect.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the prior art. Accordingly, embodiments of the present invention provide a control method, a control device, and an electronic device for cropping composition.
A control method for cropping composition is used to control an electronic device. The electronic device includes an imaging device configured to collect scene data, the scene data including a scene master image. The control method comprises the following steps:
processing the scene data to obtain a current foreground type;
finding, in a preset database, a composition suggestion corresponding to the current foreground type, the preset database containing a plurality of foreground types and their corresponding composition suggestions; and
cropping the scene master image to obtain a cropped image that satisfies the composition suggestion.
A control device for cropping composition is used to control an electronic device. The electronic device includes an imaging device configured to collect scene data, the scene data including a scene master image. The control device includes a processing module, a finding module, and a cropping module.
The processing module is configured to process the scene data to obtain a current foreground type.
The finding module is configured to find, in a preset database, a composition suggestion corresponding to the current foreground type, the preset database containing a plurality of foreground types and their corresponding composition suggestions.
The cropping module is configured to crop the scene master image to obtain a cropped image that satisfies the composition suggestion.
An electronic device includes an imaging device and the control device described above. The imaging device is configured to collect scene data, the scene data including a scene master image.
The control method, control device, and electronic device of the invention determine the current foreground type from depth information, obtain the composition suggestion corresponding to that foreground type, and crop the scene master image accordingly. In this way, even when the lens orientation is not changed, cropping can still place the foreground within the scene so as to satisfy the composition suggestion, yielding a well-composed cropped image.
Additional aspects and advantages of the invention will be set forth in part in the description below, and in part will become apparent from the description or be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a control method for cropping composition according to an embodiment of the present invention.
Fig. 2 is a schematic plan view of an electronic device according to an embodiment of the present invention.
Fig. 3 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 4 is a functional block diagram of a processing module of a control device according to some embodiments of the present invention.
Fig. 5 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 6 is a functional block diagram of a processing unit according to some embodiments of the present invention.
Fig. 7 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 8 is another functional block diagram of a processing unit according to some embodiments of the present invention.
Fig. 9 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 10 is a functional block diagram of an acquiring unit according to some embodiments of the present invention.
Fig. 11 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 12 is a schematic flowchart of a control method according to some embodiments of the present invention.
Fig. 13 is another functional block diagram of a processing module according to some embodiments of the present invention.
Fig. 14 is a schematic diagram of a scene master image according to some embodiments of the present invention.
Fig. 15 is a schematic diagram of a composition suggestion according to some embodiments of the present invention.
Fig. 16 is a schematic diagram of a cropped image according to some embodiments of the present invention.
Fig. 17 is a schematic diagram of another scene master image according to some embodiments of the present invention.
Fig. 18 is a schematic diagram of another composition suggestion according to some embodiments of the present invention.
Fig. 19 is a schematic diagram of another cropped image according to some embodiments of the present invention.
Description of main element reference numerals:
Electronic device 100; control device 10; processing module 12; processing unit 122; first processing subunit 1222; second processing subunit 1224; third processing subunit 1226; fourth processing subunit 1228; acquiring unit 124; fifth processing subunit 1242; finding subunit 1244; determining unit 126; determining subunit 1262; finding module 14; cropping module 16; imaging device 20; memory 30; communication module 40.
Specific embodiment
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.
Referring to Figs. 1 and 2, the control method for cropping composition of an embodiment of the present invention may be used to control an electronic device 100. The electronic device 100 includes an imaging device 20. The imaging device 20 is configured to collect scene data, the scene data including a scene master image. The control method comprises the following steps:
S12: processing the scene data to obtain a current foreground type;
S14: finding, in a preset database, a composition suggestion corresponding to the current foreground type, the preset database containing a plurality of foreground types and their corresponding composition suggestions; and
S16: cropping the scene master image to obtain a cropped image that satisfies the composition suggestion.
Referring to Fig. 2, the control device 10 for cropping composition of an embodiment of the present invention may be used to control the electronic device 100. The control device 10 includes a processing module 12, a finding module 14, and a cropping module 16. The processing module 12 is configured to process the scene data to obtain a current foreground type. The finding module 14 is configured to find, in a preset database, a composition suggestion corresponding to the current foreground type, the preset database containing a plurality of foreground types and their corresponding composition suggestions. The cropping module 16 is configured to crop the scene master image to obtain a cropped image that satisfies the composition suggestion.
In other words, the control method of the embodiment of the present invention may be implemented by the control device 10 of the embodiment of the present invention, where step S12 may be implemented by the processing module 12, step S14 by the finding module 14, and step S16 by the cropping module 16.
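The three-step flow (S12, S14, S16) can be sketched as a minimal pipeline. This is an illustrative sketch only: the patent does not specify data formats or APIs, so every name, category, and composition rule below is a hypothetical placeholder.

```python
# Hypothetical preset database: foreground type -> composition suggestion.
PRESET_DATABASE = {
    "portrait": "rule_of_thirds",   # nine-square-grid composition
    "water":    "close_up",         # tight framing of the subject
}

def classify_foreground(scene_data):
    # S12: placeholder — a real implementation derives this from depth info.
    return scene_data["foreground_type"]

def find_composition(foreground_type):
    # S14: look up the composition suggestion for the current foreground type.
    return PRESET_DATABASE[foreground_type]

def crop_scene(master_image, suggestion):
    # S16: placeholder crop — return the image tagged with the applied rule.
    return {"image": master_image, "composition": suggestion}

def control_method(scene_data):
    fg = classify_foreground(scene_data)
    suggestion = find_composition(fg)
    return crop_scene(scene_data["master_image"], suggestion)
```

A call such as `control_method({"foreground_type": "portrait", "master_image": img})` then returns the cropped result tagged with the rule that was applied.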
In some embodiments, the control device 10 of the embodiment of the present invention may be applied to the electronic device 100 of the embodiment of the present invention; in other words, the electronic device 100 may include the control device 10.
The control method, control device 10, and electronic device 100 for cropping composition of the embodiment of the present invention determine the current foreground type from depth information, obtain the composition suggestion corresponding to that foreground type, and crop the scene master image accordingly. In this way, even when the lens orientation is not changed, cropping can still place the foreground within the scene so as to satisfy the composition suggestion, yielding a well-composed cropped image.
In some embodiments, the electronic device 100 includes a mobile phone or a tablet computer. In an embodiment of the present invention, the electronic device 100 is a mobile phone.
In some embodiments, the imaging device 20 includes a front camera and/or a rear camera. In an embodiment of the present invention, the imaging device 20 is a front camera.
Referring to Fig. 2, in some embodiments, the electronic device 100 includes a memory 30 and/or a communication module 40 communicating with the cloud, and the preset database is stored in the memory 30 and/or in the cloud.
In this way, the composition suggestions corresponding to foreground types can be obtained through multiple channels. Storing the preset database in the memory 30 of the electronic device 100 saves working time and improves stability; storing it in the cloud saves hardware resources by avoiding the consumption of a large amount of storage space, and allows the preset database to be updated in real time.
Referring to Fig. 3, in some embodiments, step S12 comprises the following steps:
S122: processing the scene data to obtain depth information of the scene master image;
S124: obtaining a foreground part of the scene master image according to the depth information; and
S126: determining the current foreground type according to the foreground part.
Referring to Fig. 4, in some embodiments, the processing module 12 includes a processing unit 122, an acquiring unit 124, and a determining unit 126. The processing unit 122 is configured to process the scene data to obtain depth information of the scene master image. The acquiring unit 124 is configured to obtain a foreground part of the scene master image according to the depth information. The determining unit 126 is configured to determine the current foreground type according to the foreground part.
In other words, step S122 may be implemented by the processing unit 122, step S124 by the acquiring unit 124, and step S126 by the determining unit 126.
In this way, the foreground part of the scene master image can be obtained, and the current foreground type determined from it.
Referring to Fig. 5, in some embodiments, the scene data includes a depth image corresponding to the scene master image, and step S122 comprises the following steps:
S1222: processing the depth image to obtain depth data of the scene master image; and
S1224: processing the depth data to obtain the depth information.
Referring to Fig. 6, in some embodiments, the scene data includes a depth image corresponding to the scene master image, and the processing unit 122 includes a first processing subunit 1222 and a second processing subunit 1224. The first processing subunit 1222 is configured to process the depth image to obtain depth data of the scene master image. The second processing subunit 1224 is configured to process the depth data to obtain the depth information.
In other words, step S1222 may be implemented by the first processing subunit 1222, and step S1224 by the second processing subunit 1224.
In this way, the depth image can be used to quickly obtain the depth information of the scene master image.
It will be appreciated that the scene master image is an RGB color image, and the depth image contains the depth information of each person or object in the scene. Because the color information of the scene master image and the depth information of the depth image are in one-to-one correspondence, the depth information of the scene master image can be obtained.
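The one-to-one correspondence described above can be sketched as a trivial lookup. This is an assumption-laden illustration: it presumes the depth image is already registered pixel-for-pixel with the color image and that raw depth values are in millimetres, neither of which the patent states.

```python
RAW_TO_METERS = 0.001  # hypothetical: raw depth units are millimetres

def depth_of_pixel(depth_image, x, y):
    """Raw depth data for the color pixel (x, y), assuming the depth image
    is registered pixel-for-pixel with the scene master image."""
    return depth_image[y][x]

def depth_info(depth_image):
    """Convert raw depth data to metric depth information for every pixel."""
    return [[d * RAW_TO_METERS for d in row] for row in depth_image]
```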
In some embodiments, the depth image corresponding to the scene master image may be acquired in two ways: by structured-light depth ranging, or by a time-of-flight (TOF) depth camera.
When the depth image is acquired by structured-light depth ranging, the imaging device 20 includes a camera and a projector.
It will be appreciated that structured-light depth ranging uses the projector to project light of a certain structured pattern onto the object surface, forming on that surface a three-dimensional light-stripe image modulated by the shape of the measured object. The camera captures this image, yielding a two-dimensional distorted light-stripe image. The degree of distortion of the stripes depends on the relative position of the projector and the camera and on the profile or height of the object surface: displacement along the stripes is proportional to surface height, kinks indicate changes of plane, and discontinuities show physical gaps in the surface. When the relative position of the projector and the camera is fixed, the three-dimensional profile of the object surface can be reproduced from the coordinates of the distorted two-dimensional light-stripe image, and the depth information thereby obtained. Structured-light depth ranging offers relatively high resolution and measurement accuracy.
When the depth image is acquired by a TOF depth camera, the imaging device 20 includes a TOF depth camera.
It will be appreciated that a TOF depth camera uses a sensor to record the phase change between the modulated infrared light emitted by its light-emitting unit and the light reflected back from the object. Within the range of one wavelength, the depth of the entire scene can be obtained in real time from the speed of light. A TOF depth camera is unaffected by the grayscale and surface features of the object when computing depth information, computes it rapidly, and therefore offers very good real-time performance.
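The phase-based depth recovery described above can be illustrated with the standard TOF relation d = c·Δφ/(4πf). The patent does not give this formula explicitly, so treat it as a textbook sketch rather than the device's actual computation.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Distance from the measured phase shift of modulated light,
    unambiguous within half the modulation wavelength:
        d = c * delta_phi / (4 * pi * f)
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)
```

For a 20 MHz modulation frequency and a half-cycle phase shift (π radians), this gives roughly 3.75 m.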
Referring to Fig. 7, in some embodiments, the scene data includes a scene sub-image corresponding to the scene master image, and step S122 comprises the following steps:
S1226: processing the scene master image and the scene sub-image to obtain depth data of the scene master image; and
S1228: processing the depth data to obtain the depth information.
Referring to Fig. 8, in some embodiments, the scene data includes a scene sub-image corresponding to the scene master image, and the processing unit 122 includes a third processing subunit 1226 and a fourth processing subunit 1228. The third processing subunit 1226 is configured to process the scene master image and the scene sub-image to obtain depth data of the scene master image. The fourth processing subunit 1228 is configured to process the depth data to obtain the depth information.
In other words, step S1226 may be implemented by the third processing subunit 1226, and step S1228 by the fourth processing subunit 1228.
In this way, the depth information of the scene master image can be obtained by processing the scene master image and the scene sub-image.
In some embodiments, the imaging device 20 includes a main camera and a secondary camera.
It will be appreciated that the depth information may be obtained by binocular stereo vision ranging, in which case the scene data includes a scene master image and a scene sub-image. The scene master image is captured by the main camera and the scene sub-image by the secondary camera. Binocular stereo vision ranging images the same object from different positions with two identical cameras to obtain a stereo image pair, matches the corresponding image points of the pair by an algorithm to compute the disparity, and finally recovers the depth information by triangulation. In this way, matching the stereo image pair formed by the scene master image and the scene sub-image yields the depth information of the scene master image.
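The triangulation step can be illustrated with the standard rectified-stereo relation Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). The patent does not spell out this formula; the function below is a textbook sketch, not the device's implementation, and it assumes the pair is already rectified and matched.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth by triangulation from a matched point of a rectified stereo
    pair: Z = f * B / d. Larger disparity means a nearer object."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

With a 700 px focal length, a 12 cm baseline, and a 35 px disparity, the point lies about 2.4 m away.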
Referring to Fig. 9, in some embodiments, step S124 comprises the following steps:
S1242: obtaining the nearest point of the scene master image according to the depth information; and
S1244: finding the regions adjoining the nearest point and continuously varying in depth, as the foreground part.
Referring to Fig. 10, in some embodiments, the acquiring unit 124 includes a fifth processing subunit 1242 and a finding subunit 1244. The fifth processing subunit 1242 is configured to obtain the nearest point of the scene master image according to the depth information. The finding subunit 1244 is configured to find the regions adjoining the nearest point and continuously varying in depth, as the foreground part.
In other words, step S1242 may be implemented by the fifth processing subunit 1242, and step S1244 by the finding subunit 1244.
In this way, the physically connected foreground part of the scene master image can be obtained; that is, in the real scene, the foreground part is connected together. Taking the physically connected foreground part as the subject, the relationships within the foreground part can be grasped intuitively.
Specifically, the nearest point of the scene master image is first obtained according to the depth information; the nearest point is the starting point of the foreground part. Diffusing outward from the nearest point, the regions adjoining it and continuously varying in depth are obtained, and these regions are merged with the nearest point into the foreground region.
It should be noted that the nearest point is the pixel corresponding to the object of smallest depth, that is, the object with the smallest object distance, nearest to the imaging device 20. Adjoining means that two pixels are connected together. Depth continuity means that the depth difference of two adjoining pixels is less than a predetermined difference; in other words, the depth of two adjoining pixels whose depth difference is less than the predetermined difference varies continuously.
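Steps S1242 and S1244 amount to a seeded region grow: start at the pixel of smallest depth and expand to adjoining pixels whose depth changes continuously. A minimal sketch, assuming a 2D list of depths and 4-adjacency (the patent fixes neither):

```python
from collections import deque

def foreground_by_continuity(depth, max_step):
    """Grow the foreground from the nearest point, adding 4-adjacent pixels
    whose depth differs from their neighbour's by less than max_step."""
    h, w = len(depth), len(depth[0])
    # nearest point: pixel with the smallest depth value
    sy, sx = min(((y, x) for y in range(h) for x in range(w)),
                 key=lambda p: depth[p[0]][p[1]])
    seen = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                if abs(depth[ny][nx] - depth[y][x]) < max_step:
                    seen.add((ny, nx))
                    queue.append((ny, nx))
    return seen
```

On a small depth map where the left block varies smoothly and the right column jumps far back, only the smooth block is kept as the physically connected foreground.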
Referring to Fig. 11, in some embodiments, step S124 may comprise the following steps:
S1246: obtaining the nearest point of the scene master image according to the depth information; and
S1248: finding the regions whose depth differs from that of the nearest point by less than a predetermined threshold, as the foreground part.
In this way, the logically connected foreground part of the scene master image can be obtained. In the real scene, the foreground part may not be physically connected but may still satisfy some logical relationship, as in the scene of an eagle diving to snatch a chick: the eagle and the chick are not physically connected, but logically they can be judged to be related.
Specifically, the nearest point of the scene master image is first obtained according to the depth information; the nearest point is the starting point of the foreground part. Diffusing outward from the nearest point, the regions whose depth differs from that of the nearest point by less than the predetermined threshold are obtained, and these regions are merged with the nearest point into the foreground region.
In some embodiments, the predetermined threshold may be a value set by the user. In this way, the user can determine the extent of the foreground part according to their own needs, thereby obtaining an ideal composition suggestion and achieving an ideal composition.
In some embodiments, the predetermined threshold may be a value determined by the control device 10, without any limitation here. The predetermined threshold determined by the control device 10 may be a fixed value stored internally, or a value computed according to circumstances, for example from the depth of the nearest point.
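Steps S1246 and S1248 can be sketched as a global depth test against the nearest point. Unlike the physically connected variant of S1244, no adjacency is required, so physically separated regions at similar depth (the eagle and the chick) are both kept. The depth map representation is again a hypothetical 2D list:

```python
def foreground_by_threshold(depth, threshold):
    """Logically connected foreground: every pixel whose depth differs from
    the nearest point's depth by less than `threshold`, adjacent or not."""
    nearest = min(min(row) for row in depth)
    return {(y, x)
            for y, row in enumerate(depth)
            for x, d in enumerate(row)
            if d - nearest < threshold}
```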
In some embodiments, step S124 may comprise the following step:
finding the regions whose depth falls within a predetermined interval, as the foreground part.
In this way, a foreground part whose depth lies within an appropriate range can be obtained.
It will be appreciated that in some shooting situations the foreground part is not the foremost element but a part slightly behind it; for example, when a person sits behind a computer, the computer is nearer, but the person is the subject. Taking the regions whose depth falls within the predetermined interval as the foreground part thus effectively avoids the problem of incorrect subject selection.
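The interval variant can be sketched the same way: keep every pixel whose depth falls in a predetermined interval, so a subject slightly behind a nearer object (the person behind the computer) is selected instead of the nearest object. The interval bounds are hypothetical inputs:

```python
def foreground_by_interval(depth, near, far):
    """Foreground as all pixels whose depth lies in [near, far), skipping
    anything closer than `near` (e.g. a computer in front of the person)."""
    return {(y, x)
            for y, row in enumerate(depth)
            for x, d in enumerate(row)
            if near <= d < far}
```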
Referring to Fig. 12, in some embodiments, step S126 comprises the following step:
S1262: determining the foreground type according to the size and shape of the foreground part and/or the size, shape, and position matching relationship between the foreground part and the background part of the scene master image.
Referring to Fig. 13, in some embodiments, the determining unit 126 includes a determining subunit 1262. The determining subunit 1262 is configured to determine the foreground type according to the size and shape of the foreground part and/or the size, shape, and position matching relationship between the foreground part and the background part of the scene master image.
In other words, step S1262 may be implemented by the determining subunit 1262.
In this way, the foreground type can be determined from the foreground part itself or from the matching relationship between the foreground part and the background part.
It will be appreciated that, as the subject of the image, the foreground part can serve as the principal determinant when the foreground type is determined; that is, features of the foreground part such as its size, shape, and content determine the foreground type.
In some embodiments, to improve the quality of the composition, the background part, or the matching relationship between the foreground part and the background part, may also be consulted when determining the foreground type, so as to obtain a more accurate foreground type and in turn a more satisfactory composition suggestion.
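One toy way to realize S1262 is to derive a category from the foreground's area ratio and bounding-box aspect relative to the frame. The categories and thresholds below are invented for illustration; the patent does not enumerate its foreground types.

```python
def foreground_type(fg_pixels, image_w, image_h):
    """Toy classifier: decide a foreground 'type' from the size and aspect
    of the foreground's bounding box relative to the whole frame.
    fg_pixels is a set of (y, x) coordinates."""
    xs = [x for _, x in fg_pixels]
    ys = [y for y, _ in fg_pixels]
    box_w = max(xs) - min(xs) + 1
    box_h = max(ys) - min(ys) + 1
    area_ratio = len(fg_pixels) / (image_w * image_h)
    if area_ratio < 0.05:
        return "small_subject"   # subject too small: crop in tighter
    if box_h > box_w:
        return "tall_subject"    # e.g. a standing portrait
    return "wide_subject"        # e.g. a broad landscape foreground
```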
In some embodiments, the control method comprises the following step:
processing the cropped image to improve its image quality.
In this way, image-processing methods can be applied to the cropped image to improve its quality.
It will be appreciated that after cropping, the image resolution has changed; to obtain a resolution matching the scene master image, a certain amount of image stretching may be needed to fit the display screen, which can degrade the quality of the cropped image. Image-processing methods such as sharpening or contrast adjustment are therefore applied to the cropped image to obtain a cropped image of higher quality.
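As one concrete instance of the quality processing mentioned, a linear contrast stretch maps the cropped image's gray levels onto the full 0-255 range. This is a stand-in for whatever sharpening and contrast pipeline a real device would use, operating on a hypothetical grayscale grid:

```python
def stretch_contrast(gray):
    """Linearly stretch grayscale values to the full 0-255 range — one
    simple form of the contrast adjustment applied after cropping."""
    lo = min(min(row) for row in gray)
    hi = max(max(row) for row in gray)
    if hi == lo:
        return [row[:] for row in gray]  # flat image: nothing to stretch
    scale = 255.0 / (hi - lo)
    return [[round((v - lo) * scale) for v in row] for row in gray]
```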
Referring to Figs. 14-16, in one embodiment, the control device 10 controls the imaging device 20 to capture the scene master image shown in Fig. 14 and a corresponding depth image (not shown). Processing according to the depth information shows that the foreground of the scene master image is a portrait part, and the current foreground type is determined from it: the portrait part in Fig. 14 is too small relative to the whole image, so the subject does not stand out. For this foreground type, a nine-square-grid (rule-of-thirds) composition can be selected, as shown in Fig. 15, so that the face part lies on a node of the grid; cropping then removes the redundant image so that the portrait part stands out. After image processing such as sharpening and contrast adjustment, the cropped image shown in Fig. 16 is obtained.
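The nine-square-grid placement can be sketched as choosing a crop window so that the subject lands on the nearest third-line intersection, clamped to the image bounds. Coordinates and window size are hypothetical inputs; face detection and crop-size selection are out of scope here.

```python
def thirds_crop(img_w, img_h, subj_x, subj_y, crop_w, crop_h):
    """Top-left corner of a crop_w x crop_h window that places the subject
    on the best third-line intersection (nine-square grid), clamped so the
    window stays inside the image."""
    best = None
    for fx in (1 / 3, 2 / 3):
        for fy in (1 / 3, 2 / 3):
            left = min(max(subj_x - fx * crop_w, 0), img_w - crop_w)
            top = min(max(subj_y - fy * crop_h, 0), img_h - crop_h)
            # residual distance of the subject from the grid node after clamping
            err = (abs(subj_x - (left + fx * crop_w))
                   + abs(subj_y - (top + fy * crop_h)))
            if best is None or err < best[0]:
                best = (err, round(left), round(top))
    return best[1], best[2]
```

For a 300x300 image with the face at (100, 100) and a 150x150 crop, the window (50, 50) puts the face exactly on the upper-left grid node.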
Referring to Figs. 17-19, in another embodiment, the control device 10 controls the imaging device 20 to capture the scene master image shown in Fig. 17 and a corresponding depth image (not shown). Processing according to the depth information shows that the foreground of the scene master image is a flowing-water part, and the current foreground type is determined from it: the flowing-water part in Fig. 17 is too small relative to the whole image, so the subject does not stand out. For this foreground type, a close-up composition can be selected, as shown in Fig. 18, selecting the subject part; cropping then removes the redundant image so that the flowing-water part stands out. After image processing such as sharpening and contrast adjustment, the cropped image shown in Fig. 19 is obtained.
In the description of embodiments of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of embodiments of the present invention, "multiple" means two or more, unless expressly and specifically limited otherwise.
In the description of embodiments of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected", and "coupled" are to be understood broadly: for example, as a fixed connection, a detachable connection, or an integral connection; as a mechanical connection, an electrical connection, or mutual communication; as a direct connection or an indirect connection through an intermediary; or as an internal connection between two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in embodiments of the present invention can be understood according to the circumstances.
In the description of this specification, references to the terms "an embodiment", "some embodiments", "a schematic embodiment", "an example", "a specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, fragment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions considered to implement logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processing module, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) with one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of embodiments of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one or a combination of the following techniques known in the art: a discrete logic circuit with logic gate circuits for implementing logical functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the above method embodiments can be completed by a program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although the embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
Claims (18)
1. A control method for cropping composition, used to control an electronic device, wherein the electronic device comprises an imaging device, the imaging device is configured to collect scene data, and the scene data comprises a scene master image; the control method comprises the following steps:
processing the scene data to obtain a current foreground type;
finding, in a preset database, a current composition suggestion corresponding to the current foreground type, wherein the preset database comprises a plurality of foreground types and corresponding composition suggestions; and
cropping the scene master image to obtain a cropped image that satisfies the current composition suggestion.
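The three claimed steps map onto a simple classify-lookup-crop pipeline. The sketch below is a hedged illustration, not the patented implementation: the toy nested-list image, the crop-box format, and every function and key name (`classify_foreground`, `foreground_type`, `box`) are assumptions made for demonstration.

```python
def classify_foreground(scene_data):
    # Hypothetical step 1: here we simply trust a precomputed label;
    # the patent derives this type from depth information instead.
    return scene_data["foreground_type"]

def crop_to_suggestion(image, suggestion):
    # Crop a 2-D list "image" to the (top, bottom, left, right) box
    # carried by the composition suggestion (box format is assumed).
    t, b, l, r = suggestion["box"]
    return [row[l:r] for row in image[t:b]]

def crop_by_composition(scene_data, preset_db):
    """Sketch of the claimed pipeline: foreground type -> suggestion -> crop."""
    fg_type = classify_foreground(scene_data)                # step 1
    suggestion = preset_db[fg_type]                          # step 2: database lookup
    return crop_to_suggestion(scene_data["master_image"], suggestion)  # step 3

# Toy 4x4 "image"; a hypothetical "portrait" rule keeps the central 2x2 region.
image = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
db = {"portrait": {"box": (1, 3, 1, 3)}}
cropped = crop_by_composition(
    {"master_image": image, "foreground_type": "portrait"}, db)
# cropped == [[6, 7], [10, 11]]
```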
2. The control method according to claim 1, wherein the step of processing the scene data to obtain the current foreground type comprises the following steps:
processing the scene data to obtain depth information of the scene master image;
obtaining a foreground part of the scene master image according to the depth information; and
determining the current foreground type according to the foreground part.
3. The control method according to claim 2, wherein the scene data comprises a depth image corresponding to the scene master image, and the step of processing the scene data to obtain the depth information of the scene master image comprises the following steps:
processing the depth image to obtain depth data of the scene master image; and
processing the depth data to obtain the depth information.
4. The control method according to claim 2, wherein the scene data comprises a scene sub-image corresponding to the scene master image, and the step of processing the scene data to obtain the depth information of the scene master image comprises the following steps:
processing the scene master image and the scene sub-image to obtain depth data of the scene master image; and
processing the depth data to obtain the depth information.
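Recovering depth from a master/sub image pair, as in claim 4, is conventionally done by stereo matching: the disparity between matched pixels converts to depth via depth = focal_length × baseline / disparity. The brute-force sketch below works on already-rectified 1-D rows; the absolute-difference matching cost and all numeric values are illustrative assumptions, not the patent's method.

```python
def disparity_1d(master_row, sub_row):
    """Brute-force 1-D block matching: for each pixel in the master row,
    find the best-matching pixel in the sub row; the index shift between
    the two is that pixel's disparity."""
    disparities = []
    for i, value in enumerate(master_row):
        best_j = min(range(len(sub_row)), key=lambda j: abs(sub_row[j] - value))
        disparities.append(abs(i - best_j))
    return disparities

def depth_from_disparity(disparities, focal_length, baseline):
    # Standard pinhole relation: depth = f * b / d (infinite when d == 0,
    # i.e. the point is effectively at infinity or the match failed).
    return [focal_length * baseline / d if d else float("inf")
            for d in disparities]

# The sub row is the master row shifted by one pixel, so interior pixels
# get disparity 1; the last pixel has no true match (edge effect).
d = disparity_1d([5, 6, 7], [4, 5, 6])          # [1, 1, 0]
depths = depth_from_disparity(d, 2.0, 3.0)      # [6.0, 6.0, inf]
```

Real systems (e.g. OpenCV's `StereoBM`) match windows rather than single pixels and search only along a bounded range, but the geometry is the same.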
5. The control method according to claim 2, wherein the step of obtaining the foreground part of the scene master image according to the depth information comprises the following steps:
obtaining the foremost point of the scene master image according to the depth information; and
finding a region that adjoins the foremost point and varies continuously in depth, as the foreground part.
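The step claimed above amounts to a region grow seeded at the nearest (minimum-depth) pixel, expanding through neighbours whose depth changes gradually. A minimal sketch on a 2-D depth map follows; the 4-connectivity and the `max_step` continuity threshold are assumptions, since the claim does not fix either.

```python
from collections import deque

def foreground_from_depth(depth_map, max_step=0.2):
    """Seed at the foremost (minimum-depth) pixel and grow the region
    through 4-connected neighbours whose depth differs by at most
    max_step, i.e. varies continuously. Returns a set of (row, col)."""
    rows, cols = len(depth_map), len(depth_map[0])
    seed = min(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: depth_map[rc[0]][rc[1]])
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and abs(depth_map[nr][nc] - depth_map[r][c]) <= max_step):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# A near object (depths ~1.0-1.2) in the top-left corner against a far
# background (depth 5.0): the grow stops at the depth discontinuity.
depth = [[1.0, 1.1, 5.0],
         [1.1, 1.2, 5.0],
         [5.0, 5.0, 5.0]]
# foreground_from_depth(depth) == {(0, 0), (0, 1), (1, 0), (1, 1)}
```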
6. The control method according to claim 2, wherein the step of determining the current foreground type according to the foreground part comprises the following step:
determining the foreground type according to a matching relationship of size, shape and/or position between the foreground part and the background part of the scene master image.
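The claim leaves the size/shape/position matching rule open. One hypothetical heuristic, shown purely for illustration, classifies by the foreground's area ratio and aspect ratio; the type names and thresholds are invented for this sketch and are not taken from the patent.

```python
def classify_foreground_type(fg_area, img_area, fg_aspect):
    """Hypothetical type rules: a tall, large foreground reads as a
    portrait; a tiny one as a distant subject. Thresholds illustrative."""
    ratio = fg_area / img_area        # how much of the frame it occupies
    if fg_aspect > 1.5 and ratio > 0.2:
        return "portrait"
    if ratio < 0.1:
        return "distant subject"
    return "general object"

# A foreground covering 30% of the frame with height/width ratio 2.0
# would be tagged "portrait" under these toy rules.
```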
7. The control method according to claim 1, wherein the electronic device comprises a memory and/or a communication module that communicates with a cloud, and the preset database is stored in the memory and/or the cloud.
8. A control device for cropping composition, used to control an electronic device, wherein the electronic device comprises an imaging device, the imaging device is configured to collect scene data, and the scene data comprises a scene master image; the control device comprises:
a processing module, configured to process the scene data to obtain a current foreground type;
a finding module, configured to find, in a preset database, a current composition suggestion corresponding to the current foreground type, wherein the preset database comprises a plurality of foreground types and corresponding composition suggestions; and
a cropping module, configured to crop the scene master image to obtain a cropped image that satisfies the current composition suggestion.
9. The control device according to claim 8, wherein the processing module comprises:
a processing unit, configured to process the scene data to obtain depth information of the scene master image;
an obtaining unit, configured to obtain a foreground part of the scene master image according to the depth information; and
a determining unit, configured to determine the current foreground type according to the foreground part.
10. The control device according to claim 9, wherein the scene data comprises a depth image corresponding to the scene master image, and the processing unit comprises:
a first processing subunit, configured to process the depth image to obtain depth data of the scene master image; and
a second processing subunit, configured to process the depth data to obtain the depth information.
11. The control device according to claim 9, wherein the scene data comprises a scene sub-image corresponding to the scene master image, and the processing unit comprises:
a third processing subunit, configured to process the scene master image and the scene sub-image to obtain depth data of the scene master image; and
a fourth processing subunit, configured to process the depth data to obtain the depth information.
12. The control device according to claim 9, wherein the obtaining unit comprises:
a fifth processing subunit, configured to obtain the foremost point of the scene master image according to the depth information; and
a finding subunit, configured to find a region that adjoins the foremost point and varies continuously in depth, as the foreground part.
13. The control device according to claim 9, wherein the determining unit comprises:
a determining subunit, configured to determine the foreground type according to a matching relationship of size, shape and/or position between the foreground part and the background part of the scene master image.
14. The control device according to claim 8, wherein the electronic device comprises a memory and/or a communication module that communicates with a cloud, and the preset database is stored in the memory and/or the cloud.
15. An electronic device, comprising:
an imaging device, configured to collect scene data, wherein the scene data comprises a scene master image; and
a control device according to any one of claims 8-14.
16. The electronic device according to claim 15, wherein the imaging device comprises a main camera and a sub camera.
17. The electronic device according to claim 15, wherein the imaging device comprises a camera and a projector.
18. The electronic device according to claim 15, wherein the imaging device comprises a TOF depth camera.
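A time-of-flight (TOF) camera, as in claim 18, measures depth from the round trip of emitted light: depth = c · Δt / 2, halving because the light travels to the object and back. A minimal sketch of that relation (the helper name and the sample timing value are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_seconds):
    # The measured time covers the path to the object and back,
    # so the one-way distance (depth) is half the total path length.
    return C * round_trip_seconds / 2

# A 20 ns round trip corresponds to roughly 3 m of depth.
depth_m = tof_depth(2e-8)  # ~2.998 m
```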
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710138717.6A CN106875433A (en) | 2017-03-09 | 2017-03-09 | Cut control method, control device and the electronic installation of composition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106875433A true CN106875433A (en) | 2017-06-20 |
Family
ID=59171531
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710138717.6A Pending CN106875433A (en) | 2017-03-09 | 2017-03-09 | Cut control method, control device and the electronic installation of composition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106875433A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107622497A (en) * | 2017-09-29 | 2018-01-23 | 广东欧珀移动通信有限公司 | Image cropping method, apparatus, computer-readable recording medium and computer equipment |
CN107909030A (en) * | 2017-11-14 | 2018-04-13 | 深圳市金立通信设备有限公司 | Processing method, terminal and the computer-readable recording medium of portrait photo |
CN109559489A (en) * | 2018-12-27 | 2019-04-02 | 梁丹红 | Play posture system for prompting |
CN110266970A (en) * | 2019-05-31 | 2019-09-20 | 上海萌鱼网络科技有限公司 | A kind of short video creating method and system |
CN112036319A (en) * | 2020-08-31 | 2020-12-04 | 北京字节跳动网络技术有限公司 | Picture processing method, device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101221341A (en) * | 2007-01-08 | 2008-07-16 | 华晶科技股份有限公司 | Initialization method for field depth composition |
CN104054332A (en) * | 2012-01-26 | 2014-09-17 | 索尼公司 | Image processing apparatus and image processing method |
CN104660904A (en) * | 2015-03-04 | 2015-05-27 | 深圳市欧珀通信软件有限公司 | Shooting subject recognition method and device |
CN105528757A (en) * | 2015-12-08 | 2016-04-27 | 华南理工大学 | Content-based image aesthetic quality improvement method |
CN105659286A (en) * | 2013-09-18 | 2016-06-08 | 英特尔公司 | Automated image cropping and sharing |
CN105894016A (en) * | 2016-03-29 | 2016-08-24 | 联想(北京)有限公司 | Image processing method and electronic device |
CN106327473A (en) * | 2016-08-10 | 2017-01-11 | 北京小米移动软件有限公司 | Method and device for acquiring foreground images |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107622497A (en) * | 2017-09-29 | 2018-01-23 | 广东欧珀移动通信有限公司 | Image cropping method, apparatus, computer-readable recording medium and computer equipment |
CN107622497B (en) * | 2017-09-29 | 2020-03-27 | Oppo广东移动通信有限公司 | Image cropping method and device, computer readable storage medium and computer equipment |
CN107909030A (en) * | 2017-11-14 | 2018-04-13 | 深圳市金立通信设备有限公司 | Processing method, terminal and the computer-readable recording medium of portrait photo |
CN109559489A (en) * | 2018-12-27 | 2019-04-02 | 梁丹红 | Play posture system for prompting |
CN109559489B (en) * | 2018-12-27 | 2020-09-11 | 董官献 | Playing posture reminding system |
CN110266970A (en) * | 2019-05-31 | 2019-09-20 | 上海萌鱼网络科技有限公司 | A kind of short video creating method and system |
CN112036319A (en) * | 2020-08-31 | 2020-12-04 | 北京字节跳动网络技术有限公司 | Picture processing method, device, equipment and storage medium |
CN112036319B (en) * | 2020-08-31 | 2023-04-18 | 北京字节跳动网络技术有限公司 | Picture processing method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106875433A (en) | Cut control method, control device and the electronic installation of composition | |
CN106851123A (en) | Exposal control method, exposure-control device and electronic installation | |
CN106851107A (en) | Switch control method, control device and the electronic installation of camera assisted drawing | |
CN106851238B (en) | Method for controlling white balance, white balance control device and electronic device | |
CN108269279B (en) | Three-dimensional reconstruction method and device based on monocular 3 D scanning system | |
EP3869797B1 (en) | Method for depth detection in images captured using array cameras | |
US8406510B2 (en) | Methods for evaluating distances in a scene and apparatus and machine readable medium using the same | |
US8983176B2 (en) | Image selection and masking using imported depth information | |
US8294762B2 (en) | Three-dimensional shape measurement photographing apparatus, method, and program | |
US8724893B2 (en) | Method and system for color look up table generation | |
US6968084B2 (en) | Specific point detecting method and device | |
CN107025635A (en) | Processing method, processing unit and the electronic installation of image saturation based on the depth of field | |
CN106998389A (en) | Control method, control device and the electronic installation of auto composition | |
CN106851124A (en) | Image processing method, processing unit and electronic installation based on the depth of field | |
CN105627932A (en) | Distance measurement method and device based on binocular vision | |
US20140368615A1 (en) | Sensor fusion for depth estimation | |
CN106937049A (en) | The processing method of the portrait color based on the depth of field, processing unit and electronic installation | |
CN102771128A (en) | A method and systems for obtaining an improved stereo image of an object | |
WO2021084530A1 (en) | Method and system for generating a depth map | |
CN107016348A (en) | With reference to the method for detecting human face of depth information, detection means and electronic installation | |
US20150249774A1 (en) | Ghost artifact detection and removal in hdr image creation using graph based selection of local reference | |
CN106991378A (en) | Facial orientation detection method, detection means and electronic installation based on depth | |
CN107590459A (en) | The method and apparatus for delivering evaluation | |
US20220319029A1 (en) | Method for determining depth from images and relative system | |
CN110428461B (en) | Monocular SLAM method and device combined with deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170620 |