CN106973224B - Auxiliary composition control method, control device and electronic device - Google Patents

Auxiliary composition control method, control device and electronic device

Info

Publication number: CN106973224B
Application number: CN201710138849.9A
Authority: CN (China)
Prior art keywords: image, depth, imaging device, electronic device, current
Other languages: Chinese (zh)
Other versions: CN106973224A (en)
Inventor: 孙剑波
Current and original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Legal status: Expired - Fee Related (the status listed is an assumption and is not a legal conclusion)
Events: application CN201710138849.9A filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; publication of CN106973224A; application granted; publication of CN106973224B; anticipated expiration.

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The invention discloses a control method for assisted composition. The control method comprises the following steps: processing scene data to obtain depth information of a cached main image; acquiring a foreground portion of the cached main image according to the depth information; determining a current three-dimensional spatial structure type according to the foreground portion and the orientation of the imaging device; and finding, in a memory, a current composition suggestion corresponding to the current three-dimensional spatial structure type. The invention further discloses a control device for assisted composition and an electronic device. The control method, control device and electronic device determine the current three-dimensional spatial structure type from the depth information and the orientation-sensor information, obtain the composition suggestion corresponding to that type, and can thereby assist composition.

Description

Auxiliary composition control method, control device and electronic device
Technical Field
The present invention relates to imaging technologies, and in particular, to a control method, a control device, and an electronic device for auxiliary composition.
Background
Composition is a relatively specialized photographic skill. Many ordinary consumers lack this skill, so their images often have a poor visual effect.
Disclosure of Invention
The present invention is directed to solving at least one of the problems in the related art. To this end, the present invention provides a control method, a control device, and an electronic device for assisted composition.
A control method of auxiliary composition is used for controlling an electronic device, the electronic device comprises an imaging device, a direction sensor and a memory, the imaging device is used for collecting scene data, the scene data comprises a cache main image, the direction sensor is used for sensing the orientation of the imaging device, and the memory stores a plurality of three-dimensional space structure types and corresponding composition suggestions; the control method comprises the following steps:
processing the scene data to acquire depth information of the cached main image;
acquiring a foreground part of the cached main image according to the depth information;
determining a current three-dimensional space structure type according to the foreground part and the orientation; and
searching the memory for the current composition suggestion corresponding to the current three-dimensional spatial structure type.
A control device for assisted composition for controlling an electronic device, the electronic device comprising an imaging device for acquiring scene data including cached main images, an orientation sensor for sensing an orientation of the imaging device, and a memory storing a plurality of three-dimensional spatial structure types and corresponding composition recommendations; the control device comprises a processing module, an obtaining module, a determining module and a searching module.
The processing module is used for processing the scene data to acquire the depth information of the cached main image.
The obtaining module is used for obtaining the foreground part of the cache main image according to the depth information.
The determining module is configured to determine a current three-dimensional spatial structure type based on the foreground portion and the orientation.
The finding module is used for finding the current composition suggestion corresponding to the current three-dimensional space structure type in the memory.
An electronic device includes an imaging device, an orientation sensor, a memory, and the control device.
The imaging device is used for acquiring a scene image, and the scene image comprises a cache main image.
The orientation sensor is used for sensing the orientation of the imaging device.
The memory stores a plurality of three-dimensional spatial structure types and corresponding composition suggestions.
The control method, the control device and the electronic device of the embodiment of the invention determine the current three-dimensional space structure type by using the depth information and the direction sensor information, thereby obtaining the current composition suggestion corresponding to the current three-dimensional space structure type and further assisting composition.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart illustrating a control method of auxiliary composition according to an embodiment of the present invention.
Fig. 2 is a functional block diagram of an electronic device according to an embodiment of the invention.
Fig. 3 is a schematic plan view of an electronic device according to an embodiment of the present invention.
Fig. 4 is a flow chart illustrating a control method according to some embodiments of the present invention.
Fig. 5 is a functional block diagram of a processing module of a control device of an electronic device according to some embodiments of the present invention.
FIG. 6 is a flow chart illustrating a control method according to some embodiments of the present invention.
FIG. 7 is another functional block diagram of a processing module according to some embodiments of the invention.
Fig. 8 is a flow chart illustrating a control method according to some embodiments of the present invention.
FIG. 9 is a functional block diagram of an acquisition module of the control device according to some embodiments of the present invention.
FIG. 10 is a flow chart illustrating a control method according to some embodiments of the present invention.
FIG. 11 is a flow chart illustrating a control method according to some embodiments of the present invention.
Fig. 12 is a schematic diagram of another functional block of the control device according to some embodiments of the present invention.
FIG. 13 is a schematic diagram of a cached main image according to some embodiments of the invention.
FIG. 14 is a schematic diagram of a depth image according to some embodiments of the invention.
FIG. 15 is a schematic diagram of a three-dimensional spatial system according to some embodiments of the invention.
Fig. 16 is a schematic diagram of a three-dimensional spatial structure type according to some embodiments of the invention.
Description of the main element symbols:
the electronic device 100, the control device 10, the processing module 11, the first processing unit 112, the second processing unit 114, the third processing unit 116, the fourth processing unit 118, the obtaining module 13, the fifth processing unit 132, the finding unit 134, the determining module 15, the finding module 17, the control module 19, the imaging device 20, the direction sensor 30, the memory 40, and the display 50.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Referring to fig. 1 and fig. 3 together, the method for controlling an auxiliary composition according to an embodiment of the invention can be used for controlling the electronic device 100. The electronic device 100 includes an imaging device 20, an orientation sensor 30, and a memory 40. The imaging device 20 is used to acquire scene data, including buffered main images. The orientation sensor 30 is used to sense the orientation of the imaging device 20. The memory 40 stores a plurality of three-dimensional spatial structure types and corresponding composition suggestions. The control method comprises the following steps:
s11: processing scene data to obtain depth information of a cached main image;
s13: acquiring a foreground part of a cached main image according to the depth information;
s15: determining the current three-dimensional space structure type according to the foreground part and the orientation; and
s17: a current composition recommendation corresponding to the current three-dimensional spatial structure type is found in the memory 40.
Specifically, the orientation of the imaging device 20 may be understood as the shooting direction of the imaging device 20. For example, when the imaging device 20 is oriented downward, it is shooting a scene on the ground; when the imaging device 20 is oriented upward, it is shooting a scene of the sky, and so on.
Referring to fig. 2 and fig. 3, the control device 10 for assisted composition according to the embodiment of the invention can be used to control the electronic device 100. The control device 10 comprises a processing module 11, an obtaining module 13, a determining module 15 and a finding module 17. The processing module 11 is configured to process the scene data to obtain depth information of the cached main image. The obtaining module 13 is configured to obtain a foreground portion of the cached main image according to the depth information. The determining module 15 is configured to determine the current three-dimensional spatial structure type according to the foreground portion and the orientation. The finding module 17 is configured to find, in the memory 40, the current composition suggestion corresponding to the current three-dimensional spatial structure type.
That is, the control method according to the embodiment of the present invention may be implemented by the control device 10 according to the embodiment of the present invention, wherein the step S11 may be implemented by the processing module 11, the step S13 may be implemented by the obtaining module 13, the step S15 may be implemented by the determining module 15, and the step S17 may be implemented by the finding module 17.
In some embodiments, the control device 10 according to the embodiment of the present invention may be applied to the electronic device 100 according to the embodiment of the present invention, or the electronic device 100 according to the embodiment of the present invention may include the control device 10 according to the embodiment of the present invention.
The control method, the control device 10 and the electronic device 100 of the embodiment of the invention determine the current three-dimensional space structure type by using the depth information and the information of the direction sensor 30, so as to obtain the current composition suggestion corresponding to the current three-dimensional space structure type, and further assist in composition.
In some embodiments, the electronic device 100 comprises a mobile phone or a tablet computer. In the embodiment of the present invention, the electronic device 100 is a mobile phone.
In some embodiments, the imaging device 20 includes a front camera and/or a rear camera. In the embodiment of the present invention, the imaging device 20 is a front camera.
Referring to fig. 4, in some embodiments, the scene data includes a depth image corresponding to the buffered main image, and step S11 includes the following steps:
s112: processing the depth image to obtain depth data of the cached main image; and
s114: the depth data is processed to obtain depth information.
Referring to fig. 5, in some embodiments, the scene data includes a depth image corresponding to a buffered main image, and the processing module 11 includes a first processing unit 112 and a second processing unit 114. The first processing unit 112 is configured to process the depth image to obtain depth data of the buffered main image. The second processing unit 114 is configured to process the depth data to obtain depth information.
That is, step S112 may be implemented by the first processing unit 112, and step S114 may be implemented by the second processing unit 114.
In this way, the depth information of the cached main image can be quickly obtained by using the depth image.
It will be appreciated that the cached main image is an RGB color image, and the depth image contains a large amount of depth data, i.e., depth information of each person or object in the scene, including the magnitude and/or range of the depth. Since the color information of the cached main image corresponds one-to-one with the depth information of the depth image, the depth information of the cached main image can be obtained.
In some embodiments, the depth image corresponding to the cached main image may be obtained by structured-light depth ranging or by a time-of-flight (TOF) depth camera.
When structured light depth ranging is used to obtain a depth image, the imaging device 20 includes a camera and a projector.
It is understood that structured-light depth ranging projects a light structure of a certain pattern onto the object surface with a projector, forming on the surface a three-dimensional image of light stripes modulated by the shape of the object to be measured. The camera detects this light-stripe image to obtain a two-dimensional distorted light-stripe image. The degree of distortion of the stripes depends on the relative position between the projector and the camera and on the profile or height of the object surface. The displacement along a light stripe is proportional to the height of the object surface; a kink indicates a change of plane, and a discontinuity indicates a physical gap in the surface. When the relative position between the projector and the camera is fixed, the three-dimensional contour of the object surface can be reproduced from the coordinates of the distorted two-dimensional light-stripe image, so that the depth information can be acquired. Structured-light depth ranging offers high resolution and measurement precision.
When a TOF depth camera is used to obtain the depth image, the imaging device 20 includes a TOF depth camera.
It can be understood that a TOF depth camera emits modulated infrared light from a light-emitting unit toward the object and records, through a sensor, the phase change of the light reflected back from the object; according to the speed of light, the depth of the entire scene can be acquired in real time within one wavelength range. A TOF depth camera is not affected by the gray scale or surface features of the photographed object when calculating depth information, can calculate depth information quickly, and has high real-time performance.
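The phase-to-depth relation described above can be written down concretely. The patent does not give the formula, so the sketch below uses the standard continuous-wave TOF relation d = c * phase / (4 * pi * f): the light covers the distance twice (out and back), and the result is unambiguous only while the phase shift stays below one full cycle.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    # Round trip covers 2*d, and phase = 2*pi*f*(2*d/c),
    # so d = c * phase / (4*pi*f). Unambiguous for phase < 2*pi.
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

For example, at a 30 MHz modulation frequency a half-cycle phase shift (pi radians) corresponds to a depth of roughly 2.5 m, which is a quarter of the modulation wavelength.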
Referring to fig. 6, in some embodiments, the scene data includes a buffer secondary image corresponding to the buffer primary image, and step S11 includes the following steps:
s116: processing the main cache image and the auxiliary cache image to obtain depth data of the main cache image; and
s118: the depth data is processed to obtain depth information.
Referring to fig. 7, in some embodiments, the scene data includes a buffered secondary image corresponding to a buffered primary image, and the processing module 11 includes a third processing unit 116 and a fourth processing unit 118. The third processing unit 116 is configured to process the buffered main image and the buffered sub-image to obtain depth data of the buffered main image. The fourth processing unit 118 is configured to process the depth data to obtain depth information.
That is, step S116 may be implemented by the third processing unit 116, and step S118 may be implemented by the fourth processing unit 118.
In this way, the depth information of the buffered main image can be acquired by processing the buffered main image and the buffered sub-image.
In some embodiments, the imaging device 20 includes a primary camera and a secondary camera.
It is understood that the depth information may be obtained by a binocular stereo vision ranging method, in which case the scene data includes a cached main image and a cached sub-image. The main cache image is shot by the main camera, and the auxiliary cache image is shot by the auxiliary camera. The binocular stereo vision ranging is that two identical cameras are used for imaging the same object from different positions to obtain a stereo image pair of the object, corresponding image points of the stereo image pair are matched through an algorithm, accordingly, parallax is calculated, and finally, a method based on triangulation is adopted to recover depth information. In this way, the depth information of the cached main image can be obtained by matching the stereoscopic image pair of the cached main image and the cached sub-image.
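The triangulation step at the end of binocular ranging reduces, for a rectified camera pair, to the classic relation Z = f * B / d (focal length in pixels, baseline between the two cameras, disparity in pixels). The patent does not spell out this formula; the sketch below assumes an already-matched, rectified stereo pair.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    # Rectified stereo triangulation: Z = f * B / d.
    # A disparity of zero corresponds to a point at infinity.
    if disparity_px == 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: nearby objects produce large disparities, so the depth resolution of a stereo pair degrades quadratically with distance.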
Referring to fig. 8, in some embodiments, step S13 includes the following steps:
s132: obtaining the frontmost point of the cached main image according to the depth information; and
s134: searching for a region that is continuously connected to the frontmost point and continuously varies in depth, as the foreground portion.
Referring to fig. 9, in some embodiments, the obtaining module 13 includes a fifth processing unit 132 and a searching unit 134. The fifth processing unit 132 is configured to obtain a foremost point of the cached main image according to the depth information. The finding unit 134 is configured to find a region continuously connected to the frontmost point and continuously changing in depth as the foreground portion.
That is, step S132 may be implemented by the fifth processing unit 132, and step S134 may be implemented by the finding unit 134.
In this way, a physically connected foreground portion of the cached main image may be obtained, i.e., a foreground portion that is contiguous in the real scene. Taking this physically connected foreground portion as the subject, the subject of the image can be identified intuitively.
Specifically, the depth of each pixel in the cached main image is obtained from the depth information, and the pixel with the smallest depth is taken as the frontmost point of the cached main image. The frontmost point is, in effect, the start of the foreground portion. Diffusion is then performed outward from the frontmost point to obtain the regions that are continuously connected to it and continuously varying in depth, and these regions are merged with the frontmost point into the foreground region.
It should be noted that the frontmost point is the pixel corresponding to the object with the smallest depth, that is, the object with the smallest object distance, closest to the imaging device 20. "Connected" means that two pixels adjoin each other. "Continuously changing in depth" means that the depth difference between two adjacent pixels is smaller than a predetermined difference; when this holds, the depth is considered to change continuously across the two pixels.
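The diffusion from the frontmost point described above is essentially a region-growing (flood-fill) pass over the depth map. The sketch below is one possible reading of steps S132/S134, using 4-neighbour adjacency and a `max_step` parameter standing in for the "predetermined difference"; both choices are assumptions, not taken from the patent.

```python
from collections import deque

def foreground_region(depth, max_step):
    """Grow a region from the frontmost (smallest-depth) pixel,
    adding 4-neighbours whose depth differs from the pixel they are
    reached from by less than max_step. `depth` is a 2-D list of
    depths; returns a boolean mask of the foreground portion."""
    h, w = len(depth), len(depth[0])
    # S132: frontmost point = pixel with the smallest depth
    seed = min(((y, x) for y in range(h) for x in range(w)),
               key=lambda p: depth[p[0]][p[1]])
    mask = [[False] * w for _ in range(h)]
    mask[seed[0]][seed[1]] = True
    queue = deque([seed])
    # S134: diffuse outward while depth varies continuously
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(depth[ny][nx] - depth[y][x]) < max_step):
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask
```

Because the growth criterion compares each pixel with its neighbour rather than with the seed, the region can ramp smoothly away from the camera, which matches the "continuously changing in depth" wording.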
Referring to fig. 10, in some embodiments, step S13 may include the following steps:
s136: obtaining the frontmost point of the cached main image according to the depth information; and
s138: searching for an area whose depth difference from the frontmost point is smaller than a predetermined threshold, as the foreground portion.
In this way, a foreground portion of the cached main image may be obtained even when it is not physically connected in the real scene but stands in a certain logical relationship. For example, in a scene where an eagle dives down to snatch a chick, the eagle and the chick are not physically connected, but logically they can be judged to be related.
Specifically, the frontmost point of the cached main image is obtained from the depth information; the frontmost point is, in effect, the start of the foreground portion. Diffusion is performed outward from the frontmost point to obtain the areas whose depth difference from the frontmost point is smaller than the predetermined threshold, and these areas are merged with the frontmost point into the foreground region.
In some embodiments, the predetermined threshold may be a value set by a user. Therefore, the user can determine the range of the foreground part according to the requirement of the user, so that an ideal composition suggestion is obtained, and an ideal composition is realized.
In some embodiments, the predetermined threshold may be a value determined by the control device 10, without limitation. The predetermined threshold determined by the control device 10 may be a fixed value stored internally or may be a value calculated according to different conditions, such as the depth of the forefront point.
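Unlike the neighbour-to-neighbour criterion of S132/S134, the S136/S138 variant compares every pixel directly with the frontmost point, so it reduces to a simple depth mask. A minimal sketch, assuming a 2-D list of depths (the threshold value itself is whatever the user or control device supplies):

```python
def foreground_by_threshold(depth, threshold):
    """Mark pixels whose depth differs from the frontmost point
    (the minimum depth) by less than the predetermined threshold."""
    front = min(min(row) for row in depth)  # depth of the frontmost point
    return [[d - front < threshold for d in row] for row in depth]
```

Because no connectivity is required, the eagle and the chick from the example above both end up in the mask as long as their depths are within the threshold of the frontmost point.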
In certain embodiments, step S13 may include the steps of:
searching for an area whose depth lies within a predetermined interval, as the foreground portion.
In this way, a foreground portion having a depth in an appropriate range can be obtained.
It can be understood that in some shooting situations the foreground portion is not the frontmost part but a part slightly behind it. For example, when a person sits behind a computer, the computer is closer to the camera, but the person is the subject. Taking the area whose depth lies within the predetermined interval as the foreground portion therefore effectively avoids selecting the wrong subject.
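The interval variant is again a one-line depth mask. The sketch below assumes a 2-D list of depths and an inclusive [near, far] interval; with the interval placed just behind the nearest object, the computer in the example above is excluded while the person is kept.

```python
def foreground_by_interval(depth, near, far):
    """Mark pixels whose depth falls within the predetermined
    interval [near, far], so a subject slightly behind a nearer
    object can still be selected as the foreground."""
    return [[near <= d <= far for d in row] for row in depth]
```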
In some embodiments, the orientation sensor 30 comprises a gravity sensor by which the orientation of the imaging device 20 can be obtained.
In one embodiment, the current three-dimensional spatial structure type is determined from the foreground portion and the orientation as follows. First, the orientation of the imaging device 20 is obtained from the gravity sensor, and the current layout of the image can be inferred back from that orientation; for example, when the imaging device 20 faces forward, the image can be inferred to be distributed front-to-back. The image content of the foreground portion is then analyzed; for example, if two objects of distinctive color are present in the foreground portion, the current spatial structure type can be determined to be a 井-shaped (nine-grid) layout.
Referring to fig. 3 and 11 together, in some embodiments, the electronic device 100 includes a display 50, and the control method includes the following steps:
s19: the control display 50 displays the current composition suggestion.
Referring to fig. 12, in some embodiments, the control device 10 includes a control module 19. The control module 19 is for controlling the display 50 to display a current composition recommendation.
That is, step S19 may be implemented by the control module 19.
In this way, it is possible to inform the user of the current composition suggestion and guide the user to operate the electronic apparatus 100, thereby completing the composition following the current composition suggestion.
It is understood that, after obtaining the current composition suggestion, the control device 10 needs to inform the user of it; by displaying the current composition suggestion on the display 50, the user can quickly understand how to operate.
In some embodiments, the electronic device 100 comprises an electroacoustic device, and the control method comprises the steps of:
the electro-acoustic device is controlled to prompt the current composition recommendation.
In this way, the user can be prompted in a voice manner how to implement the composition.
Referring to fig. 13-16 together, in one embodiment the control device 10 controls the imaging device 20 to acquire scene data, which includes the cached main image shown in fig. 13 and the depth image shown in fig. 14. A foreground portion is obtained from the depth image as the image subject; a three-dimensional spatial system as shown in fig. 15 is established from the foreground portion and the orientation of the imaging device 20 obtained by the orientation sensor 30; the current three-dimensional spatial structure type shown in fig. 16 is determined from that system; and finally the current composition suggestion corresponding to the current three-dimensional spatial structure type is looked up in the memory 40, so as to prompt the user how to complete the composition.
In the description of the embodiments of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
In the description of the embodiments of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as being fixedly connected, detachably connected, or integrally connected; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. Specific meanings of the above terms in the embodiments of the present invention can be understood by those of ordinary skill in the art according to specific situations.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processing module-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of embodiments of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
Those skilled in the art will understand that all or part of the steps of the above method embodiments may be carried out by a program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, may each exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If the integrated module is implemented as a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art can make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present invention.

Claims (15)

1. A control method for auxiliary composition, used to control an electronic device, wherein the electronic device comprises an imaging device, an orientation sensor, and a memory; the imaging device is configured to collect scene data, the scene data comprising a cached main image, and the orientation sensor is configured to sense the orientation of the imaging device; when the imaging device faces downward, the imaging device photographs a scene on the ground; the memory stores a plurality of three-dimensional spatial structure types and corresponding composition suggestions; the control method comprises the following steps:
processing the scene data to obtain depth information of the cached main image;
obtaining a foreground portion of the cached main image according to the depth information;
determining a current three-dimensional spatial structure type according to the foreground portion and the orientation: inferring the current orientation of the cached main image from the sensed orientation, wherein when the imaging device faces forward the electronic device is perpendicular to the ground, from which it is inferred that objects in the cached main image are distributed from front to back; analyzing the image content of the foreground portion; and determining that the current three-dimensional spatial structure type is a "井" (jing) shape when the foreground portion contains two objects whose colors differ from those of the background region; and
searching the memory for the current composition suggestion corresponding to the current three-dimensional spatial structure type.
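As a rough illustration of the control flow recited in claim 1 (depth, then foreground, then structure type, then suggestion lookup), the following sketch can be considered. The function names, the simplified "井" (jing) test, and the suggestion dictionary are all illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the claim-1 control flow. All names and the
# simplified "jing"-shape test are illustrative, not the patent's method.

def classify_structure(foreground_objects, orientation):
    """Map foreground content plus device orientation to a structure type."""
    # Facing forward: the device is perpendicular to the ground, so objects
    # are assumed to be distributed from front to back; two foreground
    # objects differing in color from the background suggest a "jing" layout.
    if orientation == "forward" and len(foreground_objects) == 2:
        return "jing"
    return "unknown"

def suggest_composition(foreground_objects, orientation, suggestions):
    """Look up the composition suggestion stored for the detected type."""
    structure = classify_structure(foreground_objects, orientation)
    return suggestions.get(structure, "no suggestion")
```

For example, with a stored suggestion table `{"jing": "align subjects with the thirds grid"}`, two forward-facing foreground objects would retrieve that entry, while any other scene falls through to `"no suggestion"`.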
2. The control method according to claim 1, wherein the scene data comprises a depth image corresponding to the cached main image, and the step of processing the scene data to obtain the depth information of the cached main image comprises:
processing the depth image to obtain depth data of the cached main image; and
processing the depth data to obtain the depth information.
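The two sub-steps of claim 2 can be pictured as, first, converting the raw depth image into metric depth data and, second, summarizing that data into depth information. The linear 8-bit mapping and the near/far range below are illustrative assumptions; real depth cameras use device-specific scales:

```python
def depth_data_from_depth_image(depth_image, near_m=0.3, far_m=10.0):
    # Step 1: map 8-bit depth-image values (0..255) linearly onto an
    # assumed metric range [near_m, far_m] (illustrative, not the patent's).
    return [[near_m + (v / 255.0) * (far_m - near_m) for v in row]
            for row in depth_image]

def depth_info(depth_data):
    # Step 2: reduce the per-pixel depth data to summary depth information.
    flat = [d for row in depth_data for d in row]
    return {"min": min(flat), "max": max(flat), "mean": sum(flat) / len(flat)}
```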
3. The control method according to claim 1, wherein the scene data comprises a cached secondary image corresponding to the cached main image, and the step of processing the scene data to obtain the depth information of the cached main image comprises:
processing the cached main image and the cached secondary image to obtain depth data of the cached main image; and
processing the depth data to obtain the depth information.
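Claim 3 recovers depth from the main/secondary image pair, i.e. binocular stereo. Once a disparity map has been matched between the two cached images, the standard pinhole relation Z = f·B/d converts disparity to depth; the focal length and baseline values in the sketch below are illustrative, and the patent does not specify this formula:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Pinhole stereo relation: depth Z = focal_length * baseline / disparity.
    # Zero disparity (no horizontal shift between the two cached images)
    # corresponds to a point at infinity.
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        z = focal_px * baseline_m / d
    return z
```

With an assumed focal length of 1000 px and a 5 cm baseline, a 10-px disparity maps to 5 m; smaller disparities map to proportionally greater depths.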
4. The control method according to claim 1, wherein the step of obtaining the foreground portion of the cached main image according to the depth information comprises:
obtaining the foremost point of the cached main image according to the depth information; and
finding, as the foreground portion, a region that is contiguous with the foremost point and varies continuously in depth.
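The foreground extraction of claim 4 amounts to a flood fill seeded at the foremost (smallest-depth) pixel, accepting neighbors only while the depth varies continuously. The 4-connectivity and the `max_step` continuity threshold below are assumptions for illustration:

```python
from collections import deque

def foreground_region(depth, max_step=0.1):
    # Region-grow from the foremost (smallest-depth) pixel, accepting
    # 4-connected neighbours whose depth changes by at most `max_step`
    # per pixel. `max_step` is an assumed tuning parameter.
    rows, cols = len(depth), len(depth[0])
    start = min(((r, c) for r in range(rows) for c in range(cols)),
                key=lambda rc: depth[rc[0]][rc[1]])
    region, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and abs(depth[nr][nc] - depth[r][c]) <= max_step):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

On a depth map where a near object (around 1 m) sits in front of a far background (around 5 m), the fill stops at the depth discontinuity and returns only the near pixels.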
5. The control method according to claim 1, wherein the electronic device comprises a display, and the control method further comprises:
controlling the display to display the current composition suggestion.
6. A control device for auxiliary composition, used to control an electronic device, wherein the electronic device comprises an imaging device, an orientation sensor, and a memory; the imaging device is configured to collect scene data, the scene data comprising a cached main image, and the orientation sensor is configured to sense the orientation of the imaging device; when the imaging device faces downward, the imaging device photographs a scene on the ground; the memory stores a plurality of three-dimensional spatial structure types and corresponding composition suggestions; the control device comprises:
a processing module configured to process the scene data to obtain depth information of the cached main image;
an obtaining module configured to obtain a foreground portion of the cached main image according to the depth information;
a determining module configured to determine a current three-dimensional spatial structure type according to the foreground portion and the orientation: inferring the current orientation of the cached main image from the sensed orientation, wherein when the imaging device faces forward the electronic device is perpendicular to the ground, from which it is inferred that objects in the cached main image are distributed from front to back; analyzing the image content of the foreground portion; and determining that the current three-dimensional spatial structure type is a "井" (jing) shape when the foreground portion contains two objects whose colors differ from those of the background region; and
a finding module configured to find, in the memory, the current composition suggestion corresponding to the current three-dimensional spatial structure type.
7. The control device according to claim 6, wherein the scene data comprises a depth image corresponding to the cached main image, and the processing module comprises:
a first processing unit configured to process the depth image to obtain depth data of the cached main image; and
a second processing unit configured to process the depth data to obtain the depth information.
8. The control device according to claim 6, wherein the scene data comprises a cached secondary image corresponding to the cached main image, and the processing module comprises:
a third processing unit configured to process the cached main image and the cached secondary image to obtain depth data of the cached main image; and
a fourth processing unit configured to process the depth data to obtain the depth information.
9. The control device according to claim 6, wherein the obtaining module comprises:
a fifth processing unit configured to obtain the foremost point of the cached main image according to the depth information; and
a finding unit configured to find, as the foreground portion, a region that is contiguous with the foremost point and varies continuously in depth.
10. The control device according to claim 6, wherein the electronic device comprises a display, and the control device further comprises:
a control module configured to control the display to display the current composition suggestion.
11. An electronic device, comprising:
an imaging device configured to collect scene data, the scene data comprising a cached main image;
an orientation sensor configured to sense the orientation of the imaging device;
a memory storing a plurality of three-dimensional spatial structure types and corresponding composition suggestions; and
a control device according to any one of claims 6 to 10.
12. The electronic device of claim 11, wherein the electronic device is a cell phone or a tablet computer.
13. The electronic device of claim 11, wherein the imaging device comprises a primary camera and a secondary camera.
14. The electronic device of claim 11, wherein the imaging device comprises a camera and a projector.
15. The electronic device of claim 11, wherein the imaging device comprises a TOF depth camera.
CN201710138849.9A 2017-03-09 2017-03-09 Auxiliary composition control method, control device and electronic device Expired - Fee Related CN106973224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710138849.9A CN106973224B (en) 2017-03-09 2017-03-09 Auxiliary composition control method, control device and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710138849.9A CN106973224B (en) 2017-03-09 2017-03-09 Auxiliary composition control method, control device and electronic device

Publications (2)

Publication Number Publication Date
CN106973224A CN106973224A (en) 2017-07-21
CN106973224B true CN106973224B (en) 2020-08-07

Family

ID=59329377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710138849.9A Expired - Fee Related CN106973224B (en) 2017-03-09 2017-03-09 Auxiliary composition control method, control device and electronic device

Country Status (1)

Country Link
CN (1) CN106973224B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110115025B (en) 2017-03-09 2022-05-20 Oppo广东移动通信有限公司 Depth-based control method, control device and electronic device
CN107551551B (en) * 2017-08-09 2021-03-26 Oppo广东移动通信有限公司 Game effect construction method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1551616A (en) * 2003-05-15 2004-12-01 LG Electronics Inc. Portable phone with camera and customizable photographic composition guidelines
CN101540844A (en) * 2008-03-19 2009-09-23 索尼株式会社 Composition determination device, composition determination method, and program
JP2012080236A (en) * 2010-09-30 2012-04-19 Hitachi Solutions Ltd Electronic device, and method and program for displaying captured image area with information
CN103634588A (en) * 2012-08-27 2014-03-12 联想(北京)有限公司 Image composition method and electronic apparatus
CN106484086A (en) * 2015-09-01 2017-03-08 北京三星通信技术研究有限公司 The method shooting for auxiliary and its capture apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8224078B2 (en) * 2000-11-06 2012-07-17 Nant Holdings Ip, Llc Image capture and identification system and process

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1551616A (en) * 2003-05-15 2004-12-01 LG Electronics Inc. Portable phone with camera and customizable photographic composition guidelines
CN101540844A (en) * 2008-03-19 2009-09-23 索尼株式会社 Composition determination device, composition determination method, and program
JP2012080236A (en) * 2010-09-30 2012-04-19 Hitachi Solutions Ltd Electronic device, and method and program for displaying captured image area with information
CN103634588A (en) * 2012-08-27 2014-03-12 联想(北京)有限公司 Image composition method and electronic apparatus
CN106484086A (en) * 2015-09-01 2017-03-08 北京三星通信技术研究有限公司 The method shooting for auxiliary and its capture apparatus

Also Published As

Publication number Publication date
CN106973224A (en) 2017-07-21

Similar Documents

Publication Publication Date Title
CN106851123B (en) Exposure control method, exposure control device and electronic device
CN106993112B (en) Background blurring method and device based on depth of field and electronic device
CN106851124B (en) Image processing method and device based on depth of field and electronic device
CN106851238B (en) Method for controlling white balance, white balance control device and electronic device
US8724893B2 (en) Method and system for color look up table generation
US10237532B2 (en) Scan colorization with an uncalibrated camera
US10156437B2 (en) Control method of a depth camera
US20130201301A1 (en) Method and System for Automatic 3-D Image Creation
US8294762B2 (en) Three-dimensional shape measurement photographing apparatus, method, and program
CN106991378B (en) Depth-based face orientation detection method and device and electronic device
WO2018111915A1 (en) Foot measuring and sizing application
JP2011060216A (en) Device and method of processing image
TWI744245B (en) Generating a disparity map having reduced over-smoothing
CN106851107A (en) Switch control method, control device and the electronic installation of camera assisted drawing
US20210004614A1 (en) Surround View System Having an Adapted Projection Surface
JP2013168063A (en) Image processing device, image display system, and image processing method
CN112889272B (en) Depth image acquisition method, depth image acquisition device and electronic device
US20180213156A1 (en) Method for displaying on a screen at least one representation of an object, related computer program, electronic display device and apparatus
CN106875433A (en) Cut control method, control device and the electronic installation of composition
CN106973224B (en) Auxiliary composition control method, control device and electronic device
CN107018322B (en) Control method and control device for rotary camera auxiliary composition and electronic device
JPH1021401A (en) Three-dimensional information processor
CN106991696B (en) Backlight image processing method, backlight image processing device and electronic device
CN107025636B (en) Image defogging method and device combined with depth information and electronic device
US11283970B2 (en) Image processing method, image processing apparatus, electronic device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18, Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18, Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200807