CN107360366B - Photographing method and device, storage medium and electronic equipment

Info

Publication number
CN107360366B
CN107360366B
Authority
CN
China
Prior art keywords
sub, viewing, view, shot, areas
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710527835.6A
Other languages
Chinese (zh)
Other versions
CN107360366A (en)
Inventor
梁昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710527835.6A priority Critical patent/CN107360366B/en
Publication of CN107360366A publication Critical patent/CN107360366A/en
Application granted granted Critical
Publication of CN107360366B publication Critical patent/CN107360366B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a photographing method and apparatus, a storage medium, and an electronic device. The photographing method includes: splitting a viewfinder interface into a plurality of sub-viewing areas; separately acquiring, in each sub-viewing area, a local image once the viewing condition of that area is satisfied, where the local image is a partial image of the object to be photographed within the viewfinder interface; and, after local images satisfying the viewing conditions have been acquired in all the sub-viewing areas, synthesizing them to obtain a whole image of the object to be photographed. By acquiring condition-satisfying local images block by block through the plurality of sub-viewing areas and synthesizing them into a whole image, the embodiments of the invention capture the photo in one pass, are simple to operate, and save storage space.

Description

Photographing method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the technical field of photographing, in particular to mobile devices, and specifically provides a photographing method and apparatus, a storage medium, and an electronic device.
Background
With the development of electronic technology, taking pictures with mobile devices that have a photographing function has become part of people's daily lives. When photographing a scenic spot with heavy foot traffic, after the user has framed the scenery to be shot, passers-by frequently walk into the frame, so the photo easily ends up containing non-target objects; alternatively, the user hurries to press the shutter in the brief moment when other people have left the frame, and the photo quality suffers. Likewise, when shooting a group photo, it is difficult to capture every person in an ideal state in a single shot, and the user may have to shoot repeatedly before obtaining a satisfactory photo. Further improvement is therefore needed.
Disclosure of Invention
The embodiments of the present invention provide a photographing method and apparatus, a storage medium, and an electronic device, which can capture an image that satisfies the desired conditions in a single pass through a simple operation.
An embodiment of the present invention provides a photographing method applied to an electronic device, comprising the following steps:
splitting a viewfinder interface into a plurality of sub-viewing areas;
separately acquiring a local image in each sub-viewing area when the viewing condition of that area is satisfied, wherein the local image is a partial image of the object to be photographed within the viewfinder interface;
and synthesizing the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
An embodiment of the present invention further provides a photographing apparatus, where the apparatus includes:
the splitting module is used for splitting a viewfinder interface into a plurality of sub-viewing areas;
the acquisition module is used for separately acquiring a local image in each sub-viewing area when the viewing condition of that area is satisfied, wherein the local image is a partial image of the object to be photographed within the viewfinder interface;
and the synthesis module is used for synthesizing the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
An embodiment of the present invention further provides a storage medium, on which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the photographing method according to any embodiment of the present invention.
The embodiment of the present invention further provides an electronic device, which includes a memory and a processor, and the processor is configured to execute the photographing method according to any embodiment of the present invention by calling the computer program stored in the memory.
An embodiment of the present invention further provides an electronic device comprising a display screen, a camera, a memory, and a processor. The display screen is used for splitting a viewfinder interface into a plurality of sub-viewing areas; the camera is used for separately acquiring a local image in each sub-viewing area when the viewing condition of that area is satisfied, wherein the local image is a partial image of the object to be photographed within the viewfinder interface; and the processor, by calling the computer program stored in the memory, is used for synthesizing the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
According to the embodiments of the present invention, the viewfinder interface is split into a plurality of sub-viewing areas, a local image is separately acquired in each sub-viewing area when its viewing condition is satisfied, where the local image is a partial image of the object to be photographed within the viewfinder interface, and the local images satisfying the viewing conditions in all the sub-viewing areas are synthesized to obtain a whole image of the object to be photographed. By acquiring condition-satisfying local images block by block through the plurality of sub-viewing areas and synthesizing them into a whole image, the embodiments of the invention capture the photo in one pass, are simple to operate, and save storage space.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of an electronic device according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a photographing method according to an embodiment of the present invention.
Fig. 3 is a schematic flowchart of the step in Fig. 2 of separately acquiring the local images in the sub-viewing areas when the viewing conditions are satisfied.
Fig. 4 is a schematic view of a first application scenario of a photographing method according to an embodiment of the present invention.
Fig. 5 is a schematic view of a second application scenario of the photographing method according to the embodiment of the present invention.
Fig. 6 is a schematic view of a third application scenario of the photographing method according to the embodiment of the present invention.
Fig. 7 is a schematic view of a fourth application scenario of the photographing method according to the embodiment of the present invention.
Fig. 8 is a schematic view of a fifth application scenario of the photographing method according to the embodiment of the present invention.
Fig. 9 is a schematic structural diagram of a photographing device according to an embodiment of the present invention.
Fig. 10 is a schematic structural diagram of the acquisition module in fig. 9.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Fig. 12 is another schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second", etc. in the present invention are used for distinguishing different objects, not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules but may alternatively include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The photographing method provided by the embodiments of the present invention may be executed by the photographing apparatus provided by the embodiments of the present invention, or by an electronic device (such as a palmtop computer, a tablet computer, a smartphone, or a camera) integrating the photographing apparatus, where the photographing apparatus may be implemented in hardware or software.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of an electronic device according to an embodiment of the present invention. When photographing a scenic spot with heavy foot traffic, after the user has selected the viewing range of the scenery, there is no need to wait for every passer-by to walk out of the viewing range of the electronic device; even amid a surge of pedestrians, the electronic device can capture an ideal image free of non-target persons intruding into the frame.
Referring to fig. 2 to 8, fig. 2 is a schematic flowchart of a photographing method according to an embodiment of the present invention, fig. 3 is a schematic flowchart of the step in fig. 2 of separately acquiring local images in the sub-viewing areas when the viewing conditions are satisfied, and fig. 4 to 8 are schematic diagrams of first to fifth application scenarios of the photographing method according to an embodiment of the present invention. The method is applied to an electronic device and comprises the following steps:
Step S101: splitting a viewfinder interface into a plurality of sub-viewing areas.
It can be understood that, when taking a picture with the electronic device, the user may use a normal mode or a block-viewing mode. When the enabled photographing mode is the block-viewing mode, a splitting mode may be preset for the viewfinder interface, for example splitting into 4 sub-viewing areas or into 6 sub-viewing areas. Several splitting modes may be preset at the factory for the user to choose from; for example, the selectable splitting modes may include 2, 3, 4, 6, 8, or 9 sub-viewing areas. The number of sub-viewing areas may also be user-defined. After the electronic device enters the photographing mode, the viewfinder interface displayed on the display screen of the electronic device can be split into a plurality of sub-viewing areas according to the preset splitting mode.
For example, as shown in fig. 4, taking a mobile phone as an example, the preset splitting mode on the mobile phone splits the interface into 6 sub-viewing areas. After the mobile phone enters the photographing mode, the viewfinder interface displayed on the display screen of the mobile phone is split, according to the preset splitting mode, into 6 sub-viewing areas, namely the 1st to 6th sub-viewing areas.
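Purely as an illustration and not part of the patent text, the splitting step could be sketched as follows in Python; the 3-row by 2-column grid layout, the pixel dimensions, and all names are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class SubViewingZone:
    index: int   # 1-based zone number, matching the "1st to 6th sub-viewing area" naming
    x: int       # left edge of the zone inside the viewfinder, in pixels
    y: int       # top edge of the zone inside the viewfinder, in pixels
    width: int
    height: int

def split_viewfinder(view_width: int, view_height: int, rows: int, cols: int) -> list:
    """Split the viewfinder rectangle into rows * cols equally sized sub-viewing zones."""
    zone_w, zone_h = view_width // cols, view_height // rows
    return [
        SubViewingZone(index=r * cols + c + 1,
                       x=c * zone_w, y=r * zone_h,
                       width=zone_w, height=zone_h)
        for r in range(rows) for c in range(cols)
    ]

# Preset splitting mode of 6 sub-viewing areas, assumed here to be a 3 x 2 grid
# over a 1080 x 1920 preview.
zones = split_viewfinder(view_width=1080, view_height=1920, rows=3, cols=2)
```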
Step S102: separately acquiring a local image in each sub-viewing area when the viewing condition of that area is satisfied, wherein the local image is a partial image of the object to be photographed within the viewfinder interface.
It can be understood that, after a fixed viewing range is selected through the viewfinder interface, each sub-viewing area independently captures a local image within its own viewing range, and the image acquisition processes of the sub-viewing areas do not interfere with one another.
For example, when a landscape photograph is taken with the electronic device, the electronic device enters the landscape photographing scene, and each sub-viewing area can identify whether an object undergoing displacement change exists in that sub-viewing area within a preset recognition time period. If an object undergoing displacement change exists in the sub-viewing area within the preset recognition time period, a non-target object has intruded into the sub-viewing area, the viewing condition is determined not to be satisfied, and a second round of recognition is performed; if no object undergoing displacement change exists in the sub-viewing area within the preset recognition time period, the local image displayed in the sub-viewing area satisfies the viewing condition, and the electronic device automatically acquires the local image in the sub-viewing area satisfying the viewing condition. For example, the preset recognition time period is 2 seconds. Further, there may be an object in the landscape whose displacement changes continuously, in which case the viewing time does not need to reach the preset time period: once the object undergoing displacement change has been marked in the sub-viewing area, the local image in that sub-viewing area is acquired as soon as the marked object is detected to have moved out of the sub-viewing area. Alternatively, when the viewing time has not reached the preset time period, the local image in the sub-viewing area may be acquired in response to a viewing confirmation instruction input by the user.
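As an illustrative sketch only, one way to realize the displacement-change check for a landscape sub-viewing area is simple frame differencing over the recognition period; the patent does not specify how motion is detected, so the thresholds and function names below are assumptions (SubViewingZone is the type sketched above):

```python
import numpy as np

def has_displacement_change(frames, zone, diff_threshold=12, changed_ratio=0.02) -> bool:
    """Return True if anything moved inside the sub-viewing zone during the recognition period.

    frames: grayscale preview frames (numpy arrays) captured over the preset period (e.g. 2 s).
    The frame-differencing approach and both thresholds are assumptions for illustration.
    """
    crops = [f[zone.y:zone.y + zone.height, zone.x:zone.x + zone.width].astype(np.int16)
             for f in frames]
    for prev, curr in zip(crops, crops[1:]):
        changed = np.abs(curr - prev) > diff_threshold   # per-pixel change mask
        if changed.mean() > changed_ratio:               # enough pixels changed -> motion
            return True
    return False

# Landscape viewing condition for one zone: no displacement change over the period,
# so the zone's local image may be captured and locked.
```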
For example, when the electronic device is used to shoot a group photo, the electronic device enters the group-photo scene, and each sub-viewing area can identify the facial features of the photographed persons in that sub-viewing area within a preset recognition time period to judge whether the viewing condition is satisfied. For example, the identified facial features include whether the eyes on the face are open or blinking, whether the expression is a smile, and so on. If the eyes on the face of a photographed person are not open, or are blinking, the viewing condition is not satisfied and a second round of recognition is performed; if the eyes on the face of the photographed person are recognized as open and not blinking, the viewing condition is determined to be satisfied, and the electronic device automatically acquires the local image in the sub-viewing area satisfying the viewing condition. For example, the preset recognition time period is 2 seconds. Alternatively, when the viewing time has not reached the preset time period, the local image in the sub-viewing area may be acquired in response to a viewing confirmation instruction input by the user.
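Again only as a hedged illustration, the group-photo viewing condition can be expressed as a predicate over per-face observations; how the observations themselves are produced (for example by a face-landmark detector) is outside the patent text, and the FaceObservation fields are assumptions:

```python
from dataclasses import dataclass

@dataclass
class FaceObservation:
    eyes_open: bool   # both eyes detected as open
    blinking: bool    # a blink was detected during the recognition period
    smiling: bool     # optional extra cue mentioned in the description

def group_photo_condition_met(faces, require_smile: bool = False) -> bool:
    """Viewing condition for a group-photo zone: every detected face has open,
    non-blinking eyes (and, optionally, a smile)."""
    if not faces:
        return False          # nothing to photograph in this zone yet
    for face in faces:
        if not face.eyes_open or face.blinking:
            return False
        if require_smile and not face.smiling:
            return False
    return True
```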
In some embodiments, the viewing conditions within at least two of the plurality of sub-viewing zones are different.
For example, when the electronic device is used to shoot a mixed scene containing both scenery and a group of people, the device enters the mixed photographing scene: the viewing condition of a first sub-viewing area is that no object undergoing displacement change is identified in that area within the preset recognition time period, while the viewing condition of a second sub-viewing area is that the facial features of the photographed persons identified in that area within the preset recognition time period meet the preset features.
In some embodiments, the viewing conditions within at least two of the plurality of sub-viewing zones are the same.
It is understood that, as shown in fig. 3, the step S102 can be implemented by performing steps S1021 to S1023, specifically:
Step S1021: judging whether the object to be photographed in the sub-viewing area meets a preset condition.
For example, when a landscape photograph is taken with the electronic device, the electronic device enters the landscape photographing scene, and each sub-viewing area can identify, within a preset recognition time period, whether the object to be photographed in that sub-viewing area contains an object undergoing displacement change. In this case the preset condition is that no object undergoing displacement change exists among the objects to be photographed in the sub-viewing area within the preset recognition time period.
Further, there may be an object in the landscape whose displacement changes continuously, in which case the viewing time does not need to reach the preset time period: once the object undergoing displacement change has been marked in the sub-viewing area, it is detected whether that object has moved out of the sub-viewing area, and the preset condition is that the object undergoing displacement change has moved out of the sub-viewing area.
For example, when the electronic device is used to shoot a group photo, the electronic device enters the group-photo scene, and each sub-viewing area can identify, within a preset recognition time period, whether the facial features of the photographed persons in that sub-viewing area meet a preset condition. For example, the preset condition is that the eyes on the face of each photographed person are open and not blinking.
Step S1022: when the object to be photographed in the sub-viewing area meets the preset condition, acquiring a local image of the object to be photographed in that sub-viewing area.
For example, if an object undergoing displacement change is identified among the objects to be photographed in the sub-viewing area within the preset recognition time period, it is judged that the object to be photographed in the sub-viewing area does not meet the preset condition, which indicates that a non-target object has intruded into the sub-viewing area; the viewing condition is determined not to be satisfied, and a second round of recognition is performed. If no object undergoing displacement change is identified among the objects to be photographed in the sub-viewing area within the preset recognition time period, it is judged that the object to be photographed in the sub-viewing area meets the preset condition, meaning the local image displayed in the sub-viewing area satisfies the viewing condition, and the electronic device automatically acquires the local image in that sub-viewing area. For example, the preset recognition time period is 2 seconds.
Further, when the object undergoing displacement change is detected to have moved out of the sub-viewing area, it is judged that the object to be photographed in the sub-viewing area meets the preset condition, and the local image in that sub-viewing area is then acquired.
For example, as shown in fig. 5, a user prepares to shoot a landscape. After entering the landscape scene, when no object undergoing displacement change exists among the objects to be photographed in the 1st to 3rd sub-viewing areas, it is judged that the objects to be photographed in those areas meet the preset condition, and the local images of the objects to be photographed in the 1st to 3rd sub-viewing areas are acquired. For example, a "√" symbol may be displayed in a successfully acquired sub-viewing area to indicate to the user that its local image has been captured. Further, as shown in fig. 6, during the second round of recognition, when no object undergoing displacement change exists among the objects to be photographed in the 4th and 6th sub-viewing areas, it is judged that the objects to be photographed in those areas meet the preset condition, and the local images of the objects to be photographed in the 4th and 6th sub-viewing areas are acquired as well. Finally, as shown in fig. 7, the object undergoing displacement change in the 5th sub-viewing area can be marked, and when the marked object is detected to have moved out of the 5th sub-viewing area, it is judged that the object to be photographed in the 5th sub-viewing area meets the preset condition, and the local image in that sub-viewing area is then acquired.
For example, when the eyes on the face of a photographed person are not open, or are blinking, it is judged that the object to be photographed in the sub-viewing area does not meet the preset condition, and a second round of recognition is performed; when the eyes on the face of the photographed person are recognized as open and not blinking, it is judged that the object to be photographed in the sub-viewing area meets the preset condition and that the viewing condition is satisfied, and the electronic device automatically acquires the local image in the sub-viewing area satisfying the viewing condition. For example, the preset recognition time period is 2 seconds.
In some embodiments, when the viewing conditions are simultaneously satisfied in at least two sub-viewing zones, the local images of the object to be photographed in the at least two sub-viewing zones which reach the preset conditions are simultaneously acquired.
Step S1023: locking the acquired local image of the object to be photographed in the sub-viewing area.
It can be understood that, to make it easier for the user to tell which sub-viewing areas have already acquired their local images, the acquired local image of the object to be photographed may be locked within its sub-viewing area, while the objects to be photographed displayed in the other sub-viewing areas, where no local image has yet been acquired, continue to change dynamically.
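A minimal sketch of this locking behaviour, continuing the assumed Python types above: each zone keeps refreshing until its viewing condition is met, then freezes its local image (its acquired flag is what the "√" indicator would reflect):

```python
class ZoneState:
    """Per-zone bookkeeping: the preview keeps updating until the zone's viewing
    condition is met, after which the captured local image is locked in place."""

    def __init__(self, zone):
        self.zone = zone
        self.locked_image = None          # None until the local image is acquired

    def update(self, frame, condition_met: bool):
        if self.locked_image is not None:
            return                        # already locked; later frames are ignored
        if condition_met:
            z = self.zone
            self.locked_image = frame[z.y:z.y + z.height,
                                      z.x:z.x + z.width].copy()

    @property
    def acquired(self) -> bool:
        return self.locked_image is not None
```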
Step S103: synthesizing the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
It can be understood that, after the local images meeting the viewing conditions are acquired in all the sub-viewing areas, the local images acquired in all the sub-viewing areas are synthesized to obtain an overall image of the object to be photographed.
The local image acquired in each sub-viewing area that satisfies the viewing condition is cached in a corresponding buffer. Once the local images satisfying the viewing conditions in all the sub-viewing areas have been synthesized into the whole image of the object to be photographed, the whole image is stored and the local images cached in the buffers are deleted, so that only the whole image occupies the storage space, thereby saving storage space.
For example, as shown in fig. 8, after the 1st to 6th sub-viewing areas have all acquired local images satisfying the viewing conditions, the local images in all the sub-viewing areas are synthesized to obtain the whole image of the object to be photographed.
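For illustration only, the synthesis and buffer-release step could be sketched as follows, reusing the assumed ZoneState objects from the sketch above; raising an error when some zone has not yet satisfied its viewing condition mirrors the requirement that all sub-viewing areas be acquired first:

```python
import numpy as np

def compose_whole_image(zone_states, view_width: int, view_height: int) -> np.ndarray:
    """Stitch the locked local images back into their positions to form the whole image,
    then release the per-zone buffers so that only the composite is kept in storage."""
    if not all(s.acquired for s in zone_states):
        raise ValueError("every sub-viewing area must satisfy its viewing condition first")
    whole = np.zeros((view_height, view_width, 3), dtype=np.uint8)   # assumes RGB frames
    for s in zone_states:
        z = s.zone
        whole[z.y:z.y + z.height, z.x:z.x + z.width] = s.locked_image
    for s in zone_states:
        s.locked_image = None             # delete the cached local images
    return whole
```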
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
In some embodiments, the electronic device may record the user's photographing habits over a historical period and analyze the recorded habits with a learning algorithm to derive the user's photographing preferences, for example a preference for a certain type of scene. When the user enters the photographing mode, the photographing scene is then set automatically according to these preferences, saving the user the time of setting the scene manually.
According to the embodiments of the present invention, the viewfinder interface is split into a plurality of sub-viewing areas, a local image is separately acquired in each sub-viewing area when its viewing condition is satisfied, where the local image is a partial image of the object to be photographed within the viewfinder interface, and the local images satisfying the viewing conditions in all the sub-viewing areas are synthesized to obtain a whole image of the object to be photographed. By acquiring condition-satisfying local images block by block through the plurality of sub-viewing areas and synthesizing them into a whole image, the embodiments of the invention capture the photo in one pass, are simple to operate, and save storage space.
An embodiment of the present invention further provides a photographing apparatus, as shown in fig. 9; fig. 9 is a schematic structural diagram of the photographing apparatus provided by the embodiment of the present invention. The photographing apparatus 30 includes a splitting module 31, an acquisition module 32, and a synthesis module 33.
The splitting module 31 is configured to split the viewing interface into a plurality of sub-viewing areas.
It can be understood that, when taking a picture with the electronic device, the user may use a normal mode or a block-viewing mode. When the enabled photographing mode is the block-viewing mode, a splitting mode may be preset for the viewfinder interface, for example splitting into 4 sub-viewing areas or into 6 sub-viewing areas. Several splitting modes may be preset at the factory for the user to choose from; for example, the selectable splitting modes may include 2, 3, 4, 6, 8, or 9 sub-viewing areas. The number of sub-viewing areas may also be user-defined. After the electronic device enters the photographing mode, the splitting module 31 may split the viewfinder interface into a plurality of sub-viewing areas according to the preset splitting mode.
The acquisition module 32 is configured to separately acquire a local image in each sub-viewing area when the viewing condition of that area is satisfied, wherein the local image is a partial image of the object to be photographed within the viewfinder interface.
It can be understood that, after a fixed viewing range is selected through the viewfinder interface, the acquisition module 32 independently acquires the local image in each sub-viewing area, and the image acquisition processes of the sub-viewing areas do not interfere with one another.
For example, when a landscape photograph is taken with the electronic device, the electronic device enters the landscape photographing scene, and each sub-viewing area can identify whether an object undergoing displacement change exists in that sub-viewing area within a preset recognition time period. If an object undergoing displacement change exists in the sub-viewing area within the preset recognition time period, a non-target object has intruded into the sub-viewing area, the viewing condition is determined not to be satisfied, and a second round of recognition is performed; if no object undergoing displacement change exists in the sub-viewing area within the preset recognition time period, the local image displayed in the sub-viewing area satisfies the viewing condition, and the acquisition module 32 automatically acquires the local image in the sub-viewing area satisfying the viewing condition. For example, the preset recognition time period is 2 seconds. Further, there may be an object in the landscape whose displacement changes continuously, in which case the viewing time does not need to reach the preset time period: once the object undergoing displacement change has been marked in the sub-viewing area, the acquisition module 32 acquires the local image in that sub-viewing area as soon as it detects that the marked object has moved out of the sub-viewing area. Alternatively, when the viewing time has not reached the preset time period, the acquisition module 32 may acquire the local image in the sub-viewing area in response to a viewing confirmation instruction input by the user.
For example, when the electronic device is used to shoot a group photo, the electronic device enters the group-photo scene, and each sub-viewing area can identify the facial features of the photographed persons in that sub-viewing area within a preset recognition time period to judge whether the viewing condition is satisfied. For example, the identified facial features include whether the eyes on the face are open or blinking, whether the expression is a smile, and so on. If the eyes on the face of a photographed person are not open, or are blinking, the viewing condition is not satisfied and a second round of recognition is performed; if the eyes on the face of the photographed person are recognized as open and not blinking, the viewing condition is determined to be satisfied, and the acquisition module 32 automatically acquires the local image in the sub-viewing area satisfying the viewing condition. For example, the preset recognition time period is 2 seconds. Alternatively, when the viewing time has not reached the preset time period, the acquisition module 32 may acquire the local image in the sub-viewing area in response to a viewing confirmation instruction input by the user.
In some embodiments, the viewing conditions within at least two of the plurality of sub-viewing zones are different.
For example, when the electronic device is used to shoot a mixed scene containing both scenery and a group of people, the device enters the mixed photographing scene: the viewing condition of a first sub-viewing area is that no object undergoing displacement change is identified in that area within the preset recognition time period, while the viewing condition of a second sub-viewing area is that the facial features of the photographed persons identified in that area within the preset recognition time period meet the preset features.
In some embodiments, the viewing conditions within at least two of the plurality of sub-viewing zones are the same.
In some embodiments, as shown in fig. 10, the acquisition module 32 further includes a determination submodule 321, an acquisition submodule 322, and a locking submodule 323.
The determining submodule 321 is configured to determine whether an object to be photographed in the sub-viewing area reaches a preset condition.
For example, when a landscape photograph is taken with the electronic device, the electronic device enters the landscape photographing scene, and the determining submodule 321 is configured to identify, within a preset recognition time period, whether the object to be photographed in the sub-viewing area contains an object undergoing displacement change. In this case the preset condition is that no object undergoing displacement change exists among the objects to be photographed in the sub-viewing area within the preset recognition time period.
Further, there may be an object in the landscape whose displacement changes continuously, in which case the viewing time does not need to reach the preset time period: once the object undergoing displacement change has been marked in the sub-viewing area, the determining submodule 321 is configured to detect whether that object has moved out of the sub-viewing area, and the preset condition is that the object undergoing displacement change has moved out of the sub-viewing area.
For example, when the electronic device is used to shoot a group photo, the electronic device enters the group-photo scene, and the determining submodule 321 is configured to identify, within a preset recognition time period, whether the facial features of the photographed persons in the sub-viewing area meet a preset condition. For example, the preset condition is that the eyes on the face of each photographed person are open and not blinking.
The acquisition submodule 322 is configured to acquire a local image of an object to be photographed in the sub-viewing area when the object to be photographed in the sub-viewing area reaches a preset condition.
For example, if an object undergoing displacement change is identified among the objects to be photographed in the sub-viewing area within the preset recognition time period, the determining submodule 321 judges that the object to be photographed in the sub-viewing area does not meet the preset condition, which indicates that a non-target object has intruded into the sub-viewing area; the viewing condition is determined not to be satisfied, and the determining submodule 321 performs a second round of recognition. If no object undergoing displacement change is identified among the objects to be photographed in the sub-viewing area within the preset recognition time period, the determining submodule 321 judges that the object to be photographed in the sub-viewing area meets the preset condition, meaning the local image displayed in the sub-viewing area satisfies the viewing condition, and the acquisition submodule 322 automatically acquires the local image in that sub-viewing area. For example, the preset recognition time period is 2 seconds.
Further, when it is detected that the object undergoing displacement change has moved out of the sub-viewing area, the determining submodule 321 judges that the object to be photographed in the sub-viewing area meets the preset condition, and the acquisition submodule 322 then acquires the local image in that sub-viewing area.
For example, when the eyes on the face of a photographed person are not open, or are blinking, the determining submodule 321 judges that the object to be photographed in the sub-viewing area does not meet the preset condition and performs a second round of recognition; when the eyes on the face of the photographed person are recognized as open and not blinking, the determining submodule 321 judges that the object to be photographed in the sub-viewing area meets the preset condition and that the viewing condition is satisfied, and the acquisition submodule 322 automatically acquires the local image in the sub-viewing area satisfying the viewing condition. For example, the preset recognition time period is 2 seconds.
In some embodiments, the acquiring sub-module 322 is further configured to acquire the local images of the object to be captured in the at least two sub-viewing areas meeting the preset condition simultaneously when the viewing condition is met simultaneously in the at least two sub-viewing areas.
The locking submodule 323 is configured to lock the acquired local image of the object to be photographed in the sub-viewing area.
It can be understood that, to make it easier for the user to tell which sub-viewing areas have already acquired their local images, the locking submodule 323 may lock the acquired local image of the object to be photographed within its sub-viewing area, while the objects to be photographed displayed in the other sub-viewing areas, where no local image has yet been acquired, continue to change dynamically.
The synthesis module 33 is configured to synthesize the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
It can be understood that, after the partial images meeting the viewing conditions are acquired in all the sub-viewing areas, the synthesis module 33 synthesizes the partial images acquired in all the sub-viewing areas to obtain an overall image of the object to be photographed.
The local image acquired in each sub-viewing area that satisfies the viewing condition is cached in a corresponding buffer. Once the local images satisfying the viewing conditions in all the sub-viewing areas have been synthesized into the whole image of the object to be photographed, the whole image is stored and the local images cached in the buffers are deleted, so that only the whole image occupies the storage space, thereby saving storage space.
The embodiment of the present invention further provides an electronic device, which includes a memory and a processor, and the processor is configured to execute the photographing method according to any embodiment of the present invention by calling the computer program stored in the memory.
The electronic device can be a smart phone, a tablet computer, a palm computer, a camera and other devices. As shown in fig. 11, an electronic device 400 includes a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, and a computer program stored on the memory 402 and executable on the processor 401. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or loading an application program stored in the memory 402 and calling data stored in the memory 402, thereby integrally monitoring the electronic device.
In the embodiment of the present invention, the processor 401 in the electronic device 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions:
splitting a viewfinder interface into a plurality of sub-viewing areas;
separately acquiring a local image in each sub-viewing area when the viewing condition of that area is satisfied, wherein the local image is a partial image of the object to be photographed within the viewfinder interface;
and synthesizing the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
In some embodiments, the acquiring by the processor 401 of the local images in the sub-viewing areas when the viewing conditions are satisfied includes:
judging whether the object to be photographed in the sub-viewing area meets a preset condition;
and when the object to be photographed in the sub-viewing area meets the preset condition, acquiring a local image of the object to be photographed in that sub-viewing area.
In some embodiments, the acquiring by the processor 401 of the local images in the sub-viewing areas when the viewing conditions are satisfied includes: when at least two sub-viewing areas satisfy the viewing conditions at the same time, simultaneously acquiring the local images of the object to be photographed in the at least two sub-viewing areas that meet the preset condition.
In some embodiments, the acquiring by the processor 401 of the local images in the sub-viewing areas when the viewing conditions are satisfied further includes:
locking the acquired local image of the object to be photographed in the sub-viewing area.
In some embodiments, the viewing conditions within at least two of the plurality of sub-viewing zones are different.
In some embodiments, the viewing conditions within at least two of the plurality of sub-viewing zones are the same.
In some embodiments, as shown in fig. 12, electronic device 400 further comprises: display screen 403, camera 404. The processor 401 is electrically connected to the display 403 and the camera 404, respectively. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 12 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The display screen 403 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. When the display screen 403 is a touch display screen, it may also be used as a part of an input unit to implement an input function.
The camera 404 provides basic functions such as video capture, video transmission, and still-image capture. After the camera 404 collects an image through its lens, the photosensitive and control components inside the camera process the image and convert it into digital signals that a computer can recognize, and the image is then restored by image software.
Although not shown in fig. 12, the electronic device 400 may further include a radio frequency circuit, a wireless fidelity module, a sensor, a power supply, an audio circuit, a bluetooth module, etc., which are not described in detail herein.
In some embodiments, the display screen 403 is used to split the viewfinder interface into a plurality of sub-viewing areas; the camera 404 is used to separately acquire a local image in each sub-viewing area when the viewing condition of that area is satisfied, wherein the local image is a partial image of the object to be photographed within the viewfinder interface; and the processor 401 calls the computer program stored in the memory 402 to synthesize the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiment of the present invention, the photographing apparatus and the photographing method in the above embodiments belong to the same concept, and any method provided in the photographing method embodiment may be run on the photographing apparatus, and a specific implementation process thereof is described in the photographing method embodiment, and is not described herein again.
It should be noted that, as those skilled in the art will understand, all or part of the processes of the photographing method of the embodiments of the present invention may be implemented by a computer program controlling the relevant hardware. The computer program may be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device; its execution may include the processes of the photographing method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
For the photographing device according to the embodiment of the present invention, each functional module may be integrated into one processing chip, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The photographing method and apparatus, storage medium, and electronic device provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the description of the embodiments is intended only to help readers understand the technical solutions and core ideas of the invention. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. A photographing method applied to an electronic device, characterized by comprising the following steps:
splitting a viewfinder interface into a plurality of sub-viewing areas, wherein the viewing range of each sub-viewing area is obtained by splitting the viewing range of the viewfinder interface, an object to be photographed is present within the viewing range of the viewfinder interface, each sub-viewing area independently acquires a local image within its corresponding viewing range, and the image acquisition processes of different sub-viewing areas do not interfere with each other;
judging whether the object to be photographed in a sub-viewing area undergoes displacement change;
when the object to be photographed in the sub-viewing area does not undergo displacement change, determining that the viewing condition is satisfied, and performing image acquisition on the sub-viewing area satisfying the condition to obtain a local image of the object to be photographed;
and synthesizing the local images satisfying the viewing conditions in all the sub-viewing areas to obtain a whole image of the object to be photographed.
2. The photographing method according to claim 1, wherein separately acquiring the local images in the sub-viewing areas when the viewing conditions are satisfied comprises:
when at least two sub-viewing areas satisfy the viewing conditions at the same time, simultaneously acquiring the local images of the object to be photographed in the at least two sub-viewing areas that meet the preset condition.
3. The photographing method according to any one of claims 1 to 2, wherein separately acquiring the local images in the sub-viewing areas when the viewing conditions are satisfied further comprises:
locking the acquired local image of the object to be photographed in the sub-viewing area.
4. A photographing method as defined in any one of claims 1-2 wherein the viewing conditions in at least two of the plurality of sub-viewing zones are different.
5. A photographing method as defined in any one of claims 1-2 wherein the viewing conditions in at least two of the plurality of sub-viewing zones are the same.
6. A photographing apparatus, characterized in that the apparatus comprises:
the splitting module is used for splitting a viewfinder interface into a plurality of sub-viewing areas, wherein the viewing range of each sub-viewing area is obtained by splitting the viewing range of the viewfinder interface, an object to be photographed is present within the viewing range of the viewfinder interface, each sub-viewing area independently acquires a local image within its corresponding viewing range, and the image acquisition processes of different sub-viewing areas do not interfere with each other;
the acquisition module comprises a judging submodule and an acquiring submodule, wherein the judging submodule is used for judging whether the object to be photographed in the sub-viewing area undergoes displacement change, and the acquiring submodule is used for, when the object to be photographed in the sub-viewing area does not undergo displacement change so that the viewing condition is satisfied, performing image acquisition on the sub-viewing area satisfying the condition to obtain a local image of the object to be photographed;
and the synthesis module is used for synthesizing the local images meeting the view finding conditions in all the sub-view areas so as to obtain the whole image of the object to be shot.
7. The photographing apparatus according to claim 6, wherein the capturing sub-module is further configured to capture the local images of the object to be photographed in at least two sub-viewing zones reaching the preset condition simultaneously when the viewing conditions are satisfied simultaneously in the at least two sub-viewing zones.
8. The photographing apparatus according to any one of claims 6 to 7, wherein the acquisition module further comprises:
and the locking submodule is used for locking the acquired local image of the object to be shot in the sub-viewing area.
9. A storage medium having stored thereon a computer program, characterized in that, when the computer program runs on a computer, the computer is caused to execute the photographing method according to any one of claims 1 to 5.
10. An electronic device comprising a memory and a processor, wherein the processor is configured to execute the photographing method according to any one of claims 1-5 by calling a computer program stored in the memory.
11. An electronic device, comprising a display screen, a camera, a memory, and a processor, characterized in that:
the display screen is used for splitting a viewfinder interface into a plurality of sub-viewing areas, wherein the viewing range of each sub-viewing area is obtained by splitting the viewing range of the viewfinder interface, an object to be photographed is present within the viewing range of the viewfinder interface, each sub-viewing area independently acquires a local image within its corresponding viewing range, and the image acquisition processes of different sub-viewing areas do not interfere with each other;
the camera is used for performing image acquisition on a sub-viewing area satisfying the viewing condition when the object to be photographed in that sub-viewing area does not undergo displacement change, so as to obtain a local image of the object to be photographed;
the processor calls the computer program stored in the memory and is used for synthesizing the local images meeting the view finding conditions in all the sub-view areas to obtain the whole image of the object to be shot.
CN201710527835.6A 2017-06-30 2017-06-30 Photographing method and device, storage medium and electronic equipment Active CN107360366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710527835.6A CN107360366B (en) 2017-06-30 2017-06-30 Photographing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710527835.6A CN107360366B (en) 2017-06-30 2017-06-30 Photographing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN107360366A CN107360366A (en) 2017-11-17
CN107360366B true CN107360366B (en) 2020-05-12

Family

ID=60274089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710527835.6A Active CN107360366B (en) 2017-06-30 2017-06-30 Photographing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN107360366B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108174081B (en) * 2017-11-29 2019-07-26 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107872623B (en) * 2017-12-22 2019-11-26 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer readable storage medium
CN111263071B (en) * 2020-02-26 2021-12-10 维沃移动通信有限公司 Shooting method and electronic equipment
CN112822406B (en) * 2021-01-13 2023-02-28 广东小天才科技有限公司 Image acquisition method and device, terminal equipment and storage medium
CN115499589A (en) * 2022-09-19 2022-12-20 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101969534A (en) * 2009-07-27 2011-02-09 鸿富锦精密工业(深圳)有限公司 Method and system for realizing regional exposure of picture in photographic equipment
CN102007499A (en) * 2008-01-29 2011-04-06 泰塞拉技术爱尔兰公司 Detecting facial expressions in digital images
CN103685940A (en) * 2013-11-25 2014-03-26 上海斐讯数据通信技术有限公司 Method for recognizing shot photos by facial expressions
CN104079811A (en) * 2014-07-24 2014-10-01 广东欧珀移动通信有限公司 Method and device for filtering out obstacles during photographing
CN104580882A (en) * 2014-11-03 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN105704386A (en) * 2016-03-30 2016-06-22 联想(北京)有限公司 Image acquisition method, electronic equipment and electronic device
CN106572295A (en) * 2015-10-13 2017-04-19 阿里巴巴集团控股有限公司 Image processing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4917509B2 (en) * 2007-10-16 2012-04-18 ルネサスエレクトロニクス株式会社 Autofocus control circuit, autofocus control method, and imaging apparatus
JP6184189B2 (en) * 2013-06-19 2017-08-23 キヤノン株式会社 SUBJECT DETECTING DEVICE AND ITS CONTROL METHOD, IMAGING DEVICE, SUBJECT DETECTING DEVICE CONTROL PROGRAM, AND STORAGE MEDIUM

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102007499A (en) * 2008-01-29 2011-04-06 泰塞拉技术爱尔兰公司 Detecting facial expressions in digital images
CN101969534A (en) * 2009-07-27 2011-02-09 鸿富锦精密工业(深圳)有限公司 Method and system for realizing regional exposure of picture in photographic equipment
CN103685940A (en) * 2013-11-25 2014-03-26 上海斐讯数据通信技术有限公司 Method for recognizing shot photos by facial expressions
CN104079811A (en) * 2014-07-24 2014-10-01 广东欧珀移动通信有限公司 Method and device for filtering out obstacles during photographing
CN104580882A (en) * 2014-11-03 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN106572295A (en) * 2015-10-13 2017-04-19 阿里巴巴集团控股有限公司 Image processing method and device
CN105704386A (en) * 2016-03-30 2016-06-22 联想(北京)有限公司 Image acquisition method, electronic equipment and electronic device

Also Published As

Publication number Publication date
CN107360366A (en) 2017-11-17

Similar Documents

Publication Publication Date Title
CN107360366B (en) Photographing method and device, storage medium and electronic equipment
CN107395957B (en) Photographing method and device, storage medium and electronic equipment
US9489564B2 (en) Method and apparatus for prioritizing image quality of a particular subject within an image
JP6134825B2 (en) How to automatically determine the probability of image capture by the terminal using context data
US20170032219A1 (en) Methods and devices for picture processing
CN111083364A (en) Control method, electronic equipment, computer readable storage medium and chip
EP4020967B1 (en) Photographic method in long focal length scenario, and mobile terminal
CN111866392B (en) Shooting prompting method and device, storage medium and electronic equipment
US20230421900A1 (en) Target User Focus Tracking Photographing Method, Electronic Device, and Storage Medium
EP3062513B1 (en) Video apparatus and photography method thereof
JP2015115839A5 (en)
CN113747085A (en) Method and device for shooting video
CN103945109A (en) Image pickup apparatus, remote control apparatus, and methods of controlling image pickup apparatus and remote control apparatus
CN105635614A (en) Recording and photographing method, device and terminal electronic equipment
US20150116471A1 (en) Method, apparatus and storage medium for passerby detection
CN112425156A (en) Method for selecting images based on continuous shooting and electronic equipment
CN105117680B (en) A kind of method and apparatus of the information of ID card
CN112116624A (en) Image processing method and electronic equipment
CN110191324B (en) Image processing method, image processing apparatus, server, and storage medium
CN113411498A (en) Image shooting method, mobile terminal and storage medium
CN110677580B (en) Shooting method, shooting device, storage medium and terminal
CN110267009B (en) Image processing method, image processing apparatus, server, and storage medium
CN110266953B (en) Image processing method, image processing apparatus, server, and storage medium
CN106357978B (en) Image output method, device and terminal
CN104662889A (en) Method and apparatus for photographing in portable terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

GR01 Patent grant