CN107707824B - Shooting method, shooting device, storage medium and electronic equipment - Google Patents

Shooting method, shooting device, storage medium and electronic equipment

Info

Publication number
CN107707824B
CN107707824B (application CN201711027613.4A)
Authority
CN
China
Prior art keywords
frame image
subject
shooting
region
main body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711027613.4A
Other languages
Chinese (zh)
Other versions
CN107707824A (en)
Inventor
何新兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711027613.4A priority Critical patent/CN107707824B/en
Publication of CN107707824A publication Critical patent/CN107707824A/en
Application granted granted Critical
Publication of CN107707824B publication Critical patent/CN107707824B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces

Abstract

The application relates to a shooting method, a shooting device, a storage medium and electronic equipment. The method comprises the following steps: acquiring a frame image obtained by scanning through a camera; identifying subject information of the photographic subject in the frame image; cropping the frame image according to the subject information and displaying the cropped frame image; and generating a shot picture from the cropped frame image. The shooting method, shooting device, storage medium and electronic equipment can improve picture shooting efficiency.

Description

Shooting method, shooting device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a shooting method, an apparatus, a storage medium, and an electronic device.
Background
With the popularization of intelligent photographing devices, more and more of them beautify the frame image during shooting, for example by focusing to improve the sharpness of the displayed image.
In the conventional method, although the displayed picture is beautified, it may still differ from the effect the user wants to capture. In that case the user has to adjust the relevant shooting parameters, or the shooting position and angle, to achieve the desired effect, which makes image capture inefficient.
Disclosure of Invention
The embodiment of the application provides a shooting method, a shooting device, a storage medium and electronic equipment, which can improve shooting efficiency.
A photographing method comprising:
acquiring a frame image obtained by scanning through a camera;
identifying subject information of a photographic subject in the frame image;
cropping the frame image according to the subject information to generate a cropped frame image;
and displaying the cropped frame image.
A photographing apparatus, the apparatus comprising:
a frame image acquisition module for acquiring a frame image obtained by scanning of the camera;
a subject information identification module for identifying subject information of the photographic subject in the frame image;
a frame image cropping module for cropping the frame image according to the subject information to generate a cropped frame image, and displaying the cropped frame image.
A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the photographing method provided by the above-described embodiments.
An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the steps of the shooting method provided by the above embodiments when executing the computer program.
According to the shooting method, the shooting device, the storage medium and the electronic equipment provided by the embodiments, the subject information of the photographic subject in the scanned frame image is identified before the picture is taken, and the frame image is cropped according to that information. The frame image previewed on the terminal interface is therefore already the cropped frame image, which reduces the adjustments of shooting parameters or of the shooting position and angle the user needs to make to present the desired effect. The shot picture is generated from the cropped frame image, and picture shooting efficiency is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram illustrating an exemplary environment in which a photographing method is applied;
FIG. 2 is a schematic diagram showing an internal configuration of an electronic apparatus according to an embodiment;
FIG. 3 is a flowchart of a shooting method in one embodiment;
FIG. 4 is a flowchart of a photographing method in another embodiment;
FIG. 5 is a flow diagram of model training in one embodiment;
FIG. 6 is a schematic diagram of obtaining physical distances in one embodiment;
FIG. 7 is a display diagram of image cropping results in one embodiment;
FIG. 8 is a block diagram showing the configuration of a photographing apparatus according to an embodiment;
FIG. 9 is a block diagram showing the construction of a photographing apparatus according to another embodiment;
FIG. 10 is a block diagram of a portion of the structure of a handset associated with an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first camera module may be referred to as a second camera module, and similarly, a second camera module may be referred to as a first camera module, without departing from the scope of the present application. Both the first camera module and the second camera module are camera modules, but they are not the same camera module.
Fig. 1 is a schematic diagram of an application environment of the photographing method in one embodiment. As shown in fig. 1, the electronic device 110 may use its camera to capture images, for example of an object 120 in the environment: it acquires a frame image obtained by scanning the object 120 through the camera, identifies subject information of the photographic subject in the frame image, crops the frame image according to the subject information, displays the cropped frame image, and generates a shot picture from it. Optionally, the camera may include a first camera module and a second camera module, and the object 120 may be scanned by one or both camera modules to obtain the frame image.
Fig. 2 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 2, the electronic device includes a processor, a memory, a display screen, and a camera connected by a system bus. The processor provides computation and control capability and supports the operation of the whole electronic device. The memory is used for storing data, programs and the like; at least one computer program is stored on the memory and can be executed by the processor to implement the shooting method provided by the embodiments of the application. The memory may include a non-volatile storage medium, such as a magnetic disk, an optical disk or a Read-Only Memory (ROM), and a Random Access Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the shooting method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer programs in the non-volatile storage medium. The camera comprises a first camera module and a second camera module, both of which can be used for generating frame images. The display screen may be a touch screen, such as a capacitive screen or an electronic screen, used for displaying visual information such as frame images or pictures; it may also detect touch operations applied to it and generate corresponding instructions. Those skilled in the art will appreciate that the architecture shown in fig. 2 is a block diagram of only the parts related to the present application and does not limit the electronic devices to which the application may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or arrange components differently.
In an embodiment, as shown in fig. 3, a shooting method is provided, and this embodiment is mainly explained by applying the method to the electronic device shown in fig. 1, where the method includes:
step 302, acquiring a frame image obtained by scanning through a camera.
The frame image is the real-time frame image formed through the camera in the shooting state. When the electronic device receives an instruction to start the camera, it can invoke the camera to scan and enter the shooting state. Optionally, the camera includes a first camera module and a second camera module, and the first camera module and/or the second camera module can scan objects in the shooting environment to form the frame image.
Step 304, identifying subject information of the photographic subject in the frame image.
The electronic device may determine the photographic subject in the frame image and acquire its subject information. The photographic subject is the main content presented in the frame image and may be of a subject type such as a person, an animal, a landscape or a building. The subject information is the information reflected by the subject in the frame image and may include, but is not limited to, one or more of a subject type, a subject region, and the like. The subject region is the size and position of the space the photographic subject occupies in the frame image. Taking subject information that includes a subject type and a subject region, with the subject type being a person, the subject region may be the region enclosed by the subject's boundary in the frame image, or a region of a specific shape. For example, the subject region may be the area where the portrait is located, i.e. the area enclosed by the portrait's boundary, or a rectangular area containing it, centered in the frame image and occupying roughly half of the whole frame.
The electronic device may analyze the image data contained in the frame image, recognize the photographic subject in it, and then determine the subject information. Optionally, the image data in a preset area of the frame image may be analyzed first to identify the photographic subject; the preset area may be the middle area of the frame image, and when no photographic subject is identified there, all the image data is analyzed. The electronic device can perform feature detection on the image data of all areas of the frame image or of the preset area, identify the scene information contained in the frame image, and determine the photographic subject from it. The scene information may include the persons or objects in the frame image and their positions and sizes.
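As a concrete illustration of this two-stage search, the following Python sketch looks for a subject in a central preset area first and falls back to the whole frame. OpenCV is assumed, and its stock Haar face detector merely stands in for the unspecified feature detection; all function names and the choice of preset area are hypothetical, not prescribed by the patent.

```python
import cv2

def detect_subject(frame):
    """Look for a subject in the central preset area first; fall back to the full frame.

    A minimal sketch of the two-stage search described above. The Haar face
    detector stands in for the patent's unspecified feature detection.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    h, w = frame.shape[:2]
    # Assumed preset area: the middle half of the frame in both dimensions.
    x0, y0 = w // 4, h // 4
    center = frame[y0:y0 + h // 2, x0:x0 + w // 2]

    gray = cv2.cvtColor(center, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, fw, fh = faces[0]
        return (x + x0, y + y0, fw, fh)  # map back to full-frame coordinates

    # Nothing found in the preset area: analyse all the image data.
    gray_full = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray_full, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) > 0 else None
```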
Step 306, cropping the frame image according to the subject information to generate a cropped frame image, and displaying the cropped frame image.
In this embodiment, the electronic device may crop the frame image around the photographic subject according to the subject information, so that the subject information is retained in the cropped frame image while part of the image not occupied by the subject is cut away. Compared with its appearance before cropping, the photographic subject in the cropped frame image is shifted or enlarged, and the cropped frame image is displayed on the interface.
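A minimal sketch of this cropping step, assuming the subject region has already been reduced to a bounding box (x, y, w, h). The margin and the upscale back to the original preview size, which produces the enlarged subject mentioned above, are illustrative choices, not prescribed by the text:

```python
import cv2

def crop_to_subject(frame, box, margin=0.15):
    """Crop the frame to the subject region plus a margin.

    `box` is (x, y, w, h) in pixels; the 15% margin is an assumption.
    """
    x, y, w, h = box
    H, W = frame.shape[:2]
    mx, my = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(W, x + w + mx), min(H, y + h + my)
    cropped = frame[y0:y1, x0:x1]
    # Scale back up so the subject appears enlarged relative to the preview.
    return cv2.resize(cropped, (W, H), interpolation=cv2.INTER_LINEAR)
```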
In one embodiment, before the frame image is cropped, it is detected whether the mode for cropping the frame image is turned on; if so, step 306 is executed, otherwise the uncropped frame image is displayed.
The user can initiate the cropping mode: the user terminal receives an opening instruction for starting the cropping mode and starts it accordingly. The opening instruction can be triggered by a touch operation, a press of a physical key, a voice control operation, and the like. Touch operations include tap, long-press, slide and multi-touch operations. The electronic device can provide an on-screen button for triggering the opening instruction; when a click on this button is detected, the instruction to open the cropping mode is triggered. The electronic device can also preset opening voice information: it receives voice input through its voice receiving device, and when the received voice is judged by analysis to match the preset opening voice information, the opening instruction is triggered.
In one embodiment, after step 306, the method further comprises: generating a shot picture according to the cropped frame image.
The electronic device can use the cropped frame image directly as the shot picture, or further apply blurring, defogging, or brightness and contrast adjustment to part of the displayed information in the cropped frame image to form the shot picture.
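For instance, the optional post-processing could be as simple as a linear brightness and contrast adjustment. The following sketch assumes OpenCV, and the parameter values are arbitrary, one of several treatments (blurring, defogging) the text mentions:

```python
import cv2

def generate_picture(cropped, alpha=1.1, beta=8):
    """Turn the cropped preview frame into the final shot picture.

    A sketch only: a linear contrast (alpha) and brightness (beta) tweak
    stands in for the optional post-processing described above.
    """
    return cv2.convertScaleAbs(cropped, alpha=alpha, beta=beta)
```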
According to the shooting method above, the subject information of the photographic subject in the scanned frame image is identified before the picture is taken, and the frame image is cropped according to that information. The frame image previewed on the terminal interface is therefore already the cropped frame image, which reduces the user's adjustments of shooting parameters or of the shooting position and angle; the desired effect is presented directly, the shot picture is generated from the cropped frame image, and picture shooting efficiency is improved.
In one embodiment, step 304 includes: detecting a click operation acting on the frame image presentation interface; and determining the photographic subject in the frame image according to the position of the click operation, and identifying subject information of the photographic subject.
In this embodiment, the electronic device may detect a click operation applied to the presentation interface in the shooting mode, acquire the position of the click, acquire the image data of the frame image in the area of that position, and perform feature detection; when the detected features match the features of a preset type of subject, the photographic subject is determined to be of the matched type. For example, when the features match one or more of a person, an animal, a building, and the like, the photographic subject is determined to be the matched one or more.
Once the photographic subject is determined, its subject information may be further identified, including one or more of the subject type and the subject region of the photographic subject.
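A rough sketch of this click-driven determination, again with OpenCV's face detector standing in for the unspecified feature matching; the patch size around the tap is an assumed value:

```python
import cv2

def subject_from_tap(frame, tap_xy, patch=160):
    """Determine the photographic subject from a tap position.

    Sketch only: take a square patch around the tap, run feature detection
    there, and map the result back to full-frame coordinates.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    x, y = tap_xy
    h, w = frame.shape[:2]
    x0, y0 = max(0, x - patch // 2), max(0, y - patch // 2)
    x1, y1 = min(w, x0 + patch), min(h, y0 + patch)
    gray = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                      # no matching subject near the tap
    fx, fy, fw, fh = faces[0]
    return (fx + x0, fy + y0, fw, fh)    # subject box in full-frame coords
```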
In this embodiment, determining the photographic subject from the click operation improves the accuracy of subject determination.
In one embodiment, as shown in fig. 4, there is provided another photographing method including:
step 402, acquiring a frame image obtained by scanning through a camera.
Step 404, identifying subject information of the photographic subject in the frame image according to a subject recognition model.
In one embodiment, the electronic device may identify the subject region of the photographic subject in the frame image according to the subject recognition model. The subject region is the image area that needs to be retained in the frame image: during cropping, the subject region is kept and the areas outside it are removed. In general, the subject region may be the region enclosed by the subject's boundary in the frame image, or a region of a specific shape. Taking a person as the photographic subject, the subject region may be the area where the person is located, i.e. the area enclosed by the person's boundary, or a rectangular area containing it.
The subject recognition model is an algorithm model for recognizing the subject region in the frame image, for example the portrait or an object in the image. It is trained from a training image set and the corresponding region marks. The training image set is the set of images used to train the model; a region mark is a unique mark of the region where a photographic subject is located. Each image in the training image set corresponds to one or more photographic subjects, and the regions where they are located are marked in the image, yielding one or more region marks per image. The subject recognition model is obtained by training on the training image set and these region marks.
Each image in the training image set has a corresponding region mark, and the subject recognition model is trained from the training image set and these marks. Since a region mark marks the region where the target in an image is located, the region of the photographic subject in each training image can be extracted according to its mark, and model training is then performed on all the extracted subject regions to obtain the subject recognition model. When an image is acquired, its subject region can then be identified by the model. Generally, the more images the training image set contains, the more accurate the trained subject recognition model and the higher the accuracy of recognizing subject regions. For example, in face recognition, the region mark may be the region where a face is located: the face regions in the training image set are extracted according to the marks, the corresponding geometric features are extracted from them, a face template is trained from those features, and the face region in an image can then be recognized with the template.
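The patent does not fix a model family, so the following is only a toy sketch of such training, assuming PyTorch: images paired with per-pixel masks rendered from the region marks train a tiny fully-convolutional network that scores each pixel as subject or background.

```python
import torch
import torch.nn as nn

# Minimal fully-convolutional "subject recognition model"; the architecture
# is purely illustrative, not taken from the patent.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),  # per-pixel subject logit
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images, region_masks):
    """One training step.

    `images` is a (N, 3, H, W) float tensor; `region_masks` is a
    (N, 1, H, W) tensor of 0/1 labels, i.e. the region marks rendered
    as per-pixel masks.
    """
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, region_masks)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for the training image set and its region marks.
imgs = torch.rand(4, 3, 128, 128)
masks = (torch.rand(4, 1, 128, 128) > 0.5).float()
print(train_step(imgs, masks))
```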
Step 406, cropping the frame image according to the subject region to generate a cropped frame image, and displaying the cropped frame image.
In one embodiment, the frame image is composed of a number of pixels arranged in a regular pattern, generally forming a two-dimensional matrix. Each pixel has a pixel value and coordinates; the coordinates give the pixel's position in the image, and pixels with different values form different patterns. The subject region is likewise composed of pixels, i.e. it contains some or all of the pixels of the frame image. After the subject region is acquired, it may be marked and later found through the marks, or the coordinates of its pixels may be extracted and the region found through the coordinates. For example, after the subject region is obtained, all of its edge pixels may be marked red; when the region is searched for, every pixel is traversed and the red ones are taken as edge pixels, all edge pixels in the image are collected, and the area they enclose is the subject region. Specifically, the RGB three-channel values of a pixel may be compared, and if they are 255, 0 and 0, the pixel is an edge pixel. To crop the frame image according to the subject region, the subject region may first be extracted, and the other areas are then removed so that only the subject region remains.
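The red-marking scheme can be sketched in a few lines of Python (NumPy and an RGB frame assumed; note the scheme implicitly relies on no ordinary pixel being exactly pure red):

```python
import numpy as np

def mark_and_recover(frame, edge_points):
    """Paint the subject's edge pixels pure red, then recover them by colour.

    A sketch of the marking scheme described above; `frame` is an RGB
    uint8 array and `edge_points` a list of (row, col) edge coordinates.
    """
    marked = frame.copy()
    for (y, x) in edge_points:
        marked[y, x] = (255, 0, 0)          # mark edge pixel as red
    # Recovery: a pixel is an edge pixel iff its RGB values are 255, 0, 0.
    mask = np.all(marked == (255, 0, 0), axis=-1)
    ys, xs = np.nonzero(mask)
    return list(zip(ys, xs))
```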
It is to be understood that, after the subject region in the frame image is identified, the user may adjust it. Specifically, a region adjustment instruction input by the user is received, and the subject region is adjusted according to it. The region adjustment instruction indicates the position and range to which the subject region is adjusted. For example, after the subject region in the frame image is recognized, it is marked with a rectangular frame, and the user can input a region adjustment instruction to adjust the position and size of the rectangle at will: long-pressing the rectangle and dragging moves it, and long-pressing its boundary and dragging enlarges or shrinks it.
Step 408, generating a shot picture from the cropped frame image.
In the shooting method provided in the above embodiment, the subject region of the photographic subject in the frame image is identified by the subject recognition model, and the frame image is cropped according to the subject region. The subject region is generally the area the user cares about, and keeping only it when cropping improves shooting accuracy and makes the displayed picture more precise.
In one embodiment, the method further comprises the step of model training. As shown in fig. 5, this step includes:
Step 502, obtaining the historical cropped images corresponding to the current terminal and the corresponding region marks.
In one embodiment, a historical cropped image is an original image that has undergone the cropping process. The user terminal can name the frame images it has processed by a common rule, so that the historical cropped images can be recognized by reading the image names; for example, traverse the identifiers of all images in the gallery, and if an identifier contains "T", the image is a historical cropped image. The historically processed frame images can also be stored in a fixed folder and obtained by reading that folder.
Step 504, performing model training according to the historical cropped images and the corresponding region marks to obtain the subject recognition model.
In one embodiment, the historical cropped images may be stored locally at the user terminal or at a server. Generally, after the user terminal crops an image, it can store both the image before and the image after cropping; the historical cropped images of different user terminals are trained on separately to obtain a subject recognition model corresponding to each terminal.
It is to be understood that model training may be performed locally at the user terminal or at the server. When the subject recognition model is trained at the server, the user terminal can upload each cropped image to the server, and the server can create a folder per user terminal identifier and store the images uploaded by each terminal in the corresponding folder. The user terminal identifier is the unique identifier of the terminal, for example at least one of an IP (Internet Protocol) address, a MAC (Media Access Control) address, and the like. The server can set a timer to start the model-training task at regular intervals, train on the historical cropped images in each folder, and send the resulting subject recognition model to the corresponding terminal. In other embodiments, a condition for triggering training may also be set, and training is performed on the historical cropped images and the corresponding region marks when the condition is satisfied; for example, the trigger condition may be that more than a preset number of new historical cropped images have been added.
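A minimal sketch of this server-side bookkeeping and trigger condition; the folder layout, file naming rule and threshold value are all assumptions:

```python
import os
import time

def store_cropped(server_root, terminal_id, image_bytes):
    """Store an uploaded cropped image in the folder for its terminal.

    Sketch only: one folder per user terminal identifier, files named by
    upload time. Paths and naming are hypothetical.
    """
    folder = os.path.join(server_root, terminal_id)
    os.makedirs(folder, exist_ok=True)
    name = time.strftime("%Y%m%d%H%M%S") + ".png"
    with open(os.path.join(folder, name), "wb") as f:
        f.write(image_bytes)

def should_retrain(folder, last_count, threshold=50):
    """Trigger condition: more than `threshold` new images since last run."""
    return len(os.listdir(folder)) - last_count > threshold
```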
After a subject recognition model is obtained by training, a corresponding model identifier is created for it, and a new version of the model can overwrite an old version. In one embodiment the model may be named in the form "terminal identifier + generation time", without limitation; for example, the model identifier "MT170512" denotes the subject recognition model generated on May 12, 2017 for the user terminal whose identifier is "MT".
In one embodiment, when the user terminal acquires a new version of the subject recognition model, it overwrites the old version with it, and when a frame image is acquired, the latest version of the model is used to identify it. For example, if the user terminal receives the subject recognition model with identifier "MT170512", it overwrites the "MT170410" model with it; after a frame image is acquired, the subject region in it is identified according to the latest version of the model.
It is understood that in other embodiments provided herein, a subject recognition model may be established for each color channel. After the frame image is obtained, its color channels are identified separately by the corresponding subject recognition models, and the final subject region is obtained from the recognition results of the channels. For example, a subject recognition model may be established for each of the three RGB channels, or for each of the three YUV channels; each color channel of the frame image is recognized by its model, and the subject regions recognized from the channels are combined to obtain the final subject region.
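For example, combining the per-channel results could be a simple union of binary masks. This sketch assumes NumPy and that each channel's model outputs a boolean subject mask; the patent does not specify the combination rule, so the union is an assumed choice:

```python
import numpy as np

def merge_channel_masks(channel_masks):
    """Combine per-channel subject masks into the final subject region.

    Sketch only: the union of the boolean masks stands in for the
    unspecified combination of per-channel recognition results.
    """
    merged = np.zeros_like(channel_masks[0], dtype=bool)
    for mask in channel_masks:
        merged |= mask.astype(bool)
    return merged

# Usage with three hypothetical per-channel masks for an RGB frame:
r, g, b = (np.random.rand(64, 64) > 0.7 for _ in range(3))
subject_region = merge_channel_masks([r, g, b])
```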
In one embodiment, the subject information of the photographic subject includes a subject type and a subject region, and cropping the frame image according to the subject information comprises: cropping the frame image according to the subject type and the subject region.
The electronic device can calculate a subject size and a subject position matching the photographic subject from its subject type and subject region, and crop the frame image so that the position and size of the subject in the cropped frame image equal the calculated subject position and subject size.
In one embodiment, cropping the frame image according to the subject type and the subject region includes: acquiring a cropping mode corresponding to the subject type, and cropping the frame image according to the cropping mode and the subject region.
The cropping mode is the method used to crop the image and may include an edge cropping mode, a rectangle cropping mode and the like, without limitation. Different subject types may use the same or different cropping modes; for example, if the subject region is a portrait, the edge cropping mode is used, and if it is a landscape, the rectangle cropping mode is used. The edge cropping mode crops along the edge of the photographic subject; the rectangle cropping mode crops to the smallest rectangular area containing it. Given the acquired cropping mode, the cropping range of the frame image is determined from the mode and the subject region, so that the cropped frame image contains the subject region and is presented in the form corresponding to the cropping mode.
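The dispatch between the two modes might look like the following sketch (NumPy assumed; `subject_mask` is a boolean per-pixel mask, and the portrait/landscape pairing follows the example above rather than an exhaustive mapping):

```python
import numpy as np

def crop_by_mode(frame, subject_mask, subject_type):
    """Crop a frame according to the cropping mode for its subject type.

    Sketch only: portraits use the edge cropping mode (keep exactly the
    masked pixels); other types such as landscapes use the rectangle mode
    (smallest enclosing rectangle of the mask).
    """
    if subject_type == "portrait":
        out = np.zeros_like(frame)
        out[subject_mask] = frame[subject_mask]     # edge cropping mode
        return out
    ys, xs = np.nonzero(subject_mask)               # rectangle cropping mode
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```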
In one embodiment, after the subject region is acquired, the number of subject regions in the image may be determined from it. Generally one photographic subject corresponds to one connected region; if the subject region consists of several connected regions, several subjects exist in the image. A connected region is a closed region, which indicates the region where a subject is located. If there are multiple subject regions in the image, cropping may keep only some of them, or all of them.
In one embodiment, cropping the frame image according to the subject regions comprises: if the frame image includes two or more subject regions, acquiring the physical distance of each subject region; and cropping the frame image according to the physical distances.
In one embodiment, the physical distance is the distance from an object captured in the image to the image acquisition device, for example 1 meter. Generally an image consists of many pixels, each corresponding to some position on some object, so each pixel has a corresponding physical distance. The subject region consists of pixels of the image, each with its own physical distance; the physical distance of the subject region may therefore be the average of the physical distances of all its pixels, or the physical distance of a particular pixel in it, without limitation.
In one embodiment, during image acquisition the physical distance of each pixel in the image can generally be acquired through a dual-camera or laser camera module. Specifically, images of an object are captured by the first camera module and the second camera module respectively; a first included angle and a second included angle are obtained from the images, where the first included angle is the angle between the line from the first camera module to the object and the horizontal line from the first camera module to the second camera module, and the second included angle is the angle between the line from the second camera module to the object and the horizontal line from the second camera module to the first camera module; and the physical distance from the image acquisition device to the object is obtained from the first included angle, the second included angle, and the distance between the two camera modules.
FIG. 6 is a schematic diagram of obtaining physical distances in one embodiment. As shown in fig. 6, the distance $T_c$ between the first camera module 602 and the second camera module 604 is known. The first camera module 602 and the second camera module 604 each capture an image of the object 606, and the first included angle $a_1$ and the second included angle $a_2$ can be obtained from these images. The vertical intersection point between the horizontal line from the first camera module 602 to the second camera module 604 and the object 606 is the intersection point 608. Assume the distance from the first camera module 602 to the intersection point 608 is $T_x$; then the distance from the intersection point 608 to the second camera module 604 is $T_c - T_x$, and the physical distance of the object 606, i.e. its vertical distance from the intersection point 608, is $T_s$. From the triangle formed by the first camera module 602, the object 606 and the intersection point 608:

$$\tan a_1 = \frac{T_s}{T_x}$$

Similarly, from the triangle formed by the second camera module 604, the object 606 and the intersection point 608:

$$\tan a_2 = \frac{T_s}{T_c - T_x}$$

Combining the two formulas, the physical distance of the object 606 is:

$$T_s = \frac{T_c \cdot \tan a_1 \cdot \tan a_2}{\tan a_1 + \tan a_2}$$
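Numerically, the formula can be checked with a few lines of Python; the function name and test values are illustrative only:

```python
import math

def physical_distance(tc, a1, a2):
    """Numeric form of the triangulation formula above.

    `tc` is the baseline between the two camera modules; `a1` and `a2`
    are the included angles in radians. Returns Ts, the perpendicular
    distance of the object from the baseline.
    """
    t1, t2 = math.tan(a1), math.tan(a2)
    return tc * t1 * t2 / (t1 + t2)

# Sanity check: with a 10 cm baseline and both angles 45 degrees, the
# object sits 5 cm from the baseline, directly above its midpoint.
print(physical_distance(0.10, math.radians(45), math.radians(45)))  # 0.05
```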
the frame image is cropped according to the physical distance, and after the frame image is cropped, the body region in the same physical distance range may be reserved, or the body region closest to the physical distance may be reserved, which is not limited herein. Specifically, a main body region with a physical distance within a preset distance range is acquired, and a frame image is cut according to the acquired main body region. The preset distance range is a preset value range of the physical distance, for example, the preset distance range may be a distance within 1 to 3 meters.
FIG. 7 is a terminal display diagram of an image cropping result in one embodiment. As shown in fig. 7, a frame image 702 is acquired, the subject region in the frame image 702 is identified by the subject recognition model, and the frame image is cropped according to the subject region to obtain the cropped image 704. It is to be understood that the display of the frame image 702 and the cropped image 704 is not limited to the result shown in the figure and may take other forms.
In one embodiment, as shown in fig. 8, there is provided a photographing apparatus including:
a frame image obtaining module 802, configured to obtain a frame image obtained by scanning with a camera.
A subject information identifying module 804 for identifying subject information of the photographic subject in the frame image.
A frame image cropping module 806, configured to crop the frame image according to the subject information, generate a cropped frame image, and display the cropped frame image.
In one embodiment, the apparatus further comprises: a picture generating module 808, configured to generate a shot picture from the cropped frame image.
In one embodiment, the main body information identification module 804 is further configured to detect a click operation applied to the frame image presentation interface; and determining a shooting subject in the frame image according to the action position of the clicking operation, and identifying subject information of the shooting subject.
In one embodiment, the subject information identification module 804 is further configured to identify subject information of the captured subject in the frame image according to a subject identification model, wherein the subject identification model is trained according to the training image set and the corresponding region label.
In one embodiment, as shown in fig. 9, there is provided another photographing apparatus, further including:
the model generation module 810 is configured to obtain a historical clipping image and a corresponding area tag corresponding to the current terminal; and performing model training according to the historical clipping images and the corresponding region marks to obtain a main body recognition model.
In one embodiment, the subject information of the photographic subject includes a subject type and a subject region.
The frame image cropping module 806 is further configured to crop the frame image according to the type of the subject and the region of the subject.
In one embodiment, the frame image cropping module 806 is further configured to obtain a cropping mode corresponding to the type of the subject, and crop the frame image according to the cropping mode and the subject area.
In one embodiment, the frame image cropping module 806 is further configured to acquire the physical distance of each subject region if the frame image includes two or more subject regions, and to crop the frame image according to the physical distances.
The division of each module in the above-mentioned shooting device is only used for illustration, in other embodiments, the shooting device may be divided into different modules as needed to complete all or part of the functions of the above-mentioned shooting device.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the photographing method provided by the above-described embodiments.
An electronic device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the steps of the shooting method provided by the above embodiments when executing the computer program.
The embodiments of the application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the shooting method provided by the above embodiments.
The embodiments of the application also provide an electronic device. As shown in fig. 10, for convenience of explanation only the parts related to the embodiments of the present application are shown; for undisclosed technical details, please refer to the method part of the embodiments. The electronic device may be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. The following takes a mobile phone as an example:
fig. 10 is a block diagram of a partial structure of a mobile phone related to an electronic device provided in an embodiment of the present application. Referring to fig. 10, the cellular phone includes: radio Frequency (RF) circuitry 1010, memory 1020, input unit 1030, display unit 1040, sensor 1050, audio circuitry 106, wireless fidelity (WiFi) module 1070, processor 1080, and power source 1090. Those skilled in the art will appreciate that the handset configuration shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1010 may communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1020 can be used for storing software programs and modules, and the processor 1080 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as an application program for a sound playing function, an application program for an image playing function, and the like), and the like; the data storage area may store data (such as audio data, an address book, etc.) created according to the use of the mobile phone, and the like. Further, the memory 1020 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1030 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 1000. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, which may also be referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on or near the touch panel 1031 using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. In one embodiment, the touch panel 1031 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1080, and can receive and execute commands sent by the processor 1080. In addition, the touch panel 1031 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1030 may include other input devices 1032 in addition to the touch panel 1031. In particular, other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), and the like.
The display unit 1040 may be used to display information input by the user or provided to the user and the various menus of the mobile phone. The display unit 1040 may include a display panel 1041. In one embodiment, the display panel 1041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, and the like. In one embodiment, the touch panel 1031 may cover the display panel 1041; when the touch panel 1031 detects a touch operation on or near it, the operation is transmitted to the processor 1080 to determine the type of the touch event, and the processor 1080 then provides a corresponding visual output on the display panel 1041 according to the type of the touch event.
The cell phone 1000 may also include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone is moved to the ear. The motion sensor may include an acceleration sensor, which detects the magnitude of acceleration in each direction and, at rest, the magnitude and direction of gravity; it can be used for applications that recognize the phone's attitude (such as switching between landscape and portrait) and for vibration-recognition functions (such as a pedometer or tap detection). The mobile phone may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
Audio circuitry 1060, speaker 1061, and microphone 1062 may provide an audio interface between the user and the mobile phone. The audio circuit 1060 may transmit the electrical signal converted from received audio data to the speaker 1061, which converts it into a sound signal for output; on the other hand, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 1060 receives and converts into audio data. The audio data is then processed by the audio data output processor 1080 and sent to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help the user to send and receive e-mail, browse web pages, access streaming media, etc. through the WiFi module 1070, which provides wireless broadband internet access for the user. Although fig. 10 shows the WiFi module 1070, it is to be understood that it does not belong to the essential constitution of the handset 1000 and may be omitted as needed.
The processor 1080 is a control center of the mobile phone, connects various parts of the whole mobile phone by using various interfaces and lines, and executes various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1020 and calling data stored in the memory 1020, thereby integrally monitoring the mobile phone. In one embodiment, processor 1080 may include one or more processing units. In one embodiment, processor 1080 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, and the like; the modem processor handles primarily wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1080.
The handset 1000 also includes a power supply 1090 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 1080 via a power management system that may be configured to manage charging, discharging, and power consumption.
In one embodiment, the cell phone 1000 may also include a camera, a bluetooth module, and the like.
In the present embodiment, the processor 1080 included in the mobile terminal implements the steps of the above-described photographing method when executing the computer program stored on the memory.
Suitable non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory may include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its scope of protection. The protection scope of this patent shall therefore be subject to the appended claims.

Claims (8)

1. A photographing method comprising:
acquiring a frame image obtained by scanning through a camera; the frame image is a real-time frame image formed in a shooting state through the camera;
identifying subject information of the photographic subject in all areas of the frame image according to a subject recognition model; the subject recognition model is trained from a training image set and corresponding region marks, the photographic subject represents the main content displayed in the frame image, the subject information of the photographic subject comprises a subject type and a subject region, and a region mark is the unique mark of the region where a photographic subject is located;
if the frame image comprises two or more subject regions, acquiring the physical distance of each subject region; and cropping the frame image according to the physical distances to generate a cropped frame image;
and displaying the cropped frame image.
2. The method of claim 1, further comprising:
acquiring the historical cropped images corresponding to the current terminal and the corresponding region marks;
and performing model training according to the historical cropped images and the corresponding region marks to obtain the subject recognition model.
3. The method of claim 1, further comprising:
acquiring a cropping mode corresponding to the subject type, and cropping the frame image according to the cropping mode and the subject region.
4. A photographing apparatus, characterized in that the apparatus comprises:
a frame image acquisition module for acquiring a frame image obtained by scanning of a camera; the frame image is the real-time frame image formed through the camera in the shooting state;
a subject information identification module for identifying subject information of the photographic subject in all areas of the frame image according to a subject recognition model; the subject recognition model is trained from a training image set and corresponding region marks, the photographic subject represents the main content displayed in the frame image, the subject information of the photographic subject comprises a subject type and a subject region, and a region mark is the unique mark of the region where a photographic subject is located;
a frame image cropping module for acquiring the physical distance of each subject region if the frame image comprises two or more subject regions, cropping the frame image according to the physical distances to generate a cropped frame image, and displaying the cropped frame image.
5. The apparatus of claim 4, further comprising:
a model generation module for obtaining the historical cropped images corresponding to the current terminal and the corresponding region marks, and performing model training according to the historical cropped images and the corresponding region marks to obtain the subject recognition model.
6. The apparatus of claim 4, wherein the frame image cropping module is further configured to acquire a cropping mode corresponding to the subject type and crop the frame image according to the cropping mode and the subject region.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 3.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 3 are implemented when the computer program is executed by the processor.
CN201711027613.4A 2017-10-27 2017-10-27 Shooting method, shooting device, storage medium and electronic equipment Active CN107707824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711027613.4A CN107707824B (en) 2017-10-27 2017-10-27 Shooting method, shooting device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711027613.4A CN107707824B (en) 2017-10-27 2017-10-27 Shooting method, shooting device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN107707824A (en) 2018-02-16
CN107707824B (en) 2020-07-31

Family

ID=61176444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711027613.4A Active CN107707824B (en) 2017-10-27 2017-10-27 Shooting method, shooting device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN107707824B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986110A (en) * 2018-07-02 2018-12-11 OPPO (Chongqing) Intelligent Technology Co., Ltd. Image processing method, device, mobile terminal and storage medium
CN110785995A (en) * 2018-09-04 2020-02-11 深圳市大疆创新科技有限公司 Shooting control method, device, equipment and storage medium
CN112087579B (en) * 2020-09-17 2022-08-12 维沃移动通信有限公司 Video shooting method and device and electronic equipment
CN112565589B (en) * 2020-11-13 2023-03-31 北京爱芯科技有限公司 Photographing preview method and device, storage medium and electronic equipment
CN112700381A (en) * 2020-12-22 2021-04-23 努比亚技术有限公司 Image processing method, terminal and computer readable storage medium
CN113824793A (en) * 2021-09-28 2021-12-21 深圳前海微众银行股份有限公司 Certificate uploading identification method and device and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681627A (en) * 2016-03-03 2016-06-15 联想(北京)有限公司 Image shooting method and electronic equipment
CN106648361A (en) * 2016-12-13 2017-05-10 深圳市金立通信设备有限公司 Photographing method and terminal
CN106713773A (en) * 2017-03-31 2017-05-24 联想(北京)有限公司 Shooting control method and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009069185A (en) * 2007-09-10 2009-04-02 Toshiba Corp Video processing apparatus and method
US9760768B2 (en) * 2014-03-04 2017-09-12 Gopro, Inc. Generation of video from spherical content using edit maps
CN104836956A (en) * 2015-05-09 2015-08-12 陈包容 Processing method and device for cellphone video
CN107155064B (en) * 2017-06-23 2019-11-05 维沃移动通信有限公司 A kind of image pickup method and mobile terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681627A (en) * 2016-03-03 2016-06-15 联想(北京)有限公司 Image shooting method and electronic equipment
CN106648361A (en) * 2016-12-13 2017-05-10 深圳市金立通信设备有限公司 Photographing method and terminal
CN106713773A (en) * 2017-03-31 2017-05-24 联想(北京)有限公司 Shooting control method and electronic device

Also Published As

Publication number Publication date
CN107707824A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107635101B (en) Shooting method, shooting device, storage medium and electronic equipment
CN107707824B (en) Shooting method, shooting device, storage medium and electronic equipment
CN108848308B (en) Shooting method and mobile terminal
CN107566742B (en) Shooting method, shooting device, storage medium and electronic equipment
CN113132618B (en) Auxiliary photographing method and device, terminal equipment and storage medium
CN107977674B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN109040643B (en) Mobile terminal and remote group photo method and device
CN108366207B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
KR101839569B1 (en) Method and terminal for acquiring panoramic image
CN107995422B (en) Image shooting method and device, computer equipment and computer readable storage medium
US20170032219A1 (en) Methods and devices for picture processing
CN108022274B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
JP2016531362A (en) Skin color adjustment method, skin color adjustment device, program, and recording medium
CN111416940A (en) Shooting parameter processing method and electronic equipment
CN107124556B (en) Focusing method, focusing device, computer readable storage medium and mobile terminal
CN107729889B (en) Image processing method and device, electronic equipment and computer readable storage medium
KR20140104753A (en) Image preview using detection of body parts
CN107749046B (en) Image processing method and mobile terminal
WO2020048392A1 (en) Application virus detection method, apparatus, computer device, and storage medium
CN109495616B (en) Photographing method and terminal equipment
WO2018184260A1 (en) Correcting method and device for document image
CN109684277B (en) Image display method and terminal
CN109448069B (en) Template generation method and mobile terminal
CN109086761A (en) Image processing method and device, storage medium, electronic equipment
EP4047549A1 (en) Method and device for image detection, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
GR01 Patent grant