CN113141461A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number
CN113141461A
CN113141461A (application number CN202110390914.3A)
Authority
CN
China
Prior art keywords
image
working mode
image sensor
target image
shooting
Prior art date
Legal status
Pending
Application number
CN202110390914.3A
Other languages
Chinese (zh)
Inventor
肖旭
覃保恒
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date: 2021-04-12
Filing date: 2021-04-12
Publication date: 2021-07-20
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110390914.3A
Publication of CN113141461A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method and apparatus and an electronic device, belonging to the field of image technology. The shooting method includes: acquiring a first image through a camera; determining a shooting subject in the first image and a first sensing area corresponding to the shooting subject in a target image sensor; and acquiring a second image through the target image sensor while the first sensing area is in a first working mode and a second sensing area is in a second working mode, where the second sensing area is the area of the target image sensor outside the first sensing area and the first working mode is different from the second working mode.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the field of image technology, and in particular relates to a shooting method and apparatus, an electronic device, and a storage medium.
Background
With the continuous improvement of camera assemblies in electronic devices, shooting images or videos has become a part of users' daily lives.
In the prior art, when an image is captured, shooting parameters such as exposure and focal length are mostly set automatically according to fixed parameters such as the current ambient brightness and color temperature to assist imaging, and the quality of the captured image is poor. For an ordinary user without photography expertise, it is difficult to obtain an image with a prominent subject from an image captured by existing devices without resorting to techniques such as background blurring.
Summary of the application
An object of the embodiments of the present application is to provide a shooting method and apparatus, an electronic device, and a storage medium, which can solve the problem that it is difficult to obtain an image in which the subject is highlighted.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting method, including:
acquiring a first image through a camera;
determining a shooting subject in the first image and a first sensing area corresponding to the shooting subject in a target image sensor;
under the condition that the first sensing area is in a first working mode and the second sensing area is in a second working mode, acquiring a second image through the target image sensor;
the second sensing area is an area outside the first sensing area in the target image sensor, and the first working mode is different from the second working mode.
In a second aspect, an embodiment of the present application provides a shooting apparatus, including:
the first image acquisition module is used for acquiring a first image through a camera;
the first image processing module is used for determining a shooting subject in the first image and a first sensing area corresponding to the shooting subject in a target image sensor;
the second image acquisition module is used for acquiring a second image through the target image sensor under the condition that the first sensing area is in the first working mode and the second sensing area is in the second working mode;
the second sensing area is an area outside the first sensing area in the target image sensor, and the first working mode is different from the second working mode.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, after the first image is acquired through the camera, the shooting subject in the first image and the first sensing area corresponding to the shooting subject in the target image sensor are determined, and the second image is acquired through the target image sensor while the first sensing area is in the first working mode and the second sensing area is in the second working mode. In this way, the area of the sensor corresponding to the shooting subject and the remaining area work in different modes, so that the shooting subject is highlighted in the second image.
Drawings
Fig. 1 is the first flowchart of the shooting method according to an embodiment of the present application;
Fig. 2 is the second flowchart of the shooting method according to an embodiment of the present application;
Fig. 3 is the first schematic diagram of a working mode according to an embodiment of the present application;
Fig. 4 is the second schematic diagram of a working mode according to an embodiment of the present application;
Fig. 5 is the third flowchart of the shooting method according to an embodiment of the present application;
Fig. 6 is the first operation diagram of specific example I of the present application;
Fig. 7 is the second operation diagram of specific example I of the present application;
Fig. 8 is the first operation diagram of specific example II of the present application;
Fig. 9 is the second operation diagram of specific example II of the present application;
Fig. 10 is a schematic structural diagram of a shooting apparatus according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The shooting method and apparatus, the electronic device, and the storage medium provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
The embodiment of the application discloses a shooting method, which is shown in fig. 1 and comprises the following steps:
step 101, acquiring a first image through a camera.
In this embodiment, the first image may be obtained by a front camera or a rear camera of the electronic device. For example, the first image may be a preview image obtained after the camera is turned on.
A preview image is an image that is displayed on the screen of the device but has not yet been captured. In a common usage scenario, the camera is started by opening a shooting application on the device, and the preview image captured by the camera is displayed on the touch screen of the device for the user to view.
Step 102, determining a shooting subject in the first image and a first sensing area corresponding to the shooting subject in the target image sensor.
It should be explained that the shooting subject is the subject that needs to be highlighted in the first image to be captured, and the shooting subject may include: a person, an animal, a still object, or the like.
For example, in one usage scenario, a user takes a group photo at a scenic spot; the shooting subjects may then be the faces of the people in the group photo. For another example, in another usage scenario, the user takes a landscape image containing a building; the shooting subject may then be the building.
In one case, the method of identifying a photographic subject may include: and detecting and recognizing the first image through the recognition model so as to determine the shooting subject in the first image.
The recognition model needs to be trained in advance and may be, for example, a CNN (Convolutional Neural Network) model, a DNN (Deep Neural Network) model, a Feature CNN model, or the like. After training, the recognition model is stored on the device and invoked when needed.
After the position of the shooting subject is determined, the area of the target image sensor corresponding to the shooting subject is set as the first sensing area. Further, to make the first sensing area easy to identify, a display frame surrounding the shooting subject may also be generated on the display screen. The display frame may have various shapes, such as a square, a circle, or an irregular shape.
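For illustration only, the following sketch shows one way a bounding box produced by a recognition model on the preview image could be mapped to a rectangular sensing area on the sensor. The helper name, the region structure, and the example image and sensor dimensions are assumptions made for this sketch, not details of the application.

    from dataclasses import dataclass

    @dataclass
    class SensorRegion:
        # Rectangle on the target image sensor, in sensor-pixel coordinates.
        left: int
        top: int
        right: int
        bottom: int

    def map_subject_to_sensing_area(preview_box, preview_size, sensor_size):
        """Map a subject bounding box on the preview image to a region on the sensor.

        preview_box: (x0, y0, x1, y1) returned by a recognition model on the preview.
        preview_size: (width, height) of the preview image.
        sensor_size: (width, height) of the target image sensor's pixel array.
        """
        pw, ph = preview_size
        sw, sh = sensor_size
        scale_x, scale_y = sw / pw, sh / ph
        x0, y0, x1, y1 = preview_box
        return SensorRegion(
            left=int(x0 * scale_x),
            top=int(y0 * scale_y),
            right=int(x1 * scale_x),
            bottom=int(y1 * scale_y),
        )

    # Example: the detector found the subject at (200, 150)-(440, 470) on a 1280 x 960
    # preview, and the sensor array is 9248 x 6944 (a 64M-class quad-Bayer sensor).
    first_sensing_area = map_subject_to_sensing_area(
        (200, 150, 440, 470), (1280, 960), (9248, 6944)
    )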
In another case, referring to fig. 2, the method of identifying the photographic subject may include:
step 201, displaying the first image.
Step 202, receiving a first input from the user on the displayed first image.
Step 203, responding to the first input, and determining the shooting subject in the first image.
The first input may be various, such as a click operation, a drag operation, a slide operation, etc. by the user.
Specifically, taking the first input as a click operation as an example, the shooting subject in the first image is determined according to the position of the click operation on the display screen.
Further, when the shooting subject is determined, a display frame containing the shooting subject is generated, and the area of the target image sensor corresponding to the display frame is taken as the first sensing area.
The display frame may be regular square, circular, or the like, or may be irregular.
In this way, determining the shooting subject through the user's input allows the user to freely select the shooting subject to be highlighted, giving the user more creative space while helping the user easily obtain an image in which the shooting subject is highlighted.
In addition, there may be one or more shooting subjects; correspondingly, there may be one or more first sensing areas.
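As a purely illustrative sketch of the first-input case, the snippet below centres a display frame on the user's tap and reuses map_subject_to_sensing_area from the earlier sketch to obtain the first sensing area; the fixed frame size is an assumption, since the application does not specify how the frame is sized.

    def sensing_area_from_tap(tap_xy, preview_size, sensor_size,
                              frame_w=320, frame_h=320):
        """Build a display frame centred on the user's tap and map it to the sensor.

        tap_xy: (x, y) of the click/tap on the preview image.
        frame_w, frame_h: size of the display frame on the preview (illustrative).
        """
        x, y = tap_xy
        pw, ph = preview_size
        # Keep the frame inside the preview bounds.
        x0 = max(0, min(x - frame_w // 2, pw - frame_w))
        y0 = max(0, min(y - frame_h // 2, ph - frame_h))
        display_frame = (x0, y0, x0 + frame_w, y0 + frame_h)
        return display_frame, map_subject_to_sensing_area(
            display_frame, preview_size, sensor_size
        )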
Further, in a case where the shooting subject desired by the user has not been obtained, or the user wants to replace or add a shooting subject, the method further includes:
and step 11, receiving a second input of the user to the displayed first image.
The second input may be various, such as a click operation, a drag operation, a slide operation, and the like of the user.
And step 12, in response to the second input, detecting and identifying the first image again to obtain an updated shooting subject and the first sensing area corresponding to the updated shooting subject in the target image sensor.
Taking the second input as a drag operation as an example, the display frame in the image display area is dragged to the area where the shooting subject that the user wants to highlight is located, the image within the display frame is then re-identified to obtain the updated shooting subject, and the area enclosed by the dragged display frame is taken as the updated first sensing area.
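Continuing the same illustration, a second input such as a drag could be handled roughly as follows: the dragged frame is cropped from the preview (assumed here to be a NumPy-style array), the recognition model is re-run on the crop, and the enclosed area becomes the updated first sensing area. The recognize callable is a placeholder for whatever model the device actually uses.

    def update_sensing_area_after_drag(dragged_frame, preview_image,
                                       preview_size, sensor_size, recognize):
        """Re-identify the shooting subject inside a dragged display frame.

        dragged_frame: (x0, y0, x1, y1) of the display frame after the drag.
        recognize: callable that runs the recognition model on an image crop.
        """
        x0, y0, x1, y1 = dragged_frame
        crop = preview_image[y0:y1, x0:x1]        # image inside the display frame
        updated_subject = recognize(crop)          # re-detect within the frame
        updated_area = map_subject_to_sensing_area(
            dragged_frame, preview_size, sensor_size
        )
        return updated_subject, updated_area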
And 103, acquiring a second image through the target image sensor under the condition that the first sensing area is in a first working mode and the second sensing area is in a second working mode.
The second sensing area is an area outside the first sensing area in the target image sensor, and the first working mode is different from the second working mode.
In this embodiment, optionally, the target image sensor is a mosaic rearrangement (remosaic) sensor, the first working mode is a full-size mode, and the second working mode is a binning mode; both modes are described below.
Alternatively, the mosaic rearrangement sensor may specifically be an N-in-one pixel sensor. It is not limited to the four-in-one pixel sensor shown in the drawings and may have other sizes, such as nine-in-one or sixteen-in-one, as long as it can switch between the two working modes described below. The embodiments of the present application do not limit the specific size of the mosaic rearrangement sensor.
In the first working mode, the target image sensor outputs a plurality of pixel points, and the output pixel points are then rearranged by a remosaic algorithm to obtain the final second image.
The first working mode, also called full-size mode, is shown in fig. 3. Because the pixel points output by the target image sensor and their associated neighbouring pixel points are rearranged by the remosaic algorithm, the resolution of the resulting second image is unchanged; for example, if the resolution of the first image is 64M, the resolution of the second image is also 64M.
In the second working mode, every n adjacent pixel points of the target image sensor are combined into one pixel point, and the combined pixel points are then output, where n is an integer greater than 2.
The second working mode, also called binning mode, is shown in fig. 4. It combines, for example, every four adjacent pixel points output by the sensor into one, thereby reducing the image resolution; if the resolution of the first image is 64M, the resolution of the second image is reduced to 16M, while the brightness of each pixel point is increased.
In this embodiment, the working mode of the first sensing area may be set to the full-size mode and the working mode of the second sensing area to the binning mode, so that the image region containing the shooting subject has a higher resolution than the rest of the image, helping the user easily obtain an image in which the shooting subject is highlighted.
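As an informal illustration only (not the application's own algorithm), the following NumPy sketch mimics the effect of the two working modes on a raw frame: the region covered by the first sensing area is kept at full resolution, standing in for the remosaic-based full-size readout, while the rest of the frame is reduced by 2x2 binning. The array sizes, the simple averaging, and the helper names are assumptions made for the example.

    import numpy as np

    def bin_2x2(raw):
        """Combine each 2x2 block of pixel values into one (simplified binning)."""
        h, w = raw.shape
        return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def simulate_capture(raw, first_area):
        """Return full-resolution data for the subject region and binned data for the rest.

        raw: 2-D array of sensor values (a stand-in for the quad-Bayer raw frame).
        first_area: (top, bottom, left, right) of the first sensing area in sensor pixels.
        The real remosaic step is sensor-specific; here the subject region is simply
        kept at full resolution to show the resolution difference between the regions.
        """
        t, b, l, r = first_area
        subject_full_res = raw[t:b, l:r]     # first sensing area: full-size readout
        background_binned = bin_2x2(raw)     # second sensing area: quarter resolution
        return subject_full_res, background_binned

    # Small stand-in for a raw frame; a real 64M frame (e.g. 9248 x 6944) would bin to 16M.
    raw_frame = np.random.randint(0, 1024, size=(694, 924)).astype(np.float32)
    subject, background = simulate_capture(raw_frame, (100, 400, 200, 500))
    print(subject.shape, background.shape)   # (300, 300) and (347, 462)

The only point of the sketch is the resolution asymmetry between the two regions that the embodiment relies on.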
Further, in addition to highlighting the shooting subject when capturing a photo, the shooting subject may also be highlighted during video recording.
In this embodiment, acquiring the second image through the target image sensor specifically includes:
and acquiring a second image through the target image sensor in the video recording process.
After the acquiring of the second image by the target image sensor, referring to fig. 5, the method further comprises:
step 501, in the process of recording the video, tracking the shooting subject to adjust the first sensing area corresponding to the shooting subject in the target image sensor.
The tracking may be implemented by following camera movement or by a tracking model.
Specifically, the method includes: inputting the second image into the tracking model for detection to obtain the shooting subject in the second image, and then determining the first sensing area corresponding to the shooting subject.
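A minimal sketch of such a tracking loop, under the assumption that the tracking model is exposed as a callable that refines the subject box frame by frame; it reuses map_subject_to_sensing_area from the earlier sketch and is not the application's own implementation.

    def record_with_tracking(frames, initial_box, preview_size, sensor_size, track):
        """Adjust the first sensing area frame by frame during video recording.

        frames: iterable of preview frames coming from the camera.
        initial_box: subject bounding box on the first frame.
        track: callable(frame, previous_box) -> new_box, standing in for the tracking model.
        Yields the sensing area to use when capturing each frame.
        """
        box = initial_box
        for frame in frames:
            box = track(frame, box)   # follow the moving shooting subject
            yield map_subject_to_sensing_area(box, preview_size, sensor_size)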
Step 502, acquiring a third image through the target image sensor under the condition that the adjusted first sensing area is in the first working mode and the adjusted second sensing area is in the second working mode.
For the explanation of the first operation mode and the second operation mode, refer to the foregoing content of the embodiment, and are not described herein again.
And 503, obtaining a target video according to the second image and the third image.
That is, the finally obtained target video includes the second image and the third image. Since both the second image and the third image highlight the shooting subject dynamically, the finally obtained target video also highlights the shooting subject, which enhances the effect of the video.
Through steps 501-503, by adjusting the first sensing area and the second sensing area, the first sensing area can move along with the movement of the shooting subject, so that the display effect of the captured video is better.
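For completeness, a hedged sketch of assembling the target video from the captured frames (such as the second and third images) using OpenCV; the output path, codec, and frame rate are illustrative choices, not values taken from the application.

    import cv2

    def write_target_video(frames, path="target_video.mp4", fps=30):
        """Write captured frames (e.g. the second and third images) into one video file.

        frames: list of BGR frames with identical dimensions.
        """
        h, w = frames[0].shape[:2]
        writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        for frame in frames:
            writer.write(frame)
        writer.release()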
According to the shooting method of this embodiment, after the first image is acquired through the camera, the shooting subject in the first image and the first sensing area corresponding to the shooting subject in the target image sensor are determined, and the second image is acquired through the target image sensor while the first sensing area is in the first working mode and the second sensing area is in the second working mode. Because the two areas of the sensor work in different modes, the shooting subject is highlighted in the second image.
Specific example I
Referring to fig. 6 and 7, taking a user photographing a person as an example, the shooting method includes:
and step 21, acquiring a first image through a camera.
And step 22, detecting and identifying the first image through an identification model so as to determine a shooting subject in the first image and a first sensing area corresponding to the shooting subject in the target image sensor.
And 23, acquiring a second image through the target image sensor under the condition that the first sensing area is in the first working mode and the second sensing area is in the second working mode.
Referring to fig. 6, the first sensing area is the area within the display frame and contains the shooting subject; the second sensing area is the area outside the display frame.
Step 24, receiving a second input from the user on the first image.
Step 25, in response to the second input, detecting and identifying the first image again to obtain the updated shooting subject and the first sensing area corresponding to the updated shooting subject in the target image sensor, as shown in fig. 7.
Specifically, the first sensing area is in the full-size mode, in which the pixel points output by the target image sensor are rearranged and the resolution of the resulting second image is unchanged.
The second sensing area is in the binning mode, in which every four adjacent pixel points output by the target image sensor are combined into one, reducing the image resolution in that area so that the area containing the shooting subject is highlighted.
Specific example II
Referring to fig. 8 to 9, taking the example of recording a landscape video by a user, the shooting method of the present embodiment includes:
and step 31, acquiring a first image through a camera.
And step 32, receiving a first input from the user on the displayed first image.
And step 33, responding to the first input, determining the shooting subject in the first image and a first sensing area corresponding to the shooting subject in the target image sensor.
In this example, the first input may be a sliding operation of the first image by the user.
Step 34, in the process of recording the video, under the condition that the first sensing area is in the first working mode and the second sensing area is in the second working mode, acquiring a second image through the target image sensor, as shown in fig. 8.
Step 35, performing tracking processing on the shooting subject to adjust the first sensing area corresponding to the shooting subject in the target image sensor.
Step 36, acquiring a third image through the target image sensor under the condition that the adjusted first sensing area is in the first working mode and the adjusted second sensing area is in the second working mode, as shown in fig. 9.
And step 37, obtaining a target video according to the second image and the third image.
In this example, the first sensing area can be moved along with the movement of the shooting subject by adjusting the first sensing area and the second sensing area, so that the display effect of the shot image is better.
It should be noted that, in the shooting method provided in the embodiments of the present application, the execution subject may be a shooting apparatus, or a control module in the shooting apparatus for executing the shooting method. In the embodiments of the present application, the shooting method is described by taking a shooting apparatus executing the shooting method as an example.
The embodiment of the application discloses a shooting device, refer to fig. 10, including:
a first image obtaining module 1001 configured to obtain a first image through a camera;
a first image processing module 1002, configured to determine a shooting subject in the first image and a first sensing region corresponding to the shooting subject in a target image sensor;
a second image obtaining module 1003, configured to obtain a second image through the target image sensor when the first sensing area is in the first working mode and the second sensing area is in the second working mode;
the second sensing area is an area outside the first sensing area in the target image sensor, and the first working mode is different from the second working mode.
Optionally, the first image processing module 1002 is specifically configured to: detecting and identifying the first image through an identification model so as to determine a shooting subject in the first image;
or, displaying the first image;
receiving a first input of a user to the displayed first image;
in response to the first input, a photographic subject in the first image is determined.
Optionally, the second image obtaining module 1003 is specifically configured to: in the video recording process, a second image is obtained through the target image sensor;
the device further comprises:
the tracking module is used for tracking the shooting subject in the video recording process after the second image is acquired through the target image sensor, so as to adjust the first sensing area corresponding to the shooting subject in the target image sensor;
a third image obtaining module, configured to obtain a third image through the target image sensor when the adjusted first sensing area is in the first working mode and the adjusted second sensing area is in the second working mode;
and the video generation module is used for obtaining a target video according to the second image and the third image.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The shooting apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The shooting device provided in the embodiment of the present application can implement each process implemented by the shooting device in the method embodiments of fig. 1 to 9, and is not described herein again to avoid repetition.
After the first image is acquired through the camera, the shooting apparatus of the embodiments of the present application determines the shooting subject in the first image and the first sensing area corresponding to the shooting subject in the target image sensor, and acquires the second image through the target image sensor while the first sensing area is in the first working mode and the second sensing area is in the second working mode, so that the shooting subject is highlighted in the second image.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 1110, a memory 1109, and a program or an instruction stored in the memory 1109 and executable on the processor 1110, where the program or the instruction is executed by the processor 1110 to implement each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
Wherein, the processor 1110 is configured to:
acquiring a first image through a camera;
determining a shooting subject in the first image and a first sensing area corresponding to the shooting subject in a target image sensor;
under the condition that the first sensing area is in a first working mode and the second sensing area is in a second working mode, acquiring a second image through the target image sensor;
the second sensing area is an area outside the first sensing area in the target image sensor, and the first working mode is different from the second working mode.
In this embodiment, after the first image is acquired by the camera, the shooting subject in the first image and the first sensing area corresponding to the shooting subject in the target image sensor are determined, and the second image is acquired by the target image sensor while the first sensing area is in the first working mode and the second sensing area is in the second working mode, so that the shooting subject is highlighted in the second image.
Optionally, the processor 1110 is further configured to: detecting and identifying the first image through an identification model so as to determine a shooting subject in the first image;
or,
a display unit 1106 for displaying the first image;
a user input unit 1107, configured to receive a first input from the user on the displayed first image;
processor 1110 is further configured to: in response to the first input, a photographic subject in the first image is determined.
Optionally, the processor 1110 is further configured to: in the video recording process, a second image is obtained through the target image sensor;
optionally, the processor 1110 is further configured to: after a second image is obtained through the target image sensor, tracking the shooting subject in the video recording process so as to adjust the first induction area corresponding to the shooting subject in the target image sensor;
under the condition that the adjusted first induction area is in the first working mode and the adjusted second induction area is in the second working mode, acquiring a third image through the target image sensor;
and obtaining a target video according to the second image and the third image.
It should be understood that in the embodiment of the present application, the input Unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042, and the Graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1107 includes a touch panel 11071 and other input devices 11072. A touch panel 11071, also called a touch screen. The touch panel 11071 may include two portions of a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1109 may be used for storing software programs and various data including, but not limited to, application programs and an operating system. Processor 1110 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A photographing method, characterized by comprising:
acquiring a first image through a camera;
determining a shooting subject in the first image and a first sensing area corresponding to the shooting subject in a target image sensor;
under the condition that the first sensing area is in a first working mode and the second sensing area is in a second working mode, acquiring a second image through the target image sensor;
the second sensing area is an area outside the first sensing area in the target image sensor, and the first working mode is different from the second working mode.
2. The photographing method according to claim 1, wherein determining the photographic subject in the first image includes:
detecting and identifying the first image through an identification model so as to determine a shooting subject in the first image;
or, displaying the first image;
receiving a first input of a user to the displayed first image;
in response to the first input, a photographic subject in the first image is determined.
3. The shooting method according to claim 1, wherein the acquiring of the second image by the target image sensor specifically includes:
in the video recording process, a second image is obtained through the target image sensor;
after the acquiring of the second image by the target image sensor, the method further comprises:
in the video recording process, tracking the shooting subject to adjust the first sensing area corresponding to the shooting subject in the target image sensor;
acquiring a third image through the target image sensor under the condition that the adjusted first sensing area is in the first working mode and the adjusted second sensing area is in the second working mode;
and obtaining a target video according to the second image and the third image.
4. The photographing method according to claim 1,
the target image sensor is a mosaic rearrangement sensor, the first working mode is a full-size (remosaic) mode, and the second working mode is a binning mode.
5. A camera, comprising:
the first image acquisition module is used for acquiring a first image through a camera;
the first image processing module is used for determining a shooting subject in the first image and a first sensing area corresponding to the shooting subject in a target image sensor;
the second image acquisition module is used for acquiring a second image through the target image sensor under the condition that the first sensing area is in the first working mode and the second sensing area is in the second working mode;
the second sensing area is an area outside the first sensing area in the target image sensor, and the first working mode is different from the second working mode.
6. The camera according to claim 5, wherein the first image processing module is specifically configured to: detecting and identifying the first image through an identification model so as to determine a shooting subject in the first image;
or, displaying the first image;
receiving a first input of a user to the displayed first image;
in response to the first input, a photographic subject in the first image is determined.
7. The camera according to claim 5, wherein the second image acquisition module is specifically configured to: in the video recording process, a second image is obtained through the target image sensor;
the device further comprises:
the tracking module is used for tracking the shooting subject in the video recording process after the second image is acquired through the target image sensor, so as to adjust the first sensing area corresponding to the shooting subject in the target image sensor;
a third image obtaining module, configured to obtain a third image through the target image sensor when the adjusted first sensing area is in the first working mode and the adjusted second sensing area is in the second working mode;
and the video generation module is used for obtaining a target video according to the second image and the third image.
8. The camera according to claim 5,
the target image sensor is a mosaic rearrangement sensor;
the first working mode is a full-size (remosaic) mode, and the second working mode is a binning mode.
9. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the photographing method according to any one of claims 1-4.
10. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the photographing method according to any one of claims 1 to 4.
CN202110390914.3A, filed 2021-04-12 (priority 2021-04-12), Shooting method and device and electronic equipment, status: Pending, published as CN113141461A

Priority Applications (1)

CN202110390914.3A: priority date 2021-04-12, filing date 2021-04-12, title: Shooting method and device and electronic equipment

Publications (1)

CN113141461A, published 2021-07-20

Family (ID=76811174)

Family Applications (1)

CN202110390914.3A: Shooting method and device and electronic equipment (Pending, published as CN113141461A)

Country Status (1)

CN: CN113141461A

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130076968A1 (en) * 2011-09-27 2013-03-28 Sanyo Electric Co., Ltd. Image sensing device
CN103888679A (en) * 2014-03-13 2014-06-25 北京智谷睿拓技术服务有限公司 Image collection method and device
CN104660899A (en) * 2014-09-29 2015-05-27 上海华勤通讯技术有限公司 Picture processing method
CN107211089A (en) * 2015-02-02 2017-09-26 富士胶片株式会社 Tracking system, terminal installation, camera apparatus, follow shot method and program
CN112437237A (en) * 2020-12-16 2021-03-02 维沃移动通信有限公司 Shooting method and device


Similar Documents

Publication Publication Date Title
WO2022166944A1 (en) Photographing method and apparatus, electronic device, and medium
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113840070A (en) Shooting method, shooting device, electronic equipment and medium
CN112532881A (en) Image processing method and device and electronic equipment
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN113709368A (en) Image display method, device and equipment
CN113329172A (en) Shooting method and device and electronic equipment
CN114390197B (en) Shooting method and device, electronic equipment and readable storage medium
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN112437237B (en) Shooting method and device
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN114143455B (en) Shooting method and device and electronic equipment
CN112653841B (en) Shooting method and device and electronic equipment
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
CN113866782A (en) Image processing method and device and electronic equipment
CN113141461A (en) Shooting method and device and electronic equipment
CN113873147A (en) Video recording method and device and electronic equipment
CN113473012A (en) Virtualization processing method and device and electronic equipment
CN112399092A (en) Shooting method and device and electronic equipment
CN112367464A (en) Image output method and device and electronic equipment

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2021-07-20)