CN113542605A - Camera shooting control method, mobile terminal and storage medium - Google Patents

Camera shooting control method, mobile terminal and storage medium

Info

Publication number
CN113542605A
Authority
CN
China
Prior art keywords
image, definition, state, shooting, images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110791430.XA
Other languages
Chinese (zh)
Inventor
黄朝远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd
Priority to CN202110791430.XA
Publication of CN113542605A
Priority to PCT/CN2021/132163 (published as WO2023284218A1)
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Abstract

The application provides a camera shooting control method, a mobile terminal and a storage medium, wherein the method comprises the following steps: acquiring at least two images of a target object continuously acquired by a camera device; acquiring the definition of the at least two images; and controlling the camera device to shoot the target object according to the definition of the at least two images. Thereby, the shooting effect is improved.

Description

Camera shooting control method, mobile terminal and storage medium
Technical Field
The present application relates to the field of camera technologies, and in particular, to a camera control method, a mobile terminal, and a storage medium.
Background
At present, when a user uses a mobile terminal provided with a camera to shoot, the user often can only judge the definition of a shot image by experience. For example, the user adjusts the object distance between the mobile phone and the shooting object, observes the definition of the image on the screen of the mobile phone, and shoots when the definition appears high.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: because the user judges the definition of the image only by experience, the definition of the shot image is often low, and the shooting effect is poor.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides a camera control method, a mobile terminal and a storage medium, which are used for solving the technical problem of poor shooting effect.
In order to solve the above technical problem, the present application provides a camera control method applied to a mobile terminal, including:
acquiring at least two images of a target object continuously acquired by a camera device, wherein, optionally, a camera lens of the camera device is a fixed focus lens;
acquiring the definition of the at least two images;
and controlling the camera equipment to shoot the target object according to the definition of the at least two images.
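For illustration only (not part of the claimed subject matter), the three steps above can be pictured as a small control loop. A minimal sketch, in which `camera`, `compute_sharpness` and `control_shooting` are hypothetical placeholders rather than names from the application:

```python
# Hypothetical sketch of the three steps above; all names are placeholders.
def shooting_control(camera, compute_sharpness, control_shooting, n: int = 2):
    frames = [camera.capture() for _ in range(n)]        # step 1: at least two consecutive images
    sharpness = [compute_sharpness(f) for f in frames]   # step 2: the definition of each image
    control_shooting(frames, sharpness)                  # step 3: control shooting accordingly
```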
Optionally, the controlling step (step S13 described below) includes:
determining the definition state of the camera equipment according to the definition of the at least two images, wherein the definition state is, optionally, one of a rising state, a falling state and a highest range state;
and controlling the image pickup equipment to carry out image pickup on the target object according to the definition state of the image pickup equipment.
Optionally, controlling the image capturing apparatus to capture an image of the target object according to the sharpness state of the image capturing apparatus includes:
judging whether a target image exists in the at least two images according to the definition state of the camera equipment, wherein the definition of the target image is a preset definition;
and when determining that the target image exists in the at least two images, storing the target image to the image pickup equipment.
Optionally, the determining whether a target image exists in the at least two images according to the sharpness state of the image capturing apparatus includes:
and if the definition state of the camera equipment is a falling state and a highest range state exists among the historical definition states of the camera equipment, determining that the target image exists in the at least two images.
Optionally, the method further comprises:
and displaying indication information corresponding to the definition state of the image pickup equipment in a shooting interface of the image pickup equipment.
Optionally, the indication information corresponding to the sharpness state of the image capturing apparatus includes at least one of:
if the definition state of the camera equipment is a rising state, determining that the indication information is a rising arrow;
if the definition state of the camera equipment is a falling state, determining that the indication information is a falling arrow;
and if the definition state of the camera equipment is in the highest range state, determining the indication information as a shooting frame.
Optionally, the method further comprises:
responding to a shooting instruction, and acquiring a shooting image corresponding to the shooting instruction;
and processing the shot image according to the definition of the target image and the definition of the shot image.
Optionally, processing the captured image according to the definition of the target image and the definition of the captured image includes:
if the definition of the target image is the same as that of the shot image, deleting the shot image; and/or,
and if the definition of the target image is different from that of the shot image, storing the shot image in the camera equipment.
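For illustration only, a minimal sketch of the processing rule above, assuming the definitions are comparable floating-point focus values (an assumption; the application does not fix a representation):

```python
# Illustrative only: apply the rule above to a manually captured image,
# given the definition of the already-stored target image.
def process_captured_image(target_definition: float, captured_definition: float) -> str:
    if captured_definition == target_definition:
        return "delete"   # same definition: the captured image duplicates the target image
    return "store"        # different definition: keep the captured image as well
```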
The present application further provides a camera control device, including a first obtaining module, a second obtaining module and a control module, wherein:
the first acquisition module is used for acquiring at least two images continuously acquired by the camera equipment on a target object, and optionally, a camera lens of the camera equipment is a fixed focus lens;
the second acquisition module is used for acquiring the definition of the at least two images;
the control module is used for controlling the camera equipment to shoot the target object according to the definition of the at least two images.
Optionally, the control module is specifically configured to:
determining the definition state of the camera equipment according to the definition of the at least two images, wherein the definition state is, optionally, one of a rising state, a falling state and a highest range state;
and controlling the image pickup equipment to carry out image pickup on the target object according to the definition state of the image pickup equipment.
Optionally, the control module is specifically configured to:
judging whether a target image exists in the at least two images according to the definition state of the camera equipment, wherein the definition of the target image is a preset definition;
and when determining that the target image exists in the at least two images, storing the target image to the image pickup equipment.
Optionally, the control module is specifically configured to:
and if the definition state of the camera equipment is a falling state and a highest range state exists among the historical definition states of the camera equipment, determining that the target image exists in the at least two images.
Optionally, the image pickup control apparatus further includes a display module, and the display module is configured to:
and displaying indication information corresponding to the definition state of the image pickup equipment in a shooting interface of the image pickup equipment.
Optionally, the indication information corresponding to the sharpness state of the image capturing apparatus includes at least one of:
if the definition state of the camera equipment is a rising state, determining that the indication information is a rising arrow;
if the definition state of the camera equipment is a falling state, determining that the indication information is a falling arrow;
and if the definition state of the camera equipment is in the highest range state, determining the indication information as a shooting frame.
Optionally, the image capturing control apparatus further includes a third acquiring module, where the third acquiring module is configured to:
responding to a shooting instruction, and acquiring a shooting image corresponding to the shooting instruction;
and processing the shot image according to the definition of the target image and the definition of the shot image.
Optionally, the third obtaining module is specifically configured to:
if the definition of the target image is the same as that of the shot image, deleting the shot image; and/or,
and if the definition of the target image is different from that of the shot image, storing the shot image in the camera equipment.
The present application further provides a mobile terminal, including a memory and a processor, wherein the memory stores a camera shooting control program, and the camera shooting control program, when executed by the processor, implements the steps of the method described above.
The present application also provides a computer storage medium having a computer program stored thereon, which, when being executed by a processor, carries out the steps of the method as described above.
As described above, the present application provides a camera control method, a mobile terminal, and a storage medium, which acquire at least two images continuously acquired by a camera device for a target object, acquire the definitions of the at least two images, and control the camera device to perform image shooting on the target object according to the definitions of the at least two images. According to the method, the image pickup equipment can accurately determine the definition of the plurality of continuously acquired images, and the image pickup equipment is controlled to carry out image shooting on the target object according to the definition, so that the definition of the image shot by the image pickup equipment is higher, and the shooting effect is further improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below; other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
Fig. 1 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a camera shooting control method according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a relationship between image sharpness and object distance according to an embodiment of the present application;
FIG. 6 is a diagram illustrating a sharpness state in a highest range state according to an embodiment of the present disclosure;
fig. 7 is a schematic flow chart illustrating a definition state of an image capturing apparatus according to an embodiment of the present disclosure;
fig. 8A is a schematic diagram of a process for displaying indication information according to an embodiment of the present application;
fig. 8B is a schematic diagram of another process for displaying indication information according to an embodiment of the present application;
fig. 8C is a schematic diagram of another process for displaying indication information according to an embodiment of the present application;
fig. 9 is a schematic diagram of a process of processing a captured image according to an embodiment of the present application;
fig. 10 is a schematic diagram of acquiring a captured image according to an embodiment of the present application;
fig. 11 is a schematic process diagram of a camera shooting control method according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an imaging control apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of another image capture control apparatus according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the recitation of an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, similarly-named elements, features, or elements in different embodiments of the disclosure may have the same meaning or may have different meanings; the particular meaning should be determined by the interpretation in the specific embodiment or further by the context of that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or", "and/or", "including at least one of the following", and the like, as used herein, are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". Likewise, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or multiple stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S11 and S12 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S12 first and then S11 in specific implementation, which should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the mobile terminal described in the present application may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The following description will be given taking a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 does not limit the mobile terminal, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi is a short-distance wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user to receive and send e-mails, browse web pages, access streaming media and the like, providing the user with wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor that may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present disclosure. The communication network system is an LTE system of universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Optionally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. Optionally, the eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (e.g., an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers for managing functions such as a home location register (not shown) and stores subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control policy decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided.
In the related art, a fixed focus lens is used as the image pickup lens in a terminal device to reduce the cost of the terminal device. A terminal device mounted with a fixed focus lens has no auto-focusing function, so when a user shoots with such a device, the user can only judge the definition of the shot image by experience. For example, the user changes the definition of the image on the screen by adjusting the object distance between the mobile phone and the shooting object, and clicks the shooting key to obtain the image when the definition appears proper. However, because the user judges the definition of the image only by experience, the definition of the photographed image tends to be low (the image the user perceives as sharp may not be the sharpest obtainable image, particularly in macro photography), and the photographing effect is poor.
In order to solve the technical problem of poor shooting effect in the related art, an embodiment of the present application provides a shooting control method. At least two images of a target object continuously acquired by a shooting device are acquired, wherein, optionally, the shooting lens of the shooting device is a fixed focus lens. The definitions of the at least two images are acquired, and the current definition state of the shooting device is determined according to these definitions; optionally, the definition state is one of a rising state, a falling state and a highest range state. The shooting device is then controlled to shoot an image of the target object according to its current definition state. In this method, when a shooting device using a fixed-focus lens shoots an image, the device can accurately acquire the definition of the plurality of images, so that the image with the best definition can be accurately determined from the definitions of the plurality of images, and the shooting effect is improved.
For ease of understanding, an application scenario of the present application is described below in conjunction with fig. 3.
Fig. 3 is a schematic view of an application scenario provided in an embodiment of the present application. Referring to fig. 3, an image pickup apparatus and a target object are included. Optionally, the shooting function of the image pickup apparatus has been turned on. When the image pickup apparatus is far from the target object, the image of the target object displayed on the screen of the image pickup apparatus is blurred. When the image pickup apparatus moves toward the target object, the object distance between the image pickup apparatus and the target object changes, and the sharpness of the image of the target object displayed on the screen changes accordingly. In the process of moving toward the target object, the image pickup apparatus acquires a plurality of images of the target object and determines the image with the highest sharpness among them as the captured image. Therefore, even when the lens of the image pickup apparatus is a fixed-focus lens, the image pickup apparatus can determine the captured image according to the sharpness of the images, thereby improving the shooting effect.
The technical means shown in the present application will be described in detail below with reference to specific examples. Alternatively, the following embodiments may exist alone or in combination with each other, and the description thereof will not be repeated in different embodiments for the same or similar contents.
Fig. 4 is a flowchart illustrating an image capture control method according to an embodiment of the present application. Referring to fig. 4, the method may include:
and S11, acquiring at least two images continuously acquired by the camera equipment for the target object.
The execution main body of the embodiment of the application can be a mobile terminal, and can also be a camera control device arranged in the mobile terminal. Alternatively, the imaging control apparatus may be implemented by software, or may be implemented by a combination of software and hardware.
Alternatively, the image pickup apparatus is any apparatus having an image pickup function. For example, the image capturing apparatus may be a camera, a mobile phone, a tablet computer, or the like. The image pickup lens of the image pickup apparatus is a fixed focus lens. Optionally, the fixed focus lens is a lens with only one fixed focal length. In the practical application process, if the lens of the image pickup device is a fixed-focus lens, the image pickup device cannot automatically focus.
The target object is a photographic object of the image pickup apparatus. For example, if the camera device shoots a cup, the target object is the cup; if the image pickup apparatus photographs a table, the target object is the table.
Optionally, when the shooting function is turned on, the image pickup apparatus may acquire an image including the target object in real time. For example, when the mobile phone turns on the shooting function, the mobile phone may acquire every frame of image captured by the lens. Optionally, when the shooting function is turned on, the image pickup apparatus may continuously acquire a plurality of images including the target object.
and S12, acquiring the definition of at least two images.
Sharpness is used to indicate the magnitude of the focus value of an image. For example, the larger the focus value of an image, the higher its sharpness; the smaller the focus value, the lower the sharpness. Optionally, the sharpness of the at least two images may be obtained by a preset model. Optionally, the preset model is obtained by learning multiple groups of samples, and each group of samples includes a sample image and a sample definition.
The sets of samples may be pre-labeled samples. For example, for a sample image 1, a sample definition 1 corresponding to the sample image 1 is obtained, and a group of samples is obtained, where the group of samples includes the sample image 1 and the sample definition 1. In this way, multiple sets of samples can be obtained. For example, the sets of samples may be as shown in table 1:
TABLE 1

| Sample set | Sample image | Sample definition |
| --- | --- | --- |
| First set of samples | Sample image 1 | Sample definition 1 |
| Second set of samples | Sample image 2 | Sample definition 2 |
| Third set of samples | Sample image 3 | Sample definition 3 |
| ... | ... | ... |
It should be noted that table 1 illustrates the sets of samples by way of example only, and does not limit the sets of samples.
For example, if the image input into the preset model is the same as the sample image 1, the corresponding definition of the image is the sample definition 1; if the image input into the preset model is the same as the sample image 2, the definition corresponding to the image is the sample definition 2; if the image input into the preset model is the same as the sample image 3, the corresponding definition of the image is the sample definition 3.
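The application leaves the preset model itself open. For illustration, a classical focus measure such as the variance of the Laplacian can stand in for the model's output; this choice is an assumption for the sketch below, not part of the application:

```python
# Stand-in for the preset model: a classical focus measure.
# Higher values indicate a sharper (better-focused) image.
import cv2
import numpy as np

def sharpness(image_bgr: np.ndarray) -> float:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # focus measures work on luminance
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())  # variance of the Laplacian
```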
Optionally, the definition corresponding to the image displayed in real time in the shooting page of the image capturing apparatus may be displayed. For example, when an image is displayed in the shooting page, the definition corresponding to the image is displayed in a preset area of the shooting page, so that the user can accurately determine the definition of the image currently being shot.
Alternatively, when the lens of the image pickup apparatus is a fixed focus lens, the sharpness of an image captured by the image pickup apparatus is correlated with the object distance between the image pickup apparatus and the target object.
Next, the relationship between the sharpness of the image and the object distance will be described with reference to fig. 5.
Fig. 5 is a schematic diagram of a relationship between image sharpness and object distance provided in an embodiment of the present application. See fig. 5, which includes a coordinate system of the relationship between sharpness and object distance. Alternatively, the horizontal axis of the coordinate system is the object distance between the image capturing apparatus and the target object, and the vertical axis of the coordinate system is the sharpness corresponding to the image acquired by the image capturing apparatus.
Referring to fig. 5, in the process of increasing the object distance between the image capturing apparatus and the target object, the definition of the image acquired by the image capturing apparatus first increases and then decreases, and the highest point in the coordinate system is the highest definition of the image acquired by the image capturing apparatus. In the practical application process, the camera device can acquire each frame of image acquired by the lens and determine the corresponding definition of each frame of image.
And S13, controlling the image pickup equipment to carry out image pickup on the target object according to the definition of the at least two images.
The image capturing apparatus may be controlled to capture an image of the target object according to the following feasible implementation: determining the sharpness state of the image capturing apparatus according to the sharpness of the at least two images. Optionally, the sharpness state is one of a rising state, a falling state, and a highest range state.
The rising state is used to indicate that the sharpness of the image currently captured by the image capturing apparatus is rising. For example, when the user moves the image capturing apparatus, if the sharpness of the image displayed on the screen of the image capturing apparatus is increasing, the current sharpness state of the image capturing apparatus is the rising state.
The falling state is used to indicate that the sharpness of the image currently captured by the image capturing apparatus is falling. For example, when the user moves the image capturing apparatus, the object distance between the image capturing apparatus and the target object changes; if the sharpness of the image displayed on the screen is decreasing, the current sharpness state of the image capturing apparatus is the falling state.
The highest range state is used to indicate that the sharpness of the image currently captured by the image capturing apparatus is within a preset interval of the highest sharpness. Optionally, the preset interval is a preset numerical value. For example, if the highest sharpness of an image is 100 and the preset interval is 10, the sharpness state of the image capturing apparatus is the highest range state when the sharpness of the captured image is between 90 and 100. For example, when the user moves the image capturing apparatus, the object distance between the image capturing apparatus and the target object changes; if the sharpness of the image displayed on the screen is within the preset interval of the highest sharpness, the current sharpness state of the image capturing apparatus is the highest range state.
Optionally, the image capturing apparatus may be in multiple sharpness states at the same time. For example, the image capturing apparatus may simultaneously be in the rising state and the highest range state, or simultaneously in the falling state and the highest range state.
Optionally, taking four images continuously acquired by the image capturing apparatus as an example, there are two cases when determining the sharpness state:
case 1: the image pickup apparatus moves in one direction.
When the image capturing apparatus continuously moves in one direction, if the sharpness of the first image is lower than that of the second image, the sharpness of the second image is lower than that of the third image, and the sharpness of the third image is lower than that of the fourth image, the sharpness state of the image capturing apparatus is determined to be the rising state. For example, when the image capturing apparatus moves toward the target object and the sharpness of the four successively acquired images increases in sequence, the current sharpness state of the image capturing apparatus is the rising state.
When the image capturing apparatus continuously moves in one direction, if the sharpness of the first image is higher than that of the second image, the sharpness of the second image is higher than that of the third image, and the sharpness of the third image is higher than that of the fourth image, the sharpness state of the image capturing apparatus is determined to be the falling state. For example, when the image capturing apparatus moves toward the target object and the sharpness of the four successively acquired images decreases in sequence, the current sharpness state of the image capturing apparatus is the falling state.
As the image capturing apparatus moves relative to the target object, the sharpness of the acquired images first rises and then falls, and the sharpness state corresponding to sharpness within the preset interval of the highest sharpness is the highest range state. For example, when the image capturing apparatus moves away from or toward the target object, if the sharpness of the first image is smaller than that of the second image, the sharpness state of the image capturing apparatus is the rising state when the first and second images are acquired; if the sharpness of the second image is greater than that of the third image and the sharpness of the third image is greater than that of the fourth image, the sharpness state is the falling state when the third and fourth images are acquired. At this time, it may be determined that the second image has the highest sharpness among the acquired images, and the sharpness state corresponding to sharpness within the preset interval of that highest sharpness is the highest range state.
Case 2: the image pickup apparatus moves in a plurality of directions.
When the image capturing apparatus moves in multiple directions, the situation differs: if the sharpness keeps rising while the apparatus moves toward the target object, it falls once the apparatus moves away from the target object; and if the sharpness keeps falling while the apparatus moves toward the target object, it rises once the apparatus moves away. Therefore, when the image capturing apparatus moves in multiple directions and its sharpness state changes from the rising state to the falling state, it cannot be determined that the sharpness state of the image capturing apparatus is the highest range state.
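The two cases above reduce to comparing consecutive sharpness values. A hedged sketch of such a state decision, in which the window size, the handling of the preset interval, and all names are assumptions rather than patent text:

```python
# Illustrative classifier for the sharpness state described above.
from typing import List, Optional

def sharpness_states(values: List[float],
                     best_so_far: Optional[float],
                     preset_interval: float) -> List[str]:
    """values: sharpness of the most recent consecutively acquired images;
    best_so_far: highest sharpness seen so far in this shooting session;
    preset_interval: band below the highest sharpness that still counts
    as the highest range state."""
    states: List[str] = []
    if len(values) >= 2:
        if all(a < b for a, b in zip(values, values[1:])):
            states.append("rising")
        elif all(a > b for a, b in zip(values, values[1:])):
            states.append("falling")
    # A rising/falling state and the highest range state can hold at once.
    if best_so_far is not None and values and values[-1] >= best_so_far - preset_interval:
        states.append("highest range")
    return states
```

With the example given earlier (highest sharpness 100, preset interval 10), a frame of sharpness 92 arriving after a strictly decreasing window would yield ["falling", "highest range"].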
Next, with reference to fig. 6, the case in which the sharpness state of the image capturing apparatus is the highest range state will be described in detail.
Fig. 6 is a schematic diagram of a sharpness state in a highest range state according to an embodiment of the present disclosure. See fig. 6, which includes a coordinate system of the relationship between sharpness and object distance. Alternatively, the horizontal axis of the coordinate system is the object distance between the image capturing apparatus and the target object, and the vertical axis of the coordinate system is the sharpness corresponding to the image acquired by the image capturing apparatus.
Referring to fig. 6, as the object distance between the image capturing apparatus and the target object increases, the sharpness of the acquired image first rises; after the sharpness reaches the highest point, it falls. Within the preset interval around the highest point, the sharpness state of the image capturing apparatus is the highest range state, and when the sharpness state is the highest range state, the sharpness of the image captured by the image capturing apparatus is high.
The image capturing apparatus is controlled to shoot the target object according to its current definition state. Optionally, this may be done according to the following feasible implementation: judging whether the target image exists in the at least two images according to the current definition state of the image capturing apparatus, wherein the definition of the target image is a preset definition. Optionally, the preset definition may be the highest definition. For example, the definition of the target image is the highest definition that the image capturing apparatus can capture.
Optionally, whether the target image exists in the at least two images may be determined according to the following feasible implementation: if the current definition state of the image capturing apparatus is the falling state and a highest range state exists among the historical definition states of the image capturing apparatus, it is determined that the target image exists in the at least two images. Optionally, the historical definition states are the definition states of the image capturing apparatus before the current time. For example, if the definition states of the image capturing apparatus before the current time include the rising state and the highest range state, the historical definition states include the rising state and the highest range state.
In the practical application process, the current definition state being the falling state indicates that the definition of the image acquired at the current moment is lower than that of the image acquired at the previous moment; if a highest range state has occurred during this shooting process, it indicates that the plurality of images acquired in this shooting process include a target image with the highest definition.
When it is determined that the target image exists in the at least two images, the target image is stored to the image pickup apparatus. For example, the image capturing apparatus acquires a plurality of images during the current shooting, and stores the target image in the image capturing apparatus if the plurality of images include the target image with the highest definition.
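Continuing the sketches above, the save decision can be written directly from this rule; the buffering of the sharpest frame and all names are assumed implementation details, not patent text:

```python
# Illustrative only: store the sharpest buffered frame once the state falls
# after a highest range state has appeared in the history.
from typing import Callable, List

def maybe_save_target(history_states: List[List[str]],
                      current_states: List[str],
                      best_frame,
                      save: Callable) -> bool:
    seen_highest = any("highest range" in s for s in history_states)
    if "falling" in current_states and seen_highest:
        save(best_frame)  # best_frame: the image with the highest sharpness so far
        return True
    return False
```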
The embodiment of the application provides a camera shooting control method. Optionally, the camera lens of the image capturing apparatus is a fixed-focus lens. At least two images of a target object continuously collected by the image capturing apparatus are acquired, and the definition of the at least two images is obtained according to a preset model. The current definition state of the image capturing apparatus is determined according to the definition of the at least two images; optionally, the definition state is one of a rising state, a falling state and a highest range state. Whether a target image whose definition is the preset definition exists in the at least two images is judged according to the current definition state, and when the target image exists, the target image is stored in the image capturing apparatus. In this method, when an image capturing apparatus using a fixed-focus lens shoots an image, the apparatus can accurately obtain the definition of the plurality of images and determine its definition state from them, so that whether the image with the highest definition exists among the plurality of images can be accurately judged according to the definition state; the image with the highest definition can then be stored in the image capturing apparatus, improving the shooting effect of fixed-focus image capturing apparatuses.
On the basis of the embodiment shown in fig. 4, the present application also includes a method of displaying the sharpness state of the image pickup apparatus in a shooting page of the image pickup apparatus. Next, a procedure of displaying the sharpness state of the image pickup apparatus in a shooting page of the image pickup apparatus will be described with reference to fig. 7.
Fig. 7 is a schematic flow chart illustrating a sharpness state of an image capturing apparatus according to an embodiment of the present application. Referring to fig. 7, the method includes:
and S71, determining the definition state of the image pickup device.
The sharpness state may be the rising state, the falling state, or the highest range state.
It should be noted that the execution process of step S71 may refer to the execution step of step S13, and the embodiment of the present application is not described herein again.
S72, displaying, on the shooting page of the image pickup apparatus, indication information corresponding to the current sharpness state of the image pickup apparatus.
The indication information indicates the sharpness state of the image pickup apparatus. Optionally, the indication information may be an up arrow, a down arrow, or a shooting frame.
Optionally, the indication information may be displayed on the shooting page of the image pickup apparatus in the following feasible manner: if the current sharpness state of the image pickup apparatus is the rising state, the indication information is determined to be an up arrow. For example, an up arrow may be generated in a preset window on the shooting page of the image pickup apparatus, so that the user can determine from the up arrow that the current sharpness state of the image pickup apparatus is the rising state.
If the current sharpness state of the image pickup apparatus is the falling state, the indication information is determined to be a down arrow. For example, a down arrow may be generated in a preset window on the shooting page of the image pickup apparatus, so that the user can determine from the down arrow that the current sharpness state of the image pickup apparatus is the falling state.
If the current sharpness state of the image pickup apparatus is the highest-range state, the indication information is determined to be a shooting frame. For example, a green shooting frame may be generated in a preset window on the shooting page of the image pickup apparatus, so that the user can determine from the green shooting frame that the current sharpness state of the image pickup apparatus is the highest-range state.
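As a sketch of this mapping in code, continuing the Python example above (the `shooting_page` object and its `preset_window.render` hook are hypothetical UI names, not part of the patent):

```python
# Sketch of the state-to-indicator mapping described above.

INDICATORS = {
    "rising": "up_arrow",
    "falling": "down_arrow",
    "highest-range": "shooting_frame",  # rendered green in the example
}

def show_indicator(shooting_page, state: str) -> None:
    """Display, in the preset window of the shooting page, the indicator
    corresponding to the current sharpness state."""
    shooting_page.preset_window.render(INDICATORS[state])
```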
Next, the process of displaying the indication information by the image pickup apparatus is described in detail with reference to figs. 8A to 8C.
Fig. 8A is a schematic diagram of a process of displaying indication information according to an embodiment of the present application. Referring to fig. 8A, an image pickup apparatus and a target object are shown. An image including the target object may be displayed on the shooting page of the image pickup apparatus. When the image pickup apparatus moves toward the target object, the sharpness state of the image pickup apparatus is the rising state.
Referring to fig. 8A, when the sharpness state of the image pickup apparatus is the rising state, an up arrow is generated in a preset area of the shooting page of the image pickup apparatus, so that the user can accurately determine from the up arrow that the current sharpness state is the rising state, which improves the shooting effect.
Fig. 8B is a schematic diagram of another process of displaying indication information according to an embodiment of the present application. Referring to fig. 8B, an image pickup apparatus and a target object are shown. An image including the target object may be displayed on the shooting page of the image pickup apparatus. When the image pickup apparatus moves away from the target object, the sharpness state of the image pickup apparatus is the falling state.
Referring to fig. 8B, when the sharpness state of the image pickup apparatus is the falling state, a down arrow is generated in a preset area of the shooting page of the image pickup apparatus, so that the user can accurately determine from the down arrow that the current sharpness state is the falling state, which improves the shooting effect.
Fig. 8C is a schematic diagram of another process of displaying indication information according to an embodiment of the present application. Referring to fig. 8C, an image pickup apparatus and a target object are shown. An image including the target object may be displayed on the shooting page of the image pickup apparatus. When the image pickup apparatus moves toward the target object, the sharpness state of the image pickup apparatus is the highest-range state.
Referring to fig. 8C, when the sharpness state of the image pickup apparatus is the highest-range state, a shooting frame is generated in a preset area of the shooting page of the image pickup apparatus, so that the user can accurately determine from the shooting frame that the current sharpness state is the highest-range state, which improves the shooting effect.
An embodiment of the present application provides an image capture control method in which the current sharpness state of the image pickup apparatus is determined and indication information corresponding to the current sharpness state is displayed on the shooting page of the image pickup apparatus; optionally, the indication information includes an up arrow, a down arrow, and a shooting frame. In this method, when the sharpness state of the image pickup apparatus changes, the corresponding indication information can be displayed in the preset area of the shooting page, so that the user can accurately determine the sharpness state of the image pickup apparatus from the indication information on the shooting page, further improving the shooting effect.
On the basis of any one of the above embodiments, after the image pickup apparatus acquires the target image, the user may also actively capture an image, and the present application further includes a process of processing the actively captured image. Next, the process of processing the captured image is described in detail with reference to fig. 9.
Fig. 9 is a schematic diagram of a process of processing a captured image according to an embodiment of the present application. Referring to fig. 9, the method includes:
S91, in response to a shooting instruction, acquiring a captured image corresponding to the shooting instruction.
Optionally, the shooting instruction may be a click operation by the user on a shooting key on the shooting page. The captured image corresponding to the shooting instruction may be the image displayed on the shooting page when the image pickup apparatus receives the shooting instruction. For example, the captured image corresponding to the shooting instruction is the image displayed on the shooting page of the image pickup apparatus when the user clicks the shooting key on the shooting page.
Next, the process by which the image pickup apparatus acquires the captured image corresponding to the shooting instruction is described in detail with reference to fig. 10.
Fig. 10 is a schematic diagram of acquiring a captured image according to an embodiment of the present application. Referring to fig. 10, an image pickup apparatus is shown. Optionally, the image pickup apparatus includes a shooting page, and the shooting page includes an image of the target object and a shooting key. When the image of the target object is displayed on the shooting page and the user clicks the shooting key, the captured image acquired by the image pickup apparatus is the image displayed on the shooting page.
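A minimal sketch of step S91 under the same assumptions as above (the `shooting_page.current_frame` hook is a hypothetical name for whatever frame the shooting page displays at the moment of the click):

```python
# Sketch of S91: the captured image is the frame shown on the shooting
# page at the moment the user clicks the shooting key.

def on_shooting_key_clicked(device):
    """Hypothetical handler bound to the shooting key; returns the
    captured image corresponding to the shooting instruction."""
    return device.shooting_page.current_frame()
```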
S92, processing the captured image according to the sharpness of the target image and the sharpness of the captured image.
Optionally, the captured image may be processed in the following feasible manner:
If the sharpness of the target image is the same as that of the captured image, the captured image is deleted. For example, if the sharpness of the target image acquired by the image pickup apparatus is the same as that of the image captured when the user clicks the shooting key, only one of the two images needs to be stored; in this case, the image pickup apparatus may delete the captured image corresponding to the shooting instruction, or may instead delete the target image.
And/or, if the sharpness of the target image is different from that of the captured image, the captured image is stored in the image pickup apparatus. For example, because the target image has the highest sharpness, a captured image whose sharpness differs from that of the target image necessarily has lower sharpness; storing the captured image corresponding to the shooting instruction nevertheless preserves the user's ability to shoot actively with the image pickup apparatus, which improves the shooting flexibility and the shooting effect of the image pickup apparatus.
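A compact sketch of step S92, reusing the hypothetical hooks above; `device.store` and `device.delete` are assumed storage operations, and the exact-equality test mirrors the wording "the same sharpness" in the text:

```python
# Sketch of S92: keep at most one copy when the sharpness matches the
# target image; otherwise store the user's capture as well.

def process_captured_image(device, target_image, captured_image,
                           estimate_sharpness) -> None:
    if estimate_sharpness(captured_image) == estimate_sharpness(target_image):
        device.delete(captured_image)  # the target image already covers it
    else:
        device.store(captured_image)   # preserve the active-shooting result
```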
An embodiment of the present application provides an image capture control method in which, after the image pickup apparatus obtains the target image, a captured image corresponding to a shooting instruction is acquired in response to the shooting instruction, and the captured image is processed according to the sharpness of the target image and the sharpness of the captured image. In this way, when the sharpness of the target image is the same as that of the captured image, the image pickup apparatus deletes the captured image, saving memory space; when the sharpness of the captured image differs from that of the target image, the captured image is stored in the image pickup apparatus, improving the shooting flexibility and the shooting effect without affecting the active-shooting function.
On the basis of any one of the above embodiments, the following describes the overall procedure of the above image capture control method with reference to fig. 11.
Fig. 11 is a schematic process diagram of an image capture control method according to an embodiment of the present application. Referring to fig. 11, an image pickup apparatus and a target object are shown. When the image pickup apparatus moves toward the target object, the image pickup apparatus continuously acquires at least two images and determines its sharpness state according to the sharpness of the at least two images.
Referring to fig. 11, when the sharpness state of the image pickup apparatus is the rising state, an up arrow is generated in the preset window of the shooting page of the image pickup apparatus. When the image pickup apparatus continues to move toward the target object and the sharpness of the acquired image rises into the preset interval around the highest sharpness, the sharpness state of the image pickup apparatus is the highest-range state, and a shooting frame is generated in the preset window of the shooting page. When the sharpness of the acquired image rises to the highest sharpness, the image pickup apparatus stores the image with the highest sharpness.
Referring to fig. 11, when the image pickup apparatus continues to move toward the target object, the sharpness of the acquired image decreases; the sharpness state of the image pickup apparatus is then the falling state, and a down arrow is generated in the preset window of the shooting page. In this way, when an image pickup apparatus fitted with a fixed-focus lens shoots an image, the apparatus can, as it moves, obtain the sharpness of each acquired image, determine its sharpness state from those sharpness values, identify the target image with the highest sharpness according to the sharpness state, and store that target image, thereby improving the shooting effect of the image pickup apparatus.
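Putting the pieces together, the fig. 11 flow can be sketched as a single loop that reuses the illustrative helpers defined above (`classify_state`, `show_indicator`); `device.preview_frames` is an assumed source of continuously acquired frames, not an API from the patent:

```python
# End-to-end sketch of the fig. 11 flow: classify each new frame,
# update the on-screen indicator, and store the sharpest frame once
# sharpness starts falling after the highest range was reached.

def capture_loop(device, estimate_sharpness):
    history, prev = [], None
    best_frame, best_score = None, -1.0
    for frame in device.preview_frames():
        score = estimate_sharpness(frame)
        if prev is not None:
            state = classify_state(prev, score)
            show_indicator(device.shooting_page, state)
            if state == "falling" and "highest-range" in history:
                device.store(best_frame)  # the target image
                return best_frame
            history.append(state)
        if score > best_score:
            best_frame, best_score = frame, score
        prev = score
    return None
```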
Fig. 12 is a schematic structural diagram of an image capture control apparatus according to an embodiment of the present application. Referring to fig. 12, the image capture control apparatus 10 may be disposed in a mobile terminal, and the image capture control apparatus 10 includes a first acquiring module 11, a second acquiring module 12 and a control module 13, where:
the first acquiring module 11 is configured to acquire at least two images of a target object continuously acquired by an image pickup apparatus, where, optionally, the camera lens of the image pickup apparatus is a fixed-focus lens;
the second acquiring module 12 is configured to acquire the sharpness of the at least two images; and
the control module 13 is configured to control the image pickup apparatus to shoot the target object according to the sharpness of the at least two images.
Optionally, the control module 13 is specifically configured to:
determine the current sharpness state of the image pickup apparatus according to the sharpness of the at least two images, where the sharpness state is a rising state, a falling state, or a highest-range state; and
control the image pickup apparatus to shoot the target object according to the current sharpness state of the image pickup apparatus.
Optionally, the control module 13 is specifically configured to:
judge, according to the current sharpness state of the image pickup apparatus, whether a target image exists in the at least two images, where the sharpness of the target image is a preset sharpness; and
when it is determined that the target image exists in the at least two images, store the target image in the image pickup apparatus.
Optionally, the control module 13 is specifically configured to:
if the current sharpness state of the image pickup apparatus is a falling state and a highest-range state exists among the historical sharpness states of the image pickup apparatus, determine that the target image exists in the at least two images.
The image capture control apparatus provided in this embodiment of the present application can implement the technical solutions shown in the above method embodiments; the implementation principles and beneficial effects are similar and are not described herein again.
Fig. 13 is a schematic structural diagram of another image capture control apparatus according to an embodiment of the present application. On the basis of the embodiment shown in fig. 12, referring to fig. 13, the image capture control apparatus 10 further includes a display module 14, where the display module 14 is configured to:
display, on the shooting page of the image pickup apparatus, indication information corresponding to the current sharpness state of the image pickup apparatus.
Optionally, if the current sharpness state of the image pickup apparatus is the rising state, the indication information is determined to be an up arrow;
if the current sharpness state of the image pickup apparatus is the falling state, the indication information is determined to be a down arrow; and
if the current sharpness state of the image pickup apparatus is the highest-range state, the indication information is determined to be a shooting frame.
Optionally, the image capture control apparatus further includes a third acquiring module 15, where the third acquiring module 15 is configured to:
acquire, in response to a shooting instruction, a captured image corresponding to the shooting instruction; and
process the captured image according to the sharpness of the target image and the sharpness of the captured image.
Optionally, the third acquiring module 15 is specifically configured to:
delete the captured image if the sharpness of the target image is the same as that of the captured image; and/or
store the captured image in the image pickup apparatus if the sharpness of the target image is different from that of the captured image.
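For illustration only, the module layout of figs. 12 and 13 could be sketched as a thin wrapper around the helpers above; the class and method names are hypothetical, not the patented structure:

```python
# Sketch of the figs. 12-13 module layout as a thin wrapper around the
# illustrative helpers defined earlier in this description.

class ImageCaptureControlApparatus:
    def __init__(self, device, estimate_sharpness):
        self.device = device
        self.estimate_sharpness = estimate_sharpness  # the preset model

    def acquire_images(self):                 # first acquiring module 11
        return self.device.preview_frames()

    def acquire_sharpness(self, images):      # second acquiring module 12
        return [self.estimate_sharpness(img) for img in images]

    def control_shooting(self):               # control/display modules 13, 14
        return capture_loop(self.device, self.estimate_sharpness)

    def handle_shooting_instruction(self, target_image):  # third module 15
        captured = on_shooting_key_clicked(self.device)
        process_captured_image(self.device, target_image, captured,
                               self.estimate_sharpness)
```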
The image capture control apparatus provided in this embodiment of the present application can implement the technical solutions shown in the above method embodiments; the implementation principles and beneficial effects are similar and are not described herein again.
The present application further provides a mobile terminal. The mobile terminal includes a memory and a processor, where the memory stores an image capture control program, and the image capture control program, when executed by the processor, implements the steps of the image capture control method in any of the above embodiments.
The present application further provides a computer-readable storage medium on which an image capture control program is stored, and the image capture control program, when executed by a processor, implements the steps of the image capture control method in any of the above embodiments.
The embodiments of the mobile terminal and the computer-readable storage medium provided in the present application include all technical features of the above embodiments of the image capture control method; their expanded description is substantially the same as that of the method embodiments and is not repeated here.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the present application, concepts, technical solutions and/or application scenario descriptions that are the same as or similar to one another are generally described in detail only at their first occurrence; for brevity, the detailed description is not repeated at later occurrences. When understanding the technical solutions of the present application, for any concept, technical solution and/or application scenario description not detailed later, reference may be made to the earlier related detailed description.
In the present application, each embodiment is described with its own emphasis; for a part that is not described or illustrated in one embodiment, reference may be made to the description of the other embodiments.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the embodiments are described; however, any combination of these technical features that is not contradictory should be considered to fall within the scope described in this specification.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disc) and including instructions for causing a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to perform the method of each embodiment of the present application.
The above description is only a preferred embodiment of the present application and is not intended to limit the scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, likewise falls within the scope of the present application.

Claims (10)

1. An image capture control method, comprising:
S11, acquiring at least two images of a target object continuously acquired by an image pickup apparatus;
S12, acquiring the sharpness of the at least two images; and
S13, controlling the image pickup apparatus to shoot the target object according to the sharpness of the at least two images.
2. The method according to claim 1, wherein S13 comprises:
determining the sharpness state of the image pickup apparatus according to the sharpness of the at least two images; and
controlling the image pickup apparatus to shoot the target object according to the sharpness state of the image pickup apparatus.
3. The method according to claim 2, wherein controlling the image pickup apparatus to shoot the target object according to the sharpness state of the image pickup apparatus comprises:
judging, according to the sharpness state of the image pickup apparatus, whether a target image exists in the at least two images, wherein the sharpness of the target image is a preset sharpness; and
when it is determined that the target image exists in the at least two images, storing the target image in the image pickup apparatus.
4. The method according to claim 3, wherein judging whether the target image exists in the at least two images according to the sharpness state of the image pickup apparatus comprises:
if the sharpness state of the image pickup apparatus is a falling state and a highest-range state exists among the historical sharpness states of the image pickup apparatus, determining that the target image exists in the at least two images.
5. The method according to claim 3 or 4, further comprising:
displaying, on a shooting page of the image pickup apparatus, indication information corresponding to the sharpness state of the image pickup apparatus.
6. The method according to claim 5, wherein the indication information corresponding to the sharpness state of the image pickup apparatus comprises at least one of the following:
if the sharpness state of the image pickup apparatus is a rising state, the indication information is determined to be an up arrow;
if the sharpness state of the image pickup apparatus is a falling state, the indication information is determined to be a down arrow; and
if the sharpness state of the image pickup apparatus is a highest-range state, the indication information is determined to be a shooting frame.
7. The method according to claim 5 or 6, further comprising:
acquiring, in response to a shooting instruction, a captured image corresponding to the shooting instruction; and
processing the captured image according to the sharpness of the target image and the sharpness of the captured image.
8. The method according to claim 7, wherein processing the captured image according to the sharpness of the target image and the sharpness of the captured image comprises:
if the sharpness of the target image is the same as that of the captured image, deleting the captured image; and/or
if the sharpness of the target image is different from that of the captured image, storing the captured image in the image pickup apparatus.
9. A mobile terminal, characterized in that the mobile terminal comprises a memory and a processor, wherein the memory stores an image capture control program which, when executed by the processor, implements the steps of the image capture control method according to any one of claims 1 to 8.
10. A readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image capture control method according to any one of claims 1 to 8.
CN202110791430.XA 2021-07-13 2021-07-13 Camera shooting control method, mobile terminal and storage medium Pending CN113542605A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110791430.XA CN113542605A (en) 2021-07-13 2021-07-13 Camera shooting control method, mobile terminal and storage medium
PCT/CN2021/132163 WO2023284218A1 (en) 2021-07-13 2021-11-22 Photographing control method, and mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110791430.XA CN113542605A (en) 2021-07-13 2021-07-13 Camera shooting control method, mobile terminal and storage medium

Publications (1)

Publication Number Publication Date
CN113542605A true CN113542605A (en) 2021-10-22

Family

ID=78098913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110791430.XA Pending CN113542605A (en) 2021-07-13 2021-07-13 Camera shooting control method, mobile terminal and storage medium

Country Status (2)

Country Link
CN (1) CN113542605A (en)
WO (1) WO2023284218A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023284218A1 (en) * 2021-07-13 2023-01-19 深圳传音控股股份有限公司 Photographing control method, and mobile terminal and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120086847A1 (en) * 2010-10-12 2012-04-12 Research In Motion Limited Convergence feedback indicator, provided when taking a picture in a camera application
US9549125B1 (en) * 2015-09-01 2017-01-17 Amazon Technologies, Inc. Focus specification and focus stabilization
CN106550183A (en) * 2015-09-18 2017-03-29 维沃移动通信有限公司 A kind of image pickup method and device
CN106851112A (en) * 2017-03-21 2017-06-13 惠州Tcl移动通信有限公司 The photographic method and system of a kind of mobile terminal
CN110035216A (en) * 2018-01-11 2019-07-19 浙江宇视科技有限公司 A kind of semi-automatic focusing method of visualization and device of manual zoom camera lens
CN111314608A (en) * 2020-02-24 2020-06-19 珠海市它物云科技有限公司 Image focusing prompting method, computer device and computer readable storage medium
CN111970435A (en) * 2020-08-03 2020-11-20 广东小天才科技有限公司 Method and device for macro photography

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542605A (en) * 2021-07-13 2021-10-22 深圳传音控股股份有限公司 Camera shooting control method, mobile terminal and storage medium


Also Published As

Publication number Publication date
WO2023284218A1 (en) 2023-01-19

Similar Documents

Publication Publication Date Title
CN108038834B (en) Method, terminal and computer readable storage medium for reducing noise
CN113179369B (en) Shot picture display method, mobile terminal and storage medium
CN107896304B (en) Image shooting method and device and computer readable storage medium
CN111885307A (en) Depth-of-field shooting method and device and computer readable storage medium
CN107241504B (en) Image processing method, mobile terminal and computer readable storage medium
CN110086993B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108848321B (en) Exposure optimization method, device and computer-readable storage medium
CN112135060B (en) Focusing processing method, mobile terminal and computer storage medium
CN108282608B (en) Multi-region focusing method, mobile terminal and computer readable storage medium
CN113347372A (en) Shooting light supplement method, mobile terminal and readable storage medium
CN112153305A (en) Camera starting method, mobile terminal and computer storage medium
CN112437472A (en) Network switching method, equipment and computer readable storage medium
CN111970738A (en) Network switching control method, equipment and computer readable storage medium
CN109510941B (en) Shooting processing method and device and computer readable storage medium
CN113542605A (en) Camera shooting control method, mobile terminal and storage medium
CN107743204B (en) Exposure processing method, terminal, and computer-readable storage medium
CN112532838B (en) Image processing method, mobile terminal and computer storage medium
CN112040134B (en) Micro-holder shooting control method and device and computer readable storage medium
CN114900613A (en) Control method, intelligent terminal and storage medium
CN113572916A (en) Shooting method, terminal device and storage medium
CN109215004B (en) Image synthesis method, mobile terminal and computer readable storage medium
CN113676658A (en) Photographing method, mobile terminal and readable storage medium
CN109495683B (en) Interval shooting method and device and computer readable storage medium
CN113572964A (en) Image processing method, mobile terminal and storage medium
CN108335301B (en) Photographing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20211022)