CN115866389A - Processing method and electronic equipment

Info

Publication number
CN115866389A
Authority
CN
China
Prior art keywords
target
target object
determining
nth
pixel information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211530950.6A
Other languages
Chinese (zh)
Inventor
柳小芸 (Liu Xiaoyun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202211530950.6A
Publication of CN115866389A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses a processing method and an electronic device. The method includes: obtaining an Nth captured image; and determining target parameters for the Nth captured image based on a first manner and a second manner. Before the target parameters are determined, the method further includes: determining whether pixel information of a target object included in the Nth captured image, determined based on the second manner, is a factor for determining the target parameters.

Description

Processing method and electronic equipment
Technical Field
The present application relates to the field of image acquisition technologies, and in particular, to a processing method and an electronic device.
Background
When a camera takes a picture, a face detection mechanism helps produce a better imaging effect: during shooting it provides information about the face for Automatic Exposure (AE) or Automatic White Balance (AWB), which is used to adjust the AE, AWB or Automatic Focus (AF) of the picture so that the brightness, color and focus position of the picture meet expectations. However, the face detection mechanism also has negative effects. For example, when a user wants to take a landscape picture and a passer-by enters the scene, the camera recognizes the face and performs the AE, AWB or AF operations again. The user cannot control this and can only wait passively for the passer-by to leave.
Disclosure of Invention
In view of the above, embodiments of the present application provide a processing method and an electronic device to solve at least the above technical problems in the prior art.
According to a first aspect of the present application, an embodiment of the present application provides a processing method, including: obtaining an Nth captured image; and determining target parameters for the Nth captured image based on a first manner and a second manner;
before determining the target parameters, the method further comprises: determining whether pixel information of a target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter.
Optionally, the determining whether the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter includes:
determining the position of the target object if it is determined, based on the second manner, that the Nth captured image includes the target object;
if the position of the target object belongs to the target area, determining pixel information of the target object as a factor for determining a target parameter;
if the position of the target object does not belong to the target area, the pixel information of the target object is not a factor in determining the target parameter.
Optionally, the processing method further comprises:
if it is determined that the Nth captured image does not include the target object based on the second manner, the target parameter is determined based on the pixel information of the Nth captured image determined in the first manner.
Optionally, if the position of the target object does not belong to the target area, the processing method further includes:
if a confirmation instruction is obtained, determining the pixel information of the target object as a factor for determining the target parameter.
Optionally, obtaining the confirmation instruction includes any one of:
the position of the target object is unchanged from the Nth captured image to the (N+M)th captured image, and the confirmation instruction is obtained;
a confirmation operation for prompt information is obtained, and the confirmation instruction is obtained, where the prompt information is used to prompt whether the target object is to be used as a factor for confirming the target parameter.
Optionally, the processing method further comprises:
setting a target area, wherein the target area is smaller than a photosensitive area of the camera module;
the setting of the target area includes any one of:
obtaining an input operation, and setting a target area based on the input operation;
the target area is set based on the shooting scene.
According to a second aspect of the present application, an embodiment of the present application provides an electronic device, including:
an image acquisition device;
the processor is used for obtaining the Nth collected image; determining target parameters based on a first mode and a second mode aiming at the Nth acquired image;
wherein, prior to determining the target parameter, the processor is further configured to: it is determined whether pixel information of the target object included in the nth captured image determined based on the second manner is a factor for determining the target parameter.
Optionally, the determining whether the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter includes:
determining the position of the target object if it is determined, based on the second manner, that the Nth captured image includes the target object;
if the position of the target object belongs to the target area, determining pixel information of the target object as a factor for determining a target parameter;
if the position of the target object does not belong to the target area, the pixel information of the target object is not a factor in determining the target parameter.
Optionally, the processor is further configured to determine the target parameter based on the pixel information of the Nth captured image determined in the first manner if it is determined based on the second manner that the Nth captured image does not include the target object.
Optionally, if the position of the target object does not belong to the target area, the processor is further configured to:
determine, if a confirmation instruction is obtained, the pixel information of the target object as a factor for determining the target parameter.
The above description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the description, and in order that the above and other objects, features, and advantages of the present application may be more apparent, the detailed description of the present application is given below.
Drawings
FIG. 1 is a schematic flow chart of a processing method in an embodiment of the present application;
FIG. 2 is a schematic flow chart of another processing method in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating prompt information issued for an edge portrait in an embodiment of the present application;
FIG. 4 is a schematic flow chart of another processing method in an embodiment of the present application;
FIG. 5 is a schematic hardware structure diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, in order to obtain a better imaging effect, photographing parameters are generally determined in real time based on all pixel information and face information in a captured image. If an unexpected portrait appears in the picture, such as a passer-by, the face information of the unexpected portrait is used directly as a determining factor of the photographing parameters, which easily causes jitter in the picture's focus, white balance, and the like.
To this end, an embodiment of the present application provides a processing method, applied to an electronic device. As shown in FIG. 1, the method includes:
S100, obtaining an Nth captured image.
In this embodiment, N is a natural number greater than or equal to 1. The Nth captured image may be obtained through the viewfinder before an image capture instruction is responded to, or may be acquired by the camera after the image capture instruction is responded to.
S200, determining target parameters for the Nth captured image based on a first manner and a second manner; before determining the target parameters, the method further comprises: determining whether pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter.
In this embodiment, the first manner is based on, but not limited to, all pixels in the image, and the second manner is based on, but not limited to, a detected human face. The target parameters may be photographing parameters, including but not limited to AE, AF, and AWB. When the target parameters are determined for the Nth captured image based on the first manner and the second manner, they are determined according to the pixel information of all pixels in the Nth captured image together with the pixel information of the target object in the Nth captured image.
In specific implementation, when the target parameters are determined for the Nth captured image based on the first manner and the second manner, the calculation weights of the two manners for the target parameter may be determined first, preliminary target parameters may then be determined based on each manner, and the final target parameter may be obtained from the preliminary target parameters weighted by their respective weights. For example, if the first manner is weighted at 80% and the second manner at 20%, and the preliminary target parameter determined based on the first manner is A while that determined based on the second manner is B, then the final target parameter = A × 80% + B × 20%.
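As a non-authoritative sketch of this weighted combination, the following Python treats the target parameter as a single scalar statistic (mean luminance standing in for, say, an exposure value); the function names and the 80/20 split are illustrative assumptions taken from the example above, not part of the patent:

```python
def preliminary_param_first_manner(image):
    """First manner: a statistic over all pixels (here, mean luminance)."""
    total = sum(sum(row) for row in image)
    count = sum(len(row) for row in image)
    return total / count

def preliminary_param_second_manner(image, face_box):
    """Second manner: the same statistic restricted to the face region."""
    x, y, w, h = face_box
    region = [row[x:x + w] for row in image[y:y + h]]
    return preliminary_param_first_manner(region)

def final_target_parameter(image, face_box, w_first=0.8, w_second=0.2):
    """final = A * 80% + B * 20%, as in the worked example above."""
    a = preliminary_param_first_manner(image)
    b = preliminary_param_second_manner(image, face_box)
    return a * w_first + b * w_second
```

In practice each manner would produce full AE/AWB/AF settings rather than one scalar; the sketch only shows where the weights enter.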
In this embodiment, although the target parameters are determined based on the first manner and the second manner by default, before determining the target parameters, the method further includes: determining whether pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter.
In specific implementation, it may first be determined, based on the second manner, whether the Nth captured image includes the target object. If the target object is included, it is then determined whether the pixel information of the target object is a factor in determining the target parameter.
In the present embodiment, if it is determined that the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter, the target parameters are determined for the Nth captured image based on the first manner and the second manner. If it is determined that the pixel information of the target object is not a factor in determining the target parameter, the target parameter may be determined for the Nth captured image directly based on the first manner, or may still be determined based on the first manner and the second manner.
According to the processing method provided by the embodiment of the application, the Nth captured image is obtained; target parameters are determined for the Nth captured image based on a first manner and a second manner; and before the target parameters are determined, it is determined whether pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter. In this way, when the target parameters are determined, a better photographing effect can be achieved based on the first manner and the second manner. Moreover, before the target parameters are determined, a choice is provided as to whether the pixel information of the target object in the Nth captured image determined based on the second manner is used as such a factor, so that the pixel information of an object at the picture edge can be selectively used according to actual requirements, and algorithms such as auto focus and auto white balance do not have to adjust for the edge object.
In an alternative embodiment, as shown in FIG. 2, the step S200 of determining whether the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter includes:
determining the position of the target object if it is determined, based on the second manner, that the Nth captured image includes the target object;
if the position of the target object belongs to the target area, determining the pixel information of the target object as a factor for determining the target parameter;
if the position of the target object does not belong to the target area, the pixel information of the target object is not a factor in determining the target parameter.
In the present embodiment, when a person is the subject, the subject is generally given priority placement in a target area of the frame, for example the central area, so that a better shooting effect can be achieved. Therefore, when determining whether the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter, it is possible to automatically determine, based on the second manner, whether the Nth captured image includes the target object and whether the position of the target object belongs to the target area.
In specific implementation, whether a target object exists in the Nth captured image can be judged through face detection. If a face is detected, it is determined based on the second manner that the Nth captured image includes the target object; if no face is detected, it is determined based on the second manner that the Nth captured image does not include the target object. It should be noted that the face is only an example provided in the embodiment of the present application: the second manner may be face detection, in which case the target object is a face. Of course, other detection methods may be used so that the target object is something other than a face, such as eyes, a human body, or another class of object.
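The patent does not prescribe a particular detector, but as one concrete stand-in for the second manner, OpenCV's stock Haar cascade can report whether a frame contains a face and where:

```python
import cv2

# Stock frontal-face Haar cascade shipped with opencv-python.
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_target_objects(frame_bgr):
    """Return a list of (x, y, w, h) face boxes; an empty list means the
    Nth captured image does not include a target object."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(f) for f in faces]
```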
If it is determined that the Nth captured image includes the target object based on the second manner, the position of the target object is determined. When determining the target object position, the determination may be made based on the position of the detected face. For example, if the face is detected to appear at the center of the screen, the position of the target object is the center of the screen.
After the position of the target object is determined, it may be compared with a preset target area to determine whether it belongs to the target area. If the position of the target object belongs to the target area, the target object is the subject to be shot; when the target parameters are determined, the pixel information of the target object needs to be referred to so as to ensure the shooting effect of the target object's face in the final imaging, and the pixel information of the target object is therefore determined as a factor for determining the target parameter. If the position of the target object does not belong to the target area, the target object is a non-subject, such as a passer-by, and the pixel information of the target object is not a factor for determining the target parameter.
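A minimal sketch of this position test, assuming rectangles are (x, y, w, h) in pixels and that a face counts as inside the target area when its box centre falls within it (the patent does not fix the containment rule):

```python
def in_target_area(face_box, target_area):
    """True when the centre of face_box lies inside target_area."""
    fx, fy, fw, fh = face_box
    tx, ty, tw, th = target_area
    cx, cy = fx + fw / 2.0, fy + fh / 2.0        # centre of the face box
    return tx <= cx <= tx + tw and ty <= cy <= ty + th

def pixel_info_is_factor(face_box, target_area):
    # In the target area -> subject -> its pixel information feeds the
    # target-parameter calculation; outside -> it is ignored.
    return in_target_area(face_box, target_area)
```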
In some embodiments, after the position of the target object is determined to belong to the target area, the pixel information of the target object is determined as a factor for determining the target parameter only if the position of the target object remains unchanged from the Nth captured image to the (N+K)th captured image, to prevent a target object that only briefly enters the target area from causing the picture to jitter.
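The stability test might look like the following, assuming K consecutive frames and a small pixel tolerance; both thresholds are illustrative, as the patent leaves them unspecified:

```python
def position_stable(face_boxes, k, tol=10):
    """face_boxes: the face's (x, y, w, h) for frames N .. N+K.
    True when the box origin moved no more than `tol` pixels throughout."""
    if len(face_boxes) < k + 1:
        return False
    x0, y0, _, _ = face_boxes[0]
    return all(abs(x - x0) <= tol and abs(y - y0) <= tol
               for x, y, _, _ in face_boxes[1:k + 1])
```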
In the present embodiment, if the pixel information of the target object is a factor for determining the target parameter, the target parameters are determined for the Nth captured image based on the first manner and the second manner. Thus, when the target parameters are determined, the pixel information of the target object is referred to, so that in the final imaging the brightness of the target object's face, and therefore the shooting effect, can be ensured.
In the present embodiment, if the pixel information of the target object is not a factor for determining the target parameter, the target parameter is determined for the Nth captured image directly based on the first manner. In this way, it is ensured that pixel information of a non-subject does not participate in determining the target parameter, so that the brightness, white balance, and the like of the picture do not change when a non-subject enters the scene.
In this embodiment, it is first determined, based on the second manner, that the Nth captured image includes the target object, and it is then determined, based on the position of the target object, whether the pixel information of the target object is used as a factor for determining the target parameter. This makes it possible to automatically select, based on the position of the target object, whether the pixel information of the target object in the Nth captured image is used as such a factor, and ensures that pixel information of a target object outside the target area is not so used. A target object outside the target area can therefore appear in the picture without affecting the picture's brightness, white balance, and the like.
In an optional embodiment, for the case where the user is only shooting a landscape and no target object enters the frame, the processing method further includes:
if it is determined that the Nth captured image does not include the target object based on the second manner, the target parameter is determined based on the pixel information of the Nth captured image determined in the first manner.
In specific implementation, the default setting is to determine the target parameters based on the first manner and the second manner. However, if the Nth captured image does not include the target object, determining the target parameter according to both manners would reduce the target parameter in a scene without a target object, because the two manners are combined according to certain weights. The target parameter may therefore be determined directly based on the pixel information of the Nth captured image determined in the first manner. In this way, when there is no target object in the picture, the brightness, white balance, and the like of the picture are neither reduced nor otherwise affected.
In an optional embodiment, if the position of the target object does not belong to the target area, the processing method further includes:
if the confirmation instruction is obtained, pixel information of the target object is determined as a factor for determining the target parameter.
In specific implementation, in some specific scenes, although the position of the target object does not belong to the target area, the target object outside the target area is nevertheless to be treated as the subject according to actual needs. In that case the pixel information of the target object can be determined as a factor for determining the target parameter by obtaining a confirmation instruction, which can improve the user experience.
In one implementation, the confirmation instruction is obtained if the position of the target object is unchanged from the Nth captured image to the (N+M)th captured image.
In this implementation, if the position of the target object is unchanged from the Nth captured image to the (N+M)th captured image, indicating that the target object stays in the frame for a long time rather than entering it briefly as a passer-by would, a confirmation instruction may be generated to confirm that the target object whose position does not belong to the target area is a subject, and the pixel information of the target object is used as a factor for determining the target parameter. In this way the confirmation instruction can be generated automatically, which fits the general pattern of image capture.
In another implementation, a confirmation instruction is obtained if a confirmation operation for prompt information is obtained, the prompt information being used to prompt whether the target object is a factor for confirming the target parameter.
In specific implementation, as shown in FIG. 3 for example, the Nth captured image includes a plurality of target objects and the target area is the central area. Some of the target objects, such as the two girls in FIG. 3, have positions that belong to the target area, while others, such as the boy in FIG. 3, have positions that do not. Prompt information may then be issued for the target objects outside the target area, for example the prompt box shown in FIG. 3, to ask whether each such target object should be used as a factor for confirming the target parameters. If the user performs a confirmation operation confirming that the target object is such a factor, the confirmation operation for the prompt information is obtained, a confirmation instruction is obtained, the target object whose position does not belong to the target area is confirmed as a subject, and the pixel information of the target object is treated as a factor for confirming the target parameter.
In this implementation, the user is given the choice of whether a target object outside the target area is used as a factor for confirming the target parameter, which can improve the user experience.
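The two ways of obtaining the confirmation instruction could be combined as in the sketch below; ask_user() is a hypothetical UI callback standing in for the prompt box, and the tolerance is an assumption:

```python
def confirmation_instruction(face_boxes, m, ask_user, tol=10):
    """Two ways to obtain the confirmation instruction for an edge face."""
    # Way 1: the position is unchanged from the Nth to the (N+M)th image.
    x0, y0, _, _ = face_boxes[0]
    stable = len(face_boxes) > m and all(
        abs(x - x0) <= tol and abs(y - y0) <= tol
        for x, y, _, _ in face_boxes[1:m + 1])
    if stable:
        return True
    # Way 2: a confirmation operation on the on-screen prompt.
    return ask_user("Use this edge face as a factor for AE/AWB/AF?")
```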
In an optional embodiment, the processing method further includes:
and setting a target area, wherein the target area is smaller than the photosensitive area of the camera module.
Wherein the setting of the target area includes any one of:
obtaining an input operation, and setting a target area based on the input operation;
the target area is set based on the shooting scene.
In specific implementation, when the target area is set, the user may define the position and size of the target area in the viewfinder frame, thereby setting the target area; in this way the target area can be set manually.
Alternatively, the target area may be set automatically for the shooting scene by identifying the shooting scene in the Nth captured image. In this way the target area can be set automatically and can fit the shooting scene.
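As an illustration of these two options, the following sketch sets the target area either from a user-dragged rectangle or from a per-scene preset; the preset table and the fractions in it are assumptions, since the patent only requires the area to be smaller than the sensor's photosensitive area:

```python
def target_area_from_input(x, y, w, h, sensor_w, sensor_h):
    """User drags a rectangle in the viewfinder; clamp it inside the sensor."""
    x, y = max(0, x), max(0, y)
    return (x, y, min(w, sensor_w - x), min(h, sensor_h - y))

SCENE_PRESETS = {                                 # assumed per-scene presets,
    "portrait":  (0.25, 0.15, 0.50, 0.70),        # as fractions of the frame
    "landscape": (0.35, 0.35, 0.30, 0.30),
}

def target_area_from_scene(scene, sensor_w, sensor_h):
    fx, fy, fw, fh = SCENE_PRESETS.get(scene, (0.3, 0.3, 0.4, 0.4))
    return (int(fx * sensor_w), int(fy * sensor_h),
            int(fw * sensor_w), int(fh * sensor_h))
```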
In an optional embodiment, before determining whether the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter, the processing method further includes:
if the Nth collected image comprises the target object based on the second mode, sending second prompt information to the user, wherein the second prompt information is used for prompting the user whether to close the second mode for determining the target parameter;
if the confirmation operation aiming at the second prompt message is obtained, determining a target parameter aiming at the Nth collected image based on a first mode;
and if the confirmation operation aiming at the second prompt information is not obtained, determining the target object aiming at the Nth acquired image based on the first mode and the second mode.
In specific implementation, when the camera is powered on, the default setting is to determine the target parameters based on the first manner and the second manner. After the camera is started, once it is determined that the Nth captured image contains a face, the second prompt information may be popped up to the user, prompting the user whether to turn off the second manner. If the user chooses to turn off the second manner, the target parameters can be determined directly based on the first manner, and the camera no longer identifies the target object in the Nth captured image based on the second manner. If the user chooses not to turn off the second manner, the target parameters are still determined based on the first manner and the second manner.
In one implementation, if the user chooses not to turn off the second manner, the pixel information of the target object included in the Nth captured image determined based on the second manner may be directly determined as a factor for determining the target parameter.
In another implementation, if the user chooses not to turn off the second manner, the position of the target object may be further determined; if the position of the target object belongs to the target area, the pixel information of the target object is determined as a factor for determining the target parameter, and if the position of the target object does not belong to the target area, the pixel information of the target object is not taken as such a factor.
In this embodiment, as long as the Nth captured image includes the target object, second prompt information is sent to the user to prompt whether to turn off the second manner for determining the target parameters. The user can thus choose autonomously, according to actual requirements, whether to turn off the second manner, which improves both the user experience and the accuracy of the target parameter calculation.
The processing method of the present application is further described below with a specific embodiment.
As shown in FIG. 4, after the camera recognizes a face, it first determines where the portrait appears. If the portrait appears in the central area, the camera detects whether the portrait is stable, and once it is stable outputs the pixel information of the face to the platform end for the AE, AWB and AF operations. If the face appears in the edge area of the image, the pixel information of the face is not sent to the platform end at first; instead, prompt information is displayed in the preview interface and the user selects whether the face information of the edge area is needed. If the user clicks the confirmation option, the face information of the edge area is sent to the platform end for the AE, AWB and AF operations; if the user clicks the No option, the face information of the edge area is not sent and the platform end does not receive that face information, thereby avoiding changes in the brightness, color and focusing of the picture caused by a passer-by appearing in the frame.
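Tying the earlier sketches together, the FIG. 4 flow might be expressed as below, reusing detect_target_objects(), in_target_area() and position_stable() from the previous blocks; platform_update() is a hypothetical stand-in for the platform-side AE/AWB/AF routines:

```python
def process_frame(frame, recent_boxes, target_area, k, ask_user, platform_update):
    """recent_boxes: the primary face's (x, y, w, h) over the last k+1 frames."""
    faces = detect_target_objects(frame)
    if not faces:
        platform_update(face_info=None)          # first manner only
        return
    box = faces[0]                               # consider the primary face
    if in_target_area(box, target_area):
        if position_stable(recent_boxes, k):     # wait until the face is stable
            platform_update(face_info=box)       # feed AE, AWB and AF
    elif ask_user("Use the edge face for AE/AWB/AF?"):
        platform_update(face_info=box)
    # Otherwise the edge face is ignored, so a passer-by cannot shift the
    # picture's brightness, color or focus.
```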
According to an embodiment of the present application, there is also provided an electronic device including:
an image acquisition device;
the processor is used for obtaining the Nth collected image; determining target parameters based on a first mode and a second mode aiming at the Nth acquired image;
wherein, prior to determining the target parameter, the processor is further configured to: it is determined whether pixel information of the target object included in the nth captured image determined based on the second manner is a factor for determining the target parameter.
In an alternative embodiment, the determining whether the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter includes:
determining the position of the target object if it is determined, based on the second manner, that the Nth captured image includes the target object;
if the position of the target object belongs to the target area, determining pixel information of the target object as a factor for determining a target parameter;
if the position of the target object does not belong to the target area, the pixel information of the target object is not a factor in determining the target parameter.
In an alternative embodiment, the processor is further configured to determine the target parameter based on the pixel information of the Nth captured image determined in the first manner if it is determined based on the second manner that the Nth captured image does not include the target object.
In an optional embodiment, if the position of the target object does not belong to the target area, the processor is further configured to:
determine, if a confirmation instruction is obtained, the pixel information of the target object as a factor for determining the target parameter.
In an alternative embodiment, obtaining the confirmation instruction includes any one of:
the position of the target object is unchanged from the Nth captured image to the (N+M)th captured image, and the confirmation instruction is obtained;
a confirmation operation for prompt information is obtained, and the confirmation instruction is obtained, the prompt information being used to prompt whether the target object is a factor for confirming the target parameter.
In an optional embodiment, the processor is further configured to set a target area, where the target area is smaller than a photosensitive area of the camera module;
the setting of the target area includes any one of:
obtaining an input operation, and setting a target area based on the input operation;
the target area is set based on the shooting scene.
It should be noted that, the description of the electronic device in the embodiment of the present application is similar to the description of the processing method embodiment described above, and has similar beneficial effects to the method embodiment, and therefore, the description is omitted here.
An exemplary application of the electronic device provided by the embodiments of the present application is described below. Referring to FIG. 5, FIG. 5 shows a schematic block diagram of an exemplary electronic device 800 that may be used to implement the embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in FIG. 5, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 801 executes the methods and processes described above, such as the processing method. For example, in some embodiments, the processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the processing method described above can be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed in the present application can be achieved.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A processing method, comprising:
obtaining an Nth captured image;
determining target parameters for the Nth captured image based on a first manner and a second manner;
wherein before determining the target parameters, the method further comprises:
determining whether pixel information of a target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter.
2. The processing method according to claim 1, wherein the determining whether pixel information of a target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter comprises:
determining a position of the target object if it is determined, based on the second manner, that the Nth captured image includes the target object;
if the position of the target object belongs to a target area, determining pixel information of the target object as a factor for determining the target parameter;
if the position of the target object does not belong to the target area, the pixel information of the target object is not taken as a factor for determining the target parameter.
3. The processing method of claim 2, the method further comprising:
determining the target parameter based on the pixel information of the Nth captured image determined in the first manner if it is determined based on the second manner that the Nth captured image does not include the target object.
4. The processing method according to claim 2, further comprising, if the position of the target object does not belong to the target area:
if a confirmation instruction is obtained, pixel information of the target object is determined as a factor for determining the target parameter.
5. The processing method according to claim 4, wherein obtaining the confirmation instruction comprises any one of:
the position of the target object is unchanged from the Nth captured image to the (N+M)th captured image, and a confirmation instruction is obtained;
a confirmation operation for prompt information is obtained, and a confirmation instruction is obtained, wherein the prompt information is used to prompt whether the target object is to be used as a factor for confirming the target parameter.
6. The processing method of claim 2, the method further comprising:
setting the target area, wherein the target area is smaller than the photosensitive area of the camera module;
setting the target area includes any one of:
obtaining an input operation, and setting the target area based on the input operation;
and setting the target area based on the shooting scene.
7. An electronic device, comprising:
an image acquisition device;
the processor is used for obtaining the Nth collected image; determining target parameters based on a first mode and a second mode for the Nth acquired image;
wherein, prior to determining the target parameter, the processor is further configured to: determining whether pixel information of a target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter.
8. The electronic device of claim 7, wherein the determining whether the pixel information of the target object included in the Nth captured image determined based on the second manner is a factor for determining the target parameter comprises:
determining a position of the target object if it is determined, based on the second manner, that the Nth captured image includes the target object;
if the position of the target object belongs to a target area, determining pixel information of the target object as a factor for determining the target parameter;
if the position of the target object does not belong to the target area, the pixel information of the target object is not taken as a factor for determining the target parameter.
9. The electronic device of claim 8, wherein
the processor is further configured to determine the target parameter based on the pixel information of the Nth captured image determined in the first manner if it is determined, based on the second manner, that the Nth captured image does not include a target object.
10. The electronic device of claim 8, wherein if the position of the target object does not belong to the target area, the processor is further configured to:
determine, if a confirmation instruction is obtained, pixel information of the target object as a factor for determining the target parameter.
CN202211530950.6A, filed 2022-12-01: Processing method and electronic equipment (Pending)

Priority Applications (1)

Application Number: CN202211530950.6A; Priority Date: 2022-12-01; Filing Date: 2022-12-01; Title: Processing method and electronic equipment

Publications (1)

Publication Number: CN115866389A; Publication Date: 2023-03-28

Family Applications (1)

Family ID: 85668957; Application Number: CN202211530950.6A; Priority Date: 2022-12-01; Filing Date: 2022-12-01; Title: Processing method and electronic equipment

Country Status (1)

Country: CN; Publication: CN115866389A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination