CN118233746A - Focusing method, focusing device, electronic equipment and storage medium - Google Patents

Focusing method, focusing device, electronic equipment and storage medium

Info

Publication number
CN118233746A
CN118233746A · Application CN202211642431.9A
Authority
CN
China
Prior art keywords
focusing
sub
frame
target object
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211642431.9A
Other languages
Chinese (zh)
Inventor
梁文俊
吴美芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202211642431.9A
Publication of CN118233746A
Legal status: Pending

Landscapes

  • Studio Devices (AREA)

Abstract

The application provides a focusing method, a focusing device, electronic equipment and a storage medium, applied to the field of computer technology. The method comprises the following steps: determining first focusing reference information of a focusing frame based on the color difference between the region where a target object is located in the shooting picture and the region where the focusing frame of the target object is located, and then determining second focusing reference information of each sub focusing frame according to the plurality of sub focusing frames of the focusing frame and the first focusing reference information.

Description

Focusing method, focusing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a focusing method, a focusing device, an electronic device, and a storage medium.
Background
Currently, on electronic devices such as mobile phones and tablet computers, the camera is one of the important applications, and users can take pictures, videos, and the like through the camera on the electronic device. In general, when a user shoots, the electronic device can automatically focus on the shot object to adapt to movement of the shot object and present a clear shooting picture.
In the related art, taking a face as an example of the photographed object: during photographing, the electronic device detects the area where the face is located in the shooting picture to obtain a face detection frame, then adjusts the width and height of the face detection frame based on preset parameters to obtain a focusing frame, so that the focusing frame is as large as possible while containing as little background as possible, and finally automatic focusing is realized based on the focusing frame.
However, in the above method the preset parameters are often fixed; that is, faces of different sizes are adjusted with the same adjustment range, so the focusing frame easily deviates or contains too much background, and a virtual-focus (defocus) phenomenon easily occurs when focusing is performed based on such a focusing frame.
Disclosure of Invention
The embodiment of the application provides a focusing method, a focusing device, electronic equipment and a storage medium, which can avoid the phenomenon of virtual focus during focusing and improve the focusing effect. The technical scheme is as follows:
In one aspect, a focusing method is provided, the method comprising:
Determining first focusing reference information of a focusing frame based on a color difference between a region where a target object is located and a region where a focusing frame of the target object is located in a shot picture, wherein the first focusing reference information indicates whether pictures corresponding to sub-blocks of the focusing frame belong to the target object or not, and the focusing frame is determined based on the region where the target object is located;
determining second focusing reference information of each sub focusing frame based on the first focusing reference information of the focusing frame and a plurality of sub focusing frames of the focusing frame, wherein the second focusing reference information indicates the number of sub-blocks belonging to the target object in the sub focusing frame, and pictures corresponding to the sub focusing frames do not overlap with each other;
And adjusting the focusing frame based on the position of each sub focusing frame in the focusing frame and the second focusing reference information of each sub focusing frame, and focusing the target object based on the adjusted focusing frame, wherein the number of sub-blocks belonging to the target object in the adjusted focusing frame meets the target condition.
In some embodiments, the determining the first focusing reference information of the focusing frame based on the color difference between the region where the target object is located and the region where the focusing frame of the target object is located in the shooting picture includes:
Acquiring color differences between the region of the target object and each sub-block of the focusing frame based on first color information of the region of the target object and second color information of the region of the focusing frame in the shooting picture, wherein the first color information indicates component values of each color channel in the region of the target object, and the second color information indicates component values of each color channel in each sub-block of the focusing frame;
The first focus reference information is determined based on the color difference.
In some embodiments, based on the first color information of the area where the target object is located and the second color information of the area where the focusing frame is located in the photographing picture, obtaining the color difference between the area where the target object is located and each sub-block of the focusing frame includes:
And determining the color difference between the region where the target object is located and the target sub-block based on the component values of each color channel in the region where the target object is located and the component values of each color channel in the target sub-block, wherein the target sub-block refers to any sub-block of the focusing frame.
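The per-channel comparison described above can be sketched as follows. This is a minimal illustration, assuming the color difference is the sum of absolute per-channel differences (e.g. over R, G and B component values); the patent does not fix a particular metric, and the function name is hypothetical.

```python
def color_difference(object_channels, block_channels):
    # Sum of absolute differences between the component values of each
    # color channel in the target-object region and in one sub-block.
    # The choice of metric is an assumption; the patent leaves it open.
    return sum(abs(a - b) for a, b in zip(object_channels, block_channels))

# Example: object region vs. one sub-block, (R, G, B) component values.
diff = color_difference((100, 120, 90), (110, 115, 95))  # 10 + 5 + 5 = 20
```

Any other distance over the channel component values (e.g. Euclidean) would fit the claim equally well.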
In some embodiments, determining the first focus reference information based on the color difference includes:
If the color difference between the area where the target object is located and the target sub-block is smaller than a first threshold value, determining that the picture corresponding to the target sub-block belongs to the target object, wherein the target sub-block refers to any sub-block of the focusing frame;
If the color difference between the area where the target object is located and the target sub-block is greater than or equal to the first threshold value, determining that the picture corresponding to the target sub-block does not belong to the target object.
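The thresholding step above can be sketched as a per-block labeling — a hedged illustration in which a sub-block whose color difference falls strictly below the first threshold is labeled as belonging to the target object (the names are hypothetical):

```python
def first_focus_reference(block_color_diffs, first_threshold):
    # One boolean per sub-block of the focusing frame: True if the
    # picture corresponding to that sub-block is taken to belong to
    # the target object (difference strictly below the first threshold),
    # False if the difference is greater than or equal to it.
    return [diff < first_threshold for diff in block_color_diffs]

labels = first_focus_reference([3, 10, 25], 10)  # [True, False, False]
```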
In some embodiments, determining second focus reference information for each of the sub-focus frames based on the first focus reference information for the focus frame and the plurality of sub-focus frames for the focus frame comprises:
Determining the number of sub-blocks belonging to the target object in a target sub focusing frame based on the first focusing reference information, wherein the target sub focusing frame refers to any one of the plurality of sub focusing frames of the focusing frame;
and determining second focusing reference information of the target sub-focusing frame based on the ratio of the number of sub-blocks belonging to the target object in the target sub-focusing frame to the total number of sub-blocks of the target sub-focusing frame.
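As a sketch of this ratio computation (hypothetical names; the second focusing reference information is assumed here to be the ratio itself):

```python
def second_focus_reference(sub_block_labels):
    # Ratio of the number of sub-blocks belonging to the target object
    # to the total number of sub-blocks in one sub focusing frame.
    return sum(sub_block_labels) / len(sub_block_labels)

ratio = second_focus_reference([True, True, False, False])  # 0.5
```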
In some embodiments, adjusting the focus frame based on the position of each of the sub focus frames in the focus frame and the second focus reference information of each of the sub focus frames comprises:
determining focusing differences among the sub-focusing frames based on second focusing reference information of the sub-focusing frames;
if the focusing difference among the sub focusing frames meets a first condition, determining a plurality of extended focusing frames on both sides outside the focusing frame;
And adjusting the focusing frame based on the position of each sub focusing frame in the focusing frame and third focusing reference information of each extended focusing frame, wherein the third focusing reference information indicates the number of sub blocks belonging to the target object in the extended focusing frame.
In some embodiments, adjusting the focus frame based on the position of each of the sub focus frames in the focus frame and the third focus reference information of each of the extended focus frames comprises:
Determining a focusing difference among the first sub-focusing frame, the second sub-focusing frame and the first extended focusing frame based on the second focusing reference information of the first sub-focusing frame, the second focusing reference information of the second sub-focusing frame and the third focusing reference information of the first extended focusing frame, wherein the first extended focusing frame is adjacent to the first sub-focusing frame, and the first sub-focusing frame is adjacent to the second sub-focusing frame;
If the focusing difference between the first extended focusing frame and the first sub focusing frame is smaller than a second threshold value, and the focusing difference between the first extended focusing frame and the second sub focusing frame is smaller than the second threshold value, determining the adjusted focusing frame based on the first extended focusing frame, the first sub focusing frame and the second sub focusing frame.
In some embodiments, the method further comprises:
If the focusing difference between the first extended focusing frame and the first sub focusing frame is greater than or equal to the second threshold value, or the focusing difference between the first extended focusing frame and the second sub focusing frame is greater than or equal to the second threshold value, determining the focusing difference among the third sub focusing frame, the second sub focusing frame and the second extended focusing frame based on the second focusing reference information of the third sub focusing frame, the second focusing reference information of the second sub focusing frame and the third focusing reference information of the second extended focusing frame, wherein the second extended focusing frame is adjacent to the third sub focusing frame, and the third sub focusing frame is adjacent to the second sub focusing frame;
If the focusing difference between the second extended focusing frame and the third sub focusing frame is smaller than a second threshold value, and the focusing difference between the second extended focusing frame and the second sub focusing frame is smaller than the second threshold value, determining the adjusted focusing frame based on the second extended focusing frame, the second sub focusing frame and the third sub focusing frame.
In some embodiments, the method further comprises:
and if the focusing difference between the second extended focusing frame and the third sub focusing frame is greater than or equal to the second threshold value, or the focusing difference between the second extended focusing frame and the second sub focusing frame is greater than or equal to the second threshold value, not adjusting the focusing frame, and focusing the target object based on the focusing frame.
In some embodiments, the method further comprises:
and if the focusing difference among the sub focusing frames does not meet the first condition, not adjusting the focusing frame, and focusing the target object based on the focusing frame.
In some embodiments, the first condition comprises:
The focusing difference between the first sub focusing frame and the second sub focusing frame is greater than a second threshold value, and the focusing difference between the first sub focusing frame and the third sub focusing frame is greater than the second threshold value; or
The focusing difference between the second sub focusing frame and the third sub focusing frame is greater than the second threshold value, and the focusing difference between the first sub focusing frame and the third sub focusing frame is greater than the second threshold value;
the first sub focusing frame is adjacent to the second sub focusing frame, and the second sub focusing frame is adjacent to the third sub focusing frame.
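The first condition over three adjacent sub focusing frames can be sketched as below. The focusing difference is assumed here to be the absolute difference of the second-focusing-reference ratios; the patent does not pin down its exact form, and all names are illustrative.

```python
def meets_first_condition(r1, r2, r3, second_threshold):
    # r1, r2, r3: second focusing reference values of the first, second
    # and third sub focusing frames (first adjacent to second, second
    # adjacent to third). Focusing difference is taken as |ri - rj|,
    # which is an assumption about the metric.
    d12, d23, d13 = abs(r1 - r2), abs(r2 - r3), abs(r1 - r3)
    return (d12 > second_threshold and d13 > second_threshold) or \
           (d23 > second_threshold and d13 > second_threshold)

# A frame whose sub focusing frames differ sharply from one another
# satisfies the condition, prompting the extended-frame search.
```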
In another aspect, there is provided a focusing device including:
The first determining module is used for determining first focusing reference information of a focusing frame based on a color difference between a region where a target object is located in a shot picture and a region where the focusing frame of the target object is located, wherein the first focusing reference information indicates whether pictures corresponding to all sub-blocks of the focusing frame belong to the target object or not, and the focusing frame is determined based on the region where the target object is located;
The second determining module is used for determining second focusing reference information of each sub focusing frame based on the first focusing reference information of the focusing frame and a plurality of sub focusing frames of the focusing frame, wherein the second focusing reference information indicates the number of sub-blocks belonging to the target object in the sub focusing frame, and pictures corresponding to the sub focusing frames do not overlap with each other;
And the focusing module is used for adjusting the focusing frame based on the position of each sub focusing frame in the focusing frame and the second focusing reference information of each sub focusing frame, and focusing the target object based on the adjusted focusing frame, wherein the number of the sub blocks belonging to the target object in the adjusted focusing frame meets the target condition.
In some embodiments, the first determination module includes:
An obtaining unit, configured to obtain a color difference between an area where the target object is located and each sub-block of the focusing frame based on first color information of the area where the target object is located and second color information of the area where the focusing frame is located in the photographing picture, where the first color information indicates component values of each color channel in the area where the target object is located, and the second color information indicates component values of each color channel in each sub-block of the focusing frame;
and a first determination unit configured to determine the first focus reference information based on the color difference.
In some embodiments, the acquiring unit is configured to:
And determining the color difference between the region where the target object is located and the target sub-block based on the component values of each color channel in the region where the target object is located and the component values of each color channel in the target sub-block, wherein the target sub-block refers to any sub-block of the focusing frame.
In some embodiments, the first determination unit is configured to:
If the color difference between the area where the target object is located and the target sub-block is smaller than a first threshold value, determining that the picture corresponding to the target sub-block belongs to the target object, wherein the target sub-block refers to any sub-block of the focusing frame;
If the color difference between the area where the target object is located and the target sub-block is greater than or equal to the first threshold value, determining that the picture corresponding to the target sub-block does not belong to the target object.
In some embodiments, the second determining module is configured to:
Determining the number of sub-blocks belonging to the target object in a target sub focusing frame based on the first focusing reference information, wherein the target sub focusing frame refers to any one of the plurality of sub focusing frames of the focusing frame;
and determining second focusing reference information of the target sub-focusing frame based on the ratio of the number of sub-blocks belonging to the target object in the target sub-focusing frame to the total number of sub-blocks of the target sub-focusing frame.
In some embodiments, the focusing module comprises:
a second determining unit configured to determine a focus difference between the respective sub-focus frames based on second focus reference information of the respective sub-focus frames;
a third determining unit, configured to determine a plurality of extended focusing frames on both sides outside the focusing frame if a focusing difference between the sub focusing frames satisfies a first condition;
And the adjusting unit is used for adjusting the focusing frame based on the position of each sub focusing frame in the focusing frame and the third focusing reference information of each extended focusing frame, wherein the third focusing reference information indicates the number of sub blocks belonging to the target object in the extended focusing frame.
In some embodiments, the adjusting unit is configured to:
Determining a focusing difference among the first sub-focusing frame, the second sub-focusing frame and the first extended focusing frame based on the second focusing reference information of the first sub-focusing frame, the second focusing reference information of the second sub-focusing frame and the third focusing reference information of the first extended focusing frame, wherein the first extended focusing frame is adjacent to the first sub-focusing frame, and the first sub-focusing frame is adjacent to the second sub-focusing frame;
If the focusing difference between the first extended focusing frame and the first sub focusing frame is smaller than a second threshold value, and the focusing difference between the first extended focusing frame and the second sub focusing frame is smaller than the second threshold value, determining the adjusted focusing frame based on the first extended focusing frame, the first sub focusing frame and the second sub focusing frame.
In some embodiments, the adjusting unit is further configured to:
If the focusing difference between the first extended focusing frame and the first sub focusing frame is greater than or equal to the second threshold value, or the focusing difference between the first extended focusing frame and the second sub focusing frame is greater than or equal to the second threshold value, determining the focusing difference among the third sub focusing frame, the second sub focusing frame and the second extended focusing frame based on the second focusing reference information of the third sub focusing frame, the second focusing reference information of the second sub focusing frame and the third focusing reference information of the second extended focusing frame, wherein the second extended focusing frame is adjacent to the third sub focusing frame, and the third sub focusing frame is adjacent to the second sub focusing frame;
If the focusing difference between the second extended focusing frame and the third sub focusing frame is smaller than a second threshold value, and the focusing difference between the second extended focusing frame and the second sub focusing frame is smaller than the second threshold value, determining the adjusted focusing frame based on the second extended focusing frame, the second sub focusing frame and the third sub focusing frame.
In some embodiments, the adjusting unit is further configured to:
and if the focusing difference between the second extended focusing frame and the third sub focusing frame is greater than or equal to the second threshold value, or the focusing difference between the second extended focusing frame and the second sub focusing frame is greater than or equal to the second threshold value, not adjusting the focusing frame, and focusing the target object based on the focusing frame.
In some embodiments, the focusing module is further configured to:
and if the focusing difference among the sub focusing frames does not meet the first condition, not adjusting the focusing frame, and focusing the target object based on the focusing frame.
In some embodiments, the first condition comprises:
The focusing difference between the first sub focusing frame and the second sub focusing frame is greater than a second threshold value, and the focusing difference between the first sub focusing frame and the third sub focusing frame is greater than the second threshold value; or
The focusing difference between the second sub focusing frame and the third sub focusing frame is greater than the second threshold value, and the focusing difference between the first sub focusing frame and the third sub focusing frame is greater than the second threshold value;
the first sub focusing frame is adjacent to the second sub focusing frame, and the second sub focusing frame is adjacent to the third sub focusing frame.
In another aspect, an electronic device is provided that includes a processor and a memory for storing at least one computer program that is loaded and executed by the processor to implement a focusing method in an embodiment of the present application.
In another aspect, a computer readable storage medium having at least one computer program stored therein is provided, the at least one computer program loaded and executed by a processor to implement a focusing method in an embodiment of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer program code, the computer program code being stored in a computer readable storage medium. The processor of the electronic device reads the computer program code from the computer readable storage medium, and the processor executes the computer program code so that the electronic device executes to implement the focusing method in the embodiment of the present application.
In the embodiment of the application, the first focusing reference information of the focusing frame is determined based on the color difference between the region where the target object is located in the shooting picture and the region where the focusing frame of the target object is located, and the second focusing reference information of each sub focusing frame is then determined according to the plurality of sub focusing frames of the focusing frame and the first focusing reference information.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an implementation environment of a focusing method according to an embodiment of the present application;
FIG. 2 is a flow chart of a focusing method provided according to an embodiment of the present application;
FIG. 3 is a flow chart of another focusing method provided according to an embodiment of the present application;
FIG. 4 is a schematic diagram of acquiring first color information according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another method for acquiring first color information according to an embodiment of the present application;
fig. 6 is a schematic diagram of each sub-block in a shot picture according to an embodiment of the present application;
FIG. 7 is a diagram of second focus reference information according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an extended focus frame provided according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a focusing method according to an embodiment of the present application;
fig. 10 is a schematic structural view of a focusing device according to an embodiment of the present application;
Fig. 11 is a schematic structural view of a terminal according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terms "first," "second," and the like in this disclosure are used to distinguish between similar elements having substantially the same function and effect. It should be understood that there is no logical or chronological dependency among "first," "second," through "nth," and no limitation on the number of elements or the order of execution. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by these terms.
These terms are only used to distinguish one element from another element. For example, the first focusing frame can be referred to as a second focusing frame, and similarly, the second focusing frame can also be referred to as a first focusing frame, without departing from the scope of the various examples. The first focusing frame and the second focusing frame may both be focusing frames, and in some cases, may be separate and distinct focusing frames.
"At least one focusing frame" refers to one or more focusing frames, that is, an integer number of focusing frames greater than or equal to one, such as one, two, or three focusing frames. "A plurality of focusing frames" refers to two or more, that is, an integer number of focusing frames greater than or equal to two, such as two or three focusing frames.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the color information and the like related to the embodiment of the application are acquired under the condition of full authorization.
The following describes an implementation environment related to an embodiment of the present application.
Fig. 1 is a schematic diagram of an implementation environment of a focusing method according to an embodiment of the present application. The implementation environment comprises: an electronic device 101 and a server 102. The electronic device 101 and the server 102 can be directly or indirectly connected through a wired network or a wireless network, and the present application is not limited herein.
The electronic device 101 is at least one of a terminal device such as a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality electronic device, an augmented reality electronic device, or a wireless electronic device. The electronic device 101 may be referred to generally as one of a plurality of electronic devices; embodiments of the application are illustrated only with the electronic device 101. Those skilled in the art will appreciate that the number of electronic devices described above may be greater or lesser. The electronic device 101 has a photographing function and is capable of photographing a subject to obtain a corresponding picture, video, or the like. For example, the electronic device 101 is an electronic device used by a user, an application program having a photographing function runs on the electronic device 101, and a user account is logged in to the application program.
In some embodiments, the number and types of electronic devices are not limited in the embodiments of the present application.
The server 102 is an independent physical server, a server cluster or distributed file system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms. The server 102 is used to provide background services for applications running on the electronic device 101. For example, the user takes a picture through the electronic device 101 and uploads it to the server 102, and the server 102 further processes the picture; this is not limited here. In addition, the server 102 may also include other functional servers to provide more comprehensive and diverse services.
In some embodiments, during focusing, the server 102 takes on secondary computing work and the electronic device 101 takes on primary computing work; alternatively, the server 102 or the electronic device 101 can each undertake the computing work alone, without limitation.
In some embodiments, the wired or wireless network described above uses standard communication techniques and/or protocols. The network is typically the internet, but can be any network including, but not limited to, a local area network (Local Area Network, LAN), metropolitan area network (Metropolitan Area Network, MAN), wide area network (Wide Area Network, WAN), a mobile, wired or wireless network, a private network, or any combination of virtual private networks. In some embodiments, the data exchanged over the network is represented using techniques and/or formats including hypertext markup language (Hyper Text Markup Language, HTML), extensible markup language (Extensible Markup Language, XML), and the like. In addition, all or some of the links can be encrypted using conventional encryption techniques such as secure sockets layer (Secure Socket Layer, SSL), transport layer security (Transport Layer Security, TLS), virtual private network (Virtual Private Network, VPN), internet protocol security (Internet Protocol Security, IPsec), etc. In other embodiments, custom and/or dedicated data communication techniques can also be used in place of or in addition to the data communication techniques described above.
The focusing method provided by the embodiment of the application is described below through several method embodiments.
Fig. 2 is a flowchart of a focusing method according to an embodiment of the present application. As shown in fig. 2, the method is performed by the electronic device shown in fig. 1 described above, and illustratively the method includes steps 201 to 203 described below.
201. The electronic device determines first focusing reference information of a focusing frame based on a color difference between the region where a target object is located and the region where the focusing frame of the target object is located in a shot picture, where the first focusing reference information indicates whether the picture corresponding to each sub-block of the focusing frame belongs to the target object, and the focusing frame is determined based on the region where the target object is located.
In the embodiment of the present application, the electronic device is provided with a camera application, and the user can realize a shooting function through the camera application. In the process of shooting through the camera application, the electronic device can automatically focus on a target object in the shot picture so as to present a clear picture. For example, the target object may be a human face, an animal, an automobile, etc., which is not limited by the embodiment of the present application.
Schematically, in the shooting process, the electronic device detects the region where the target object is located to obtain a detection frame of the target object, and adjusts the detection frame to obtain the focusing frame of the target object. For example, taking the target object being a face, in the shooting process the electronic device invokes a Face Detection (FD) algorithm to detect the region where the face is located in the shot picture to obtain a face detection frame, and further invokes an Auto Focus (AF) algorithm to adjust the face detection frame to obtain a face focusing frame, for example by adjusting the width and height of the face detection frame according to preset parameters so that the picture within the face focusing frame belongs to the same depth of field. Of course, in some embodiments, the electronic device can directly use the detection frame as the focusing frame of the target object, which is not limited by the embodiment of the present application.
The focusing frame of the target object includes a plurality of sub-blocks (child blocks), and the electronic device determines whether the picture corresponding to each sub-block in the focusing frame belongs to the target object based on the color difference between the region where the target object is located in the shot picture and the region where the focusing frame of the target object is located, so as to obtain the first focusing reference information of the focusing frame. For example, the electronic device invokes an Automatic White Balance (AWB) algorithm to divide the shot picture into a plurality of sub-blocks (for example, 32×24 sub-blocks, which can be set according to the practical application and is not limited here) and counts the RGB color information of each sub-block in order to perform white balance processing on the shot picture; on this basis, when determining the focusing frame, the electronic device determines the related information of each sub-block included in the focusing frame.
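The sub-block grid described above can be sketched as follows. This is a minimal illustration assuming a frame given only by its pixel dimensions; the 32×24 grid size is the example value from the text, while the function name and the integer rounding are illustrative choices:

```python
def split_into_blocks(width, height, cols=32, rows=24):
    """Divide a width x height frame into a cols x rows grid of sub-blocks,
    returning each block's pixel rectangle (x, y, w, h). Block edges use
    integer division so the blocks tile the frame exactly."""
    blocks = []
    for r in range(rows):
        for c in range(cols):
            x0 = c * width // cols
            y0 = r * height // rows
            x1 = (c + 1) * width // cols
            y1 = (r + 1) * height // rows
            blocks.append((x0, y0, x1 - x0, y1 - y0))
    return blocks
```

Statistics such as the per-block average RGB would then be accumulated over each rectangle when the AWB algorithm runs.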
202. The electronic device determines second focusing reference information of each sub-focusing frame based on the first focusing reference information of the focusing frame and a plurality of sub-focusing frames of the focusing frame, where the second focusing reference information indicates the number of sub-blocks belonging to the target object in the sub-focusing frame, and the pictures corresponding to the sub-focusing frames do not overlap with each other.
In the embodiment of the present application, the focusing frame includes a plurality of sub-focusing frames, and each sub-focusing frame includes a plurality of sub-blocks. For any sub-focusing frame, the electronic device determines the second focusing reference information of the sub-focusing frame based on the first focusing reference information of the focusing frame and the number of sub-blocks belonging to the target object among the plurality of sub-blocks of the sub-focusing frame. In some embodiments, the electronic device divides the focusing frame equally according to a target number to obtain the plurality of sub-focusing frames. For example, if the target number is 3, the electronic device divides the focusing frame equally to obtain three sub-focusing frames, i.e., a left sub-focusing frame, a middle sub-focusing frame and a right sub-focusing frame, which is not limited here.
203. The electronic device adjusts the focusing frame based on the position of each sub-focusing frame in the focusing frame and the second focusing reference information of each sub-focusing frame, and focuses on the target object based on the adjusted focusing frame, where the number of sub-blocks belonging to the target object in the adjusted focusing frame meets a target condition.
In the embodiment of the present application, the number of sub-blocks belonging to the target object in the adjusted focusing frame meeting the target condition means that the adjusted focusing frame contains as much of the region where the target object is located as possible, so that the out-of-focus (virtual focus) phenomenon can be avoided when focusing on the target object based on the adjusted focusing frame, improving the focusing effect. The electronic device determines the focusing difference between the sub-focusing frames based on the position of each sub-focusing frame in the focusing frame and the second focusing reference information of each sub-focusing frame, so as to determine whether the size and/or position of the focusing frame needs to be adjusted. The focusing difference indicates the difference between the numbers of sub-blocks belonging to the target object contained in different sub-focusing frames; in other words, if the focusing difference between two sub-focusing frames is large, one of the sub-focusing frames contains too many sub-blocks that do not belong to the target object, that is, too much background, and the virtual focus phenomenon is then easy to produce. It should be noted that this process will be described in detail in the following embodiments and is not repeated here.
In the focusing method provided by the embodiment of the present application, the first focusing reference information of the focusing frame is determined based on the color difference between the region where the target object is located and the region where the focusing frame of the target object is located in the shot picture, and the second focusing reference information of each sub-focusing frame is then determined according to the plurality of sub-focusing frames of the focusing frame and the first focusing reference information.
The above-mentioned fig. 2 presents a brief flow of the focusing method according to the embodiment of the present application, and the following describes the method in detail based on the embodiment shown in fig. 3.
Fig. 3 is a flowchart of another focusing method according to an embodiment of the present application. As shown in fig. 3, the method is performed by the electronic device shown in fig. 1 described above, and illustratively the method includes steps 301 to 309 described below.
301. The electronic device acquires first color information of the region where the target object is located in the shot picture, where the first color information indicates the component values of each color channel in the region where the target object is located.
In the embodiment of the present application, the component values of each color channel refer to the component values of the three color channels R (red), G (green) and B (blue). Schematically, in the shooting process, the electronic device detects the region where the target object is located in the shot picture to obtain a detection frame of the target object, then invokes the AWB algorithm and determines the first color information of the region where the target object is located based on the RGB information of the picture corresponding to the target object in the detection frame. For example, taking the target object being a face, the electronic device's invocation of the AWB algorithm is expressed by the following tags:
AWB_TAG_FACE_MAX_INDICATOR_AVG_R;
AWB_TAG_FACE_MAX_INDICATOR_AVG_G;
AWB_TAG_FACE_MAX_INDICATOR_AVG_B;
For convenience of description, in the subsequent embodiments, the first color information is expressed as (R_fd, G_fd, B_fd).
In some embodiments, the electronic device determines the first color information based on a type to which the target object belongs. For example, taking a target object as a face as an example, the electronic device invokes an AWB algorithm, and determines the first color information based on a skin region in a region where the target object is located, thereby improving accuracy of the first color information.
Referring to fig. 4, fig. 4 is a schematic diagram of acquiring the first color information according to an embodiment of the present application. As shown in fig. 4, taking the target object being a human face as an example, the electronic device invokes a face detection algorithm to determine a face detection frame, then invokes the AWB algorithm to determine initial color information of the region where the skin is located based on a skin region (Skin Map) and face region statistics (Face-Area Statistic), where the skin map is used to further narrow the color information statistics from the face detection frame down to the region where the skin of the face is located; then, based on a noise removal algorithm (NR Process) and a target parameter (Factor), the non-skin-color part is removed through a histogram distribution to obtain the first color information of the region where the face is located. Referring to fig. 5, fig. 5 is a schematic diagram of another method for acquiring the first color information according to an embodiment of the present application. As shown in fig. 5 (a), taking the target object being a face as an example, if a mask exists in the face region, the color information statistics based on the face detection frame can be further narrowed down to the region where the skin of the face is located through the skin map, thereby improving the accuracy of the first color information. As shown in fig. 5 (b), in the noise removal algorithm, relative to the average value (Average, Avg) of the color information in the histogram, the larger the target parameter (Factor), the fewer non-skin-color parts are removed; that is, by adjusting the target parameter, the amplitude of noise removal can be controlled, so as to further improve the accuracy of the first color information.
302. The electronic device acquires second color information of the region where the focusing frame of the target object is located, where the second color information indicates the component values of each color channel in each sub-block of the focusing frame.
In the embodiment of the present application, the process of determining the focusing frame based on the region where the target object is located, and of determining each sub-block in the focusing frame, is the same as in step 301 above and is not repeated here. In this step 302, the electronic device invokes the AWB algorithm to determine the second color information of the region where the focusing frame is located based on the RGB information of each sub-block within the focusing frame. It should be noted that the electronic device may invoke the AWB algorithm to determine the second color information after determining the focusing frame, or may invoke the AWB algorithm to obtain the component values of each color channel in each sub-block of the whole shot picture before determining the detection frame and the focusing frame, so that when executing steps 301 and 302, the component values of each color channel are counted directly based on the region where the target object is located and the region where the focusing frame is located.
Referring to fig. 6, fig. 6 is a schematic diagram of the sub-blocks in a shot picture according to an embodiment of the present application. As shown in fig. 6, taking the target object being a face as an example, the electronic device divides the shot picture into a plurality of sub-blocks, determines a face detection frame 601 by invoking the face detection algorithm, and further adjusts the face detection frame 601 based on preset parameters by invoking the focusing algorithm to obtain a focusing frame 602. Illustratively, the component values of each color channel in each sub-block are denoted as (R_i, G_i, B_i).
303. The electronic device acquires the color difference between the region where the target object is located and each sub-block of the focusing frame based on the first color information of the region where the target object is located in the shot picture and the second color information of the region where the focusing frame of the target object is located.
In the embodiment of the present application, for any sub-block of a focus frame (hereinafter referred to as a target sub-block), an electronic device determines a color difference between an area where a target object is located and the target sub-block based on a component value of each color channel in the area where the target object is located and a component value of each color channel in the target sub-block. This process is schematically represented by the following formula (1):
In the above formula, i is a positive integer representing any sub-block, that is, the target sub-block; the component values of each color channel in the target sub-block are R_i, G_i, B_i, respectively, and the component values of each color channel in the region where the target object is located are R_fd, G_fd, B_fd, respectively.
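The exact form of formula (1) is not given in the text above, so the following sketch substitutes an assumed chromaticity-ratio distance; only the inputs (R_i, G_i, B_i) and (R_fd, G_fd, B_fd) come from the description, and the specific metric is a hypothetical choice:

```python
def color_diff(block_rgb, target_rgb):
    # Hypothetical stand-in for formula (1): compare the sub-block's
    # chromaticity ratios (R/G, B/G) against those of the target region.
    # Any normalized color distance over the same inputs would fit the
    # surrounding description equally well.
    r_i, g_i, b_i = block_rgb
    r_fd, g_fd, b_fd = target_rgb
    return abs(r_i / g_i - r_fd / g_fd) + abs(b_i / g_i - b_fd / g_fd)
```

A ratio-based metric of this kind is dimensionless, which is consistent with the small first-threshold values (0.1-0.2) mentioned in the next step.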
304. The electronic device determines first focusing reference information of the focusing frame based on the color difference between the area where the target object is located and each sub-block of the focusing frame, wherein the first focusing reference information indicates whether a picture corresponding to each sub-block of the focusing frame belongs to the target object.
In the embodiment of the present application, taking the target sub-block as an example, if the color difference between the region where the target object is located and the target sub-block is smaller than a first threshold, it is determined that the picture corresponding to the target sub-block belongs to the target object; if the color difference is greater than or equal to the first threshold, it is determined that the picture corresponding to the target sub-block does not belong to the target object. In some embodiments, the electronic device represents the first focusing reference information through a preset identifier, thereby saving storage resources. For example, if the picture corresponding to the target sub-block belongs to the target object, the target sub-block is marked as 1; if it does not belong to the target object, the target sub-block is marked as 0, which is not limited here. In addition, the first threshold can be set according to actual requirements; for example, the range of the first threshold is 0.1-0.2, e.g., the first threshold is 0.15, which is not limited in the embodiment of the present application.
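The thresholding in this step can be sketched directly; the 1/0 marking convention and the 0.15 example threshold come from the text, while the function name and the list-of-differences input are illustrative:

```python
def mark_blocks(color_diffs, first_threshold=0.15):
    # First focusing reference information: mark a sub-block 1 if its
    # color difference from the target region is below the first
    # threshold (the block is taken to belong to the target object),
    # else mark it 0.
    return [1 if d < first_threshold else 0 for d in color_diffs]
```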
It should be understood that, generally, the colors within the region where the target object is located are similar; if the color of a target sub-block in the focusing frame differs greatly from that of the region where the target object is located, the picture corresponding to that sub-block is more likely not to belong to the target object. Determining the first focusing reference information of the focusing frame in this way thus provides technical support for the subsequent adjustment of the focusing frame.
305. The electronic device determines the second focusing reference information of each sub-focusing frame based on the first focusing reference information of the focusing frame and the plurality of sub-focusing frames of the focusing frame, where the second focusing reference information indicates the number of sub-blocks belonging to the target object in the sub-focusing frame, and the pictures corresponding to the sub-focusing frames do not overlap with each other.
In the embodiment of the present application, the process of determining the plurality of sub-focusing frames is the same as described above and is not repeated here. In this step, for any sub-focusing frame (hereinafter referred to as the target sub-focusing frame), the electronic device determines the number of sub-blocks belonging to the target object within the target sub-focusing frame based on the first focusing reference information, and determines the second focusing reference information of the target sub-focusing frame based on the ratio of the number of sub-blocks belonging to the target object within the target sub-focusing frame to the total number of sub-blocks of the target sub-focusing frame. For example, taking the case where the electronic device represents the first focusing reference information by a preset identifier, in this step the electronic device calculates the ratio of the number of sub-blocks marked as 1 in the target sub-focusing frame to the total number of sub-blocks in the target sub-focusing frame, thereby obtaining the second focusing reference information of the target sub-focusing frame.
Referring to fig. 7, fig. 7 is a schematic diagram of the second focusing reference information according to an embodiment of the present application. As shown in fig. 7, taking the case where the focusing frame includes three sub-focusing frames A, B, C, the electronic device determines the proportion of sub-blocks marked 1 in each sub-focusing frame, denoted Ra, Rb, Rc.
Of course, in some embodiments, if the number of sub-blocks in each sub-focusing frame is the same, the electronic device can directly use the number of sub-blocks belonging to the target object in each sub-focusing frame as the second focusing reference information of that sub-focusing frame, which is not limited in the embodiment of the present application.
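The second focusing reference information of this step reduces to a per-frame ratio. A minimal sketch, assuming each sub-focusing frame is given as its list of 0/1 sub-block marks:

```python
def second_focus_ref(marks_per_subframe):
    # Ratio of sub-blocks marked 1 to the total number of sub-blocks in
    # each sub-focusing frame -- Ra, Rb, Rc in the three-frame example.
    return [sum(marks) / len(marks) for marks in marks_per_subframe]
```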
306. The electronic device determines the focusing difference between the sub-focusing frames based on the second focusing reference information of the sub-focusing frames.
In the embodiment of the present application, taking the target sub-focusing frame as an example, the electronic device determines the focusing difference between the target sub-focusing frame and the other sub-focusing frames based on the difference between the second focusing reference information of the target sub-focusing frame and that of the other sub-focusing frames. For example, continuing with fig. 7, the electronic device determines the focusing differences between the sub-focusing frames based on the second focusing reference information Ra, Rb, Rc of the sub-focusing frames A, B, C: Diff(Ra, Rb), Diff(Ra, Rc), Diff(Rb, Rc), where Diff(x, y) denotes the difference between x and y.
307. The electronic device determines whether the focusing differences between the sub-focusing frames satisfy a first condition.
In the embodiment of the present application, the focusing difference indicates the difference between the numbers of sub-blocks belonging to the target object contained in different sub-focusing frames; in other words, if the focusing difference between two sub-focusing frames is large, one of the sub-focusing frames contains too many sub-blocks that do not belong to the target object, that is, too much background, and the virtual focus phenomenon is then easy to produce. In this step, the focusing differences between the sub-focusing frames satisfying the first condition means that the focusing differences between the sub-focusing frames are large. On this basis, if the first condition is satisfied, the electronic device executes the subsequent steps 308 and 309 and adjusts the focusing frame by determining extended focusing frames of the focusing frame; if the first condition is not satisfied, the focusing frame is not adjusted and the target object is focused on based on the original focusing frame.
Illustratively, taking the case where the focusing frame includes three sub-focusing frames, with the first sub-focusing frame adjacent to the second sub-focusing frame and the second sub-focusing frame adjacent to the third sub-focusing frame, the first condition includes any one of:
(1) The focusing difference between the first sub-focusing frame and the second sub-focusing frame is greater than a second threshold, and the focusing difference between the first sub-focusing frame and the third sub-focusing frame is greater than the second threshold.
(2) The focusing difference between the second sub-focusing frame and the third sub-focusing frame is greater than the second threshold, and the focusing difference between the first sub-focusing frame and the third sub-focusing frame is greater than the second threshold.
The second threshold is a preset threshold and can be set according to actual application requirements; for example, the second threshold is 0.3, which is not limited in the embodiment of the present application.
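Steps 306 and 307 can be sketched together. Reading Diff as an absolute difference of marked-block ratios is an assumption; the 0.3 example threshold and the two-branch condition follow the text, and the function names are illustrative:

```python
def focus_diff(x, y):
    # Focusing difference between two (sub-)focusing frames, read here
    # as the absolute difference of their marked-block ratios.
    return abs(x - y)

def first_condition(ra, rb, rc, second_threshold=0.3):
    # First condition for three adjacent sub-focusing frames A-B-C:
    # condition (1) -- A differs strongly from both B and C; or
    # condition (2) -- C differs strongly from both B and A.
    cond1 = focus_diff(ra, rb) > second_threshold and focus_diff(ra, rc) > second_threshold
    cond2 = focus_diff(rb, rc) > second_threshold and focus_diff(ra, rc) > second_threshold
    return cond1 or cond2
```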
308. If the focusing differences between the sub-focusing frames satisfy the first condition, the electronic device determines a plurality of extended focusing frames on the two sides outside the focusing frame.
In the embodiment of the present application, the electronic device determines a plurality of extended focusing frames on the two sides outside the focusing frame, that is, one extended focusing frame on each of the left and right sides outside the focusing frame, where the size of an extended focusing frame is the same as that of a sub-focusing frame. Referring to fig. 8, fig. 8 is a schematic diagram of extended focusing frames according to an embodiment of the present application. As shown in fig. 8, taking the focusing frame including three sub-focusing frames A, B, C as an example, the electronic device determines extended focusing frames A' and C' on the left and right sides outside the focusing frame, respectively.
309. The electronic device adjusts the focusing frame based on the position of each sub-focusing frame in the focusing frame and third focusing reference information of each extended focusing frame, and focuses on the target object based on the adjusted focusing frame, where the third focusing reference information indicates the number of sub-blocks belonging to the target object in the extended focusing frame.
In the embodiment of the present application, the third focusing reference information of an extended focusing frame is determined in the same manner as the second focusing reference information of a sub-focusing frame and is not described again. The electronic device adjusts the focusing frame based on the focusing differences between each extended focusing frame and each sub-focusing frame and on the position of each sub-focusing frame in the focusing frame, and focuses on the target object based on the adjusted focusing frame.
In the following, taking the case where the focusing frame includes three sub-focusing frames, with the first sub-focusing frame adjacent to the second sub-focusing frame and the second sub-focusing frame adjacent to the third sub-focusing frame, the process by which the electronic device adjusts the focusing frame is described, including the following steps:
Step 3091, determining a focusing difference among the first sub-focusing frame, the second sub-focusing frame and the first extended focusing frame based on the second focusing reference information of the first sub-focusing frame, the second focusing reference information of the second sub-focusing frame and the third focusing reference information of the first extended focusing frame.
The first extended focusing frame is adjacent to the first sub-focusing frame, and the first sub-focusing frame is adjacent to the second sub-focusing frame. Schematically, with continued reference to fig. 8, the first extended focusing frame is A', the first sub-focusing frame is A, and the second sub-focusing frame is B in fig. 8.
Step 3092, if the focusing difference between the first extended focusing frame and the first sub focusing frame is smaller than the second threshold, and the focusing difference between the first extended focusing frame and the second sub focusing frame is smaller than the second threshold, determining the adjusted focusing frame based on the first extended focusing frame, the first sub focusing frame and the second sub focusing frame.
With continued reference to fig. 8, in this step, if the difference between Ra' and Ra is smaller than the second threshold and the difference between Ra' and Rb is smaller than the second threshold, the focusing differences among A', A, B are small; in other words, the numbers of sub-blocks belonging to the target object contained in A', A, B are similar, that is, the region where the target object is located is contained as much as possible and the background as little as possible, so the virtual focus phenomenon is not easy to occur. On this basis, A', A, B are taken as the adjusted focusing frame.
If the focusing differences among the first extended focusing frame, the first sub-focusing frame and the second sub-focusing frame are large, the following steps 3093 and 3094 are executed.
Step 3093, if the focusing difference between the first extended focusing frame and the first sub focusing frame is greater than or equal to the second threshold, or the focusing difference between the first extended focusing frame and the second sub focusing frame is greater than or equal to the second threshold, determining the focusing difference among the third sub focusing frame, the second sub focusing frame and the second extended focusing frame based on the second focusing reference information of the third sub focusing frame, the second focusing reference information of the second sub focusing frame and the third focusing reference information of the second extended focusing frame.
The second extended focusing frame is adjacent to the third sub focusing frame, and the third sub focusing frame is adjacent to the second sub focusing frame. Schematically, with continued reference to fig. 8, the second extended focus frame is C' and the third sub focus frame is C in fig. 8.
Step 3094, if the focusing difference between the second extended focusing frame and the third sub focusing frame is smaller than the second threshold, and the focusing difference between the second extended focusing frame and the second sub focusing frame is smaller than the second threshold, determining the adjusted focusing frame based on the second extended focusing frame, the second sub focusing frame and the third sub focusing frame.
With continued reference to fig. 8, in this step, if the difference between Rc' and Rc is smaller than the second threshold and the difference between Rc' and Rb is smaller than the second threshold, the focusing differences among C', C, B are small; in other words, the numbers of sub-blocks belonging to the target object contained in C', C, B are similar, that is, the region where the target object is located is contained as much as possible and the background as little as possible, so the virtual focus phenomenon is not easy to occur. On this basis, C', C, B are taken as the adjusted focusing frame.
If the focusing differences among the second extended focusing frame, the third sub-focusing frame and the second sub-focusing frame are large, the following step 3095 is executed.
Step 3095, if the focusing difference between the second extended focusing frame and the third sub focusing frame is greater than or equal to the second threshold, or the focusing difference between the second extended focusing frame and the second sub focusing frame is greater than or equal to the second threshold, the focusing frame is not adjusted, and focusing is performed on the target object based on the focusing frame.
The process by which the electronic device adjusts the focusing frame has been described through the above steps 3091 to 3095. It should be noted that steps 3091 to 3095 are described taking the example of first determining the focusing differences among the first extended focusing frame, the first sub-focusing frame and the second sub-focusing frame; in some embodiments, the electronic device can instead first determine the focusing differences among the second extended focusing frame, the third sub-focusing frame and the second sub-focusing frame, which is not limited in the embodiment of the present application.
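The branching in steps 3091 to 3095 can be summarized in one sketch. It operates on the marked-block ratios Ra, Rb, Rc of the sub-focusing frames and Ra', Rc' of the extended frames, and returns which frames would make up the adjusted focusing frame; the frame labels and function name are illustrative:

```python
def adjust_focus_frame(ra, rb, rc, ra_ext, rc_ext, second_threshold=0.3):
    # Steps 3092/3094: shift toward an extension only when the extended
    # frame's ratio is close to both of its two nearest sub-frames.
    if abs(ra_ext - ra) < second_threshold and abs(ra_ext - rb) < second_threshold:
        return ["A'", "A", "B"]   # step 3092: extend to the left
    if abs(rc_ext - rc) < second_threshold and abs(rc_ext - rb) < second_threshold:
        return ["C'", "C", "B"]   # step 3094: extend to the right
    return ["A", "B", "C"]        # step 3095: keep the original frame
```

The if/elif ordering mirrors the text: the right-side check in step 3093 runs only after the left-side check of step 3092 fails.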
The focusing method shown in the above steps 301 to 309 is exemplified below with reference to fig. 9.
Fig. 9 is a schematic diagram of a focusing method according to an embodiment of the present application. As shown in fig. 9, the focusing method includes the following steps:
step 1, acquiring first color information of an area where a target object is located in a shooting picture, and marking the first color information as (R_fd, G_fd and B_fd).
Step 2, obtaining second color information of the area where the focusing frame of the target object is located, taking any one sub-block i of the focusing frame as an example, and marking the second color information as (R_i, G_i, B_i).
Step 3, based on the first color information of the area where the target object is located in the shooting picture and the second color information of the area where the focusing frame of the target object is located, the color difference between the area where the target object is located and each sub-block of the focusing frame is obtained, and this process refers to the above formula (1), that is, the difference between (r_fd, g_fd, b_fd) and (r_i, g_i, b_i) is calculated.
And 4, determining first focusing reference information of the focusing frame based on the color difference between the area where the target object is located and each sub-block of the focusing frame, namely taking the target sub-block as an example, marking the target sub-block as 1 if the color difference between the area where the target object is located and the target sub-block is smaller than a first threshold value, marking the target sub-block as 0 if the color difference between the area where the target object is located and the target sub-block is larger than or equal to the first threshold value (if diff < the first threshold value, marking the sub-block child block as 1, otherwise marking as 0).
And 5, determining second focusing reference information of each sub focusing frame based on the first focusing reference information of the focusing frame and the plurality of sub focusing frames of the focusing frame. That is, taking an example in which the focus frame includes three sub focus frames A, B, C, the ratio of the sub block marks 1 in each sub focus frame is determined, and is denoted as Ra, rb, rc.
Step 6, based on the focusing difference between the sub-focusing frames, determining whether the focusing difference between the sub-focusing frames meets a first condition, that is, if Diff (Ra, rc) is greater than a second threshold value and Diff (Rb, rc) is greater than the second threshold value; or if Diff (Ra, rb) is greater than the second threshold and Diff (Ra, rc) is greater than the second threshold, determining that the first condition is satisfied, executing step 7, and if the first condition is not satisfied, not adjusting the focusing frame, and focusing the target object based on the original focusing frame (A, B, C).
Step 7, determining a plurality of extended focusing frames on the two sides outside the focusing frame, and determining third focusing reference information of each extended focusing frame. That is, taking the case in which one extended focusing frame A' and one extended focusing frame C' are respectively determined on the left side and the right side outside the focusing frame as an example, the proportion of sub-blocks marked 1 in each extended focusing frame is determined and denoted as Ra' and Rc' respectively.
Step 8, if the difference between Ra' and Ra is smaller than the second threshold and the difference between Ra' and Rb is smaller than the second threshold, A', A, and B are taken as the adjusted focusing frame; otherwise, the following step 9 is executed.
Step 9, if the difference between Rc' and Rc is smaller than the second threshold and the difference between Rc' and Rb is smaller than the second threshold, C', C, and B are taken as the adjusted focusing frame; otherwise, the focusing frame is not adjusted, and the target object is focused based on the original focusing frame (A, B, C).
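Steps 7 through 9 together decide whether to shift the focusing frame toward a left extension A' or a right extension C'. A minimal sketch under the same assumption (differences taken as absolute differences of mark ratios; all ratio values hypothetical):

```python
# Try extending the focusing frame: keep an extension only when its mark ratio
# is close to those of its neighbouring sub focusing frames.

def adjust_frame(ra, rb, rc, ra_ext, rc_ext, second_threshold):
    diff = lambda x, y: abs(x - y)
    if diff(ra_ext, ra) < second_threshold and diff(ra_ext, rb) < second_threshold:
        return ["A'", "A", "B"]   # step 8: shift toward the left extension
    if diff(rc_ext, rc) < second_threshold and diff(rc_ext, rb) < second_threshold:
        return ["C'", "C", "B"]   # step 9: shift toward the right extension
    return ["A", "B", "C"]        # keep the original focusing frame

# Left extension agrees with A and B, so the frame shifts left.
print(adjust_frame(0.9, 0.85, 0.1, 0.88, 0.5, 0.3))  # ["A'", 'A', 'B']
```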
In summary, in the focusing method provided by the embodiment of the application, the first focusing reference information of the focusing frame is determined based on the color difference between the area where the target object is located and the area where the focusing frame of the target object is located in the shooting picture, and then the second focusing reference information of each sub focusing frame is determined according to the plurality of sub focusing frames of the focusing frame and the first focusing reference information. By contrast, in the related art, taking a photographed object as a face as an example, during photographing, an electronic device detects the area where the face is located in the shooting picture to obtain a face detection frame; then, based on preset parameters, the width and the height of the face detection frame are adjusted to obtain a focusing frame, so that the focusing frame is as large as possible while containing as little background as possible; finally, automatic focusing is realized based on the focusing frame.
In this process, on one hand, the preset parameters are always fixed, that is, faces of different sizes are adjusted with the same adjustment amplitude, so the focusing frame easily deviates or contains too much background, and a virtual focus phenomenon easily appears when focusing is performed based on such a focusing frame. On the other hand, the accuracy and stability of the face detection algorithm also often affect the accuracy of the final face focusing frame; in other words, when the face detection frame acquired by the electronic device is not accurate enough, the focusing frame obtained by adjusting it with the preset parameters cannot accurately distinguish the face from the background, so a virtual focus phenomenon easily occurs. That is, when the face detection frame is not accurate or stable enough, the focusing frame obtained by adjustment with the preset parameters often has a large error. By adopting the focusing method provided by the embodiment of the application, the focusing frame can be further adjusted based on the color difference between the area where the target object is located and the area where the focusing frame of the target object is located in the shooting picture, so the accuracy of the focusing frame (which can be understood as the focusing success rate) is effectively improved, the virtual focus phenomenon does not occur when the target object is focused based on the adjusted focusing frame, and the focusing effect is effectively improved.
Fig. 10 is a schematic structural diagram of a focusing device according to an embodiment of the present application. The device is used for executing the steps when the focusing method is executed, referring to fig. 10, the focusing device comprises: a first determination module 1001, a second determination module 1002, and a focusing module 1003.
A first determining module 1001, configured to determine, based on a color difference between an area where a target object is located in a shot frame and an area where a focusing frame of the target object is located, first focusing reference information of the focusing frame, where the first focusing reference information indicates whether a frame corresponding to each sub-block of the focusing frame belongs to the target object, and the focusing frame is determined based on the area where the target object is located;
A second determining module 1002, configured to determine, based on the first focusing reference information of the focusing frame and a plurality of sub-focusing frames of the focusing frame, second focusing reference information of each of the sub-focusing frames, where the second focusing reference information indicates a number of sub-blocks belonging to the target object in the sub-focusing frame, and pictures corresponding to each of the sub-focusing frames are not overlapped with each other;
And a focusing module 1003, configured to adjust the focusing frame based on the position of each sub-focusing frame in the focusing frame and the second focusing reference information of each sub-focusing frame, and focus the target object based on the adjusted focusing frame, where the number of sub-blocks belonging to the target object in the adjusted focusing frame meets a target condition.
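The three-module structure of fig. 10 can be wired into a single pipeline, sketched below. The class and stub callables are illustrative assumptions, not the patented implementation; the modules mirror the first determination module 1001, the second determination module 1002, and the focusing module 1003.

```python
# Minimal sketch of the focusing device as a pipeline of three modules.

class FocusingDevice:
    def __init__(self, first_determine, second_determine, focus):
        self.first_determine = first_determine    # module 1001
        self.second_determine = second_determine  # module 1002
        self.focus = focus                        # module 1003

    def run(self, frame, target_region, focus_frame):
        ref1 = self.first_determine(frame, target_region, focus_frame)
        ref2 = self.second_determine(focus_frame, ref1)
        return self.focus(focus_frame, ref2)

device = FocusingDevice(
    lambda frame, region, box: [1, 1, 0],       # stub: first focusing reference info
    lambda box, ref1: [sum(ref1) / len(ref1)],  # stub: second focusing reference info
    lambda box, ref2: box,                      # stub: no adjustment needed
)
print(device.run("frame", "region", ["A", "B", "C"]))  # ['A', 'B', 'C']
```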
In some embodiments, the first determining module 1001 includes:
An obtaining unit, configured to obtain a color difference between an area where the target object is located and each sub-block of the focusing frame based on first color information of the area where the target object is located and second color information of the area where the focusing frame is located in the photographing picture, where the first color information indicates component values of each color channel in the area where the target object is located, and the second color information indicates component values of each color channel in each sub-block of the focusing frame;
and a first determination unit configured to determine the first focus reference information based on the color difference.
In some embodiments, the acquiring unit is configured to:
And determining the color difference between the region where the target object is located and the target sub-block based on the component values of each color channel in the region where the target object is located and the component values of each color channel in the target sub-block, wherein the target sub-block refers to any sub-block of the focusing frame.
In some embodiments, the first determination unit is configured to:
If the color difference between the area where the target object is located and the target sub-block is smaller than a first threshold value, determining that the picture corresponding to the target sub-block belongs to the target object, wherein the target sub-block refers to any sub-block of the focusing frame;
If the color difference between the area where the target object is located and the target sub-block is greater than or equal to the first threshold value, determining that the picture corresponding to the target sub-block does not belong to the target object.
In some embodiments, the second determining module 1002 is configured to:
Determining the number of sub-blocks belonging to the target object in a target sub-focusing frame based on the first focusing reference information, wherein the target sub-focusing frame refers to any one of the plurality of sub focusing frames of the focusing frame;
and determining second focusing reference information of the target sub-focusing frame based on the ratio of the number of sub-blocks belonging to the target object in the target sub-focusing frame to the total number of sub-blocks of the target sub-focusing frame.
In some embodiments, the focus module 1003 includes:
a second determining unit configured to determine a focus difference between the respective sub-focus frames based on second focus reference information of the respective sub-focus frames;
a third determining unit, configured to determine a plurality of extended focusing frames on both sides outside the focusing frame if a focusing difference between the sub focusing frames satisfies a first condition;
And the adjusting unit is used for adjusting the focusing frame based on the position of each sub focusing frame in the focusing frame and the third focusing reference information of each extended focusing frame, wherein the third focusing reference information indicates the number of sub blocks belonging to the target object in the extended focusing frame.
In some embodiments, the adjusting unit is configured to:
Determining a focusing difference among the first sub-focusing frame, the second sub-focusing frame and the first extended focusing frame based on the second focusing reference information of the first sub-focusing frame, the second focusing reference information of the second sub-focusing frame and the third focusing reference information of the first extended focusing frame, wherein the first extended focusing frame is adjacent to the first sub-focusing frame, and the first sub-focusing frame is adjacent to the second sub-focusing frame;
If the focusing difference between the first extended focusing frame and the first sub focusing frame is smaller than a second threshold value, and the focusing difference between the first extended focusing frame and the second sub focusing frame is smaller than the second threshold value, determining the adjusted focusing frame based on the first extended focusing frame, the first sub focusing frame and the second sub focusing frame.
In some embodiments, the adjusting unit is further configured to:
If the focusing difference between the first extended focusing frame and the first sub focusing frame is greater than or equal to the second threshold value, or the focusing difference between the first extended focusing frame and the second sub focusing frame is greater than or equal to the second threshold value, determining the focusing difference among the third sub focusing frame, the second sub focusing frame and the second extended focusing frame based on the second focusing reference information of the third sub focusing frame, the second focusing reference information of the second sub focusing frame and the third focusing reference information of the second extended focusing frame, wherein the second extended focusing frame is adjacent to the third sub focusing frame, and the third sub focusing frame is adjacent to the second sub focusing frame;
If the focusing difference between the second extended focusing frame and the third sub focusing frame is smaller than a second threshold value, and the focusing difference between the second extended focusing frame and the second sub focusing frame is smaller than the second threshold value, determining the adjusted focusing frame based on the second extended focusing frame, the second sub focusing frame and the third sub focusing frame.
In some embodiments, the adjusting unit is further configured to:
and if the focusing difference between the second extended focusing frame and the third sub focusing frame is larger than or equal to a second threshold value, or the focusing difference between the second extended focusing frame and the second sub focusing frame is larger than or equal to the second threshold value, not adjusting the focusing frame, and focusing the target object based on the focusing frame.
In some embodiments, the focusing module 1003 is further configured to:
and if the focusing difference among the sub focusing frames does not meet the first condition, not adjusting the focusing frame, and focusing the target object based on the focusing frame.
In some embodiments, the first condition comprises:
The focusing difference between the first sub focusing frame and the second sub focusing frame is larger than a second threshold, and the focusing difference between the first sub focusing frame and the third sub focusing frame is larger than the second threshold; or
The focusing difference between the second sub focusing frame and the third sub focusing frame is larger than the second threshold, and the focusing difference between the first sub focusing frame and the third sub focusing frame is larger than the second threshold;
wherein the first sub focusing frame is adjacent to the second sub focusing frame, and the second sub focusing frame is adjacent to the third sub focusing frame.
In summary, in the focusing device provided by the embodiment of the application, based on the color difference between the area where the target object is located and the area where the focusing frame of the target object is located in the shooting picture, the first focusing reference information of the focusing frame is determined, and then the second focusing reference information of each sub focusing frame is determined according to the plurality of sub focusing frames of the focusing frame and the first focusing reference information.
It should be noted that: in the focusing device provided in the above embodiment, only the division of the above functional modules is used as an example, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to perform all or part of the functions described above. In addition, the focusing device and the focusing method provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the focusing device and the focusing method are detailed in the method embodiments and are not described herein again.
In an exemplary embodiment, there is also provided an electronic device including a processor and a memory for storing at least one computer program loaded and executed by the processor to implement the focusing method in the embodiment of the present application.
Taking an electronic device as an example of a terminal, fig. 11 is a schematic structural diagram of the terminal according to an embodiment of the present application. The terminal 1100 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1100 may also be referred to by other names, such as user device, portable terminal, laptop terminal, or desktop terminal.
Generally, the terminal 1100 includes: a processor 1101 and a memory 1102.
The processor 1101 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1101 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1101 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1102 is used to store at least one program code for execution by processor 1101 to implement the focusing method provided by the method embodiments of the present application.
In some embodiments, the terminal 1100 may further optionally include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102, and peripheral interface 1103 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1103 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1104, a display screen 1105, a camera assembly 1106, audio circuitry 1107, a positioning assembly 1108, and a power supply 1109.
The peripheral interface 1103 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, the memory 1102, and the peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102, and the peripheral interface 1103 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1104 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1104 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display, the display screen 1105 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 1101 as a control signal for processing. At this time, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, disposed on the front panel of the terminal 1100; in other embodiments, there may be at least two display screens 1105, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, the display screen 1105 may be a flexible display, disposed on a curved surface or a folded surface of the terminal 1100. Furthermore, the display screen 1105 may be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The display screen 1105 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera assembly 1106 is used to capture images or video. Optionally, the camera assembly 1106 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1106 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1107 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1101 for processing, or inputting the electric signals to the radio frequency circuit 1104 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be provided at different portions of the terminal 1100, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1107 may also include a headphone jack.
The location component 1108 is used to locate the current geographic location of the terminal 1100 for navigation or LBS (Location Based Service, location-based services).
A power supply 1109 is used to supply power to various components in the terminal 1100. The power source 1109 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1109 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1100 also includes one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyroscope sensor 1112, pressure sensor 1113, optical sensor 1114, and proximity sensor 1115.
The acceleration sensor 1111 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 1100. For example, the acceleration sensor 1111 may be configured to detect components of gravitational acceleration in three coordinate axes. The processor 1101 may control the display screen 1105 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1111. Acceleration sensor 1111 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1112 may detect a body direction and a rotation angle of the terminal 1100, and the gyro sensor 1112 may collect a 3D motion of the user on the terminal 1100 in cooperation with the acceleration sensor 1111. The processor 1101 may implement the following functions based on the data collected by the gyro sensor 1112: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1113 may be disposed at a side frame of the terminal 1100 and/or at a lower layer of the display screen 1105. When the pressure sensor 1113 is disposed at a side frame of the terminal 1100, a grip signal of the terminal 1100 by a user may be detected, and the processor 1101 performs a right-left hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed at the lower layer of the display screen 1105, the processor 1101 realizes control of the operability control on the UI interface according to the pressure operation of the user on the display screen 1105. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1114 is used to collect the ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the display screen 1105 based on the intensity of ambient light collected by the optical sensor 1114. Specifically, when the intensity of the ambient light is high, the display luminance of the display screen 1105 is turned up; when the ambient light intensity is low, the display luminance of the display screen 1105 is turned down. In another embodiment, the processor 1101 may also dynamically adjust the shooting parameters of the camera assembly 1106 based on the intensity of ambient light collected by the optical sensor 1114.
A proximity sensor 1115, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1100. The proximity sensor 1115 is used to collect a distance between a user and the front surface of the terminal 1100. In one embodiment, when the proximity sensor 1115 detects that the distance between the user and the front surface of the terminal 1100 gradually decreases, the processor 1101 controls the display 1105 to switch from the bright screen state to the off screen state; when the proximity sensor 1115 detects that the distance between the user and the front surface of the terminal 1100 gradually increases, the processor 1101 controls the display screen 1105 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 11 is not limiting and that terminal 1100 may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In some embodiments, the electronic device can also be configured as the electronic device shown in fig. 12; fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1200 may vary widely in configuration or performance, and can include one or more processors (Central Processing Unit, CPU) 1201 and one or more memories 1202, where the memory 1202 stores at least one computer program that is loaded and executed by the processor 1201 to implement the focusing methods provided by the various method embodiments described above. Of course, the electronic device may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input/output, and the electronic device may also include other components for implementing the functions of the device, which are not described herein.
The embodiment of the application also provides a computer readable storage medium, which is applied to the electronic device, and at least one computer program is stored in the computer readable storage medium, and the at least one computer program is loaded and executed by a processor to realize the focusing method in the embodiment.
Embodiments of the present application also provide a computer program product or computer program comprising computer program code stored in a computer readable storage medium. The processor of the electronic device reads the computer program code from the computer readable storage medium, and the processor executes the computer program code so that the electronic device executes to realize the focusing method in the above-described embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the application; any modification, equivalent replacement, or improvement made within the spirit and principles of the application shall fall within the protection scope of the application.

Claims (15)

1. A focusing method, the method comprising:
Determining first focusing reference information of a focusing frame based on a color difference between a region where a target object is located and a region where a focusing frame of the target object is located in a shot picture, wherein the first focusing reference information indicates whether pictures corresponding to all sub-blocks of the focusing frame belong to the target object or not, and the focusing frame is determined based on the region where the target object is located;
Determining second focusing reference information of each sub focusing frame based on the first focusing reference information of the focusing frame and a plurality of sub focusing frames of the focusing frame, wherein the second focusing reference information indicates the number of sub-blocks belonging to the target object in the sub focusing frame, and pictures corresponding to the sub focusing frames do not overlap each other;
And adjusting the focusing frame based on the positions of the sub focusing frames in the focusing frame and the second focusing reference information of the sub focusing frames, and focusing the target object based on the adjusted focusing frame, wherein the number of sub blocks belonging to the target object in the adjusted focusing frame meets a target condition.
2. The method according to claim 1, wherein the determining the first focus reference information of the focus frame based on a color difference between an area of a target object and an area of a focus frame of the target object in the photographed picture includes:
Acquiring color differences between the region where the target object is located and each sub-block of the focusing frame based on first color information of the region where the target object is located and second color information of the region where the focusing frame is located in the shooting picture, wherein the first color information indicates component values of each color channel in the region where the target object is located, and the second color information indicates component values of each color channel in each sub-block of the focusing frame;
The first focus reference information is determined based on the color difference.
3. The method according to claim 2, wherein the obtaining the color difference between the region where the target object is located and each sub-block of the focusing frame based on the first color information of the region where the target object is located and the second color information of the region where the focusing frame is located in the photographing picture includes:
And determining the color difference between the region where the target object is located and the target sub-block based on the component values of each color channel in the region where the target object is located and the component values of each color channel in the target sub-block, wherein the target sub-block refers to any sub-block of the focusing frame.
4. The method according to claim 2, wherein determining the first focusing reference information based on the color difference comprises:
if the color difference between the region where the target object is located and a target sub-block is smaller than a first threshold, determining that the picture corresponding to the target sub-block belongs to the target object, wherein the target sub-block refers to any sub-block of the focusing frame; and
if the color difference between the region where the target object is located and the target sub-block is greater than or equal to the first threshold, determining that the picture corresponding to the target sub-block does not belong to the target object.
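Claims 2 to 4 together describe a per-sub-block color test: compare each sub-block's per-channel color components against those of the target-object region, then attribute the sub-block to the object when the difference falls below the first threshold. The claims fix neither a color space nor a distance metric, so the sketch below assumes mean RGB component values and an L1 (sum-of-absolute-differences) distance purely for illustration:

```python
def color_difference(object_rgb, block_rgb):
    """Color difference between the target-object region and one sub-block.

    Both arguments are per-channel component values (assumed here to be
    mean R, G, B of the respective region). The L1 distance is an
    illustrative choice; the claims do not prescribe a metric.
    """
    return sum(abs(o - b) for o, b in zip(object_rgb, block_rgb))


def block_belongs_to_target(object_rgb, block_rgb, first_threshold):
    """Claim 4's test: a sub-block's picture is attributed to the target
    object when its color difference is strictly below the first threshold,
    and to the background otherwise."""
    return color_difference(object_rgb, block_rgb) < first_threshold
```

Applied over every sub-block of the focusing frame, this yields a boolean map — one plausible concrete form of the claims' "first focusing reference information".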
5. The method according to claim 1, wherein determining the second focusing reference information of each sub-focusing frame based on the first focusing reference information of the focusing frame and the plurality of sub-focusing frames of the focusing frame comprises:
determining the number of sub-blocks belonging to the target object in a target sub-focusing frame based on the first focusing reference information, wherein the target sub-focusing frame refers to any one of the sub-focusing frames of the focusing frame; and
determining the second focusing reference information of the target sub-focusing frame based on the ratio of the number of sub-blocks belonging to the target object in the target sub-focusing frame to the total number of sub-blocks in the target sub-focusing frame.
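Claim 5 reduces each sub-focusing frame to a ratio: target sub-blocks over total sub-blocks. A minimal sketch, assuming the first focusing reference information arrives as one boolean per sub-block (the representation is not fixed by the claim):

```python
def sub_frame_reference(block_flags):
    """Second focusing reference information for one sub-focusing frame.

    block_flags: one boolean per sub-block of the sub-focusing frame,
    True where the claim-4 test attributed the sub-block to the target
    object. Returns the fraction of target sub-blocks, per claim 5.
    """
    return sum(block_flags) / len(block_flags)
```

A sub-focusing frame with a high ratio is mostly covered by the object; comparing these ratios across sub-focusing frames drives the adjustment logic of claims 6 to 9.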
6. The method according to claim 1, wherein adjusting the focusing frame based on the position of each sub-focusing frame in the focusing frame and the second focusing reference information of each sub-focusing frame comprises:
determining focusing differences among the sub-focusing frames based on the second focusing reference information of each sub-focusing frame;
if the focusing differences among the sub-focusing frames meet a first condition, determining a plurality of extended focusing frames on both sides outside the focusing frame; and
adjusting the focusing frame based on the position of each sub-focusing frame in the focusing frame and third focusing reference information of each extended focusing frame, wherein the third focusing reference information indicates the number of sub-blocks belonging to the target object in the extended focusing frame.
7. The method according to claim 6, wherein adjusting the focusing frame based on the position of each sub-focusing frame in the focusing frame and the third focusing reference information of each extended focusing frame comprises:
determining focusing differences among a first sub-focusing frame, a second sub-focusing frame and a first extended focusing frame based on the second focusing reference information of the first sub-focusing frame, the second focusing reference information of the second sub-focusing frame and the third focusing reference information of the first extended focusing frame, wherein the first extended focusing frame is adjacent to the first sub-focusing frame, and the first sub-focusing frame is adjacent to the second sub-focusing frame; and
if the focusing difference between the first extended focusing frame and the first sub-focusing frame is smaller than a second threshold, and the focusing difference between the first extended focusing frame and the second sub-focusing frame is smaller than the second threshold, determining the adjusted focusing frame based on the first extended focusing frame, the first sub-focusing frame and the second sub-focusing frame.
8. The method according to claim 7, further comprising:
if the focusing difference between the first extended focusing frame and the first sub-focusing frame is greater than or equal to the second threshold, or the focusing difference between the first extended focusing frame and the second sub-focusing frame is greater than or equal to the second threshold, determining focusing differences among a third sub-focusing frame, the second sub-focusing frame and a second extended focusing frame based on the second focusing reference information of the third sub-focusing frame, the second focusing reference information of the second sub-focusing frame and the third focusing reference information of the second extended focusing frame, wherein the second extended focusing frame is adjacent to the third sub-focusing frame, and the third sub-focusing frame is adjacent to the second sub-focusing frame; and
if the focusing difference between the second extended focusing frame and the third sub-focusing frame is smaller than the second threshold, and the focusing difference between the second extended focusing frame and the second sub-focusing frame is smaller than the second threshold, determining the adjusted focusing frame based on the second extended focusing frame, the second sub-focusing frame and the third sub-focusing frame.
9. The method according to claim 8, further comprising:
if the focusing difference between the second extended focusing frame and the third sub-focusing frame is greater than or equal to the second threshold, or the focusing difference between the second extended focusing frame and the second sub-focusing frame is greater than or equal to the second threshold, leaving the focusing frame unadjusted and focusing on the target object based on the focusing frame.
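Claims 6 to 9 describe a two-sided probe: if the sub-focusing frames disagree, try extending the frame toward one side, then the other, and keep the original frame if neither extension is consistent. The sketch below is a non-normative illustration; the five-element left-to-right layout, the names, and the use of reference ratios as "focusing" values are all assumptions, since the claims do not prescribe a data layout:

```python
def adjust_focusing_frame(refs, second_threshold):
    """Sketch of the claims 7-9 adjustment cascade.

    refs: focusing reference values ordered left to right as
    [left extended frame, first sub-frame, second sub-frame,
     third sub-frame, right extended frame] (illustrative layout).
    Returns the reference values of the frames kept as the
    (possibly adjusted) focusing frame.
    """
    ext_l, sub1, sub2, sub3, ext_r = refs

    def diff(a, b):
        return abs(a - b)

    # Claim 7: extend left when the left extended frame is close
    # (difference below the second threshold) to both nearby sub-frames.
    if diff(ext_l, sub1) < second_threshold and diff(ext_l, sub2) < second_threshold:
        return [ext_l, sub1, sub2]
    # Claim 8: otherwise probe the opposite side symmetrically.
    if diff(ext_r, sub3) < second_threshold and diff(ext_r, sub2) < second_threshold:
        return [sub2, sub3, ext_r]
    # Claim 9: neither extension is consistent; keep the original frame.
    return [sub1, sub2, sub3]
```

Intuitively, the frame slides toward the side where the object continues beyond the original focusing frame, and stays put when the object does not extend past either edge.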
10. The method according to any one of claims 6 to 9, further comprising:
if the focusing differences among the sub-focusing frames do not meet the first condition, leaving the focusing frame unadjusted and focusing on the target object based on the focusing frame.
11. The method according to any one of claims 6 to 9, wherein the first condition comprises:
the focusing difference between a first sub-focusing frame and a second sub-focusing frame is greater than a second threshold, and the focusing difference between the first sub-focusing frame and a third sub-focusing frame is greater than the second threshold; or
the focusing difference between the second sub-focusing frame and the third sub-focusing frame is greater than the second threshold, and the focusing difference between the first sub-focusing frame and the third sub-focusing frame is greater than the second threshold;
wherein the first sub-focusing frame is adjacent to the second sub-focusing frame, and the second sub-focusing frame is adjacent to the third sub-focusing frame.
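Claim 11's first condition — the gate that triggers the extended-frame probing of claims 6 to 9 — can be sketched directly. The values passed in are assumed here to be the second focusing reference information of three adjacent sub-focusing frames, and the pairwise absolute difference is an assumed realization of the claims' "focusing difference":

```python
def first_condition(sub1, sub2, sub3, second_threshold):
    """Claim 11's disjunction: the sub-focusing frames disagree enough
    that the focusing frame may be mis-placed on the target object."""
    d12 = abs(sub1 - sub2)  # first vs second sub-focusing frame
    d23 = abs(sub2 - sub3)  # second vs third sub-focusing frame
    d13 = abs(sub1 - sub3)  # first vs third sub-focusing frame
    return (d12 > second_threshold and d13 > second_threshold) or \
           (d23 > second_threshold and d13 > second_threshold)
```

Both branches require the outermost pair (first vs third) to differ, so a uniform frame — one fully on or fully off the object — never triggers an adjustment.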
12. A focusing device, comprising:
a first determining module, configured to determine first focusing reference information of a focusing frame based on a color difference between a region where a target object is located in a captured picture and a region where the focusing frame of the target object is located, wherein the first focusing reference information indicates whether the picture corresponding to each sub-block of the focusing frame belongs to the target object, and the focusing frame is determined based on the region where the target object is located;
a second determining module, configured to determine second focusing reference information of each sub-focusing frame based on the first focusing reference information of the focusing frame and a plurality of sub-focusing frames of the focusing frame, wherein the second focusing reference information indicates the number of sub-blocks belonging to the target object in the sub-focusing frame, and the pictures corresponding to the sub-focusing frames do not overlap one another; and
a focusing module, configured to adjust the focusing frame based on the position of each sub-focusing frame in the focusing frame and the second focusing reference information of each sub-focusing frame, and to focus on the target object based on the adjusted focusing frame, wherein the number of sub-blocks belonging to the target object in the adjusted focusing frame meets a target condition.
13. An electronic device, comprising a processor and a memory, wherein the memory is configured to store at least one computer program, and the at least one computer program is loaded and executed by the processor to implement the focusing method according to any one of claims 1 to 11.
14. A computer-readable storage medium, wherein at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is loaded and executed by a processor to implement the focusing method according to any one of claims 1 to 11.
15. A computer program product, comprising at least one computer program, wherein the at least one computer program is loaded and executed by a processor to implement the focusing method according to any one of claims 1 to 11.
CN202211642431.9A 2022-12-20 2022-12-20 Focusing method, focusing device, electronic equipment and storage medium Pending CN118233746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211642431.9A CN118233746A (en) 2022-12-20 2022-12-20 Focusing method, focusing device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN118233746A true CN118233746A (en) 2024-06-21

Family

ID=91513005




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination